After playing a bit with INSERT … SELECT yesterday, I decided to check whether InnoDB locks are really as efficient in terms of low resource usage as they are advertised to be.

Let's start with a bit of background. In InnoDB, row-level locks are implemented by a special lock table, located in the buffer pool, where a small record is allocated for each locked page and a bit can be set for each row locked on that page. In theory this can give an overhead as low as a few bits per row, so let's see how it looks in practice:

I used the same sample table as yesterday:
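A minimal sketch of what such a table might look like (the exact definition is not shown here, so the table and column names are assumptions; j is sized for the md5() values used later):

CREATE TABLE t (
  i INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- hypothetical primary key
  j CHAR(32) NOT NULL,                     -- later populated with md5(rand()) values
  PRIMARY KEY (i)
) ENGINE=InnoDB;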

A table with a rather small row size, containing 1638400 rows.

Now let's lock all rows in this table and see how long it takes:
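A shared-lock scan of the whole table does the job; a sketch, assuming the table t above:

BEGIN;
SELECT COUNT(*) FROM t LOCK IN SHARE MODE;  -- sets a shared (S) lock on every row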

Looking at SHOW INNODB STATUS we can see:

History list length 5
Total number of lock structs in row lock hash table 6092
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 42306982, ACTIVE 89 sec, process no 10099, OS thread id 1878960
6093 lock struct(s), heap size 503104

So 503104 bytes are used to lock 1638400 rows (503104 * 8 / 1638400 ≈ 2.5), making it less than 3 bits per locked row.

Now let's see how much exclusive locks take:
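This is the same scan with the exclusive variant of the locking clause (again a sketch against the assumed table t):

BEGIN;
SELECT COUNT(*) FROM t FOR UPDATE;  -- sets an exclusive (X) lock on every row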

Interestingly enough, this statement takes about 1.5 seconds longer (I repeated the runs many times to make sure the result is repeatable), so exclusive locks seem to be more expensive to set.

On the other hand, it still consumes exactly the same amount of memory:

History list length 5
Total number of lock structs in row lock hash table 6092
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 42306989, ACTIVE 195 sec, process no 10099, OS thread id 1878960
6093 lock struct(s), heap size 503104
MySQL thread id 7429, query id 24542 localhost root
show innodb status

Note that running the statement in consistent-read mode, which does not set any locks, is going to be faster than either of the locking modes; however, we can see the locking overhead is not that large:
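For comparison, the non-locking version is simply the plain statement (sketch):

SELECT COUNT(*) FROM t;  -- consistent read, no row locks are set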

So we can lock 1638400 rows in 2.5 seconds on this pretty old box, which means the locking speed should be a few million rows per second on modern systems.

So far we have tested how much memory locking all the rows uses, meaning every row on each page gets locked. In the next test we'll lock a small random portion of the rows (about 1/16), distributed randomly across the pages (column j was populated with md5(rand()) values):
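One way to pick roughly 1/16 of the rows by their md5() values is to filter on the first hex digit, which is uniformly distributed; this is an assumption, not necessarily the original predicate:

BEGIN;
SELECT COUNT(*) FROM t WHERE j LIKE '0%' FOR UPDATE;  -- '0' is 1 of 16 possible leading hex digits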

Looking at SHOW INNODB STATUS we can see:

Total number of lock structs in row lock hash table 440
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 42306999, ACTIVE 133 sec, process no 10099, OS thread id 1878960
441 lock struct(s), heap size 44352

So we locked over 100K rows (1638400 / 16 = 102400) using about 44KB, which is still quite efficient: less than 4 bits per locked row (44352 * 8 / 102400 ≈ 3.5).

In practice this means memory consumption by row-level locks should not be a problem even for rather large databases: even a billion locked rows should take only about half a GB of memory (10^9 * 4 bits = 500MB), which is a small fraction of the memory used on serious systems. Furthermore, you are unlikely to need or want to lock every row in your table or database, which makes it even less of a problem.

8 Comments
Dimitriy A

We have a table that contains 4 months of data. The requirement is to keep 3 months of data and delete the rest. With under 4 million rows in the table, 760,000 are matched by the WHERE clause in the DELETE statement. I tried to calculate how much space would be used up by the locks given that less than 4 bits per locked row will be used: 760000 * 4 / 8 = 380000 bytes, which is about 380 KB. With the InnoDB buffer pool set to 10 MB, I still get the error: “The total number of locks exceeds the lock table size”. With no other activity on the database, it seems like the actual requirement is higher. Does the size of the row have anything to do with this? We do store large blobs in the table.

Foobarista

Dimitriy: we’ve been playing a lot with DELETE and if your delete doesn’t use an index, InnoDB will put an X lock on every row in the table, so make sure your query uses an index if one is convenient. (Note that we found that MySQL doesn’t do a great job with indexing DELETEs that use a subquery against another table in the WHERE – you should replace it with a DELETE using a FROM and explicit join so it plans index use more predictably.) Also, I think (although I’m not sure) that pages with locked rows stay pinned in memory. In any case, you probably should bump up the buffer to something a whole lot bigger as the “buffer pool” is both cached pages and active lock tables.
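For illustration (table and column names here are hypothetical), a subquery form such as

DELETE FROM orders WHERE id IN (SELECT order_id FROM expired_orders);

can be rewritten as a multi-table DELETE with an explicit join:

DELETE orders FROM orders JOIN expired_orders ON orders.id = expired_orders.order_id;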

Dimitriy A

Thanks Foobarista and Peter. I do realize that the buffer pool is small for the new functionality we will be adding to production to keep the table trimmed. What I am saying is that it seems to me that the locks are taking up more memory than what Peter has proposed. My goal was also to see how much memory the locks take up. As the delete runs in our development environment, I see that the number of lock structures and the heap size keep growing as time passes. If Peter looked at the InnoDB status before the heap size and lock structure count reached their peak during the query run, then maybe he is getting incorrect results. Or maybe I am all wrong and need to be pointed in the right direction.

Sproogs

Hi, following the original thread I performed both select statements (LOCK IN SHARE MODE and FOR UPDATE) on the same table, and they produced two different numbers of lock structs for the two transactions.

What would be the reason for this??

Andrew

Dimitriy: Strictly speaking, the error message said the *number* of locks was too high, not that the *memory* consumed by those locks was too high. Is there perhaps a separate setting governing the lock table size, or is this implicitly controlled by the size of the buffer pool?

Rajeev

Is it possible to know the specific row ids which are locked? I need this info for debugging a lock wait timeout exception in my Java application.

Thanks for sharing this knowledge.

regards,
Rajeev

Ebby

I am considering changing web applications from VB6 to PHP/MySQL/Apache/InnoDB. Do you think you can show a program where row locks are used by one user to lock records for edit/update, so the transaction is not interrupted until it is finished/committed? An example of a program is worth a thousand instructions. Please do this for us. I am sure there are many in this situation.