Everyone using InnoDB tables has probably gotten used to the fact that InnoDB performs non-locking reads: unless you use modifiers such as LOCK IN SHARE MODE or FOR UPDATE, SELECT statements will not lock any rows while running.

This is generally correct; however, there is a notable exception: INSERT INTO table1 SELECT * FROM table2. This statement performs a locking read (shared locks) on table2. The same applies to similar statements with WHERE clauses and joins. Note it is the table being read that must be InnoDB for this to matter; it applies even if the writes go to a MyISAM table.
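For illustration, all of the following statement shapes would take shared locks on the rows they read from table2 (the WHERE column some_col and the joined table3 are hypothetical names):

INSERT INTO table1 SELECT * FROM table2;
INSERT INTO table1 SELECT * FROM table2 WHERE some_col < 100;
INSERT INTO table1 SELECT table2.* FROM table2 JOIN table3 ON table3.t2_id = table2.id;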

So why was this done, given it is pretty bad for MySQL performance and concurrency? The reason is replication. In MySQL before 5.1, replication is statement based, which means statements replayed on the slave should have the same effect as on the master. If InnoDB did not lock rows in the source table, another transaction could modify a row and commit before the transaction running the INSERT … SELECT. That transaction would then be applied on the slave before the INSERT … SELECT statement, possibly producing different data than on the master. Locking rows in the source table while reading them protects against this: if another transaction modified rows before the INSERT … SELECT had a chance to access them, they will be modified in the same order on the slave. If a transaction tries to modify a row after it was accessed, and therefore locked, by the INSERT … SELECT, it has to wait until the statement completes, ensuring it is executed on the slave in the proper order. Sounds pretty complicated? All you really need to know is that it had to be done for replication to work correctly in MySQL before 5.1.

In MySQL 5.1 this, along with a few other problems, should be solved by row-based replication. I have yet to give it real stress tests to see how well it performs 🙂

One more thing to keep in mind: INSERT … SELECT performs its read in locking mode, and so partially bypasses versioning and retrieves the latest committed row. So even if you're operating in REPEATABLE-READ mode, this operation will be performed in READ-COMMITTED mode, potentially giving a different result than a pure SELECT would. This, by the way, applies to SELECT … LOCK IN SHARE MODE and SELECT … FOR UPDATE as well.
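A quick sketch of the difference, assuming a hypothetical table t with a single row (id=1, val=0) and two concurrent sessions:

-- Session 1, REPEATABLE-READ:
BEGIN;
SELECT val FROM t WHERE id = 1;                     -- returns 0
-- Session 2 now runs and commits: UPDATE t SET val = 1 WHERE id = 1;
SELECT val FROM t WHERE id = 1;                     -- still 0, read from the snapshot
SELECT val FROM t WHERE id = 1 LOCK IN SHARE MODE;  -- returns 1, the latest committed row
COMMIT;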

One may ask: what if I'm not using replication and have my binary log disabled? If replication is not used, you can enable the innodb_locks_unsafe_for_binlog option, which relaxes the locks InnoDB sets during statement execution and generally gives better concurrency. However, as the name says, it makes locks unsafe for replication and point-in-time recovery, so use innodb_locks_unsafe_for_binlog with caution.

Note that disabling the binary log is not enough to trigger relaxed locks; you have to set innodb_locks_unsafe_for_binlog=1 as well. This is done so that enabling the binary log does not cause unexpected changes in locking behavior and performance problems. You can also use this option with replication in some cases, if you really know what you're doing. I would not recommend it unless it is really needed, as you might not know which other locks will be relaxed in future versions and how that would affect your replication.
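For reference, a minimal sketch of the my.cnf setting (the option is read at server startup):

[mysqld]
innodb_locks_unsafe_for_binlog = 1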

So what are safe workarounds if you're using replication?

The most general one is to use something like the following (table and file names are illustrative):
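SELECT * FROM tbl1 INTO OUTFILE '/tmp/tbl1.txt';
LOAD DATA INFILE '/tmp/tbl1.txt' INTO TABLE tbl2;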

instead of:
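INSERT INTO tbl2 SELECT * FROM tbl1;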

SELECT … INTO OUTFILE does not have to set extra locks.

If you use this approach, make sure to delete the file after it is loaded back (this has to be done outside of the MySQL server), as otherwise the script will fail the second time it runs.

If you need the result to be even closer to that of INSERT … SELECT, you may execute this transaction in READ-COMMITTED isolation mode.
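A sketch of the same workaround with the session isolation level relaxed (same illustrative names; note SELECT … INTO OUTFILE refuses to overwrite an existing file):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT * FROM tbl1 INTO OUTFILE '/tmp/tbl1.txt';
LOAD DATA INFILE '/tmp/tbl1.txt' INTO TABLE tbl2;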

Other workarounds are less general purpose. For example, if you're doing batch processing that is well indexed, you might chop the transactions up and process rows in small batches, which do not hold locks long enough to cause problems.
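A rough sketch of that idea, assuming a hypothetical indexed integer column id on the source table:

-- each statement reads, and therefore locks, only a small range of rows
INSERT INTO tbl2 SELECT * FROM tbl1 WHERE id >= 0 AND id < 10000;
INSERT INTO tbl2 SELECT * FROM tbl1 WHERE id >= 10000 AND id < 20000;
-- ... and so on, committing between chunks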

To complete this article, let me show how a wait caused by this statement looks in SHOW INNODB STATUS:

---TRANSACTION 0 42304626, ACTIVE 14 sec, process no 29895, OS thread id 2894768 updating or deleting
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 320, undo log entries 1
MySQL thread id 1794760, query id 6994946 localhost root Updating
update sample set j=0 where i=5
------- TRX HAS BEEN WAITING 14 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 33504 n bits 328 index j of table test/sample trx id 0 42304626 lock_mode X locks rec but not gap waiting
Record lock, heap no 180 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
0: len 30; hex 306338386465646233353863643936633930363962373361353736383261; asc 0c88dedb358cd96c9069b73a57682a;...(truncated); 1: len 4; hex 00000005; asc ;;

------------------
---TRANSACTION 0 42304624, ACTIVE 37 sec, process no 29895, OS thread id 4058032 fetching rows, thread declared inside InnoDB 3
mysql tables in use 1, locked 1
2539 lock struct(s), heap size 224576
MySQL thread id 1794751, query id 6994931 localhost root Sending data
insert into test select * from sample
--------

As you can see, the INSERT … SELECT has a lot of lock structs, which means it has locked a lot of rows. "Fetching rows" of course means it is still going. In this case the write is done to a MyISAM table, so we will not see any write activity from it.

The other transaction, which happens to be a simple primary key update, is waiting on the sample table for this record to be unlocked.

21 Comments
Bill Krahmer

Though it takes longer, you could also use cursors to avoid the locks. Examples can be found at this URL and in the comments:
http://dev.mysql.com/doc/refman/5.0/en/cursors.html

kai

INSERT INTO table1 SELECT * FROM table2

can I just place a table-level lock on table2 to improve the performance? In my use case, there's no one touching table2 except the above query.

Quang

It's very helpful information.
Minor typo: "SELECT * FROM tbl1 INFO OUTFILE '/tmp/tbl1.txt';" should be INTO instead of INFO.

VRaj

Is this the same case for MyISAM? What is the locking spec for insert into tb1 … select from tb2 when both are MyISAM tables?

Artem

Has anyone tried changing the isolation level instead of using OUTFILEs?

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
INSERT INTO TemporaryTable SELECT … FROM HighlyContentiousTableInInnoDb;

See my stackoverflow question/answer for more info: http://stackoverflow.com/questions/2640898/how-to-improve-insert-into-select-locking-behavior

Sylvain261

Artem,

I've made a very simple test (MySQL 5.1.45) with 2 concurrent processes:

Process 1:
INSERT INTO table2 SELECT * FROM table1;

Process 2:
UPDATE table1 SET myCol = 'X' WHERE myCol = 'XX';

Test case 1: REPEATABLE-READ
=> Process 2 is locked until process 1 finishes.

Test case 2: READ COMMITTED
=> Process 2 is not locked.

So my understanding is that, if you don't mind the inserted table on your slave possibly being inconsistent with the one on your master, using the READ COMMITTED transaction isolation level is the way to go; it's much simpler than the INSERT … INTO OUTFILE workaround posted by Peter.
Am I wrong?

Sylvain261

Answer to myself:
You are wrong!

I got the exception: Transaction level 'READ-COMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT' (INSERT INTO XXX SELECT YYY…)

It seems it is not permitted to issue statements that might break consistency between master and slave.
So… in statement replication mode, the OUTFILE workaround seems to be the only one that works if you don't want to set innodb_locks_unsafe_for_binlog to 1.

Sorry, Peter.
Does anyone have another workaround?
Thanks.

Sylvain261

Me again.
I think a good workaround might be to change the replication mode for the session issuing the INSERT … SELECT:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET binlog_format = MIXED;
INSERT INTO … SELECT … FROM …
With these two settings:
– There is no lock on the selected tables
– The replication of the INSERT INTO … SELECT statement is done with row-based replication, which is safe in this isolation level (changing the replication mode only affects the active session, so it should not create application issues).

I’ve not tested it yet.

jigen

Hi,
I am having a problem with SELECTs … BETWEEN … timestamps.
For example:
I have this select on a table with 200k rows:
SELECT actionid FROM tablelogs WHERE timestampfield BETWEEN UNIX_TIMESTAMP('2010-10-01') AND UNIX_TIMESTAMP('2011-01-01') GROUP BY actionid…

This select is taking ages to process… now, is there a way to optimize this select or speed it up?
The tablelogs table is InnoDB with an index on actionid (INT).

Also, if it will help: when I use select * from tablelogs vs. select actionid from tablelogs, the latter takes ages as well while the one with * takes only 0.8 seconds. The table has 7 fields…

I am using MySQL 5+ in SQLyog … any ideas?

thanks

Andrew Ferk

@Sylvain261

I tested
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET binlog_format = MIXED;
INSERT INTO … SELECT … FROM …
and it works without error and binlogs as row. The issue with that is the potential for huge overhead: the INSERT INTO … SELECT I'm running generates about 2.5 million records (almost a GB in the binlog).

Thank you.

vishnu rao

hi peter,

Facebook urges people to be cautious while using "load data infile" in production; they report that MySQL has crashed many a time.

Any thoughts on this? Many bugs have also been reported around this.

http://www.facebook.com/note.php?note_id=394073200932

adelainev

Does this behavior still apply to MySQL 5.5+?

josh

+1 for @adelainev. I have the same question: does this behavior still apply in MySQL 5.5+? This post was written in 2006, so I just want to see where things stand currently.

mathew

@josh: No, this does not apply in MySQL 5.5+. I've already checked that on a table containing 1 mln rows, using an insert into … select from this table. No lock was acquired, and in the other thread I could simply update a row that should have been blocked. But if you would like to have this exact behavior, you can use the "lock in share mode" mentioned above, because that causes the lock to be acquired on the second table.

sunil kusre

I am using AWS MySQL RDS, InnoDB engine. Our requirement is to insert data into a table while reading from the same table.

insert into … SAME_TABLE select my_id+1,.,.,.,.,.,.,. from SAME_TABLE where myid = input_my_id (myid is the 1st column of a composite (5-column) primary key).

Reading and writing in the same table: for 22,518,912 rows it takes 14 min (avg_row_size is 136, around 2.85 GB).
I wrote a separate script to split the same insert into 8 chunks; then it took 10 min.

Question: Can the 14 min be reduced further?
Is there any way I can improve the performance? We can't use the OUTFILE method as suggested by Peter because of permissions.
What is the best way to handle this?

josh

@sunil — do you mean you “can’t” use the outfile method? If you have permissions, give it a shot.

sunil kusre

Yes, Josh. The OUTFILE option writes the file on the MySQL server side. We are using MySQL RDS and, as far as I know, no one gets access to the MySQL RDS server's filesystem.

Rui

@mathew: Wrong, it's still the same in 5.5. Here is the related output from SHOW ENGINE INNODB STATUS:
---TRANSACTION BB0E42A, ACTIVE 3 sec fetching rows, thread declared inside InnoDB 334
mysql tables in use 4, locked 4
3264 lock struct(s), heap size 440760, 799203 row lock(s)
MySQL thread id 38, OS thread handle 0x2b3c1c776940, query id 16838101 localhost root Copying to tmp table
create table t as SELECT ld.label_id, cai.inventory_id, COUNT(1) as installed_count from client_app_inventory cai inner join device d on (cai.client_id = d.client_id) inner join label_to_device ld on (ld.device_id = d.id) where ld.trash = 'f' and cai.trash = 'f' and (cai.status is NULL) group by ld.label_id, cai.inventory_id

All 4 tables in the query are locked!

Django Eijgensteijn

The OUTFILE method is not compatible with using INSERT INTO … SELECT … ON DUPLICATE KEY UPDATE …
Any tips?