How long does it take MySQL with InnoDB tables to shut down? It can be quite a while.
With the default setting of innodb_fast_shutdown=1, the main job InnoDB has to do to complete shutdown is flushing dirty buffers. The number of dirty pages in the buffer pool varies depending on innodb_max_dirty_pages_pct, the workload, and innodb_log_file_size, and can be anywhere from 10% to 90% in real-life workloads. The Innodb_buffer_pool_pages_dirty status variable will show you the actual figure.

The flush speed also depends on a number of factors. The first is your storage configuration: you may be looking at less than 200 writes/sec for a single entry-level hard drive, up to tens of thousands of writes/sec for a high-end SSD card. Flushing can be done by multiple threads (in XtraDB and the InnoDB Plugin, at least), so it scales well across multiple hard drives. The second important variable is your workload, especially how the dirty pages line up on disk. If many dirty pages are sequential, InnoDB can flush them using larger IOs, up to 1MB in size, which can be a lot faster than flushing data page by page.
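As a small illustration, the dirty-page counter can be turned into a percentage like this (the counter names are the real InnoDB status variables; the sample dict and its numbers are made up, and fetching the rows from SHOW GLOBAL STATUS is left to whatever client you use):

```python
# Compute the dirty-page percentage from SHOW GLOBAL STATUS counters.
# The dict below stands in for rows fetched with
# "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%'".

def dirty_page_pct(status):
    """Percent of buffer pool pages currently dirty."""
    dirty = int(status["Innodb_buffer_pool_pages_dirty"])
    total = int(status["Innodb_buffer_pool_pages_total"])
    return 100.0 * dirty / total

# Example with made-up counter values (status variables come back as strings):
sample = {
    "Innodb_buffer_pool_pages_dirty": "294912",
    "Innodb_buffer_pool_pages_total": "327680",
}
print(dirty_page_pct(sample))  # → 90.0
```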

So for a system with a single hard drive doing 200 IO/sec, a 48GB buffer pool that is 90% dirty, and completely random page writes, we are looking at roughly 14,000 seconds, or about 5 minutes per 1GB of buffer pool size.
This is the worst-case scenario, though it is quite common in practice to see a shutdown time of about 1 minute per GB of buffer pool per hard drive.
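The back-of-envelope numbers above are easy to check (a sketch assuming the default 16KB InnoDB page size and one page per write IO, i.e. the worst case of fully random dirty pages with no IO merging):

```python
# Back-of-envelope flush-time estimate for InnoDB shutdown.
# Assumes 16KB pages and one page flushed per write IO (no merging).

PAGE_SIZE = 16 * 1024  # bytes

def flush_time_seconds(buffer_pool_bytes, dirty_fraction, writes_per_sec):
    """Seconds needed to flush all dirty pages at the given write IO rate."""
    total_pages = buffer_pool_bytes // PAGE_SIZE
    dirty_pages = int(total_pages * dirty_fraction)
    return dirty_pages / writes_per_sec

# 48GB buffer pool, 90% dirty, single disk doing 200 writes/sec:
secs = flush_time_seconds(48 * 1024**3, 0.90, 200)
print(round(secs))             # → 14156 seconds, roughly 4 hours
print(round(secs / 48 / 60))   # → 5 minutes per GB of buffer pool
```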

Baron has written a nice post on how to decrease InnoDB shutdown time, which you may want to read on this topic.

11 Comments
Rob Wultsch

I have a somewhat tangential question about InnoDB shutdown. I have a number of servers running a somewhat older (though still supported) major version of MySQL that leak memory. MySQL on these servers takes a very long time to shut down. Recently we ran “kill -9” on an instance long after it should have flushed all of its data to disk, and when it restarted, the error log did not show crash recovery. Is there any good way to know when InnoDB is shut down while MySQL is not yet?

Rob Wultsch

I did not run that deployment, but if I remember correctly we did not see the shutdown-completed line in the error log. Are there any other trustworthy ways to detect that InnoDB is no longer in use (lsof, gdb, etc.)?

Shlomi Noach

Hi,
I use a script I’ve written, part of openark-kit, called oak-prepare-shutdown,
which safely closes down replication (verifying no temporary tables remain), lowers the dirty pages percent to 0, and waits for the dirty pages count to reach a minimal level (it follows the decrease until the count has not dropped for 10 consecutive seconds). It then simply quits.

Typical usage would be:
oak-prepare-shutdown --user=root --ask-pass -v && /etc/init.d/mysql stop

I have not had time yet to write documentation and wrap it up nicely, but I’ve uploaded it anyway. It is available on the openarkkit downloads page, revision 122.
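The waiting logic described above can be sketched roughly like this (this is not the actual openark-kit code; `poll_dirty_pages` is a hypothetical callback standing in for a query on Innodb_buffer_pool_pages_dirty):

```python
import time

def wait_for_dirty_page_plateau(poll_dirty_pages, stable_secs=10, interval=1.0):
    """Poll the dirty-page count until it stops decreasing for
    `stable_secs` consecutive seconds, then return the final count."""
    lowest = poll_dirty_pages()
    stable = 0.0
    while stable < stable_secs:
        time.sleep(interval)
        current = poll_dirty_pages()
        if current < lowest:
            lowest = current    # still flushing: reset the stability timer
            stable = 0.0
        else:
            stable += interval  # no progress during this interval
    return lowest
```

In practice one would first run SET GLOBAL innodb_max_dirty_pages_pct = 0, call something like this, and only then stop mysqld.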

Morgan Tocker

I’m waiting for part 2: how long might InnoDB warm-up take? I have my own ideas, but I’d like to see how you would put it in words.

Simon J Mudd

Warm-up takes... well, it depends. On what? Basically on how you expect the database server to perform when you start giving it load. On several mysqld servers I manage, the applications expect to connect, get their data, and process it within quite a short time, usually less than 5 seconds. The cold database servers may produce some results in time, but generally, given hundreds of connections and queries, the cold cache does not have the data, so mysqld must go to disk. Warming up a db server by filling up, say, 10-20GB of the InnoDB buffer pool takes me about 5-10 minutes, after which responses are reasonable. That said, 5 or 10 minutes can be a long time to wait.
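Those figures imply a rough warm-up read rate, which is easy to sanity-check (plain arithmetic on the numbers quoted above, not measured data):

```python
# Implied read rate for warming N gigabytes of buffer pool in M minutes.

def implied_rate_mb_per_sec(gigabytes, minutes):
    """Average MB/s needed to load `gigabytes` of data in `minutes`."""
    return gigabytes * 1024 / (minutes * 60)

# 10GB in 10 minutes vs. 20GB in 5 minutes:
print(round(implied_rate_mb_per_sec(10, 10), 1))  # → 17.1 MB/s
print(round(implied_rate_mb_per_sec(20, 5), 1))   # → 68.3 MB/s
```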

akhan

In my case, InnoDB shutdown and startup both took a long time: about 7 minutes each.

130514 7:57:39 InnoDB: Starting shutdown…
130514 8:04:26 InnoDB: Shutdown completed; log sequence number 4 3582590782
130514 8:04:26 [Note] /usr/sbin/mysqld: Shutdown complete
….
130514 8:05:00 InnoDB: Initializing buffer pool, size = 4.0G
130514 8:12:29 InnoDB: Completed initialization of buffer pool

Why would initializing the InnoDB buffer pool (size 4.0G) take so long?

Artyom Konovalenko

FYI, it took 15 hours to complete shutdown.
The history list length was 219M before shutting down (that was the main reason for shutting down).
There is only one big compressed .ibd table in the database, and one purge thread.
Hardware SSD RAID 1+0; IO was not the limiting factor during shutdown.

2018-01-11 00:44:25 5355 [Note] InnoDB: Starting shutdown…
2018-01-11 00:44:26 7f8c5bff9700 InnoDB: Dumping buffer pool(s) to .//ib_buffer_pool
2018-01-11 00:44:28 7f8c5bff9700 InnoDB: Buffer pool(s) dump completed at 180111 0:44:28
2018-01-11 00:45:26 5355 [Note] InnoDB: Waiting for purge thread to be suspended

2018-01-11 15:40:13 5355 [Note] InnoDB: Waiting for purge thread to be suspended
2018-01-11 15:40:52 5355 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2018-01-11 15:41:01 5355 [Note] InnoDB: Shutdown completed; log sequence number 12480522102178

Regards,
Arty

Prakash Sharma

Dear Team,

We performed a minor version upgrade of native MySQL (GA) from version 5.6.39 to 5.6.41. However, the upgrade was stuck for more than 4.5 hours with the message “InnoDB: Waiting for purge thread to be suspended”, which caused us more than 4.5 hours of downtime. We are using the default innodb_purge_threads value, i.e. 1, and hence are not hitting Bug #11765863, Bug #58868, or Bug #60939, which were already fixed in 5.6.5.

I would like to know the scenarios under which InnoDB has to wait for the purge thread to be suspended before shutting down the InnoDB Plugin.

Any thoughts on this are appreciated!