Peter wrote a post a while ago about choosing a good InnoDB log file size. Not to pick on Peter, but the post actually kind of talks about a lot of things and then doesn’t tell you how to choose a good log file size! So I thought I’d clarify it a little.

The basic point is that your log file needs to be big enough to let InnoDB optimize its I/O, but not so big that recovery takes a long time. That much Peter covered really well. But how do you choose that size? I’ll show you a rule of thumb that works pretty well.

In most cases, when people give you a formula for choosing a configuration setting, you should look at it with skepticism. But in this case you can calculate a reasonable value, believe it or not. Run these queries at your server’s peak usage time:
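The code block with the queries appears to have been lost from this copy of the post; judging from the commands quoted in the comments below, it ran SHOW ENGINE INNODB STATUS twice, sixty seconds apart, in the mysql client:

```
mysql> pager grep sequence
PAGER set to 'grep sequence'
mysql> show engine innodb status\G select sleep(60); show engine innodb status\G
mysql> nopager
```

The pager command filters the output down to the two “Log sequence number” lines; subtracting the first from the second gives the bytes written to the log during those sixty seconds.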

Notice the log sequence number. That’s the total number of bytes written to the transaction log. So, now you can see how many MB have been written to the log in one minute. (The technique I showed here works on all versions of MySQL. In 5.0 and newer, you can just watch Innodb_os_log_written from SHOW GLOBAL STATUS, too.)

As a rough rule of thumb, you can make the log big enough that it can hold at most an hour or so of writes. That’s generally plenty of data for InnoDB to work with; an hour’s worth is more than enough for it to reorder the writes into sequential I/O during the flushing and checkpointing process. At this rate, this server could use about 110 MB of logs, total. Round it up to 128 MB for good measure. Since there are two log files by default, divide that in half, and you can set each log file to 64 MB.
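In my.cnf, that works out to something like the following. These are illustrative values from the measurement above, and innodb_log_files_in_group is shown only to make the total explicit (2 is already the default):

```ini
# my.cnf
[mysqld]
innodb_log_file_size   = 64M   # per file; two files = 128M of total log space
innodb_log_files_in_group = 2  # the default
```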

Does that look surprisingly small? It might. I commonly see log file sizes in the gigabyte range. But that’s generally a mistake. The server I used for the measurements above is a big one doing a lot of work, not a toy. Log file sizes can’t be left at the default 5MB for any real workload, but they often don’t need to be as big as you might think, either.

If this rule-of-thumb calculation ends up showing you that your log file size ought to be many gigabytes, well, you have a more active write workload. Perhaps you’re inserting a lot of big rows or something. In this case you might want to make the log smaller so you don’t end up with GB of logs. But also realize this: the recovery time depends not only on the total log file size, but the number of entries in it. If you’re writing huge entries to the log, fewer log entries will fit into a given log file size, which will generally make recovery faster than you might expect with a big log.

However, most of the time when I run this calculation, I end up finding that the log file size needs to be a lot smaller than it’s configured to be. In part that’s because InnoDB’s log entries are very compact. The other reason is that the common advice to size the logs as a fraction of the buffer pool size is just wrong.

One final note: huge buffer pools or really unusual workloads may require bigger (or smaller!) log sizes. This is where formulas break down and judgment and experience are needed. But this “rule of thumb” is generally a good sane place to start.
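For concreteness, the whole rule of thumb fits in a few lines. This is a hypothetical helper, not anything from the post itself, and the 1.8 MB/minute figure is only an assumed stand-in for whatever your own sixty-second measurement shows:

```python
def recommended_log_file_size_mb(mb_per_minute, hours=1.0, n_files=2):
    """Suggest an innodb_log_file_size (in MB, per file): enough total
    log space to hold about an hour's worth of writes, split across
    the configured number of log files."""
    total_mb = mb_per_minute * 60 * hours
    return total_mb / n_files

# Assumed measurement: ~1.8 MB written to the log per minute, i.e.
# ~108 MB/hour, in line with the "about 110 MB, round up to 128"
# sizing described in the post.
print(recommended_log_file_size_mb(1.8))
```

With the default two log files, a measured rate near 1.8 MB/minute lands at roughly 54 MB per file, which rounds up neatly to the 64 MB per file the post settles on.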

52 Comments
Oz Solomon

Baron, thanks for a great article.

I’ve read in a few places that changing innodb_log_file_size after the fact can cause problems, especially when you have slaves. In your experience, is this something I should be worried about, or is it safe to update the conf file and restart?

(I’m using 5.0.45 and 5.0.67 community)

Thanks!

Sheeri K. Cabral

Interesting, I never thought about it in such depth — I usually just log on to the system and see how often the log rotates — for instance, just now here’s a sample from a production machine:

/var/lib/mysql> ls -lrth ib_logfile*
-rw-r--r-- 1 mysql mysql 400M Nov 22 09:00 ib_logfile1
-rw-r--r-- 1 mysql mysql 400M Nov 23 10:51 ib_logfile0

Granted, this is a slow time, but as you can see, there’s at *least* about 2 hours’ worth of space in the logs, as the logfile was “rotated” at 9 am, and it’s 10:51 right now and the logfile is still current. I can log on later and see when the ib_logfile0 rotated as well.

Your way works too, but depends on getting the right 60 seconds for the average workload at peak time. Certainly your method lands you in the right ballpark, and my method is inaccurate for many other reasons — I’m getting the average over what is usually a large file size, so if the peak time lasts 1 hour and the log rotates approximately every 2 hours, I probably don’t want to cut the log file size in 1/2, maybe more like 2/3 or 3/4, because the volume is averaged.

I think both methods definitely have their merit. I’ve found it’s easier to have clients understand their log file size is too big because “look, you only want an hour’s worth, and here I can show you with an ‘ls -l’, you have at least 4 hours’ worth, so let’s try cutting it in half.”

Though I think the nature of our work can be different — we have 100% transparency to the client, and only in dire emergencies do we do stuff without getting the client’s approval first. Most of what we do is recommendations (most recommendations are followed, but still)….we’re more a part of the DBA team, and Percona’s structure (from what I understand) is more of a “we’ll help you get standing on your feet, and prepare you for the future” but aren’t a day-to-day part of the existing DBA team.

Jette

I guess this has changed since 2008. These days the two files always have the same mtime, so I guess your “trick” doesn’t work anymore.

Quote: The first log file will always have the checkpoint info written to it, so that is why even if ib_logfile1 is the “active” logfile, you will still see writes to ib_logfile0.
https://www.pythian.com/blog/ib_logfiles-mtime-mystery/

Sheeri K. Cabral

(oh, and it’s fairly easy to have monitoring mark when a file was rotated, so I can have it figured out automatedly. Then again, the nature of our work is also different: your clients have more dire emergencies than ours, likely because yours come to you in dire emergencies, and we may have some come to us in a dire emergency, but often they stay along for general day-to-day DBA stuff….)

But — this would be a good check for a MySQL Monitoring system, whether it’s the Enterprise offering from Sun, or a cacti monitoring thing (frequency of log rotation) or an OurDelta thing.

Matt

Sheeri, Look at your timestamps again – it’s more like 26 hours than 2!

Ketan Bengali

Hi Baron,

Thanks a lot for the article. It was really good. I had a query about the innodb_max_dirty_pages parameter and its impact on log file size. Consider a case where innodb_buffer_size is set to 2GB and max_dirty_pages is set to 90%. That means at any point in time MySQL could have up to (2*0.9) 1.8GB worth of committed and unflushed data in memory. After a MySQL crash, during recovery, MySQL should recover 1.8GB worth of data from the log files. Even if the log files are not utilized heavily, keeping their size small and overwriting them could result in a permanently unrecoverable MySQL instance. Has anyone faced this problem before? Is (innodb_max_dirty_pages*innodb_buffer_size) even used in calculating log file size? If not, then how does MySQL recover unflushed data after a crash?

Thanks,

Ketan. Bengali

Kari Pätilä

Does anyone have an idea why I can’t set innodb_log_file_size without crashing MySQL? The restart says OK, but nothing seems to work. Other innodb variables produce no problems.

Kari Pätilä

Thank you for the quick response.

Ketan Bengali

Thanks a lot Baron for clarifying it. It was really useful.

Thanks,

Ketan Bengali

Jesper Wisborg Krogh

I have got the impression that if you get the error message:

081223 7:54:18 InnoDB: ERROR: the age of the last checkpoint is xxxxx,
InnoDB: which exceeds the log group capacity yyyyy.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

then it is necessary to increase the value of innodb_log_file_size. Is that only a matter of obtaining maximal performance, or can it have more severe implications as well, such as losing ACID compliance, corrupt data, etc.?

Jesper Wisborg Krogh

That sounds good, however I assume there is no other alternative than to increase the value of innodb_log_file_size or to make the transactions smaller? In our case innodb_log_file_size is already at 512 MB, so we would prefer it not to increase to all that much.

Jesper Wisborg Krogh

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
+----------------------+-----------+
| Variable_name        | Value     |
+----------------------+-----------+
| innodb_log_file_size | 536870912 |
+----------------------+-----------+
1 row in set (0.00 sec)

The error message in the log states that the age of the last checkpoint is 966364465 and the capacity is 966363956. I’m not sure why the capacity is a bit smaller than two times the innodb_log_file_size, but I assume some space is used for bookkeeping.

It is one particular transaction that seems to trigger the error, and it has a total of nine columns: 3 unsigned ints, 4 ints, 1 unsigned tinyint, and one datetime. It is a cache table, so the transaction first deletes all rows and then repopulates it, followed by a couple of table scans to manipulate the data in an ordered update. The total number of rows is 1286113 (determined with SELECT COUNT(*)). The file containing the data for the table is 772 MB. According to SHOW TABLE STATUS the stats are:

Data_length: 165838848
Index_length: 219807744
Data_free: 0
Comment: InnoDB free: 399360 kB

Other things are going on in the server at the same time. I tried to do a test on our development system, and it will run on an idle system with innodb_log_file_size set to 512 MB, so it might also be that we can get around it by rescheduling the tasks; however, I suspect even so we would be running close to the limit. Which brings me to: is there any way to monitor whether you are getting close to using all the available space in the InnoDB log files?

Jesper Wisborg Krogh

Thanks a lot – I had considered that as an option, as well as keeping track of which rows actually need to be changed. It will of course require a bit more work and bookkeeping, but it should be doable, and I believe in this case it is the best long-term solution.

Merry Christmas

Adam

Heh, I am messing with this now for my research and I’m quite certain it says right in the readme that it should be 25% of your buffer pool size, which should be 50-80% of available memory. To me it seems like 64M is indeed way too low.

Vladimir Rusinov

Hi. Here is translation to Russian (if someone interested): http://greenmice.info/ru/node/92

John Marc

Yow! Guess I need to up this!

-rw-rw---- 1 mysql mysql 100M Mar 28 16:49 ib_logfile0
-rw-rw---- 1 mysql mysql 100M Mar 28 16:50 ib_logfile1

Using Baron’s calcs I am looking at 200MB, and Sheeri’s method says 600MB?

Andy Geers

I’m slightly confused by this article – it seems to match very closely the advice given by the InnoDB developers for the innodb_log_buffer_size setting here: http://www.innodb.com/wp/wp-content/uploads/2007/05/uc2007-innodb-performance-optimization.pdf

… rather than the innodb_log_*file*_size setting that you are talking about. Can you confirm that you definitely mean the setting you’re referring to?

Catalin

Hi,

Do you guys know why my innodb data and log files haven’t changed?
ibdata1 has 1.8G
ib_logfile0 has 5M
ib_logfile1 has 5M

Here are my settings:

innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:1000M;ibdata2:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
innodb_log_arch_dir = /var/lib/mysql/
innodb_buffer_pool_size = 2048M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 512M
innodb_log_files_in_group = 2
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 8
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

I stopped mysqld, deleted the data+log files, but when I start mysql again the log shows:

100601 09:08:30 mysqld started
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
100601 9:08:30 InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait…
100601 9:08:30 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait…
100601 9:08:30 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait…
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
100601 9:08:30 InnoDB: Started; log sequence number 0 0
100601 9:08:30 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.77' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
A mysqld process already exists at Tue Jun 1 09:21:35 BST 2010
100601 9:39:04 InnoDB: ERROR: the age of the last checkpoint is 9434190,
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

AFTAB

When calculating InnoDB log file space usage, if we consider only the “Log sequence number” and don’t account for checkpoints (“Last checkpoint at”) at all, then in cases where InnoDB has reached a checkpoint multiple times within the 60 seconds between status outputs, might we be setting the file size inappropriately?

Anyway this is out of date as of MySQL 5.5, recovery times have been greatly improved and the whole log file flushing algorithm has been improved. In 5.5 you generally want larger log files as recovery is improved. Thus, the large innodb log files allow you to take advantage of the new algorithm.

honeybee

one thing I don’t quite understand:
when changing the MySQL iblog size, we need to shut down MySQL, move or remove the old iblog, and restart with the new size indicated in my.cnf. Doesn’t the old iblog contain “uncommitted changes”/InnoDB crash recovery information? If we move or remove the old iblog, where does InnoDB get that information upon the next restart?

honeybee

Thanks, Baron!

Josh K

What if the calculation above shows that I should have an innodb log file size of 1-2 MB? I’m running Percona server on an ec2 small instance with the XFS filesystem for the mysql data directory.

Tien Phan

Hi!

I come from Viet Nam. I want to check the capacity (free space & used space) of a datafile in InnoDB (MySQL). What command do I use?
I am just researching MySQL; I hope you can help me. Big thanks!

Bro

Tien Phan, I just come from ZaZhopinsk. How do I install mysql on my Pentium Pro 133Mhz database server?

Jayaram

hello Sheeri K. Cabral s.

My log files are looking below

ls -lrth ib_logfile*
-rw-rw----. 1 mysql mysql 256M Nov 28 12:26 ib_logfile0
-rw-rw----. 1 mysql mysql 256M Nov 28 12:26 ib_logfile1

Do I need to split the log files?

M

It seems that in 5.5 the second log file is touched even if a rotation has not occurred, that is to say that the ls -l approach is misleading.

Sample:

# ls -l /var/lib/mysql/ib_logfile* && date && mysql -BNe'show engine innodb status\G' | grep sequence && sleep 60 && date && mysql -BNe'show engine innodb status\G' | grep sequence && ls -l /var/lib/mysql/ib_logfile*
-rw-rw---- 1 mysql mysql 536870912 May 21 22:23 /var/lib/mysql/ib_logfile0
-rw-rw---- 1 mysql mysql 536870912 May 21 22:23 /var/lib/mysql/ib_logfile1
Tue May 21 22:23:15 CDT 2013
Log sequence number 17750417191
Tue May 21 22:24:15 CDT 2013
Log sequence number 17750981011
-rw-rw---- 1 mysql mysql 536870912 May 21 22:24 /var/lib/mysql/ib_logfile0
-rw-rw---- 1 mysql mysql 536870912 May 21 22:24 /var/lib/mysql/ib_logfile1

Of course it is worth reiterating the point made by AFTAB, crash recovery is far cheaper in modern versions of InnoDB:

https://blogs.oracle.com/mysqlinnodb/entry/innodb_recovery_is_now_faster

And from the manual:

http://dev.mysql.com/doc/refman/5.5/en/innodb-performance-recovery.html

“If you kept the size of your redo log files artificially low because recovery took a long time, you can consider increasing the file size.”

M

Scratch my previous comment; this is seemingly not repeatable today (despite the fact that I ran this test several times over the course of ten minutes yesterday; maybe there is some cleanup thread that can do this under certain conditions):

# ls -l /var/lib/mysql/ib_logfile* && date && mysql -BNe'show engine innodb status\G' | grep sequence && sleep 60 && date && mysql -BNe'show engine innodb status\G' | grep sequence && ls -l /var/lib/mysql/ib_logfile*
-rw-rw---- 1 mysql mysql 536870912 May 22 13:33 /var/lib/mysql/ib_logfile0
-rw-rw---- 1 mysql mysql 536870912 May 22 12:55 /var/lib/mysql/ib_logfile1
Wed May 22 13:34:00 CDT 2013
Log sequence number 18342929533
Wed May 22 13:35:00 CDT 2013
Log sequence number 18343465682
-rw-rw---- 1 mysql mysql 536870912 May 22 13:35 /var/lib/mysql/ib_logfile0
-rw-rw---- 1 mysql mysql 536870912 May 22 12:55 /var/lib/mysql/ib_logfile1

Baron

According to this link, https://mariadb.atlassian.net/browse/MDEV-4662 Jesper is correct and I am wrong: when InnoDB warns about not enough space in the logs, it’s overwritten its last checkpoint. Thanks to Jeremy Cole for finding out the truth.

Andrija

Hi, when examining our server (a bloated Drupal installation with a lot of data), I got over 400MB written per minute (yes, not an error; this is what Drupal does when too bloated with a high number of visits…)

My innodb log size is 256MB right now (well, 2 files of 256M)… Does it make any sense to try a 1GB size, or more? I know recovery time will be much greater…
Thanks
Thanks

Jack Tors

mysql> pager grep sequence;
PAGER set to 'grep sequence'
mysql> show engine innodb status\G select sleep(60); show engine innodb status\G
Log sequence number 49 2161386250

You also state that you can get the same information with “show global status”
Innodb_os_log_written | 3067036160

That’s not the same information, in fact its an entirely different number.
I wouldn’t trust this blog, especially since there is no mention of what problems will be caused by changing the logfile size.

anonymous

@Jack Tors you’re supposed to look at the DIFFERENCE between these numbers after 60 seconds, not the current literal values.


Glas

How long can the db survive with “InnoDB: ERROR: the age of the last checkpoint is…which exceeds the log group capacity…”?
Is it something that should be fixed immediately, or is it just a performance problem?

Adrián

Do you think that 5 MB or 64 MB is a value for a real-world workload? These are my numbers:
select (283659589588754 - 283659393725516)/1024/1024 as MB_per_min;
+--------------+
| MB_per_min   |
+--------------+
| 186.78973961 |
+--------------+

This is 10 GB for one hour… 🙁 I am reading this because I am hitting this problem at backup: https://www.percona.com/forums/questions-discussions/percona-xtrabackup/14029-backup-fails-with-log-block-numbers-mismatch

Bjoern

I have a similar problem as Adrián
following this I would need 8GB log_file_size

181.91 M/s

Richlv

looks like formatting in the example commands got wrong – in “show engine innodb statusG select sleep(60); show engine innodb statusG”, add a backslash before each “G”

CESAR MURILO DA SILVA JUNIOR

Thank you, it helped me.

How do I do this calculation in MariaDB 10.0.38?

show engine innodb statusG select sleep(60); show engine innodb statusG;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'statusG select sleep(60)' at line 1
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'statusG' at line 1

I already solved by adding backslash before G, as the friend above said.

My innodb_log_file_size is 50MB. The calculation you presented resulted in 0.71658 MB/s * 60 = 43 MB. Does this mean I can leave it as it is? Is log churn by default 1 hour? How do I check turnover time?

Eric Thirolle

For MySQL 5+, here is a one-liner to collect the value of MB per minute written to the InnoDB log:

select VARIABLE_VALUE from information_schema.global_status where variable_name='Innodb_os_log_written' INTO @log1; select sleep(60); select VARIABLE_VALUE from information_schema.global_status where variable_name='Innodb_os_log_written' INTO @log2; select (@log2 - @log1) / 1024 / 1024 as MB_per_min;