
My last post about Innodb Performance Optimization got a lot of comments on choosing a proper innodb_buffer_pool_size, and indeed I oversimplified things a bit too much, so let me write a better description.

The Innodb Buffer Pool is by far the most important option for Innodb performance, and it must be set correctly. I’ve seen a lot of clients who came through extreme suffering after leaving it at the default value (8M). So if you have a dedicated MySQL box and you’re only using Innodb tables, you will want to give all the memory you do not need for other purposes to the Innodb Buffer Pool.
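
For example, here is a minimal my.cnf sketch; the 12288M value is purely illustrative and should come out of the sizing process described below:

    # my.cnf -- illustrative value only; size it per the discussion below
    [mysqld]
    innodb_buffer_pool_size = 12288M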

This of course assumes your database is large enough to need a large buffer pool; if not, setting the buffer pool a bit larger than your database size will be enough. You should also account for growth, of course. You need a buffer pool a bit (say 10%) larger than your data (the total size of your Innodb tablespaces) because it does not only contain data pages – it also contains adaptive hash indexes, the insert buffer, and locks, which also take some space. Though it is not as critical – for most workloads, if your Innodb Buffer Pool is 10% smaller than your database size, you would not lose much anyway.
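
A quick way to measure your current Innodb data plus index size (assuming MySQL 5.0+ with information_schema available):

    # total Innodb data + index size in megabytes
    mysql -e "SELECT ROUND(SUM(data_length+index_length)/1024/1024) AS innodb_mb
              FROM information_schema.TABLES WHERE engine='InnoDB';"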

You may also choose to set the buffer pool as if your database size were already larger than the amount of memory you have – so you do not forget to readjust it later. This is also a valid approach, as on a dedicated MySQL server you may not have a good use for that memory anyway.

Another thing you should take into account is that Innodb allocates more memory in structures related to the buffer pool than you specify for it – I just checked on our set of boxes which use a 12GB buffer pool (with very small additional pool and log buffer), and the total amount of memory allocated by Innodb is typically a bit over 13GB. Vadim posted some numbers on this a while back.
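
You can check the actual figure yourself – it is reported as “Total memory allocated” in the Innodb status output (SHOW ENGINE INNODB STATUS in 5.0+, SHOW INNODB STATUS in older versions):

    mysql -e "SHOW ENGINE INNODB STATUS\G" | grep "Total memory allocated"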

After you have decided on your database size, you need to check whether there are any restrictions on the Innodb Buffer Pool size you can use. Typically restrictions apply only on 32-bit systems, but these can still be spotted in the wild, especially in the Windows world. The restriction normally applies to the total amount of memory a process can allocate, so make sure to leave space for other MySQL needs while factoring this in.
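
A quick sanity check on Linux (the mysqld path here is an assumption – adjust it for your installation):

    uname -m                 # kernel architecture (i686 = 32-bit, x86_64 = 64-bit)
    file /usr/sbin/mysqld    # shows whether the mysqld binary itself is 32- or 64-bit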

The next step is to decide how much memory you need for other needs. These would be OS needs – your system processes, page tables, socket buffers, etc. all need memory. I would put this at 256M for small boxes up to 5% of memory size on big boxes, though it can be even less than that. Besides operating system needs you also have MySQL needs – these include the MySQL buffers: query cache, key_buffer, MySQL threads, temporary tables, and per-thread sort buffers which can be allocated. There are also things like the Innodb additional memory pool (which can grow beyond the memory you allocated for it, especially if you have a large number of tables).
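
A quick way to review what the relevant global and per-thread buffers are currently set to:

    mysql -e "SHOW GLOBAL VARIABLES LIKE '%buffer_size%';"
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'query_cache_size';"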

I could tell you some numbers – for example, sum up all your global buffers plus add 1MB for each connection you’re planning to have – but in reality the number can vary significantly depending on the load. Idle connections, for example, will consume significantly less memory than connections working with huge temporary tables and otherwise running complex queries. It is usually much better to simply check it. Start MySQL with a 10GB Innodb buffer pool, for example, and see how large RSS and VSZ get in “ps” output on Unix systems. If it gets to 12GB, you know you need about 2GB for other stuff; increase that a bit to be on the safe side, and scale appropriately.
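
For example, on Linux (assuming a single mysqld process):

    # RSS and VSZ of the running mysqld, in kilobytes
    ps -o rss,vsz -p $(pidof mysqld)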

The third important memory consumer is the OS cache. You want to bypass the cache for your Innodb tables, but there are other things you need the OS cache for – MyISAM tables (the mysql database, temporary tables, etc.) will need it, as will .frm files, binary logs, and relay logs. Innodb transactional logs also like to be cached – otherwise the OS will need to do reads to serve writes to these log files, as IO to the log files is not aligned to the page boundary. Finally, you likely have some system scripts/processes running which also need some cache. The number can vary a lot depending on the system workload, but generally I’d say values from 200MB to 1GB are good estimates.
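
You can see how much memory the OS is currently spending on file cache on Linux:

    free -m    # the "cached" column is the file cache, in megabytes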

Eliminate Double Buffering – This is again very important for the buffer pool size choice. You do not want the OS to cache what Innodb is caching already. The Innodb cache is more efficient than the OS cache because there is no copying, because of adaptive hash indexes, the ability to buffer writes, and a number of other factors, so you want to make your OS give way to Innodb. Note it is not enough to block the OS from swapping – as I already mentioned, the OS cache will be needed for other things, and if you do not make Innodb bypass OS buffering, Innodb tablespace IO will wipe out the cache because it typically makes up most of the IO on the system. On Windows you do not need to do anything. On Linux, FreeBSD and Solaris you need to set innodb_flush_method=O_DIRECT. On other operating systems you may be able to select it at the OS level, but make sure to do it. There is a small niche case where it hurts – when you do not have RAID with BBU and your workload is very write intensive – but there are always exceptions.
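
The my.cnf line in question:

    [mysqld]
    innodb_flush_method = O_DIRECT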

Make your OS Happy – The other challenge you may have is making your OS happy and keeping it from swapping out the MySQL process or other important processes to make room for file cache. The OS may find it unfair that there is a MySQL process consuming 95% of memory while the cache is just a couple of percent. Some people try to solve this by disabling the swap file, but it can hurt another way – the OS may kill the MySQL process when running out of memory (or thinking it is running out of memory), which may happen, e.g., in case of an unexpected connection spike. Plus not all kernels work quite well with swap disabled, and there are other reasons against it. For some people having no swap works, though they usually play it safe, keeping enough “free” memory used as cache and buffers. Kevin Burton wrote a good post about his experiments.

Depending on the OS you may want to make different VM memory adjustments. You may want to make MySQL use Large Pages for allocating the Innodb Buffer Pool and a few other buffers, which may have other performance benefits as well. Tuning your VM to be less eager to swap things out with echo 0 > /proc/sys/vm/swappiness is another helpful change, though it does not always save you from swapping. Some specific kernel versions may have other settings to play with. Finally, you can try locking MySQL in memory using --memlock – just be careful, as in case of a memory usage spike you may end up with the MySQL server being killed by the OS instead of it temporarily swapping a few things out.
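
A sketch of those knobs on Linux (values illustrative; large pages also require the kernel hugepage pool to be configured first):

    # make the kernel less eager to swap (not persistent across reboots)
    echo 0 > /proc/sys/vm/swappiness

    # in my.cnf, to use large pages and/or lock mysqld in memory:
    #   [mysqld]
    #   large-pages
    #   memlock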

Two things to note about OS swapping. First, looking at “swap used” is not really helpful, because you do not know what lies in the swap – there are portions of both the MySQL process and other processes which you do not need during normal operation, so it is OK for them to end up in swap. Make sure, however, that active swapping is not happening, i.e. your VMSTAT “si/so” columns are zero on Linux. A couple of swaps per minute would not hurt much, but if you’re doing 100+ pages per second of swap IO, you’re in trouble.
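
For example:

    vmstat 5    # sample every 5 seconds; watch the si/so columns under "swap"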

Second, many people think: who cares if some of the buffer pool is swapped out – I would need one IO to fetch a page if I had a small buffer pool, and now I need one IO to fetch the page data that was swapped out. This is very wrong thinking. First, the OS has to swap out even clean pages from the buffer pool, while Innodb can simply discard such pages under memory pressure. What is more important, Innodb algorithms are finely tuned around what is in memory and what is on disk – for example, Innodb tries to avoid holding latches while doing IO, while locks may be held while accessing pages in the buffer pool. If a page turns out to be in swap rather than in memory, you will have other threads piling up, waiting on the same lock until IO completion, while they may well have all the data they need to proceed right there in the Innodb buffer pool.

I guess these clarifications work better than a 70-80% recommendation, and of course you should not stick to just 50GB if you have 64GB of memory – values of 56-60GB would likely make more sense; and depending on a bunch of other settings, 12GB or 14GB may well be a good choice for a 16GB box, though I would take care with values close to 14GB, as there is not much room left for other things.

P.S. I only described Innodb Buffer Pool selection for a dedicated Innodb system. If you have a fair amount of MyISAM, Archive, PBXT, Falcon or other storage engines, you get into a complex balancing game on top of all these factors.


34 Comments
tom

Do you have any recommendation about buffer settings for people using both InnoDB and MyISAM on the same server?

Is there any rule of thumb to follow in this case? In my case I prefer to keep logging data in MyISAM tables because they are more compact (use much less space) and the insert speed is higher (in my case).


So when not using MyISAM, it would probably be helpful to reduce all its relevant buffers. Could you post more details about it?

zhang

This may be a little bit off topic.
With 5.1, if your innodb_buffer_pool is FULL (indicated by “Free buffers”) and you still don’t have everything cached (buffer pool hit rate < 1000), your response time will be 2 or 3 times longer than the response time you got while building up the cache.
For example, say you have 100 processes reading off the database: when you first start the server (cache is empty) and start the processes, you may get an average response time of 50ms. Then it slowly drops to 20ms while the cache builds up (free buffers decrease, cache hit ratio increases). But once your cache is full and your cache hit ratio is at 95%, all of a sudden the response time jumps to 60ms or even 80ms.

Baron Schwartz

This is easy to explain as flushing dirty pages to make room for pages that aren’t in the buffer pool. Watch the number of *dirty* pages, and compare that to innodb_max_dirty_pages_pct.
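
For reference, one way to watch that counter from the shell:

    mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';"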

Is this specific to 5.1? Have you seen it behave differently on 5.0? I would expect to see the same behavior in 5.0, but tell me if I’m wrong.

zhang

That’s exactly the reason I brought it up here. 5.0.67 and 5.0.67 with the Google patch do NOT have this behavior. The response time stabilized after the buffer pool was full.
Seems like 5.1 has problems managing eviction.

Baron Schwartz

I haven’t seen this in the real world, but I don’t disbelieve you. If you want help solving this, I am pretty confident we can find out what is going on and either find a workaround or discuss the possibility of a patch. You could also try a bug report via MySQL, but I don’t know how urgent this is for you to solve.

Cristian

My server configuration
2GB RAM / Intel quad-core

PROBLEM: I start MySQL 4.0.12 with innodb_buffer_pool_size = 128.
After starting the server, my.cnf becomes my_cnf.bak.

But randomly the configuration gets reset and all my previous config is set to the default values, including innodb_buffer_pool_size, which is set to 5MB or so.

This has only been happening since this week.

Do you know what to do ?

I am kind of very stressed due to this.

I can’t migrate to another version of MySQL right now.

Alex

I now realize I know absolutely nothing about MySQL. Thanks for all the info (book and blog).

SQL Dan

Is the INNODB_BUFFER_POOL shared across all databases or allocated per database? I’m getting ready to run a MySQL server in a VMware environment with a not very busy database – maybe 10 concurrent users max, with about 5000 records being added per year per database. I was thinking about running the virtual OS with 2GB of RAM.

Robert

What about the Windows environment? You said it is not critical because “On Windows you do not need to do anything”. Just one kind suggestion that would also be good for the commenters above, SQL Dan and Cristian.
Obviously you have your mind set on super-mega-enterprise structures with 2000 users per minute, and I understand you, but what about a small MySQL InnoDB database with just 20 tables and an ibdata1 size of (currently) 70MB, in a little office with fewer than 15 users?

Can you kindly suggest a safe memory amount and innodb setup for that memory?

Thank you very much if you have the time to give us a tip on a good setup for our “small” environments.

Robert

Robert

I’m asking for your tip to prevent another issue like the one that happened to our friend.

Please note that I landed on your interesting blog because the Intel mainboard of a friend “died” in the middle of a working day.

For an unknown reason… the crash recovery leads to nothing. The disks were RAID 5 and we did a RAID data recovery for him. Almost everything copied fine, especially the mysql directory…
Cool (I thought): now with a working MySQL installation + the exact log file size + all the db folders and a simple “mysqld --innodb_log_file_size=xxxxxxxxx --innodb_force_recovery=6” (where xxxxxxxxx is the physical size in bytes of the logs), I can dump the tables and restore them really soon… It is not like that…
Bad story… their db operations started in January 2007, the server died on 20 January 2010, but dumps extract data only up to September 2009… then mysqld crashes. I suppose there is something bad inside the ibdata file. Because it is only 50MB, I copied it and gave it a look with Notepad++. I can see the missing records… but there is no way (at least with our knowledge) to dump them.
To be fair, it looks like they are “in a strange position” inside the ibdata file compared with those that dump fine before the crash.

And this is the reason why I worry about the memory and innodb files and memory setup: the ib_logfiles on their server, and also on ours, are 10MB, while ibdata is 50MB; there are 6 users, and the server was Win 2003 Server with 2GB RAM (Xeon and a RAID 5 system). Can this be the reason for a failed crash recovery?

Our server is similar.

Thank you for any tip and/or answer… and for any help if you have a suggestion for the case above.

Robert

Kevin

I’m a bit confused about innodb_buffer_pool_size. You mention that you should set this as large as your database, but our database is about 30GB worth of InnoDB tables. I assume most of the common servers out there are only running 8GB to 16GB of RAM. So in order for InnoDB tables to run efficiently, my database cannot exceed 16GB?

I’m pretty sure that I’m missing something here.

ronald

Hi Peter.

Thanks for the info; perhaps you could offer some advice for an innodb performance issue I am having.

I’ve just inherited a WebRT ticketing MySQL DB which is 123GB in size; there are a lot of images in the db. Historically I’ve worked mainly with smaller clustered DBs with NDB, so an InnoDB db of this size is new territory for me. I’ve run mysqltuner, and below is some of the output:

[!!] Highest connection usage: 100% (101/100)
[!!] Query cache prunes per day: 26846
[!!] Temporary tables created on disk: 38% (1M on disk / 4M total)
[!!] InnoDB data size / buffer pool: 123.1G/8.0G

Variables to adjust:
max_connections (> 100)
wait_timeout (< 28800)
interactive_timeout (< 28800)
query_cache_size (> 64M)
tmp_table_size (> 128M)
max_heap_table_size (> 128M)
innodb_buffer_pool_size (>= 123G)

The server itself has 16GB RAM; surely I cannot increase innodb_buffer_pool_size beyond the memory on the system.
Any thoughts on increasing performance/decreasing disk I/O, or redesign ideas?

Regards

Ronald

Paul

Hi Peter,

I have a dedicated server with 2GB RAM running InnoDB and a large database that is roughly 10GB. How high should I set innodb_buffer_pool_size?

Thanks,
Paul

jeff hill

Ronald,

Correct, you cannot set it higher than the memory you have, unless you have a swap file larger than that – which you could do, but it would be VERY inefficient. With that setup I would make sure I had good indexes on my tables. Without any MyISAM tables I would set the buffer pool somewhere between 8-10G (probably closer to 8, and then monitor system memory usage before raising it more, because I’m careful like that).

Paul,

It completely depends on what you do with the database, and whether you have MyISAM as well. You must leave at least 10% of your memory for your system to run smoothly, especially with that small amount of RAM.

If I were you I’d buy more RAM. At least double it, unless you’re on a 32 bit system, and then I would switch to a 64 bit system.

If you have to run with what you have, and you have no MyISAM, then you should be safe setting it at 1.5G, although you will have to do the math on how many connections you have at once and how much memory each connection takes up… I’m guessing you don’t have a lot of connections.

Peter,

Don’t mean to step on your blog; it just seems you get a lot of the same type of questions, so I thought I’d reply… probably won’t happen again.

stinky

First of all – great information and great work Peter et al.

I now know we have a major issue with our innodb_buffer_pool_size configuration…..

Because we are using managed virtual services, we need to buy memory in chunks. So are there any stats – from ‘show innodb status;’ I assume – that I can use to know whether we are making headway with the increases we will be making (by renting more memory and allocating it to innodb_buffer_pool_size)? Of course we can do benchmarking to know when something is improving, but it is always good to fall back on stats from the system. We need to strike a balance between the size of the buffer and the cost of memory.

Less importantly, when innodb_buffer_pool_size is greatly increased (i.e. 100 times), is there anything else that we should consider? The main thing I can find is that innodb_log_file_size should be increased to about 25% of innodb_buffer_pool_size.

Thanks again,

Nick Whalan

Sridhar Subramaniam

I have a machine with an Intel(R) Xeon(R) CPU 5160 @ 3.00GHz and 6GB RAM, and MySQL with Innodb tables.
About 100MB of data is available. I have set a normal configuration in the ini with query_cache=64M and:
innodb_additional_mem_pool_size 16M
innodb_flush_log_at_trx_commit 0
innodb_log_buffer_size 8M
innodb_buffer_pool_size 1G
innodb_log_file_size 24M
innodb_thread_concurrency 16
innodb_lock_wait_timeout 120
max_heap_table_size 64M
max_allowed_packet 16M
query_cache_limit 2M
query_cache_min_res_unit
slave_net_timeout
innodb_data_file_path ibdata1:10M:autoextend
innodb_file_io_threads 4
thread_concurrency 8
innodb_max_dirty_pages_pct 90

When I run the package to generate a report, it takes 15 seconds to finish the process, but normally in PARADOX (with the same kind of data) it takes only 3 seconds. Can anybody suggest how to fine-tune the ini configuration? Whatever changes I make to the innodb parameters, the result stays the same. Even when I set innodb_buffer_pool_size = 48M instead of 1G the result is the same. The only difference I could find is that when I set query_cache = 0, the process time increases to 20 seconds instead of 15, so I set query_cache=64M again. Can anybody suggest how to adjust the configuration to keep my process time the same as in Paradox?

Thanks in advance

Furkan Kuru

Hello,

My MySQL server is heavily loaded – now 300 qps on average.

It uses 50% CPU on average and just 700MB of RAM. My server has 8GB, and over 3GB of it is free. The slow query log seems fine; there are very few entries and they are not frequent.

I want to be sure that it is returning cached results and does not touch the disk unnecessarily.

I think the Linux OS caches the innodb file, but can I rely on that? And is there any good practice to lower CPU usage through buffering or caching?

innodb_buffer_pool_size is set to the default value (8MB).

I have InnoDB, MyISAM and Memory tables mixed.

Here is an output from a tuner script

INNODB STATUS
Current InnoDB index space = 238 M
Current InnoDB data space = 294 M
Current InnoDB buffer pool free = 0 %
Current innodb_buffer_pool_size = 8 M

KEY BUFFER
Current MyISAM index space = 113 M
Current key_buffer_size = 192 M
Key cache miss rate is 1 : 63
Key buffer free ratio = 74 %
Your key_buffer_size seems to be fine

QUERY CACHE
Query cache is enabled
Current query_cache_size = 256 M
Current query_cache_used = 19 M
Current query_cache_limit = 1 M
Current Query cache Memory fill ratio = 7.64 %
Current query_cache_min_res_unit = 4 K
Query Cache is 28 % fragmented

sedat

Hi there – has anyone else seen errors like “incorrect information in file ..” when changing innodb_buffer_pool_size and innodb_log_file_size? I see them and couldn’t solve this issue so far. Do you have any idea how to solve this?

powpow

hi there,

Thanks in the first place for your in-depth background information.

My question: I want to tune a big database. We recently upgraded to MySQL 5.5.8 on FreeBSD 7.1.
The main db is innodb; MyISAM is used for the MySQL system tables (small-sized).

The production innodb database now occupies about 50GB of filesystem space; the dump is about 19GB. I know that innodb ibdata files never shrink, so I have to drop and re-import the data to get rid of this.
The host has 16GB of RAM, most of which would be available to MySQL because there is no other productive service running.

So far I have learned (thanks to your blog!) that I definitely want to increase innodb_buffer_pool_size (which is set to just 386MB).
My question is: would I get an additional performance benefit if I ‘defragment’ the ibdata file by dropping/importing, or would this be a useless exercise?

Thanks in advance!
harri

Jochen Lillich

@sedat:

If you change innodb_log_file_size, you’ll have to remove the old log files manually so MySQL can create new ones. See also http://www.mysqlperformanceblog.com/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/comment-page-1/#comment-390952
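
A sketch of the procedure (assuming a clean shutdown and the default /var/lib/mysql datadir – adjust paths for your setup):

    # 1. stop MySQL cleanly
    /etc/init.d/mysql stop
    # 2. move the old log files aside (safer than deleting until verified)
    mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
    # 3. change innodb_log_file_size in my.cnf, then start MySQL;
    #    it will create new log files at the new size
    /etc/init.d/mysql start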

Best regards,
Jochen

Prem

I have a Linux machine with 16GB RAM. I have installed MySQL and it shows innodb_buffer_pool_size=1173M.
I tried increasing it to 2048M and restarted MySQL. When I checked the status of MySQL, it was not running.
If I go back to 1173M, it functions properly again.
Any troubleshooting tips would be great.

David

Prem: Sounds like you have a 32-bit version of MySQL installed, likely because you have a 32-bit Linux. MySQL cannot use more than 2GB of RAM on a 32-bit platform. The 1173M buffer pool, plus the other buffers you have (or their defaults) and overhead, are not letting you go much higher.

Alexander

Doesn’t a 32-bit PAE Linux kernel resolve the problem with the 2GB limit?

Wes

@Alexander,

PAE will solve the problem of a 32-bit system addressing more than 3.5G of system RAM, creating a lot more space for distinct process working segments. PAE by itself does not solve the problem of 32-bit programs accessing more than 2G of useful RAM. That’s a totally separate question that involves some trickery from the people writing the software. Normal 32-bit applications can only “see” 32 bits’ worth of system RAM, regardless of how much the system actually has.

@Peter,

I’m playing around with buffer pool settings in a staging environment (and..okay, maybe a bit in prod). I set my buffer_pool size to 40000M on a box with 55G. I was expecting to see MySQL immediately allocate the entire 40G chunk so that it was a guaranteed allocation. That doesn’t appear to be the case. Could you be so kind as to elaborate on how innodb preallocates buffer space?

Best,

Wes

vhilly

Question: I have an 8GB RAM server and an almost 20GB (as of now) database, and I allow 500 max connections. What is the right MySQL configuration for me? This dedicated MySQL server is for my OTRS application.
The problem is my database always has a high load average and CPU usage. I don’t know what to do.

Mario Splivalo

Vhilly – you should set up some kind of monitoring/trending to see what exactly is going on with your database. I’d go with Munin, as it is super-simple to set up and there are decent MySQL plugins for it (if you’re running the latest Ubuntu you’ll find a neat collection of plugins that will show you all the various MySQL internals – InnoDB included).

It is always wise to give the database server all the memory you can. If you have an 8GB box, give 6GB to MySQL; 2GB should be enough for the operating system and various caches. Things get a bit more complicated if you still use MyISAM (which you really, really should not).

Peter, referring to your initial comment that innodb_buffer_pool_size should be cca 10% larger than your dataset size – this is a bit misleading. I’ve seen setups where this is blindly followed, and they end up with 60GB of innodb_buffer_pool on 16GB RAM boxes. A better suggestion would be: if you’re only using InnoDB (which you really should!), give MySQL cca 80% of the RAM your box has, making sure you have a gig or two for the OS and other thingies. (I know it is not nearly as simple as that, as you should count in max_connections, various sort buffers, etc., etc…)

Sachin

Hello,
In my MySQL my.cnf I set the innodb buffer pool size to 8GB, but INNODB MONITOR OUTPUT shows total memory allocated as 9GB. Why is this happening? Why does it not show the exact figure I set in the my.cnf file?

Cena

I followed the steps given on this page. Now I am in a dilemma as to whether I am getting the best performance from my configuration: http://www.rathishkumar.in/2017/01/how-to-allocate-innodb-buffer-pool-size-in-mysql.html According to that page I should decrease the buffer pool size, but I am still confused.

ecommerce

Not bad, Peter. Thanks for sharing this.