One of the most painful troubleshooting tasks with MySQL is troubleshooting memory usage. The problem usually starts like this: you have configured MySQL to use reasonable global buffers, such as innodb_buffer_pool_size, key_buffer_size, etc., and you have a reasonable number of connections, yet MySQL takes much more memory than you would expect, causing swapping or other problems.

This problem, simple on the surface, becomes a challenge with MySQL because there are no clear resource usage metrics available, so in most cases you do not know exactly where memory is allocated. This was not much of a problem in MySQL 3.23, when there were only a handful of places where memory could be allocated, but it is a much larger problem in MySQL 5.5 with the addition of user variables, stored procedures, prepared statements, etc., which can be memory hogs.

My intent with this post is twofold. I would encourage the MySQL team at Oracle, the MariaDB team, or the Drizzle team to take a look into solving this problem – a lot of people running MySQL in the wild will thank you. I also want to share some of the troubleshooting techniques I use.

You might also find this post by Alexander Rubin useful.

Plot Memory Usage

First, I would like to see MySQL memory consumption plotted. I use the “VSZ” column from “ps” output on Linux. It helps me understand how the memory allocation happens: does it grow slowly and never stop, which would correspond to a memory leak or resource leak, or does it spike at certain times which I may be able to correlate with some events?

If you can’t get a fancy graph quickly, you can use a simple script.
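Here is a minimal sketch of such a script (the log file path and the 60-second interval are arbitrary choices, and it assumes the server process is named mysqld):

#!/bin/sh
# Log a timestamp plus the VSZ and RSS of mysqld (in KB) every 60 seconds,
# so the numbers can be graphed later.
while true
do
    date >> /var/log/mysqld-memory.log
    ps -C mysqld -o vsz=,rss= >> /var/log/mysqld-memory.log
    sleep 60
done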

Check for Table Cache Related Allocations

There are cases when MySQL will allocate a lot of memory for the table cache, especially if you are using large blobs. It is easy to check, though: run “FLUSH TABLES” and see whether memory usage goes down. Note that because of how memory is allocated from the OS, you might not see “VSZ” go down. What you might see instead is that flushing tables regularly, or reducing the table cache size, keeps memory consumption within reason.
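A quick way to run this test from the shell (a sketch, assuming the mysql client can connect without prompting for credentials):

ps -C mysqld -o vsz,rss       # footprint before flushing
mysql -e "FLUSH TABLES"
ps -C mysqld -o vsz,rss       # footprint after flushing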

Connection Related Allocations
Another set of buffers corresponds to connections – orphaned prepared statements, user variables, and huge network buffers (which can grow up to max_allowed_packet per connection) are all connection buffers, so if you close the connection MySQL can clean them up. Killing connections (or stopping the related applications) and observing whether memory usage shrinks can be very helpful in diagnostics. If you have many applications, it might make sense to shut them down one by one so it is easier to understand which of the applications is responsible for the memory allocation.
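For example, to kill a single suspicious connection (the id 1234 is a placeholder taken from the SHOW PROCESSLIST output):

SHOW PROCESSLIST;
KILL 1234;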

If you have figured out it is related to connections and identified the application which causes excessive memory usage, you might look at how it uses MySQL to identify potential causes. Is it working with large blobs? Using user variables? Prepared statements? MEMORY tables? In a lot of cases you have to guess and test, as you cannot get information about how much memory a connection uses and for which purposes it is allocated. Reducing various per-connection variables is one way to test, though it does not cover everything. For prepared statements you might want to look at Prepared_stmt_count to see how many prepared statements are allocated on the server, and at Com_stmt_send_long_data to see whether the “send long data” feature is used, as it can increase the amount of resources prepared statements take on the server. There are no comparable variables showing how many user variables are allocated (or how much memory they use).
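Both counters are exposed as status variables:

SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count';
SHOW GLOBAL STATUS LIKE 'Com_stmt_send_long_data';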

Memory Tables

MEMORY tables can take memory. There are implicit MEMORY tables which are allocated for query execution; their size can be controlled by tmp_table_size, and they only exist for the duration of query execution, so it is usually easy to catch them. There are also explicit MEMORY tables, which you can create as either permanent or temporary. The max_heap_table_size variable allows you to limit the size of MEMORY tables (the limit applies to both implicit and explicit ones), but as there is no control over how many tables an application can create, it does not really allow you to restrict memory usage. For permanent tables it is easy: we can look at information_schema to see how much memory is being used by the current MEMORY tables:
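A query along these lines gives the total (a sketch; data_length and index_length are the sizes the engine reports):

SELECT SUM(data_length + index_length) AS memory_tables_bytes
FROM information_schema.TABLES
WHERE engine = 'MEMORY';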

This however will not cover TEMPORARY tables which some of your connections might have created and forgotten to remove (or are still using for processing). Of course, you will see these tables go away if you close the connection. In Percona Server you can do better, as you can query temporary tables too:
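For example, using the GLOBAL_TEMPORARY_TABLES table which Percona Server adds to INFORMATION_SCHEMA (a sketch; check the column names against your server version):

SELECT SUM(data_length + index_length) AS temporary_tables_bytes
FROM information_schema.GLOBAL_TEMPORARY_TABLES
WHERE engine = 'MEMORY';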

You can even go deeper and check which sessions have created which temporary tables (both in memory and not):
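A sketch against the same Percona Server table (session_id identifies the connection that owns each table):

SELECT session_id, table_schema, table_name, engine,
       data_length + index_length AS bytes
FROM information_schema.GLOBAL_TEMPORARY_TABLES
ORDER BY bytes DESC;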

Innodb Memory Usage

Finally, it is often helpful to check how much memory Innodb has allocated. In fact, this is often one of the first things I do, as it is the least intrusive. Run SHOW ENGINE INNODB STATUS and look for the memory information block in the “BUFFER POOL AND MEMORY” section of the output.
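One way to pull out just that block from the shell (a sketch; the number of context lines after the section header varies by version):

mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 20 "BUFFER POOL AND MEMORY"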

I am using the information from Percona Server again, which provides a lot more insight into how memory is used. I would note, though, that the output is a bit confusing, as “Total memory allocated” is really not the total any more: Innodb has moved to allocating memory from the operating system directly rather than from the additional memory pool (hence the 0). Such allocations are not seen in the “Total memory allocated” line, yet they are seen in some of the lines below. The most likely cause of runaway Innodb memory allocation is the dictionary cache, though there can be other reasons too.

Memory Leaks
There are many kinds of memory leaks, and I would say they are rather rare in MySQL. Most suspected memory leaks end up being some runaway resource usage, though real leaks can happen. Some memory leaks happen per connection and are gone when the connection is closed; others correspond to global memory allocations and result in memory usage that grows until the server is restarted. I would suspect a memory leak when you see memory usage growing which cannot be connected to any known resource use. For example, with global memory leaks you would see memory usage continue to grow even if you close connections and flush tables regularly. Another common thing about memory leaks is that, because the memory is allocated and then forgotten about, unless it consists of very small blocks it should simply be swapped out and never needed again. So if you see swap file used space gradually growing, and there are “swap outs” but a lot fewer “swap ins”, chances are it is caused by a memory leak.
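On Linux you can watch the swap-in (“si”) and swap-out (“so”) rates with vmstat, for example:

# Report memory and swap activity every 60 seconds; steadily non-zero "so"
# with near-zero "si" hints at leaked memory being paged out and never touched again.
vmstat 60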

Dealing with memory leaks is rather complicated, as good tools for detecting memory leaks, such as valgrind, are too slow to run in production. So the best thing to do in this case is to see whether you can create an isolated, repeatable test case based on your application which illustrates the memory leak, so that it can be found and fixed. This is where your MySQL support contract can be handy.

Conclusion
Understanding where MySQL can allocate memory helps us find the cause in most cases. It is not as straightforward as it should be, and I am very hopeful that future releases of MySQL, MariaDB, or Drizzle will bring improvements in this space, allowing us to see directly for what purpose memory is allocated and so detect all kinds of memory usage problems more easily.

7 Comments
Art van Scheppingen

Thanks for the insights!
Wouldn’t it be an idea to add a couple of these to the Percona Monitoring templates? I mean: I can’t remember seeing the memory tables in the templates, while they may give a lot of extra information.
I guess my working day will be quite occupied with some practice tomorrow. 😉

Przemek

I think it’s worth saying here that the InnoDB team has already made some effort to address memory usage problems: http://blogs.innodb.com/wp/2011/12/improving-innodb-memory-usage-continued/ . Results seem very promising, and maybe these fixes will also allow more precise memory usage monitoring.

Cédric

Thx for this useful post.
I like to use /proc/${PROCESS_NUMBER}/smaps for memory usage.

hickey.liu@gmail.com

Thanks a lot for summarizing the analysis methods; it helped me diagnose a bug where our servers keep increasing memory in a production environment. Great troubleshooting sharing on MySQL.

KRMR

Thanks a lot for the great explanation. I am seeing huge memory usage by the MySQL server even when nothing accesses it. I did check all the memory sizes, and everything seems low, but the total memory used is still high. I suspect it could be a memory leak. I am using Amazon’s RDS and I don’t have access to the VM where MySQL is running, so I can’t see whether any swap file exists and is growing. Is there anything more we can debug from the DB side about the memory leak?

Thanks.

Jason_asia

I met this problem too; it’s very hard to figure out what happened with the MySQL server…

james

In my case, memory only goes down if long stuck queries are killed. Has anyone else had the same experience?
I had asked here as well: https://forums.mysql.com/read.php?20,664418,664472