I vaguely recall a couple of blog posts recently asking something like “what’s the formula to compute mysqld’s worst-case maximum memory usage?” Various formulas are in wide use, but none of them is fully correct. Here’s why: you can’t write an equation for it.

The most popular equation, of course, is the one in the error log after a crash: “It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections”. This was never true — not even when ISAM and friends were the only storage engines, which was before most of us remember. And at a minimum, most servers these days need to consider the added memory for InnoDB: specifically, the buffer pool. (There’s also an innodb_additional_mem_pool_size but it’s typically set pretty small, like 20M).
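To make the crash-message arithmetic concrete, here is what that formula yields for some made-up settings (the numbers are placeholders, not recommendations); the point is only that the result ignores InnoDB entirely:

```sql
-- Hypothetical settings: key_buffer_size = 384M, read_buffer_size = 128K,
-- sort_buffer_size = 2M, max_connections = 151.
SELECT (384 * 1024 * 1024)
     + ((128 * 1024 + 2 * 1024 * 1024) * 151) AS naive_estimate_bytes;
-- About 705MB, which says nothing about a multi-gigabyte buffer pool.
```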

But even that’s not enough. Staying with InnoDB, we also need to count the data dictionary. This can take many gigabytes on servers with a lot of tables. We’re trying to fix that, but it’s unbounded in a non-Percona server. And InnoDB has other needs that it doesn’t actually promise to fit within innodb_additional_mem_pool_size, by the way. If it needs more it will allocate it. innodb_additional_mem_pool_size is just a handy way to avoid repeated malloc() calls by doing it all up front in one call.

And what about the query cache and the table cache and the size needed for each thread (thread stack, etc)? Those take memory too. The query cache can use quite a bit if it’s configured to. There are a bunch more global buffers to think about; that’s just a partial list. And there are some “global buffers” that aren’t really global. Suppose you set sort_buffer_size to 128K — I can quite well connect and say “set @@sort_buffer_size := 1 * 1024 * 1024 * 1024;” and my sort_buffer_size is now 1GB. And a single query may cause multiple sort buffers to be allocated. You can’t control that or calculate it at a global level.
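A sketch of that problem (t1, t2, and the columns are hypothetical):

```sql
-- The global setting may be 128K, but any session can override it:
SET @@session.sort_buffer_size := 1 * 1024 * 1024 * 1024;  -- 1GB, just for me

-- And a single statement can need more than one sort buffer, e.g. a UNION
-- of two results, each sorted on a non-indexed column:
(SELECT col FROM t1 ORDER BY unindexed_col LIMIT 10)
UNION ALL
(SELECT col FROM t2 ORDER BY unindexed_col LIMIT 10);
```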

But let’s keep going! Any stored code (triggers, stored routines, etc) uses memory to execute; so do temporary tables. The following query creates at least 2 temporary tables:
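The example query itself did not survive in this copy of the post, but a query of the following shape does the trick: the derived table is materialized into one temporary table, and the GROUP BY (with an ORDER BY on a different expression) typically needs another. The table and columns here are hypothetical:

```sql
SELECT d.initial, COUNT(*) AS cnt
FROM (SELECT LEFT(name, 1) AS initial FROM people) AS d  -- temp table #1
GROUP BY d.initial                                       -- temp table #2
ORDER BY cnt DESC;
```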

You can have many of those per query. Sorting, grouping, DISTINCT, and UNION may (and often will) also create temp tables; the same applies to views, and probably some other things I’m forgetting.

Let’s not forget the memory needed to parse, optimize, and execute queries; mysqld creates internal structures for the query plan, for example, and keeps them around until the query is done. And there are prepared statements, which you can create and never deallocate.
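For instance (some_table is hypothetical), a statement prepared like this stays in server memory until it is explicitly deallocated or the connection closes:

```sql
PREPARE stmt1 FROM 'SELECT * FROM some_table WHERE id = ?';
SET @id := 42;
EXECUTE stmt1 USING @id;
-- Without this, the parsed statement lives on for the whole session:
-- DEALLOCATE PREPARE stmt1;
```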

Any more? Of course… I can also set a bunch of user variables — they use memory too. And they can be really big (but no bigger than max_allowed_packet). And keys for user locks! I set my max_allowed_packet bigger and ran the following on my laptop:
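The statements themselves didn’t survive in this copy of the post, but they were presumably along these lines (note that newer MySQL versions limit the length of GET_LOCK() names, so this applies to servers of that era):

```sql
SET @big := REPEAT('a', 100 * 1024 * 1024);          -- a 100M user variable
SELECT GET_LOCK(REPEAT('a', 100 * 1024 * 1024), 2);  -- a 100M lock key
```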

Now I’ve got a 100M user variable and a 100M lock key. Note how long it took to run these statements! And now mysqld is suddenly using 324M of memory, up from 124M.

I’m sure you can think of other things I’ve forgotten here (the manual page has a bunch). My point is there simply is no formula to compute the maximum possible memory usage, and there cannot be because there are some things that have no defined limits.

It’s not something that’s worth worrying about much, in my opinion. I think it’s far better to use a monitoring tool to watch the actual memory usage over time and see how it varies. When I’m initially setting up a my.cnf for a server, I usually calculate the Big Ones like the buffer pool, query cache etc; leave some memory for other needs; and see what happens. The results are workload-specific.
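As a very rough illustration of that sizing approach (every number below is a placeholder, not a recommendation), the Big Ones in a my.cnf might look like:

```ini
# Hypothetical 16GB dedicated server -- placeholders, not recommendations
[mysqld]
innodb_buffer_pool_size = 10G
key_buffer_size         = 256M
query_cache_size        = 64M
# Leave several GB of headroom for per-connection buffers, the InnoDB data
# dictionary, temporary tables, and the OS; then watch what actually happens.
```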

17 Comments
Peter Zaitsev

Baron,

Right. And there are a lot of fun things here. Prepared statements… stored procedures etc.

I think people should stop thinking they can put in some safe values so MySQL never runs out of memory – extreme stupidity or intent will cause MySQL to consume enough to start swapping and run out of memory. So it is better to speak about sane defaults for a given workload. I like to see VSZ monitored/graphed – this is a very good indication of how much memory MySQL really uses, allowing you to spot problems quickly.

Hi!

“I’m sure you can think of other things I’ve forgotten here”

obvious one…MEMORY tables.

One that seems likely, although I don’t know for sure – retrieving BLOBs/TEXT in your queries materializes the data in memory.

regards,

Roland

Shlomi Noach

Hi,

That part of the ‘standard’ equation, where it says:
max_connections*(thread_stack + join_buffer_size + sort_buffer_size + binlog_cache_size…)
is so misleading. In particular, join_buffer_size and sort_buffer_size are interesting to note:
sort_buffer_size is only used when sorting without an index
join_buffer_size is only used when joining without an index

So the ‘worst case scenario’ which is calculated here speaks of max_connections (say, 500?), all doing sorting without index use, all joining without index use…
Worst case scenario indeed!

I think it’s better to stick with realistic approximations, and hope to allocate MySQL a little *less* than you would usually allow for, so that in case of high load, there’s room to spare. Cases I’ve seen where MySQL was paging were disastrous. It’s best never to get there.

@Roland: while any single MEMORY table is limited by max_heap_table_size, I don’t believe there’s a limit to the number of MEMORY tables. Do you know of one?

Brice Figureau

What worries me more than the absolute maximum is that this maximum always increases on my servers. I mean I’m certainly hitting a MySQL/InnoDB memory leak, because whatever values I put in my my.cnf, I always end up, a few weeks after a restart, with 2 or 3 extra GB of memory used compared to when I first launched mysqld.
So far all my attempts to find where the leak is have failed (the issue is that I could never reproduce the leak in the lab, mainly, I guess, because it is difficult to reproduce exactly the same production traffic/workload). Unfortunately, in this respect MySQL is not really good at displaying information about where it spends all the memory it uses 🙁
If you Percona guys could produce a patch for memory tracking/accounting, that’d be fantastic!
Thanks for the really good articles as usual.

Hi!
@shlomi: No, I don’t think there is a specific limit to the number of MEMORY tables.

Arjen Lentz

@Brice: it’s relatively unlikely to be a memory leak, they have been hunted down with various OSS and commercial tools. Of course it’s still possible, but something that eats 2G over a few weeks…
Your innodb_additional_mem_pool is a good place to look for your “missing” memory (use SHOW ENGINE INNODB STATUS \G to see where it’s at)
Also do a “SHOW FULL PROCESSLIST” and see if there are any long-sleeping connections; but even without sleepers, if your appserver uses persistent connections, they tend to not always be perfectly “clean” when reset. Persistent connections with MySQL don’t gain you much anyway as it’s so fast to set up a new connection (just set thread_cache_size so that generally no new thread needs to be created).
Anyway, those are just a few ideas of where “missing memory” tends to hide in the real world.

Brice Figureau

@Arjen,
I don’t really know if it’s a memory leak, but it didn’t happen when we were running under 4.0. I reported this issue around 5.0.27 to MySQL, and they did indeed find one memory leak. Unfortunately this didn’t fix my issue.
We’re running with short lived connections, and with a wait timeout of 120s, since that was my first thought (I mean long lived connections from our game servers).

Here is my mem status:

Total memory allocated 8751883484; in additional pool allocated 20926208
Buffer pool size 458752

Which means, per your formula:

mysql> select 8751883484 - (458752*16384) - 20926208;
+----------------------------------------+
| 8751883484 - (458752*16384) - 20926208 |
+----------------------------------------+
|                             1214764508 |
+----------------------------------------+

That’s 1.2GB for 1.5 week of uptime, ouch.

We have about 300 InnoDB tables. But about 20 of them are truncated every 5s (not dropped), as they are used to compute online game rankings. Maybe that’s temporary tables ending up in the dictionary?

I’m not running a Percona build (yet, I should say; I’m in the process of building a Debian package out of the latest release).

I tried everything I could to diagnose this, including running with different debug allocators (like the Google perftools), or on top of valgrind. Unfortunately the server usually dies (because there is too much load) before giving much information.
So yes, maybe it’s the dictionary, because when we were running 4.0 (with much less load) we were a MyISAM-only shop.
Thanks,
Brice

Sheeri K. Cabral

Brice,

Monty recently posted that his company does NRE, or Non-Recurring Engineering. This is explained on http://askmonty.org/wiki/index.php/Commercial_offerings as “Provide bug fixes for MySQL that is not fixed in the standard MySQL release from Sun.”

I think it would be a good idea to send him an enquiry.

Shane Bester

Hi!

I reported this strange memory consumption formula as a bug after I got tired of questions about it:

http://bugs.mysql.com/bug.php?id=35661

Heiðar Eiríksson

Hello, and thanks for all the great information on this site.

I have a huge problem.
I was recently hired to “fix” a website that real-estate agents use to advertise properties for sale and/or rent.

We are currently using MySQL and MyISAM tables and we keep running out of memory so the server starts to swap and then crashes.

What makes me really scratch my head is that the database isn’t even that big… the property table only has about 15500 rows at the moment.

In the runtime information I can see that we have about 177.23 k select queries per hour, which accounts for 91.48% of the server load.

There are a bunch of variables that jump out as not looking good; here are a few:
Innodb_buffer_pool_reads 12
Handler_read_rnd 2,448 k (this increases steadily as the server runs)
Handler_read_rnd_next 105 M (this increases steadily as the server runs)
Created_tmp_disk_tables 9,208
Select_full_join 1,074 (the application is using indexes to join the tables, so this puzzles me)
Sort_merge_passes 2,602
Opened_tables 1,500
Table_locks_waited 74

hope this is enough information to help you help me.

The server has 6GB memory running on Linux
and a

Is it the queries that are to blame here, or could I optimize the server config some more… maybe use InnoDB, since the database is 3NF and uses a lot of selects with joins?

Any tips and help would be immensely helpful.

Thanks in advance,
Heiðar Eiríksson

yingkuan

Heiðar,

If you are using MYISAM then you should have any Innodb related memory usage like
buffer pool, redo buffer, additional mem pool etc.
you might want to first look into that, using either

show innodb status;
show variables like 'inno%';

Also, what are your key_buffer_size, sort_buffer_size, and query cache settings?
Actually, if you look at the first half of this article, Baron already provided a good list of things to look at when InnoDB is not used.

yingkuan

Erm, correction, I meant to say ‘you should NOT have any innodb related mem usage’

Simon J Mudd

While this thread is quite old, the problem of figuring these things out does not seem to have gone away. Also, many other RDBMSes predefine a certain amount of memory for “database usage” and then use only that memory for all tasks. It would in many ways be nice to be able to do the same with MySQL, as then the problem of the server suddenly going into swap and dying, or of the kernel OOM killer killing a suddenly growing mysqld, would go away. The counterpoint, of course, is that the threads running inside mysqld would need to expect the possibility of not having enough memory to complete their task, which would lead them to block until memory were freed, or perhaps some “deadlock” condition would be detected, which would kill a thread and free up memory, allowing other threads to continue.

Having recently bumped into this issue, of not quite having the right configuration and mysqld “jumping” into swap or just killing the server, I’d _much_ prefer to be able to limit this behaviour before there’s a problem, and probably to lock that memory into RAM as well. I guess, however, that doing that now would mean too much of a change in memory allocation strategy for it to happen. Most MySQL developers would blame the administrators for configuring mysqld incorrectly. I’d just prefer to say “you can use up to XXX GB” and have mysqld stick to that, using the other parameters which are configured now.