The more I work with MySQL performance optimization, and optimization for other applications, the more I learn to put less faith in common sense, or the common sense of documentation writers, and to do more benchmarks and performance research. I just recently wrote about rather surprising results with sort performance, and today I’ve discovered that even read_buffer_size selection may be less than obvious.

MySQL read_buffer_size

What do we generally hear about read_buffer_size tuning? If you want fast full table scans of a large table, you should set this variable to some high value. Sample my.cnf files for large-memory boxes recommend a 1M setting, and the MySQL built-in default is 128K. Some people with a lot of memory and few concurrent connections set it as high as 32M in hopes of better performance. Let’s see if that is really the best strategy:

To check things out I’ve created a table with a simple structure.

I populated it with 75M rows to reach 4GB in size, so the workload would be IO-bound on a box with 2GB of memory.
The box was running Fedora Core (i686) and had 2 Xeon CPUs and 2 drives in RAID0.
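
The post does not preserve the actual table definition or load script, so here is a minimal hypothetical sketch of how such a table could be created and bulk-populated; the table name, column layout, and row width are all assumptions, not the original schema:

-- Hypothetical schema; a narrow MyISAM table of this shape reaches
-- roughly 4GB somewhere around 75M rows.
CREATE TABLE scan_test (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  val CHAR(48) NOT NULL
) ENGINE=MyISAM;

-- Seed one row, then double the row count on every pass until the
-- table is large enough to be IO-bound.
INSERT INTO scan_test (val) VALUES (REPEAT('x', 48));
INSERT INTO scan_test (val) SELECT val FROM scan_test;  -- repeat until ~75M rows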

I’ve used the following query to perform full table scans, with 3 runs averaged for each setting. MySQL 5.1.21-beta was used for the tests.
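
The original query is not preserved in this copy of the post; as a stand-in, something like this forces a full table scan (using the hypothetical table from the sketch above, with a predicate on a non-indexed column so no index can be used):

-- Aggregate over a non-indexed column: MyISAM has to read the whole
-- data file, which is exactly what we want to benchmark.
SELECT count(*) FROM scan_test WHERE val <> 'no-such-value';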

Here are the results I’ve got:

read_buffer_size impact on scan performance

read_buffer_size    Time (sec)
8200                45.2
16K                 44.8
32K                 45.6
64K                 43.4
128K                43.0
256K                51.9
512K                60.8
2M                  65.2
8M                  66.8
32M                 67.2

8200 bytes is the minimum size for read_buffer_size, which is why we start from this value.
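
As a side note, you do not have to guess at the limit: the server adjusts out-of-range values itself (to 8200 bytes here, per the numbers in this post) and records a warning, which is easy to check from SQL:

-- Request a too-small buffer; the server clamps it to the minimum.
SET SESSION read_buffer_size = 1024;
SHOW WARNINGS;                       -- reports the adjusted value
SELECT @@session.read_buffer_size;   -- shows the effective minimum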

As you can see, the results look really strange. Performance indeed grows by a few percent as you increase the buffer up to 128K, but after that, instead of improving any further, it drops sharply, running 50% slower at the 2MB size. Past this value it continues to degrade slowly all the way to 32M.

Why is this happening? I have not spent enough time to come up with a good explanation. It could be that the OS has to split large requests into multiple ones when submitting them to the device, which slows things down, or it could be something else. But the fact remains: on some platforms, for some workloads, a large read_buffer_size may hurt you even on large full table scans. (I wrote about some other cases where it hurts a while ago.)

Let us do one more test: what if we try a smaller table, one which fits in the OS cache?

read_buffer_size impact on an in-memory table scan

read_buffer_size    Time (sec)
8200                4.15
16K                 4.15
32K                 4.12
64K                 4.11
128K                4.11
256K                4.12
512K                4.25
2M                  4.49
8M                  4.54
32M                 4.58

As you can see, the difference in percentage terms is smaller, only about 10% between the best and worst numbers, but the best value remains the same, 128K, and 32M is again the worst value. This means it can’t just be request-split issues, or at least not only that.

Note: in this case I’m really curious how much the values change on different platforms (OS and hardware) as well as different filesystems, as these could all be involved here. Different table structures (i.e. longer rows) may also affect the results, not to mention tables with fragmented rows, where the IO pattern can be a lot different.

The degree of parallelism is another important variable which was not considered: small buffers with high concurrency may mean more seeks and so worse performance, or maybe not – something to test as well.

In general, this just reconfirms one basic thing: do not just grab someone else’s “best configuration” from the web and apply it to your application if you’re interested in the best performance. Experiment with realistic load and realistic data (including fragmentation) to find what works best for you.

22 Comments
Jay Pipes

As Monty wrote, the performance degradation is caused by the flip between regular malloc() and mmap() at 256K (the default threshold):

http://bonglonglong.com/2007/09/06/read-buffer-performance-hit/

-j

Jay Pipes

Peter,

I understand you. That makes sense. Couple things:

a) A couple of typos: the default value for read_buffer_size is 128K, not 128M… and there is another place where you say 128M when I think you mean 128K…

b) What is the CPU cache for this machine? I agree that since this is a large table and the read buffer must be used repeatedly, values of read_buffer_size which best fit the CPU L1/L2 cache would likely mean better performance. It would be nice to have other folks with different processor caches do a similar test…

-j

Sinisa Milivojevic

Peter,

What you discovered definitely looks like an unnecessary slowdown. However, in order to make the measurement more scientific, I would propose that you make the table InnoDB and use the O_DIRECT method of accessing files, or mount the filesystem on Solaris (or HP-UX) appropriately. It would be nice to see whether this is perhaps caused to some extent by OS caching. Another thing to try is to use CHAR instead of VARCHAR.
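
For reference, the O_DIRECT suggestion above corresponds to a single my.cnf line (available for InnoDB on platforms that support it; check your server version before relying on it):

[mysqld]
innodb_flush_method = O_DIRECT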

Roland Volkmann

Hi Peter,

your box has RAID0, and I guess it has a stripe size of 128 KB.

When I was testing file IO performance on RAID0 systems using Windows XP some time ago, I found a strong correlation between stripe size / buffer size and performance. So with MyISAM file IO it might be the same thing.

With best regards,
Roland.

Sinisa Milivojevic

Peter,

O_DIRECT does use a different I/O path, but that is irrelevant. What is relevant is that even InnoDB tables are read using the read buffer, and that such a combo would dodge the consequences of OS caching … I think I will try to do myself what I suggested, as soon as I find some time ……… ;o)

Roland Volkmann

Hi Peter,

your theory about larger buffers is valid for sequential IO only. With random IO, large buffers will result in a lot of unnecessary read-ahead data. And if the file system hasn’t written the file in consecutive physical segments (fragmentation), then the theory becomes much more complicated …

Apachez

11. Also, the filesystem’s own buffers will be used, which will lower the impact of random access (depending on the size of the tables etc).

Regarding the 128K optimization, it sounds very much like a sync is needed for how the data travels from the hard drive into the CPU. Pretty much like why a 1:1 CPU:RAM FSB ratio is often better than a 1:2 ratio or some other variant.

Jay: Any progress on a tool which can find “optimal values” for a given system?

Peter: I disagree with you regarding example configurations. By using an example configuration from a given system you can save a lot of time: if you don’t get a perfect configuration, you at least get a far better one than the outdated examples which are included with MySQL itself. It sounds more like you want to protect your consulting business of helping others find their optimal values with such a statement as the one you gave at the end of this article.

Most people involved in security should by now know that security by obscurity doesn’t work, and I think the same applies to finding optimal values for different setups. Optimization by obscurity doesn’t work either – so let’s share our findings and configuration examples 🙂

Son Nguyen

Wow, I’m glad I found this benchmark instead of wasting money investing in more memory (I’m just beginning to see swap on one of our servers).

Apachez

Is it just me, or how do you properly set the buffers to 128 KB?

I put this in my my.cnf:

key_buffer_size = 256M
sort_buffer_size = 128K
read_buffer_size = 128K
read_rnd_buffer_size = 128K
join_buffer_size = 128K
myisam_sort_buffer_size = 128K

and restarted the mysql process. The result when running “SHOW VARIABLES LIKE ‘%buffer%’;” is the following (only including the rows corresponding to the my.cnf settings):

key_buffer_size 268435456
sort_buffer_size 131064
read_buffer_size 126976
read_rnd_buffer_size 126976
join_buffer_size 126976
myisam_sort_buffer_size 131072

The key_buffer_size shows the proper “268435456”, which is 256*1024*1024, but what about the rest? Shouldn’t they all show 131072, which is 128*1024, or am I missing something here?

Apachez

I have now filed the above as bug http://bugs.mysql.com/39634
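
For anyone wanting to reproduce the comparison above, the requested-versus-effective values can be pulled in one statement; this does not explain the discrepancy, it just makes it easy to see:

-- Compare effective values against what my.cnf requested.
SELECT @@global.key_buffer_size   AS key_buffer_size,
       @@global.sort_buffer_size  AS sort_buffer_size,
       @@global.read_buffer_size  AS read_buffer_size,
       @@global.join_buffer_size  AS join_buffer_size;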

Douglas Manley

An interesting evolutionary consequence of this setting is that it is used by the MEMORY table engine as the allocation increment size (minus a few bytes). This means that a MEMORY table with *one row* will take up, essentially, “read_buffer_size” bytes. Each further increment adds another “read_buffer_size” bytes to the table. The table will not change in size again until all of that allocation is used by new rows; then it will grow in increments again.

This is not documented anywhere as far as I can tell, and I only found it after banging my head against a wall while looking through the MySQL source code.
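
A quick way to probe for this behavior (table name hypothetical; if the allocation pattern described above holds, a one-row table should already report roughly read_buffer_size bytes):

-- One row in a MEMORY table; data_length should reflect the full
-- allocation increment rather than the single row.
CREATE TABLE mem_probe (id INT) ENGINE=MEMORY;
INSERT INTO mem_probe VALUES (1);
SELECT data_length FROM information_schema.tables
WHERE table_schema = DATABASE() AND table_name = 'mem_probe';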

Junseok, Bae

Thank You!

And could I ask a question?

I’m a novice and have no idea what to do in my situation…

I use Windows + WampServer (x64, APM all self-updated) and InnoDB only.

I set the key_buffer_size value to 32MB (in [mysqld]), but it is slightly slower than when I use 512MB.

When there’s no use of MyISAM, does key_buffer_size still have an effect?

Anyway, thank you for the great articles!

Harish Naik

Apart from MyISAM, read_buffer_size is used by:

queries using ORDER BY
nested queries
bulk inserts into partitions

But it is not used by InnoDB specifically, as stated in the comments above.
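
If those operations do benefit from a larger buffer, it can be raised for just the session doing the heavy work instead of globally; a minimal sketch:

-- Raise the buffer only for this session's bulk work, then restore
-- the default so other connections keep the small global value.
SET SESSION read_buffer_size = 2 * 1024 * 1024;
-- ... run the bulk insert or ORDER BY heavy statement here ...
SET SESSION read_buffer_size = DEFAULT;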

Monte

Thanks for posting this. The disk is the lowest common denominator, so 128K makes sense for typical hardware.