This is to follow up on my previous post and show the results for MySQL 5.5.8 and Percona Server on the fastest hardware I have in our lab: a Cisco UCS C250 server with 384GB of RAM, powered by a Virident tachIOn 400GB SLC card.

To see different I/O patterns, I used different innodb_buffer_pool_size settings: 13G, 52G, and 144G, on a tpcc-mysql workload with 1000W (around 100GB of data). This combination of buffer pool sizes gives us different data/memory ratios: with 13G, an I/O-intensive workload; with 52G, half of the data fits in the buffer pool; with 144G, all of the data fits in memory. When the data fits in memory, it is especially important to have big transaction log files, because in that case the main I/O pressure comes from checkpoint activity: the smaller the log, the more I/O per second InnoDB needs to perform.

So let me point out the optimizations I used for Percona Server:

  • innodb_log_file_size=4G (innodb_log_files_in_group=2)
  • innodb_flush_neighbor_pages=0
  • innodb_adaptive_checkpoint=keep_average
  • innodb_read_ahead=none
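
Collected into a my.cnf fragment, the Percona Server settings above would look roughly like this (values copied from the list; everything else left at defaults):

```ini
# Percona Server: redo log and flushing settings used in this benchmark
[mysqld]
innodb_log_file_size        = 4G
innodb_log_files_in_group   = 2
innodb_flush_neighbor_pages = 0
innodb_adaptive_checkpoint  = keep_average
innodb_read_ahead           = none
```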

For MySQL 5.5.8, I used:

  • innodb_log_file_size=2000M (innodb_log_files_in_group=2), the maximum available setting
  • innodb_buffer_pool_instances=8 for the 13GB buffer pool and 16 for the 52GB and 144GB buffer pools, as these values seem to provide the best throughput in this configuration
  • innodb_io_capacity=20000; unlike in the FusionIO case, this gives better results for MySQL 5.5.8
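
As a my.cnf fragment, the MySQL 5.5.8 settings above would look roughly like this (shown for the 13GB run; the only change for the 52GB and 144GB runs is innodb_buffer_pool_instances=16):

```ini
# MySQL 5.5.8: settings for the 13GB buffer pool run
[mysqld]
innodb_log_file_size         = 2000M
innodb_log_files_in_group    = 2
innodb_buffer_pool_instances = 8      # 16 for the 52GB and 144GB runs
innodb_io_capacity           = 20000
```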

For both servers I used:

  • innodb_flush_log_at_trx_commit=2
  • ibdata1 and the InnoDB log files located on a separate RAID10 partition; the InnoDB datafiles on the Virident tachIOn 400GB card
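
The shared settings can likewise be sketched as a my.cnf fragment. The paths below are hypothetical placeholders, not the ones used in the actual runs:

```ini
# Common settings for both servers
[mysqld]
innodb_flush_log_at_trx_commit = 2
# ibdata1 and redo logs on the RAID10 partition, datafiles on the tachIOn card
innodb_data_home_dir      = /raid10/mysql    # hypothetical path
innodb_log_group_home_dir = /raid10/mysql    # hypothetical path
datadir                   = /virident/mysql  # hypothetical path
```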

The raw results, config, and script are in our Benchmarks Wiki.
Here are the graphs:

13G innodb_buffer_pool_size:

In this case, both servers show a straight line, and it seems having 8 innodb_buffer_pool_instances was helpful.

52G innodb_buffer_pool_size:

144G innodb_buffer_pool_size:

The final graph shows the difference between different settings of innodb_io_capacity for MySQL 5.5.8.

Small innodb_io_capacity values are really bad, while 20000 allows us to get a more stable line.

In summary, if we take the average NOTPM for the final 30 minutes of the runs (to avoid the warmup stage), we get the following results:

  • 13GB: MySQL 5.5.8 – 23,513 NOTPM, Percona Server – 30,436 NOTPM, advantage: 1.29x
  • 52GB: MySQL 5.5.8 – 71,774 NOTPM, Percona Server – 88,792 NOTPM, advantage: 1.23x
  • 144GB: MySQL 5.5.8 – 78,091 NOTPM, Percona Server – 109,631 NOTPM, advantage: 1.4x
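
The advantage ratios above can be double-checked with a few lines of Python (NOTPM figures taken from the list):

```python
# NOTPM averages for the final 30 minutes of each run, from the list above:
# buffer pool size -> (MySQL 5.5.8, Percona Server)
results = {
    "13GB":  (23_513, 30_436),
    "52GB":  (71_774, 88_792),
    "144GB": (78_091, 109_631),
}

for size, (mysql_notpm, percona_notpm) in results.items():
    advantage = percona_notpm / mysql_notpm
    print(f"{size}: {advantage:.3f}x")
```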

This is actually the first case where I’ve seen NOTPM greater than 100,000 for a tpcc-mysql workload with 1000W.

The main factors that allow us to get a 1.4x improvement in Percona Server are:

  • Big log files: the total log size is 8GB (innodb_log_file_size=4G with innodb_log_files_in_group=2)
  • Disabled flushing of neighbor pages: innodb_flush_neighbor_pages=0
  • New adaptive checkpointing algorithm innodb_adaptive_checkpoint=keep_average
  • Disabled read-ahead logic: innodb_read_ahead=none
  • Buffer pool scalability fixes (different from innodb_buffer_pool_instances)

We recognize that hardware like the Cisco UCS C250 and the Virident tachIOn card may not be for the mass market yet, but it is a good choice if you are looking for high MySQL performance, and we tune Percona Server to get the most from such hardware. Actually, from my benchmarks, I see that the Virident card is not fully loaded, and we may benefit from running two separate instances of MySQL on a single card. But that is a topic for another round.

(Edited by: Fred Linhoss)

11 Comments
Alex

Percona Server 5.5.7? Is it an internal version?

Harrison

How practical is it to have 8G of log files? How long does crash recovery take in that case on this hardware?

Mikhail

Hi Vadim,

Thank you for the interesting info and sharing optimization tips.

Do you have similar benchmarking results for quad-socket servers like Xeon 7500 with 32 physical cores?

Regards,
Mikhail.

Andy

Vadim,

1) In the 1st graph MySQL 5.5.8 bp_inst=8 actually gave better performance than 1 buffer instance. But in your previous post (http://www.mysqlperformanceblog.com/2010/12/20/mysql-5-5-8-and-percona-server-being-adaptive/) in the 3rd graph there, buffer_pool_instances=8 didn’t give any better performance and actually caused a bigger performance “dips”. Why the discrepancy between the 2 test? What value would you recommend for buffer_pool_instances, 1 or 8?

2) Is the buffer_pool_instances=8 option only available for MySQL 5.5 whereas Percona Server 5.5 only allows 1 buffer pool instance? I’m a bit confused here.

3) In the 4th graph innodb_io_capacity=500 actually gave the worst performance because of the severe dips in throughput. innodb_io_capacity=20000 seems to be the best option here. But in your previous post (http://www.mysqlperformanceblog.com/2010/12/20/mysql-5-5-8-and-percona-server-being-adaptive/) in the 3rd graph innodb_io_capacity=500 was the best option while innodb_io_capacity=20000 was the worst.
Again what causes the discrepancy between the 2 tests? What value of innodb_io_capacity would you recommend for SSD?

Also is the innodb_io_capacity available for Percona Server?

Normann

What’s the best way to change the innodb_log_file_size without dumping and reloading the whole db?

jackbillow

to Normann
Maybe you can shut down MySQL cleanly, delete the ib_logfileN files, and restart; the log files will be recreated at the size you want.

theo

Impressive results! Is there any chance to put Percona Server to the TPC-H tests?

http://www.tpc.org/tpch/results/tpch_results.asp?print=false&orderby=tpm&version=2%&sortby=desc