If you were alarmed by the instability of the MySQL results in my previous post, I am going to show what we can get with Percona Server. This also addresses the results presented in Benchmarking MariaDB-5.3.4.

The initial benchmark is described in Benchmarks of Intel 320 SSD 600GB, and the results for MySQL 5.5.20 with 4 tables (46GB of data) and 16 tables (184GB of data) can be seen in my experiments with R graphics.

How do we solve it in Percona Server? There is a whole set of improvements we made, such as:

  • Bigger log files
  • A tuned flushing algorithm
  • Disabled flushing of neighbor pages

and the configuration that provides a better experience on SSDs is:
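The exact my.cnf used for the run was shown in the original post; as a rough sketch, the Percona Server 5.5 tunables named above map to settings like these (values are illustrative, not the benchmark's exact configuration, and option syntax varies between releases):

```ini
# Illustrative Percona Server 5.5 SSD tuning -- NOT the exact benchmark config.
innodb_buffer_pool_size         = 25G
innodb_flush_method             = O_DIRECT
# bigger log files
innodb_log_file_size            = 2G
innodb_log_files_in_group       = 2
# tuned flushing algorithm (Percona Server/XtraDB option)
innodb_adaptive_flushing_method = keep_average
# disable flushing of neighbor pages (Percona Server/XtraDB option)
innodb_flush_neighbor_pages     = none
```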

Versions: MySQL 5.5.20, Percona Server 5.5.19

With these settings we get the following results:

As you can see, with Percona Server we get stable and predictable lines.

Now, how do we compare these results?
If we draw the following boxplot:

and compare the medians (the middle line inside each box) for the whole 1h run, we may get the impression that average throughput for Percona Server is worse, because the averages for 16 tables are:

  • MySQL: 3658 tps
  • Percona Server: 3487 tps

and if we now draw a column plot with these results, we will get something like:

Looking at this graph, one may come to the conclusion: wow, there is a regression in Percona Server.

But if we cut off the first 1800 sec to exclude the warmup period, the averages are different:

  • MySQL: 3746 tps
  • Percona Server: 3704 tps
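The warmup cut is easy to reproduce from per-interval throughput samples. A minimal sketch in Python (the sample data here is hypothetical, for illustration only, not the benchmark's numbers):

```python
# Average throughput with and without the warmup period.
# `samples` is a list of (elapsed_seconds, tps) pairs, one per
# reporting interval -- hypothetical data for illustration.
WARMUP = 1800

def avg_tps(samples, skip=0):
    """Mean tps over samples taken after `skip` seconds."""
    vals = [tps for t, tps in samples if t > skip]
    return sum(vals) / len(vals)

# A fake 1h run: slow first 1800s (cold buffer pool), fast afterwards.
samples = [(t, 1000.0 if t <= 1800 else 4000.0)
           for t in range(10, 3601, 10)]

print(avg_tps(samples))          # whole run, dragged down by warmup
print(avg_tps(samples, WARMUP))  # steady state only
```

Averaging over the whole run mixes the cold-cache phase into the number, which is exactly how the "regression" impression above arises.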

And for comparison, the average throughput for 4 tables:

  • MySQL: 3882 tps
  • Percona Server: 6735 tps

For 16 tables Percona Server is still slightly slower, but tell me: would you rather have stable throughput or sporadic jumps? Furthermore, there is a way to improve throughput in Percona Server: increase innodb_log_file_size.

Here is the stability timeline for Percona Server with innodb_log_file_size=8GB:
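For reference, that setting as a my.cnf fragment (stock MySQL 5.5 caps combined redo log space at 4GB; the big-log-files improvement in Percona Server lifts that limit):

```ini
# As used for this run: 8GB redo log files (requires Percona Server;
# stock MySQL 5.5 is limited to 4GB of combined log space).
innodb_log_file_size = 8G
```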

And to aggregate the results and provide final numbers, here is the jitter (after the initial 1800 sec warmup):

So, in conclusion, you can see that with proper tuning, Percona Server/XtraDB outperforms MySQL and provides a more stable throughput. Of course, if the tuning is too hard to figure out, you can always fall back to the vanilla InnoDB plugin, as MariaDB suggests in Benchmarking MariaDB-5.3.4.

Raw results and scripts are available on Benchmarks Launchpad.


21 Comments
Wlad

Could you also briefly follow up on XtraDB’s 50% performance regression compared to InnoDB when XtraDB runs without the specific tuning presented in this post? Just for my own understanding of the numbers presented here: http://blog.montyprogram.com/benchmarking-mariadb-5-3-4/ .

Wlad

Maybe you can just run MySQL vs. Percona Server on your usual hardware, omitting the specific tuning presented in this post, and publish the results.

Brian

Thank you so much for posting this with your settings. We recently set up a slave with a pair of Micron P300 SSDs. We promoted the box to master to take advantage of the increased I/O and had 5-10 minute stalls every half hour or so. It was completely unusable (MySQL 5.5). Going to load up XtraDB and try it with the above tweaks.

Wlad

5.5 performance improvements are great, but perhaps they do not make that much difference in this specific workload, which seems to mostly stress buffer pool flushing.

Axel said in his blog that 5.1 and 5.5 performed almost the same. That sounds correct, since MariaDB + vanilla InnoDB (both based on the 5.1 codebase) performed well at 256 concurrent clients, only slightly behind MySQL 5.5.

On the other hand, everything based on XtraDB (PerconaServer 5.1, MariaDB+XtraDB) showed quite a slowdown in his results.

Wlad

Well, after looking again at Axel’s results, Percona 5.5 was not too bad either. Maybe AIO helped, I don’t know.

Baron Schwartz

The results by Axel don’t show much detail. That benchmark is reduced to single numbers. Look at Vadim’s examples in this post to see how misleading that can be.

Wlad

Schwartz: If the throughput difference is 50%, it is very far from being misleading. Also, not everyone has time to invest in graphs as nice as Vadim’s.

Baron Schwartz

I trust Vadim’s benchmarks because he either shows what’s happening in the system and explains it, or he shows what’s happening and says more investigation is needed. I can’t tell if Axel knows more and didn’t present the results, or whether he’s satisfied not knowing how to explain his results.

James Day

Brian, that’s usually a symptom of innodb_io_capacity set too low for the workload and system. For an SSD setup with high write load, if you were using the default of 200, try 400 then adjust upwards if that’s not sufficient to resolve the problem. 2000 is likely to be the highest you’ll need but don’t just go there because it’s best not to set it much higher than required.

MySQL 5.5 server dirty page flushing respects the innodb_io_capacity setting, Percona server flushing doesn’t. Set innodb_io_capacity too low and MySQL server will have stalls from time to time when async or sync flushing starts at 75% or 85% of the InnoDB log file space used, while Percona server won’t.

You’ll also normally benefit from setting innodb_purge_threads to 1 and, if doing lots of multithreading, setting innodb_buffer_pool_instances to 8 as a starting point and adjusting from there.
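(Those starting values, collected as a my.cnf fragment for convenience; the numbers are the starting points named above, to be adjusted per workload:)

```ini
# Suggested starting points -- raise innodb_io_capacity only as far as
# needed (2000 is likely the most you'll want).
innodb_io_capacity           = 400
innodb_purge_threads         = 1
innodb_buffer_pool_instances = 8
```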

Views are my own, if you want an official Oracle view contact a PR person.

James Day, MySQL Senior Principal Support Engineer, Oracle

Yaroslav Vorozhko

Vadim, this looks like a misprint, doesn’t it?

And for comparison, average throughput for 4 tables:

MySQL: 3882 tps
Percona Server: 6735 tps

Yaroslav Vorozhko

Average for 16 tables is:
Percona Server: 3487 tps

Average for 4 tables is :
Percona Server: 6735 tps

I think 6735 is a misprint; the correct value is probably 3735?

Mark Callaghan

Wlad – you don’t need fancy graphs. If you only report one number then let it be 95th or 99th percentile response time. Great average performance combined with high variance is a great way to waste a lot of time debugging problems in production.

Ragu Bhat

Wlad

These SSD reviews of yours are just superb. My favourite on the MySQL Performance Blog! You could also do a review of the best available hardware RAID sets and give your advice.

Patrick Galbraith

Vadim,

Thank you, as always. I would not have thought of having such a large log file size. I thought that would make recovery painfully long? Is the log size you show something that many of your clients are now using, and is it something you would advise?

Thanks!

Brian

Just to update my comment from earlier: we moved to Percona 5.5, switched to the new keep_average flushing method (leaving everything else the same), and we no longer have stalls under load.

James, we were actually using 2000 as our I/O capacity with 5.5. We also had 8 buffer pool instances. Unfortunately, I wasn’t able to spend a lot of time collecting more data on the issue since it was in production. I may clone our settings in our dev environment to see if I can reproduce it on similar hardware, but so far switching to keep_average has taken care of things.