Dear Community,

Build 17 of MySQL with Percona patches is available.

New features in the release:

  • MySQL 5.0.83 is used as the base release
  • The new patch innodb_use_sys_malloc.patch is added
  • The new patch innodb_split_buf_pool_mutex.patch is added. It splits the single global InnoDB buffer pool mutex into several mutexes for different purposes, which reduces mutex contention. It may help if you see a performance loss when the working set does not fit in memory. You can detect buffer pool mutex contention by examining the SEMAPHORES section at the top of the SHOW INNODB STATUS output.

  • Google-style I/O – innodb_io_patches.patch
  • A FreeBSD .tar.gz package for the amd64 platform is available
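As a quick illustration of the contention check mentioned above (a sketch only; the exact lines vary by version and workload):

```sql
-- From the mysql client; \G prints the output vertically
SHOW INNODB STATUS\G

-- In the SEMAPHORES section near the top, watch for frequent entries like:
--   --Thread ... has waited at buf0buf.c line ... for the semaphore:
--   Mutex at ... created file buf0buf.c line ...
-- Many threads repeatedly waiting on the buf0buf.c mutex indicate the
-- buffer pool mutex contention that the split-mutex patch targets.
```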

You can download binaries and sources with the patches here:

https://www.percona.com/mysql/5.0.83-b17/

The Percona patches live on Launchpad: https://launchpad.net/percona-patches and you can report bugs via the Launchpad bug tracker:

https://launchpad.net/percona-patches/+filebug

The documentation is available on our Wiki.

For general questions use our Percona-discussions group, and for development questions the Percona-dev group.

For support, commercial and sponsorship inquiries contact Percona.

19 Comments
Norbert Tretkowski

Are you aware of CVE-2009-2446? 5.0.83 is affected, it’s fixed in 5.0.84 with this patch: http://lists.mysql.com/commits/77649

Stuart Davies

A while back you announced you would be releasing through OurDelta. Is this still the case, or will you instead be providing the builds just from the Percona site?
I currently use the OurDelta stable builds and they are working excellently. However, I'd be interested in trying out the new additions such as XtraDB if they were going to be integrated into it.
If not that's fine, but would it be possible for you to set up an Ubuntu-repository-style build for the Percona builds, so it's just a case of adding it into sources.list? I appreciate they can be used from the .deb packages, but that doesn't provide an easy step for quick server configuration. (Yes, it's lazy 🙂 but I'm sure I'm not the only person who'd like to save some time.)

Also, could you maybe do a post clarifying which distributions (OurDelta, Percona, etc.) you are currently involved in, just for clarity's sake? It's getting confusing with all the branching that has been happening of late.

Much appreciated and great work with all you’ve done so far.

Doonie

+1 ^^
Everything Stuart said. I also run on OurDelta but would love to use pure Percona builds if you decide to keep it a ‘local’ project, as OurDelta isn’t updated ‘that’ often. So an Ubuntu repo for Percona would be great, as many are unsure what to do with .deb files or bin/src… More users = more testers for you 🙂

Stuart Davies

Thanks for the comment and I’d be happy with this solution, especially as the percona builds are more up to date in terms of features at present.

On another note, is there any news on whether you’ve decided to work on allowing the InnoDB buffer pool to be stored out to SSD, and what the advantage of having that facility would be?
We currently have SSD-based systems that we are trialling, so I’d be interested in testing such a facility.

Baron Schwartz

Stuart, Doonie, we didn’t say we’d be releasing on OurDelta. You misunderstood. OurDelta is a “downstream” re-builder of our builds with extra stuff added in. This is fantastic. There is a need for both the “minimally perturbed” Percona builds and the “try all the good stuff you’ve been ogling” OurDelta builds. We plan to continue building and releasing our patches.

Vadim

Stuart,

We release our patches and, for convenience, provide binary releases. OurDelta and any other maintainer can use our patches as they want, but we are going to release the builds which we think are most stable and provide the most performance.

Your request for an Ubuntu repository is valid, but I am totally unfamiliar with how it works. If you can contribute full instructions on how to set it up, it will be very much appreciated!

CoolCold

A very easy way to set up a Debian/Ubuntu repo:
http://www.debian-administration.org/articles/286
I use it for my own repos.
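For the record, once such a repo exists, pointing apt at it is a single line in /etc/apt/sources.list (the host and path below are placeholders, not a real Percona repo):

```
deb http://repo.example.com/percona binary/
```

After that, the builds install and upgrade through the usual apt-get workflow.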

Doonie

Yes, OurDelta was never a ‘part’ of Percona, seeing as they choose their own patches from different vendors. What I meant was that I’d rather have a clean Percona build which gets more updates than a mixed delta build 🙂

Just found this; it seems like a nice and easy way to install the .deb files until there is a repo set up
https://help.ubuntu.com/community/Repositories/Personal
(haven’t had the time to test it yet though, will try it tonight to see if it’s easy to maintain as an offline repo)

Stuart Davies

Thanks for the info, Doonie / CoolCold. I’ll try it on a test box the next time I’ve got a spare minute.
I’m looking forward to seeing what the new XtraDB build will do with and without the Intel SSDs.

imran

@Stuart:

Would love to hear how the results turn out with the Intel SSDs whenever you get around to it.

Gabriel

@imran
About the Intel SSDs: Stay away from the X25-M MLC series (the bigger and cheaper ones). Those are the best money can buy for laptops, but they are worse than standard 7200rpm HDDs on servers: the reads and writes are really not consistent at all over time, so you get awesome write speeds for about 1-2 seconds, then long I/O freezes before the next I/O burst (see our benchmark results: http://pub.grosboulet.com/benchmark-seqwrite.jpg). It is as if the drive were choking on the data to write. Needless to say, that behaviour is very bad when InnoDB is flushing pages at checkpoint time: you basically can’t do anything else when that happens.

However, not a single problem with the X25-E SLC series. I guess the two series (Mainstream vs. Extreme) have very different firmware, because it’s not about the write speed (SLC drives are always faster, of course), it’s about having a drive capable of writing consistently over time. Too bad the Intel SLC drives are so small (32/64GB max) and so expensive compared to their MLC counterparts!

Gabriel

By the way, I don’t see the InnoDB fast recovery patch in this release. Why?

I “backported” the XtraDB fast recovery patch to 5.0.82 and it is compiling/working fine, but recovery is still not that fast in some cases: 100GB of InnoDB data, a 24GB buffer pool and 3x256M logs on SSD take 40 minutes to recover in the worst case, with fast recovery enabled. I don’t know how much time it would take without the patch, though.

pservit

I have problems with the -highperf build under FreeBSD 7.2 (the mysqld process gets locked in the ucond state). So test it before using it in production.

Stuart Davies

Hi Gabriel / Imran,
The X25-M’s certainly aren’t worse than 7200rpm drives! Maybe in very write-heavy scenarios I’d agree. They do, however, give a massive boost in responsiveness over the 7200rpm drives for reads, especially on large tables that don’t fit in the buffer pool.
To give an idea with our current dataset (a 42GB database split over 100-ish tables, with 15 tables that are between 1 and 4GB in size), an X25-M will show the tables instantly.
Server specs are:
8GB RAM, quad-core Xeon X5355 running Ubuntu 9.04, with a RAID 1 array of X25-Ms and a RAID 10 array of 4 x 1.5TB Seagate 7200rpm drives, both configured using mdadm.
As the data set does not fit in the limited memory, queries will naturally do a lot of seeks and reads.
With the database on the RAID 10 set, just showing a table or navigating the phpMyAdmin interface takes upwards of 20 seconds.
With the database on the SSD, response is near instant.
I have noticed the write delays on large data writes such as repairs, but with a mixed read-write load it happens less frequently. The best solution we have found is to hold the logs on the 7200rpm disks, as writing to the log file is largely sequential anyway.

Would I buy one after our results? Yes, but only in the following scenarios:
– A database with a read/write ratio below 80% read / 20% write.
– Slave servers, to maximise throughput without having to spend the earth. You can configure a really cheap server with just a couple of them in a RAID 1 set and don’t have to worry about padding the server out with memory.

In any other scenario Gabriel is absolutely right and I’d go for a X25-E if you can afford it.

Anyway, just my thoughts! Always make sure you trial the solution on your own workload! 🙂

Gabriel

@Stuart

Agreed. With the SSD drives, operations in phpMyAdmin are a lot smoother than with HDDs, but I really wanted to warn about the write activity on these drives. The read-only performance is the same as the X25-E drives, for much more capacity.
What makes those drives inoperable with write-heavy databases is that when the writes are “blocking”, the pending reads from the drive are also blocked! (Using a RAID controller with a big write cache doesn’t really help.)
So if you have a lot of read activity on your databases and InnoDB decides to flush the buffer pool (“checkpointing”), you will notice your read queries piling up in the processlist until the writes are completed. This checkpointing may occur more or less often, depending on the log size and the write ratio, but in my case it happens ~10 times a day (70% writes / 30% reads).
This is really a problem when the server is supposed to handle 5000 queries/sec.

Baron Schwartz

Gabriel, you are observing InnoDB’s behavior, not the flash drive’s. We’ve addressed this in some of our patches.

Gabriel

Baron, are you referring to the InnoDB adaptive checkpoint feature? With the X25-M SSDs, this only makes it worse. And the problem is specific to the X25-M flash drives, because we don’t have such freezes with writes on the X25-E drives. On the other hand, using innodb_adaptive_checkpoint with the X25-E makes the server run very smoothly, even with a lot of INSERTs.

This problem really comes from the drive, and is verified by benchmarks.

Baron Schwartz

Gabriel, I meant to reply sooner — sorry. Yes, I was referring to checkpointing activity. I stand corrected, I see you are well aware of that problem.

Stuart Davies

Just as an aside, as I know lots of people want to try out the Intel SSDs, here are some simple config changes I’d make to get the most out of them (assuming the drive is going to be dedicated to the database):
1) Update to the latest firmware (http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=17485)
2) Hard-erase the drive after the update to make sure it’s completely clean
3) Make sure you use software RAID. RAID cards these days are rarely as good as Linux at maximising throughput, even the PCI-E x8 variants. We get the best results using the standard Intel host controller, which has 6 SATA ports.
4) Switch on AHCI for the disk configuration in your BIOS; it vastly improves small-file access whether reading or writing, key for DB operations! (http://www.pcper.com/article.php?aid=669&type=expert&pid=3)
5) Align the partitions when formatting to match the SSD’s erase block size (see here: http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/)
6) Set the drive mount in /etc/fstab to noatime
7) Make sure MySQL’s tmpdir is on the drive
8) Relocate the InnoDB log files to a directory on a separate partition on a standard HDD. It’ll help minimise writes, and standard hard drives are excellent at sequential writes, which is what most of the writes to the log files are.
innodb_log_group_home_dir=/var/lib/mysqlnonssd/
innodb_log_arch_dir=/var/lib/mysqlnonssd/
9) Make sure you bypass the OS caches:
innodb_flush_method=O_DIRECT
10) We get the best results with file-per-table, especially if you find your tables fragment a lot:
innodb_file_per_table=1
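Pulled together, the mount and my.cnf settings above look roughly like this (the device name and paths are examples only; adjust them to your own layout):

```
# /etc/fstab – SSD array mounted with noatime (example device and path)
/dev/md0  /var/lib/mysql  ext3  noatime  0  0

# my.cnf – tmpdir on the SSD, logs on the spinning disks,
# OS cache bypassed, one tablespace file per table
[mysqld]
tmpdir                    = /var/lib/mysql/tmp
innodb_log_group_home_dir = /var/lib/mysqlnonssd/
innodb_log_arch_dir       = /var/lib/mysqlnonssd/
innodb_flush_method       = O_DIRECT
innodb_file_per_table     = 1
```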

I’m sure other people can chime in with additional settings, such as using the Percona or OurDelta builds, both of which have decent enhancements.
Our server listed above comfortably averages 3,500 queries a second, with peaks of up to 10,000 queries being handled well.
The query mix on our server (details of which are in a post above) is as follows:
49.99% change db
34.73% SELECT
2.93% DELETE
1.47% INSERT
5.31% REPLACE
5.57% UPDATE
All of this on a 43.4GB data set comprising 161,000,000 data rows (details of the table mix were stated above).
Obviously these are just my own findings, but hopefully they will help someone.

-Stu