After my recent post, “TokuDB gotchas: slow INFORMATION_SCHEMA TABLES,” I saw a couple of questions and tweets asking whether we use TokuDB in production. I actually mentioned it in that post, and we also blogged about it in a couple of other recent posts.

So, yes, we are using Percona Server + TokuDB as the main storage engine in Percona Cloud Tools to store time-series data.

And, yes, Percona Server + TokuDB is generally available as of the Percona Server 5.6.19-67.0 with TokuDB (GA) release.

Good performance alone is not enough to make it into production; there are also operational questions, and one of them is backups. I want to explain how we do backups for Percona Server + TokuDB in Percona Cloud Tools.

I should say up front that we DO NOT have support for TokuDB in Percona XtraBackup. TokuDB internals are significantly different from InnoDB/XtraDB, so adding TokuDB support to Percona XtraBackup would be a major project, and we have no plans to work on it at the moment.

That does not mean TokuDB users are left without backup options. There is Tokutek Hot Backup, included in the Tokutek Enterprise Subscription, and there is the method we use in Percona Cloud Tools: LVM backups. We use the mylvmbackup script for this task, and it works fairly well for us.
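
For illustration, a mylvmbackup run for this kind of setup might look like the sketch below; the volume group, logical volume, paths and credentials are all placeholders, not our production values:

    # Hypothetical invocation -- names and paths are placeholders.
    # mylvmbackup flushes tables, creates the LVM snapshot, releases
    # the lock, then archives the snapshot contents.
    mylvmbackup --user=backup --password=secret \
                --vgname=vg_data --lvname=lv_mysql --lvsize=5G \
                --backuptype=tar --backupdir=/backup/mysql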

There are, however, some gotchas to be aware of. If you understand the mechanics of LVM backups, you know that restoring from one is basically a managed crash-recovery process: the snapshot captures the data files mid-flight, exactly as if the server had crashed at that moment.
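
To make that concrete, this is roughly the sequence mylvmbackup automates (device names and paths are again placeholders; note that the read lock must be held by an open client session while the snapshot is created):

    # While a client session holds FLUSH TABLES WITH READ LOCK:
    lvcreate --snapshot --size=5G --name=mysql_snap /dev/vg_data/lv_mysql
    # The lock can be released as soon as the snapshot exists; the
    # frozen view is then copied at leisure and the snapshot dropped.
    mount /dev/vg_data/mysql_snap /mnt/mysql_snap
    tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/mysql_snap .
    umount /mnt/mysql_snap
    lvremove -f /dev/vg_data/mysql_snap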

Now we need to go into a little more detail on TokuDB. To support transactions that span both the TokuDB and InnoDB engines, TokuDB participates in MySQL's two-phase commit mechanism. When two-phase commit is involved, recovery requires the binary logs to be present in order to resolve prepared transactions.

Now let's take a look at how we set up the binary logs in Percona Cloud Tools. We use SSDs for the main data storage (this is where the LVM partition lives) and a hardware RAID1 over two hard drives for the binary logs. We chose this setup because we care about SSD lifetime: in write-intensive workloads the binary logs generate a lot of write operations, and by our calculations we would simply burn through the SSDs, so we store the logs on something less expensive.
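
In my.cnf terms the layout looks roughly like this (the paths are placeholders for our actual mount points):

    [mysqld]
    # Data files live on the LVM volume backed by SSD.
    datadir = /ssd/mysql/data
    # Binary logs go to the RAID1 array of spinning disks.
    log-bin = /raid1/mysql/binlog/mysql-bin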

So the problem is that when we take an LVM snapshot of the main storage, we do not get a consistent view of the binary logs, which live on a different device (it is possible to modify the backup script to copy the current binary log while FLUSH TABLES WITH READ LOCK is held, and that is probably what we will do next). But the binary logs are needed for recovery; without them we face this kind of error when restoring from the backup:
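
This is MySQL's standard complaint about prepared transactions it cannot resolve; it reads roughly as follows (exact wording varies slightly between server versions):

    [ERROR] Found 1 prepared transactions! It means that mysqld was not
    shut down properly last time and critical recovery information
    (last binlog or tc.log file) was manually deleted after a crash.
    You have to start mysqld with --tc-heuristic-recover switch to
    commit or rollback pending transactions.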

The error message actually hints at a way out. Unfortunately, it seems we are the first ones to have ever tried this option: tc-heuristic-recover is totally broken in current MySQL and does not work at all, and that would have been noticed if anyone had really tried it before us (which gives me the impression that Oracle/MySQL never properly tested it, but that is a different story).

We will fix this in Percona Server soon.

So the way to handle recovery from an LVM backup without binary logs is to start mysqld with the --tc-heuristic-recover switch (unfortunately I have not yet figured out whether the value should be COMMIT or ROLLBACK, hehe).
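
Once the fix is in place, the invocation itself is simple; this sketch shows COMMIT, but treat the choice of value as the open question it is:

    # Heuristically resolve prepared transactions at startup;
    # the switch accepts COMMIT or ROLLBACK.
    mysqld --tc-heuristic-recover=COMMIT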

The proper way to use an LVM backup is to capture the corresponding binary log file along with it; as I said, that will require a modification to the mylvmbackup script.
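
A sketch of what that modification would do, executed while the read lock is held (the paths are placeholders, and in practice this would be wired into mylvmbackup's hook mechanism):

    # While FLUSH TABLES WITH READ LOCK is held, record the binlog
    # position and copy the binary logs to the backup destination so
    # they are consistent with the LVM snapshot taken right after.
    mysql -e "SHOW MASTER STATUS\G" > /backup/mysql/binlog.pos
    cp /raid1/mysql/binlog/mysql-bin.* /backup/mysql/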

I should say this is not the only way we do backups in Percona Cloud Tools. In this project we use the Percona Backup Service provided by the Percona Managed Services team, and our team also uses mydumper to perform logical backups of the data.
While it works acceptably for backing up hundreds of gigabytes of data (it is just a sequential scan, which should be easy for TokuDB), a full recovery is painful and takes unacceptably long. So the mydumper backups will be used only if we ever need to perform a fine-grained recovery (i.e., of a small number of specific tables).
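
For reference, a minimal mydumper backup and a restore with its companion tool myloader might look like this; the credentials and database name are placeholders:

    # Parallel logical dump of one database.
    mydumper --user=backup --password=secret \
             --database=pct --outputdir=/backup/dump --threads=4
    # Restore from that dump; for fine-grained recovery, the dump can
    # be limited to specific tables with mydumper's --tables-list.
    myloader --user=root --password=secret \
             --directory=/backup/dump --threads=4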

I hope this tip is useful if you are looking for information on how to do backups for TokuDB.

7 Comments
Peter Zaitsev

Vadim,

Wouldn't doing a lot of actions under “FLUSH TABLES WITH READ LOCK” cause a stall during the backup, which is nasty if you're backing up from the master? Isn't it possible to have something similar to “backup locks” for TokuDB so we do not have to cause a major stall to coordinate the LVM snapshot with the binlog position? To get the binary log position you just need to be able to prevent writes to the binary log while the LVM snapshot is created, which is a lighter operation than flushing all tables.

Great technical insights in the blog post!

Enrico Placci

I am curious to understand what considerations made you choose this approach rather than having an extra slave to take offline for the backup. To me it seems a simpler and safer way to do it (no hassle with the binlogs).

Mike EKlund

I am curious about the setup you use and how you keep LVM from degrading performance during the backups. Do you just live with the performance degradation, or is it negligible with SSDs?

Tim Ellis

You aren’t the first to try this! We do LVM backups of InnoDB/MyISAM tables. Recently, we saw a dramatic uptick in the number of times we get this --tc-heuristic-recover=OPT error in our logs when trying to start an instance.

The error only occurs sometimes, perhaps 1/10th of the time, depending on the number of implied XA transactions MySQL is starting/finishing at the moment of the LVM snapshot.

Because it’s intermittent, the repro path is not easy. I have been trying to repro it all day today and have not had any luck.

Eric Robinson

I’ve been reading the blog posts and articles, but I still don’t understand what could possibly go wrong with…

1. Flush tables with read lock.
2. Take LVM snapshot.
3. Release read lock.
4. Use rsync to back up the snapshot directory to another server.

The snapshot takes less than a second.

Why would this not work fine?