I am happy to announce the next build of our backup tool. This version contains several bugfixes and introduces an initial implementation of incremental backups.
Incremental backup works in the following way. When you do a regular backup, at the end of the procedure you will see output like this:
The latest check point (for incremental): '1319:813219999'
>> log scanned up to (1319 813701532)
Transaction log of lsn (1318 3034677302) to (1319 813701532) was copied.
090404 06:03:29  innobackupex: All tables unlocked
090404 06:03:29  innobackupex: Connection to database server closed
innobackupex: Backup created in directory '/mnt/data/tmp'
innobackupex: MySQL binlog position: filename 'db02-bin.001271', position 247627478
090404 06:07:58  innobackupex: innobackup completed OK!
innobackupex: You must use -i (--ignore-zeros) option for extraction of the tar stream.
which gives the starting point 1319:813219999 for a subsequent incremental backup. This point is the LSN of the last checkpoint operation. The next time you want to copy only the changed pages, you can run:
xtrabackup --incremental_lsn=1319:813219999 --backup --target-dir=/data/backup/increment_day1
and only the changed pages (those with an LSN greater than the given one) will be copied to the specified directory. You may keep several incremental directories and apply them one by one.
The current version does not allow copying incremental changes to a remote box or to a stream; it is a local copy only for now, but we are going to change that in the next release. Besides printing the last checkpoint LSN to the output, we also store it in the xtrabackup_checkpoint file so it can be used in scripts.
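For scripting, the LSN can be pulled out of that file with standard shell tools. A minimal sketch, with hypothetical paths, which simulates the checkpoint file as a single LSN line (the real file's exact format may differ) and prints the resulting command rather than running it:

```shell
# Simulate the checkpoint file left behind by the previous backup
# (path and single-line format are assumptions for this demo)
mkdir -p /tmp/ckpt_demo
printf '1319:813219999\n' > /tmp/ckpt_demo/xtrabackup_checkpoint

# Extract the first thing that looks like an LSN (digits:digits)
LSN=$(grep -o '[0-9][0-9]*:[0-9][0-9]*' /tmp/ckpt_demo/xtrabackup_checkpoint | head -n 1)

# Feed it to the incremental backup; printed here instead of executed,
# since it needs a running server and the xtrabackup binary
echo "xtrabackup --incremental_lsn=$LSN --backup --target-dir=/data/backup/increment_day1"
```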
You can read more about incremental backups on our draft page https://www.percona.com/docs/wiki/percona-xtrabackup:spec:incremental
You can download the current binaries here: RPMs for RHEL4 and RHEL5 (also compatible with CentOS), DEBs for Debian/Ubuntu, and a tar.gz for Mac OS / Intel 64-bit:
https://www.percona.com/mysql/xtrabackup/0.5/.
At the same link you will find a generic .tar.gz with binaries that should run on any modern Linux distribution.
There you can also download the source code if you do not want to deal with Bazaar and Launchpad.
The project lives on Launchpad: https://launchpad.net/percona-xtrabackup and you can report bugs to the Launchpad bug system:
https://launchpad.net/percona-xtrabackup/+filebug. The documentation is available on our Wiki.
For general questions use our Percona-discussions group, and for development questions the Percona-dev group.
For support, commercial, and sponsorship inquiries, contact Percona.
Great stuff! Just testing this. From the documentation it seems to show a different use case for incremental.
It states that you just pass the last increment's directory in as --incremental-basedir= and the backup process reads the LSN from the checkpoint file in that directory. I've tested that and it seems to work. This suggests we just need to know the path to our last increment and pass it in each time we take a new increment, so we don't actually need to script the reading of the incremental LSN.
Have I understood this correctly?
Thanks again
Leon
Leon,
yes, that's right: xtrabackup can read the last LSN by itself, without additional scripting.
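For example, a chained run might look like the sketch below. The paths are hypothetical, and the commands are printed through a small run helper rather than executed, since they need a live server and the xtrabackup binary:

```shell
# Helper: print the command instead of running it (swap echo for real use)
run() { echo "$@"; }

# Day 0: full backup
run xtrabackup --backup --target-dir=/data/backup/full

# Each increment points --incremental-basedir at the previous directory;
# xtrabackup reads the starting LSN from the checkpoint file found there
run xtrabackup --backup --incremental-basedir=/data/backup/full --target-dir=/data/backup/inc1
run xtrabackup --backup --incremental-basedir=/data/backup/inc1 --target-dir=/data/backup/inc2
```

Note that each increment chains from the previous increment's directory, not from the full backup, so every run copies only the pages changed since the last run.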
Well, I am very impressed so far. I have mounted a remote drive using FUSE and sshfs and am incrementally backing up straight to that, which is working great. I'm not using it in production yet, as I need to test the restores.
I have a couple of questions, if you could possibly answer them, as I can't seem to make sense of the documents.
When doing a --prepare, let's say the current database server has died. We first download our backup to a new server at /data/backup, and we then want to restore the data to /data. What would the prepare command look like?
Also, are triggers/user accounts backed up, or is it just data and schema?
Thanks very much. I am honestly very impressed with it and am looking forward to compression etc. in future releases.
Leon,
To restore the data, you can copy the backup directly into the final /data directory and run
innobackupex --apply-log /data
It will execute the prepare step and create the InnoDB log files ready for use; that is, MySQL will be ready to start.
As for triggers/users, it depends on which tool you use.
The xtrabackup binary works only with InnoDB tables.
innobackupex handles the whole instance, including MyISAM tables, user accounts, triggers, views, etc.
As for compression, you can already use it in stream mode, i.e.
innobackupex --stream=tar tmp | gzip - > backup.tar.gz
Are you looking for a different way of doing compression?
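The compressed stream can later be unpacked with tar -i (--ignore-zeros), as the backup output above notes. Here is the same pipeline shape demonstrated on a scratch directory, since it needs no database (paths are hypothetical):

```shell
# Build a small source tree standing in for the backup contents
mkdir -p /tmp/stream_demo/src
echo hello > /tmp/stream_demo/src/file.txt

# Stream a tar of it through gzip, as innobackupex --stream=tar would
tar -C /tmp/stream_demo/src -cf - . | gzip > /tmp/stream_demo/backup.tar.gz

# Extract with -i (--ignore-zeros), which the tar stream requires
mkdir -p /tmp/stream_demo/restore
tar -izxf /tmp/stream_demo/backup.tar.gz -C /tmp/stream_demo/restore
```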
Hi,
Thanks for the response. I am using InnoDB. I have actually just been using xtrabackup at this point to create backups. As I am doing incremental backups, I didn't think compression was supported yet.
Is it OK to use the xtrabackup command to back up and innobackupex to restore?
The prepare above looks pretty easy; how would increments be applied after the main backup?
Thanks again
Leon
Actually, I see that incremental is not supported in innobackupex. Do you know when this will be available?
For now I guess I need to back up triggers/user accounts myself and use xtrabackup to get the data.
How would the prepare work using xtrabackup with the above example?
Would I copy all my backup data onto a new server at /data and issue this?
xtrabackup --prepare --datadir=/data
Then how would incrementals work? Let's say the incremental is at /data/backup/02; would this be correct usage?
xtrabackup --prepare --datadir=/data --incremental-dir=/data/backup/02