Percona XtraBackup for MySQL

One very interesting feature, "Compact Backup," was introduced in Percona XtraBackup 2.1. You can run compact backups with the --compact option, which is very useful for those who have limited disk space for keeping MySQL database backups. Let's first understand how it works. When we use the --compact option with innobackupex, it omits the secondary index pages. This makes the backups more compact so they take less space on disk, but the downside is that the prepare process takes longer, because those secondary index pages have to be recreated while preparing the backup. There are a couple of things to consider before using it:

  1. Compact backups are not supported for the system tablespace, so innodb_file_per_table must be enabled for this to work correctly (see the check after this list).
  2. The difference in backup size depends on the size of the secondary indexes, so you won't see a drastic change in backup size if your database has few secondary indexes.
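
A quick way to verify that innodb_file_per_table is enabled (a minimal sketch; credentials and connection details are assumptions):

# Compact backups need innodb_file_per_table; this should return ON
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"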

Taking a Compact Backup: We can use the --compact option with innobackupex like this: sudo innobackupex --compact /home/X_Backup/. As it looked like an interesting feature, I tested it with some scenarios which I would like to share. I created a table like this:

CREATE TABLE compact_test (
  id int(11) DEFAULT NULL,
  name varchar(50) DEFAULT NULL,
  city varchar(25) DEFAULT NULL,
  pin int(11) DEFAULT NULL,
  phone bigint(20) DEFAULT NULL,
  mobile bigint(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

I added around 10M records (the sketch below shows one way such data could be loaded) and, without any indexes, took backups with and without the --compact option. The results follow the sketch.
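
The original tests don't show how the rows were generated; here is a minimal sketch of one way to do it, assuming a database named test and root credentials (hypothetical, not the author's exact method):

# Seed one row, then double the table repeatedly (2^23 is roughly 8.4M rows);
# adjust credentials as needed.
mysql -u root -p test -e "INSERT INTO compact_test VALUES (1, 'name1', 'city1', 380001, 9000000001, 8000000001);"
for i in $(seq 1 23); do
  mysql -u root -p test -e "INSERT INTO compact_test SELECT * FROM compact_test;"
done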

3.9G 2013-06-19_10-21-58 – without --compact option, without indexes – Time: 3 minutes

3.9G 2013-06-19_10-48-44 – with --compact option, without indexes – Time: 3 minutes

1.4G backup.xbstream – backup compressed with xbstream – Time: 2.5 minutes. Total time: 5.5 minutes (backup + compress)
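
The post doesn't show the exact command used to produce backup.xbstream; a minimal sketch of one way to take a compressed, streamed backup with innobackupex 2.1 and extract it later (not necessarily the author's exact method; paths are assumptions):

# Stream a compressed backup into a single xbstream archive
sudo innobackupex --compress --stream=xbstream /tmp > /home/X_Backup/backup.xbstream

# Extract the stream into a directory before preparing/restoring
mkdir -p /home/X_Backup/extracted
xbstream -x -C /home/X_Backup/extracted < /home/X_Backup/backup.xbstream
# Note: files written by --compress end in .qp and must be decompressed
# (e.g. with qpress) before the prepare step.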

For further testing, I added indexes on the compact_test table for the name, city, pin, phone and mobile columns (a sketch follows) and found the results below.
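
A sketch of how such indexes might be added (the exact statements and index names aren't shown in the original post):

# Add the secondary indexes used in the test (index names are assumptions)
mysql -u root -p test -e "ALTER TABLE compact_test
  ADD INDEX name (name),
  ADD INDEX city (city),
  ADD INDEX pin (pin),
  ADD INDEX phone (phone),
  ADD INDEX mobile (mobile);"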

4.7G 2013-06-19_11-42-07 – without --compact option, with indexes – Time: around 3 minutes

3.9G 2013-06-19_11-55-13 – with --compact option, with indexes – Time: around 3 minutes

1.4G backup_with_indexes.xbstream – above backup compressed with xbstream – Time: 3 minutes. Total time: 6 minutes (backup + compress)

So it's now clear that --compact only pays off if your tables have many indexes; otherwise it is not that useful. Compressing a plain backup with xbstream takes more time, but it looks like it's worth it. One more thing: even with many indexes and the --compact option, you can only save the space occupied by the indexes. You can see in the example above that with --compact, the backup size (3.9G) is the same as the backup without indexes.
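
Since the savings from --compact are bounded by how much space secondary indexes occupy, it can be useful to check that first. A minimal sketch using information_schema (the figures are approximate; credentials are assumptions):

# Compare data size vs. index size per InnoDB table to estimate --compact savings
mysql -u root -p -e "SELECT table_schema, table_name,
    ROUND(data_length/1024/1024)  AS data_mb,
    ROUND(index_length/1024/1024) AS index_mb
  FROM information_schema.tables
  WHERE engine = 'InnoDB'
  ORDER BY index_length DESC;"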

If you want to check from the backup directory whether a backup was taken with --compact or not, you can simply look at the xtrabackup_checkpoints file; the compact value will be 0 if the --compact option was not used.

backup_type = full-backuped
from_lsn = 0
to_lsn = 9023692002
last_lsn = 9023692002
compact = 1
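
For example, a quick check from the shell (the backup directory path here is an assumption):

grep compact /home/X_Backup/2013-06-19_11-55-13/xtrabackup_checkpoints
# compact = 1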

Restoring Compact Backup:

  • Prepare Backup

While preparing both backups with --apply-log, I found that --apply-log takes around 13 minutes for the compact backup, while it takes around 14 seconds for the backup taken without --compact. I also tried the --use-memory option to give extra memory to the prepare operation, but it didn't seem to affect the time. (https://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/innobackupex_option_reference.html#cmdoption-innobackupex–use-memory) There is also the --rebuild-threads option, where you can specify the number of threads to rebuild indexes in parallel. I tried it with 3 tables but it didn't make any difference to the process time; it's possible that more tables would make a difference. (https://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/innobackupex_option_reference.html#cmdoption-innobackupex–rebuild-threads)

4.7G 2013-06-19_11-42-07 – without --compact, prepared the backup – Time: 15 seconds

3.9G 2013-06-19_11-55-13 – with --compact, prepared the backup – Time: 13 minutes

3.9G 2013-06-19_13-08-05 – with --compact and --use-memory=1G, prepared the backup – Time: 13 minutes
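
As mentioned above, --rebuild-threads can rebuild the indexes in parallel during the prepare. A minimal sketch (the thread count and backup path are assumptions):

# Prepare a compact backup, rebuilding secondary indexes with 4 parallel threads
sudo innobackupex --apply-log --rebuild-indexes --rebuild-threads=4 \
  /home/nilnandan/X_Backup/2013-06-19_13-08-05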

As I said, this is the downside of a compact backup: it takes longer to prepare. With a compact backup, the --apply-log output will also be slightly different, i.e.

nilnandan@nil:~$ date && sudo innobackupex --apply-log --rebuild-indexes --use-memory=1GB /home/nilnandan/X_Backup/2013-06-19_13-08-05 && date
Wed Jun 19 14:04:13 IST 2013
[sudo] password for nilnandan:

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona Ireland Ltd 2009-2012. All Rights Reserved.

130619 14:04:17 innobackupex: Starting ibbackup with command: xtrabackup_55 --defaults-file="/home/nilnandan/X_Backup/2013-06-19_13-08-05/backup-my.cnf" --defaults-group="mysqld" --prepare --target-dir=/home/nilnandan/X_Backup/2013-06-19_13-08-05 --use-memory=1GB --tmpdir=/tmp --rebuild-indexes

Starting to expand compacted .ibd files.
130619 14:04:18 InnoDB: Warning: allocated tablespace 14, old maximum was 9
Expanding ./test/compact_test.ibd

130619 14:05:00 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files

130619 14:05:02 InnoDB: Waiting for the background threads to start
[01] Checking if there are indexes to rebuild in table test/compact_test (space id: 14)
[01] Found index name
[01] Found index city
[01] Found index phone
[01] Found index mobile
[01] Rebuilding 4 index(es).
130619 14:17:28 Percona XtraDB (https://www.percona.com) 1.1.8-20.1 started; log sequence number 11016623144

xtrabackup: starting shutdown with innodb_fast_shutdown = 1
130619 14:17:37 InnoDB: Starting shutdown

130619 14:17:42 InnoDB: Shutdown completed; log sequence number 11764348940
130619 14:17:42 innobackupex: completed OK!
Wed Jun 19 14:17:42 IST 2013
nilnandan@nil:~$

  • Restore backup to Data dir

Restoring a compact backup is very simple and works just like a normal innobackupex restore. You can use the --copy-back option with innobackupex to restore the prepared backup into the database directory.

nilnandan@nil:~/X_Backup$ sudo innobackupex --copy-back /home/nilnandan/X_Backup/2013-06-19_13-08-05

It will copy all the data-related files back to the server's datadir, as determined by the server's my.cnf configuration file. I would suggest checking the last line of the output for a success message, i.e.

130619 14:17:42 innobackupex: completed OK!
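
After --copy-back, the restored files are usually owned by the user who ran innobackupex, so ownership typically needs to be fixed before starting MySQL. A minimal sketch, assuming the datadir is /var/lib/mysql:

# Give the restored files back to the mysql user, then start the server
sudo chown -R mysql:mysql /var/lib/mysql
sudo service mysql start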

Conclusion: As we can see, a compact backup is helpful for saving disk space, but it also slows down the prepare process. For people who care more about disk space than recovery time, compact + xbstream (archive) can be the best solution. In the end, it's just a matter of what you need.

3 Comments
Guillaume Dievart

Hello,

I have tried a compact backup with a database that is 49G without compact and 29G with the compact option. But I stopped the prepare after an hour … I think it's too long, but it's true we save a lot of disk space!

(sorry for my english)

Raghavendra

@Guillaume,

Did you try with rebuild-threads (set to number of procs for instance) to see if that made the prepare faster?

Guillaume Dievart

@Raghavendra

I tried with use-memory but not with rebuild-threads. I will try that today.