If someone asks me for MySQL backup advice, my first question is whether they have LVM installed, or a system with a similar feature set for other operating systems. Veritas File System can do it for Solaris, and most SAN systems can as well.
What is really needed is the ability to create an atomic snapshot of the volume, which can later be mounted the same as the original file system.
Why are snapshot-based MySQL backups great?
There are a number of reasons:
Almost Hot Backup In most cases you can perform this type of backup while your application is running. There is no need to shut the server down, make it read-only or anything like that.
Support for all local disk based storage engines It works with MyISAM, Innodb and BDB. It should also work with the Solid, PrimeXT and Falcon storage engines.
Fast Backup You simply copy the files in binary form, so it is hard to beat in speed.
Low Overhead It is simply a file copy, so the overhead on the server is minimal.
Easy to Integrate Do you want to compress the backup? Back it up to tape, FTP or any network backup software? It is easy, as you just need to copy files.
Fast Recovery Recovery time is as fast as putting the data back plus standard MySQL crash recovery, and it can be reduced even further. More on this later.
Free No extra commercial tools such as Innodb Hot Backup are required to perform the backup.
Are there any downsides?
Need snapshot capability – this is the obvious one.
May need root access In some organizations the DBA and the System Administrator are different people from different departments, which might not like to trade access rights with each other.
Hard to predict downtime I mentioned this solution is often a hot backup, but the bad part is that it is hard to estimate when it is hot and when it is not – FLUSH TABLES WITH READ LOCK may take quite a while to complete on systems with long-running queries.
Problems with data on multiple volumes If you have logs on a separate device, or your database simply spans multiple volumes, you are in trouble, as you will not get a consistent snapshot across the whole database. Some systems may be able to do an atomic snapshot of many volumes.
Let's speak a bit about how LVM, and snapshotting in general, works. There are different implementations, but the goal of all of them is to provide you with a volume that consistently matches the state of the original volume at the time the snapshot is created. In LVM it is implemented as copy-on-write: a special storage area is allocated on the device where old versions of changed pages are stored. You can think of it as a simplified form of versioning, like in Innodb, if that is more familiar to you. In other cases a snapshot may be implemented by triple-mirroring, i.e. you have a RAID1 volume but with 3 copies of the data rather than 2, so you can move one device out of the mirror and use it as a snapshot while still keeping your data safe and secure.
There are two types of snapshots – some are read-only while others are read-write. Read-only snapshots may sound good enough, as you're only going to read the data anyway, but in reality read-write snapshots have a number of benefits. First, no extra handling is needed for journaling file systems – you can simply do journal recovery on the snapshot. With a read-only snapshot you need to make sure the filesystem synchronizes the device before the snapshot is taken, so no journal replay is needed.
The other benefit of a read-write snapshot is that you can actually start a MySQL Server on it and perform recovery, check tables, or do whatever else you might need to ensure your backup is consistent. Backing up a database which was already corrupted is a very nasty problem you want to avoid.
Let's now see what exactly you need to do to perform a backup of a MySQL database (or create a slave) using LVM2 on Linux.
1) Connect to MySQL and run FLUSH TABLES WITH READ LOCK
Note – this command may take a while to complete if you have long-running queries. The catch is that FLUSH TABLES WITH READ LOCK actually waits for all statements to complete, even selects. So be careful if you have any long-running queries. If you're using only Innodb tables and do not need to synchronize the binary log position with the backup, you can skip this step.
2) While holding the connection open, run: lvcreate -L16G -s -n dbbackup /dev/Main/Data – this will create a snapshot named dbbackup for the Logical Volume Main/Data. You should specify enough undo (copy-on-write) space to hold the modifications made during the backup process – I've specified 16GB in this case. If your undo space is not large enough, the snapshot will be invalidated and the backup aborted.
Sometimes you might run into errors on this step. The most common one I've recently seen is: snapshot: Required device-mapper target(s) not detected in your kernel – this means the snapshot module is not loaded in your kernel by default and you need to load it, which is done by running modprobe dm-snapshot
3) Now you have created the snapshot volume and could unlock the tables, but before that you should record the binary log position, which is done by running SHOW MASTER STATUS – this is the binary log position at which you'll need to point MySQL slaves created from this snapshot.
4) With the snapshot created, you now want to let the MySQL Server continue, which is done by running UNLOCK TABLES or simply closing the connection.
5) Mount the backup filesystem: mount /dev/Main/dbbackup /mnt/backup
6) Copy the data to the backup. Normally you can skip the slow query log and error log while taking the backup. You can also skip most of the binary logs – however, if some of your slaves are far behind, you might want to keep the last few binary logs just in case; or you can assume that in case of recovery from the backup you will need to re-initialize the slaves as well, and skip binary logs in your backup process.
7) Unmount the filesystem: umount /mnt/backup
8) Remove the snapshot: lvremove -f /dev/Main/dbbackup
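For reference, the eight steps above can be sketched as a single script. This is only a sketch, not a hardened tool: the volume names follow the examples in the post, while the destination directory and the RUN_BACKUP guard are assumptions added for illustration.

```shell
#!/bin/sh
# Sketch of steps 1-8 above. Volume/LV names follow the post's examples;
# DEST is a made-up destination path.
VG=Main
LV=Data
SNAP=dbbackup
SNAP_SIZE=16G
MNT=/mnt/backup
DEST=/backup/mysql-$(date +%Y%m%d)

do_backup() {
    mkdir -p "${MNT}" "${DEST}"
    # Steps 1-4: the whole session runs in one mysql process, so the
    # read lock is held across the lvcreate call; SHOW MASTER STATUS
    # output is saved for slave provisioning.
    mysql <<SQL > "${DEST}/master-status.txt"
FLUSH TABLES WITH READ LOCK;
\! lvcreate -L${SNAP_SIZE} -s -n ${SNAP} /dev/${VG}/${LV}
SHOW MASTER STATUS;
UNLOCK TABLES;
SQL
    # Steps 5-8: mount the snapshot, copy the data off, clean up.
    mount "/dev/${VG}/${SNAP}" "${MNT}"
    cp -a "${MNT}/." "${DEST}/"
    umount "${MNT}"
    lvremove -f "/dev/${VG}/${SNAP}"
}

# Destructive: only run when explicitly requested.
if [ "${RUN_BACKUP:-0}" = "1" ]; then
    do_backup
fi
```

The important detail is that FLUSH TABLES WITH READ LOCK and lvcreate happen inside the same mysql session, so the lock cannot be released early.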
If you want to create a slave based on such a snapshot, you need to perform a couple more simple steps:
9) Extract/copy the database to the slave's database directory.
10) Start the MySQL Server. Wait for it to perform recovery.
11) Use CHANGE MASTER TO to point the slave at the saved binary log position:
change master to master_host="master", master_user="user", master_password="password", master_log_file="host-bin.000335", master_log_pos=401934686;
12) Run START SLAVE to restart replication.
With a slightly modified process you can clone slaves from slaves without stopping them – you just need to use SHOW SLAVE STATUS instead of SHOW MASTER STATUS to find the appropriate binary log position. Be careful, however – cloning a slave from a slave also clones any inconsistencies in data which the slave may have accumulated – especially if you use slave_skip_errors or sql_slave_skip_counter. Cloning the master, you start from a consistent copy.
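The SHOW SLAVE STATUS variant can be sketched as follows. The helper function names are made up for illustration; the relevant fields are Relay_Master_Log_File and Exec_Master_Log_Pos, i.e. the master's coordinates as of the events the slave has actually executed.

```shell
#!/bin/sh
# Sketch: when cloning from a slave, read the executed-up-to position
# from SHOW SLAVE STATUS instead of SHOW MASTER STATUS. Helper names
# are illustrative, not part of any tool.

# Pull the two fields out of the vertical (\G) output.
slave_coords() {
    mysql -e 'SHOW SLAVE STATUS\G' |
        awk '/Relay_Master_Log_File:/ {file=$2}
             / Exec_Master_Log_Pos:/  {pos=$2}
             END {print file, pos}'
}

# Build the CHANGE MASTER TO statement for the new slave.
change_master_sql() {
    file=$1
    pos=$2
    echo "CHANGE MASTER TO master_log_file='${file}', master_log_pos=${pos};"
}
```

Note the pattern matches Exec_Master_Log_Pos specifically, not Read_Master_Log_Pos, which would point past events the slave has not applied yet.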
If you're interested in a ready-made script, you can try mylvmbackup by Lenz Grimmer.
I have the Innodb logs and the Innodb tablespace on different RAID volumes (RAID1 for the logs and RAID10 for the tablespace).
I'm creating two snapshots of the volumes during "flush tables with read lock", then I back up both snapshots.
Is it completely safe?
My test restores worked fine (even while backing up under load), but before the server goes live I prefer to be sure it will work as intended.
Any advice ?
Many Thanks,
Brice
Hmm, is it really safe to back up Innodb tablespaces this way?
I heard that it's not in a consistent state if the server is still running while doing the backup?
Brice,
No, it is not safe, because the logs and tablespace will be out of sync. Note that even though no statements can be run after FLUSH TABLES WITH READ LOCK, Innodb continues to do things in the background – purging, insert buffer merges, flushing dirty pages, etc.
You were lucky it worked, but I would not rely on it.
Thomas,
This is what LVM is for. If you simply copy the Innodb tablespace/logs while MySQL is running, you will get them in an inconsistent state and the backup will be unusable. LVM, however, creates a consistent snapshot – which works fine.
Thanks Peter,
I'll put the logs on the same device as the tablespace then, or try Innodb Hot Backup.
I just followed the general advice of separating the logs from the tablespace in order to maximize disk usage 😉
Brice,
Actually this is very common advice… but it is frequently wrong. Logs on a separate volume speed things up assuming you keep the same number of drives for the tablespace. But if you only have 6 drives, for example, you can either use all 6 for the mix, or 4 for data + 2 for the logs. If your log traffic is insignificant, the fact that you have 33% fewer hard drives for your tablespace reduces performance a lot.
Especially if you have a battery-backed cache on your RAID controller, or have innodb_flush_log_at_trx_commit=0/2, using the same volume may be the better idea.
How do you propose holding the connection to the database open while creating the snapshot, or is this backup process intended to be done manually?
When automated with a shell script and using something like…
mysql -u backup_user --password=somepassword --execute "flush tables with read lock"
The lock disappears as soon as the connection is closed (i.e. as soon as the command completes execution), which isn't very useful. shellsql seems to be a potential solution – has anyone used it?
And yes… I realise you shouldn't pass passwords as arguments or they'll show in ps auxw output.
Well, this just means you should not use plain shell to write the backup script, but something like Perl, so you can hold the connection open while executing other commands.
You can check an existing script – mylvmbackup – for inspiration.
Also, the password specified on the command line is actually wiped from the command line by the mysql tools, so that is not the problem; it is still not the most secure solution though 🙂
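For what it's worth, a session can also be held open from plain shell by feeding a single mysql process through a FIFO, though it is clumsier than Perl/DBI. This is only a sketch under the post's volume names, and the sleep is a crude stand-in for properly confirming the lock was acquired:

```shell
#!/bin/sh
# Sketch: keep one mysql session (and therefore its lock) alive across
# shell commands by writing statements into a FIFO attached to a single
# mysql process. The sleep is crude; a real tool (e.g. mylvmbackup)
# holds one persistent connection instead.

hold_lock_and_snapshot() {
    fifo="/tmp/mysql.$$.fifo"
    mkfifo "$fifo"
    mysql < "$fifo" &             # one long-lived session
    exec 3> "$fifo"               # keep the write end open

    echo "FLUSH TABLES WITH READ LOCK;" >&3
    sleep 2                       # crude: give the lock time to be taken
    lvcreate -L16G -s -n dbbackup /dev/Main/Data
    echo "UNLOCK TABLES;" >&3

    exec 3>&-                     # closing the FIFO ends the session
    rm -f "$fifo"
}
```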
Can anyone give me the list of commands I have to execute to replicate my database between the master and slave? I am finding it very difficult to take a backup of my data. Is there any alternate solution for this?
Check out a replication tutorial, like this one:
http://www.howtoforge.com/mysql_database_replication
But basically you have to have the master and slave in sync at a certain point in time to start replication.
I am using snapshots to back up my MySQL database on my SAN (LeftHand Networks).
It seems to be working fine, but I am not confident that I am getting all of the data in cache.
I am running InnoDB with binary logs, all stored on the same LUN/volume. The snapshot is then replicated to another SAN device. I then use another server to mount the replicated snapshot, and can run MySQL on it.
It works just fine, but I am afraid that when I really need it I will be missing data.
I do not run any type of flush cache statements.
I am also concerned that if I took the snapshot while someone is using the database it may cause corruption or data loss.
Do you have any advice?
Suppose a company using MySQL (open source) encounters a serious problem, and apparently the system is not being backed up properly. The manuals are studied without any help. What does one do?
Hi, how is this different from, say, mysqlhotcopy? Which is free and takes one line in a cronjob?
mysqlhotcopy is for MyISAM only, and requires the tables to be locked during the copy operation.
This kind of makes it difficult for people using raw InnoDB partitions, huh? 🙂
Right. I have not seen them give any serious performance improvement though, if your filesystem is not suffering from locking issues.
I've tried to back up my database with an LVM snapshot, but some terrible problems came my way~~~ 🙁
Here is my system information:
VMWare: CentOS 4.4, kernel 2.6.9-42 i686, 100MB disk for the snapshot. CPU: PM 1.6G, MEM: 256MB
Server: RH4 AS update 4, kernel 2.6.9-42 i686. 36GB*2 -> RAID 1 = 36GB as /, 36GB*4 -> RAID 0+1 = 72GB as the data partition. CPU: Xeon 3.2G*2, Mem: 4GB
I tried all the steps in my VMWare box first:
pvcreate /dev/sdb
vgcreate dbvolgrp /dev/sdb
lvcreate -L50M -ndbvol dbvolgrp
mkfs.ext3 /dev/dbvolgrp/dbvol
mount /dev/dbvolgrp/dbvol /mnt/data
~~copy some data to /mnt/data
modprobe dm-snapshot
lvcreate -L30M -s -ndbsnap /dev/dbvolgrp/dbvol
~~write some data to /mnt/data
mount /dev/dbvolgrp/dbsnap /mnt/snap
The snap data seems OK.
umount /mnt/snap
lvremove /dev/dbvolgrp/dbsnap
Everything goes OK~~~~~~~
Then I tried the same steps on my server box.
pvcreate /dev/cciss/c0d1p1
vgcreate dbvolgrp /dev/cciss/c0d1p1
lvcreate -L50G -ndbvol dbvolgrp
mkfs.ext3 /dev/dbvolgrp/dbvol
mount /dev/dbvolgrp/dbvol /mnt/data
~~copy some data to /mnt/data
modprobe dm-snapshot
lvcreate -L10G -s -ndbsnap /dev/dbvolgrp/dbvol
~~write some data to /mnt/data
mount /dev/dbvolgrp/dbsnap /mnt/snap
The snap data seems OK.
umount /mnt/snap
All steps above completed successfully, but the headache came with the following…
lvremove /dev/dbvolgrp/dbsnap
The snapshot couldn't be removed. The system didn't even print any message; I couldn't stop the command with CTRL+C, kill, or kill -9.
I/O wait climbed gradually, reaching about 75% an hour or so later. Finally, I had to reboot the system.
Some other friends have met the same problem. Why does this happen?
Get CentOS 4.5 first.
With earlier CentOS releases we've seen various issues on snapshot removal, including kernel crashes.
Thank you Peter, I'll try CentOS 4.5 and post the result here later~
CentOS 4.5 works. Thank you!
I am using CentOS 5.1 – works fine. I use LVM snapshots with Zmanda Recovery Manager, which I think does some of the things mentioned in this blog.
Hi Guys;
I'm very new to this. My only query – how much space will this LVM snapshot take? Is it equal to the space used by the MySQL data dir?
sidh4u: Nope, the snapshot does not require the same amount of space. The requirement depends on how much data changes on the original volume while the snapshot is active. The snapshot size you provide when running "lvcreate -s" is the maximum size the snapshot can grow to before it is discarded.
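Following on from that: you can watch how much of the copy-on-write area is in use while the backup runs, and even grow it on the fly. A sketch, using the post's volume names; the lvs field is data_percent on current LVM releases (older ones display it as the Snap% column), and the 80% threshold and +4G step are arbitrary choices:

```shell
#!/bin/sh
# Sketch: monitor the snapshot's copy-on-write usage and grow it before
# it fills up and gets invalidated.

snap_usage() {
    # Prints the percentage of the COW area in use, e.g. "12.52".
    lvs --noheadings -o data_percent /dev/Main/dbbackup | tr -d ' '
}

grow_if_needed() {
    # Shell only does integer compares, so drop the decimals first.
    pct=$(snap_usage | cut -d. -f1)
    if [ "${pct:-0}" -ge 80 ]; then
        lvextend -L+4G /dev/Main/dbbackup
    fi
}
```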
Hey guys.
What do you think about my little script? It seems to work quite nicely, but I'm not 100% sure. It is suited to master/slave situations (lock, master-status, unlock).
#!/bin/sh
FMT_DATE=`date +%Y-%m-%d_%Hh%Mm`
HOST=127.0.0.1
BACKUPDIR=somedir
DATADIR=somedatadir
PASSWORD=$1
SNAPSHOT_SIZE=1G
SNAPSHOT_LVNAME=dbbackup
LVNAME=thelvname
VGNAME=thevgname
TMP_DIR=/tmp/mysql
MNT_DIR=/mnt/backup
_temporary_file="/tmp/$$.sql"
cat <<EOF > ${_temporary_file}
FLUSH TABLES WITH READ LOCK;
\! /usr/local/mysql/bin/mysql --user=root --password=${PASSWORD} --host=${HOST} -e "show master status" > ${BACKUPDIR}/masterpos-${FMT_DATE}.txt
\! /sbin/lvcreate --size ${SNAPSHOT_SIZE} --snapshot --name ${SNAPSHOT_LVNAME} ${LVNAME}
UNLOCK TABLES;
EOF
/usr/local/mysql/bin/mysql --user=root --password=$PASSWORD --host=${HOST} < ${_temporary_file}
# [... mount/copy/umount section lost to WordPress trimming ...]
        echo 1>&2 $"ERROR: Can not unmount snapshot."
    fi
else
    echo 1>&2 $"ERROR: Can not mount snapshot."
fi
fi
Fuck, it is totally trimmed by wordpress…
We run MySQL on Windows servers. We use an external NAS that snapshots an image of the volume onto other Windows servers. We mount that image in a VMware-like (VirtualBox) environment as a live environment if the server were to crash. Since we are running MySQL on a Windows server, can we use the same FLUSH TABLES WITH READ LOCK and UNLOCK TABLES commands to work with our snapshot (StorageCraft) software and directly mount this image as a failover for MySQL instead of using LVM? Also, what command do we use to hold the lock until the snapshot is finished – or will it stay read-only until UNLOCK TABLES is run?
A crazy question w/r/t LVM snapshots on a MyISAM slave: if updates via replication are not cached (further, they invalidate the cache?), then is it really necessary to "flush tables with read lock", or will "stop slave" be sufficient?
Does the same apply to buffers?
I.e., we are already OK with running MyISAM tables at the cost of locking during the flush in the middle of a long select – is it really worth doing?
Peter,
It seems that master_log_pos (the Position returned by SHOW MASTER STATUS) is always equal to the file size of the binlog file. At least that's what my tests reveal on a MyISAM/InnoDB hybrid system. Is this always consistent? If so, then you should be able to create a slave from a master snapshot without the information given by SHOW MASTER STATUS. Why does this matter? If your nightly backup doesn't capture the master status, that shouldn't prevent you from using it to create a slave.
Here is a transcript…
/data/dev/mysql# mysql << MYSQL
> FLUSH TABLES WITH READ LOCK;
> SHOW MASTER STATUS;
> \! ls -lart mysql-bin*|tail -n1
> UNLOCK TABLES;
> MYSQL
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.000156 956920476
-rw-rw---- 1 mysql mysql 956920476 Aug 16 00:23 mysql-bin.000156
/data/dev/mysql# mysql << MYSQL
> FLUSH TABLES WITH READ LOCK;
> SHOW MASTER STATUS;
> \! ls -lart mysql-bin*|tail -n1
> UNLOCK TABLES;
> MYSQL
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.000156 957115478
-rw-rw---- 1 mysql mysql 957115478 Aug 16 00:23 mysql-bin.000156
/data/dev/mysql# mysql << MYSQL
> FLUSH TABLES WITH READ LOCK;
> SHOW MASTER STATUS;
> \! ls -lart mysql-bin*|tail -n1
> UNLOCK TABLES;
> MYSQL
File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.000156 957218874
-rw-rw---- 1 mysql mysql 957218874 Aug 16 00:23 mysql-bin.000156
Richard,
Indeed, you can use the log file size. Moreover, if you have Innodb-only tables you do not have to do FLUSH TABLES WITH READ LOCK at all. Unless you do DDL or modify mysql.* tables, you should be fine.
Peter,
In post #36 you make a quick "add on" statement that I want to confirm, because I think it's significant.
So if you are "sure" you don't have any MyISAM tables, don't do DDL, and don't modify the mysql.* MyISAM tables, then "flush tables with read lock" is not needed? But if someone creates a couple of MyISAM tables that you don't know about, then those tables "may" not be consistent, right?
If that case occurred in Oracle, the instance would not start after doing a restore. But is this the case in MySQL? What is the worst case in MySQL? Does it just mean there may be missing records in those MyISAM tables? Or could there be corrupted data that would cause database problems? In Oracle, the instance won't even start if the header "timestamps" don't match on all the tablespaces.
I just want to be really clear on this! What does “inconsistent” mean for “user” (non mysql.*) tables?
Thanks,
Mike
Mike,
You are correct. However, the worst case is that the MyISAM tables would be corrupt (more likely just their indexes) and would need repairing. There could be missing records or half-written records, depending on when you take the snapshot. And MySQL will start just fine, since a MyISAM table is "just a file" 😉
I mean, you could do this: "touch /var/lib/mysql/test/thisisnotatable.{MYI,MYD,frm}" and start MySQL, and it would start fine; SHOW TABLES would show test.thisisnotatable, but accessing it would cause an error.
Peter,
Thanks for the quick response!! Great resource here!
By the way, I have your book “High Performance MySQL 2nd Edition” in the bookshelf within reach. Definitely the best MySQL book out there presently!
Thanks again!!
It was Baron who replied to you, but thanks anyway. We're both authors, and I bet we're both glad to hear you like the book.
Trying to decide if I want to move away from innodb hot backup.
1) I saw that I can skip the FLUSH TABLES WITH READ LOCK for my Innodb tables, but I also noticed the warning:
“If you’re using only Innodb tables and do not need to synchronize binary log position with backup you can skip this step.”
If I want to use that backup to restore a DB for use as a replication slave, will that work? Will the binary log position sync mentioned above create a problem?
2) I have a mix of MyISAM and Innodb, but mostly Innodb. Is there a workaround so I don't have to run FLUSH TABLES WITH READ LOCK?
modprobe dm-snapshot
saved my life.
Thanks
Frank
If I want to use LVM to back up to S3, and the total ibdata1 file size is >50GB, how can I back up the DB to S3 with LVM, without EBS?
@Winsom
Look at Lenz Grimmer's mylvmbackup script. As I recall it compresses the data, but you'll probably still need to cut the resulting file into 5 GB pieces to store it in S3. Use s3cmd to upload the file(s).
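The cutting-and-uploading part could look like the following sketch. The bucket name and paths are made up; it assumes the snapshot is already mounted at /mnt/backup, and 5 GB reflects the single-object limit S3 had at the time:

```shell
#!/bin/sh
# Sketch: stream a mounted snapshot into 5 GB pieces and push them to
# S3 with s3cmd. Bucket and paths are assumptions.

BUCKET=s3://my-backup-bucket
STAMP=$(date +%Y%m%d)

make_pieces() {
    # Produces backup-YYYYMMDD.tar.gz.aa, .ab, ... in the current dir.
    tar czf - -C /mnt/backup . | split -b 5120m - "backup-${STAMP}.tar.gz."
}

upload_pieces() {
    for piece in backup-"${STAMP}".tar.gz.*; do
        s3cmd put "$piece" "${BUCKET}/${piece}"
    done
}

# Restore side: fetch the pieces, then reassemble with
#   cat backup-YYYYMMDD.tar.gz.* | tar xzf - -C /restore/dir
```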
Great resource for replication setup. Let me share a simple, clear way to do hot backups with LVM – I'm using this even for complete system backups: http://bit.ly/aFKoEB
Hi everyone,
Following this blog we arrived at the conclusion to use LVM snapshots on the slave servers to prevent the slaves from being used excessively.
Our situation: our analytics team issues heavy raw MySQL queries that run for hours, in some cases days, on our replicated slave servers. As you can imagine, performance goes for a toss, which causes the slaves to slow down and fall out of sync with the master. Some of these slaves run on LVMs which have already occupied most of the file system space in their volume group. Taking a mysqldump is a long process and we can't afford slave downtime, so we need to pursue other avenues, such as an LVM snapshot. I can easily lock the MySQL tables, take an LVM snapshot with lvcreate in a few seconds, and then release the lock.
Now that I have the snapshot of the original volume – say of 1 TB – one way to use it is to mount the snapshot and take a backup of the 1 TB of data, say with tar, which again takes a lot of time; it then becomes even more challenging to store that tar backup elsewhere, especially when you don't have space on the same server.
What are the options for using that snapshot, and how? The purpose is to use the snapshot (or the tar backup) with another MySQL instance, so the analytics team can run their queries against it; as long as they query the copy on another volume it won't impact our performance, since it has nothing to do with the slave.
Can we also directly mount the snapshot as read-only and configure MySQL to use it – will that work? And will it impact the original volume in any way?
What other solutions to this mess, apart from mysqldump, haven't I thought about?
Tariq
Hey Peter,
thank you for this great article! 🙂
Very interesting article. Is there a way to implement this with the database shut down, to ensure that the files don't get corrupted? I've been trying to figure out a more reliable way to keep the master and slave in sync without any little hiccups while replicating, or to get them to match initially.
@tariq
Why not buy a cheap NAS of, say, 5 TB and mount it on a MySQL slave? Then you have oodles of space for online historical backups. You could also share it as an NFS export to all the other MySQL slaves. There is also a utility called "dar" (disk archive) as an alternative to a tarball. Just had a thought: maybe this idea could be combined with rsnapshot for space-efficient backups using hard links.
Peter seemed to suggest that a read-write snapshot can be mounted directly, and then you can let InnoDB recovery do its thing. The only way to be sure is to experiment in a safe environment first.
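That experiment could be sketched as follows: mount the snapshot and start a second mysqld on it with its own port, socket and pid file so it does not clash with the production instance. All paths, the port and the socket here are made-up examples, and the snapshot must be read-write, since InnoDB writes during crash recovery.

```shell
#!/bin/sh
# Sketch: run a throwaway MySQL instance off a read-write LVM snapshot
# so analytics queries hit the copy, not the live slave. Paths, port
# and socket are illustrative assumptions.

start_snapshot_instance() {
    mount /dev/Main/dbbackup /mnt/snap
    # InnoDB crash recovery runs against the snapshot copy on startup.
    mysqld_safe \
        --datadir=/mnt/snap/mysql \
        --port=3307 \
        --socket=/tmp/mysql-snap.sock \
        --pid-file=/tmp/mysql-snap.pid &
}

# Clients would then connect with:
#   mysql --socket=/tmp/mysql-snap.sock
```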
Peter, thanks for sharing. This saved my day working on a 1 TB database.
Hi Peter
Thanks for a great article. I must admit I am new to LVMs.
I assume you have to make the very first full backup when you start, and each snapshot then holds the difference since that backup or the last snapshot.
Questions:
How do you apply a new snapshot to your existing backup?
How does the LVM system know when you made that first backup, since that would be done with tar, rsync etc.?
Hi Peter,
I am very new to snapshot technology, so I am worried about the space taken by the snapshot on the master as well as the slave, because I have 1 TB of databases on a 1 TB hard disk – do I need to configure some extra disk to get space for the snapshot?
Please clarify!
Thanks in advance.