I occasionally see customers taking backups from their PXC clusters who complain that the cluster “stalls” during the backup.  As I wrote about in a previous blog post, often these stalls are really just Flow Control.  But why would a backup cause Flow Control?

Most backups I know of (even Percona XtraBackup) take a FLUSH TABLES WITH READ LOCK (FTWRL) at some point in the backup process.  This can be disabled in XtraBackup (in certain circumstances), but it is enabled by default.

If you go to your active cluster right now and execute an FTWRL (don’t actually do this in production!), you’ll see a message along these lines in the error log on that node (the UUID and seqno will be your own):
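[Note] WSREP: Provider paused at <cluster-uuid>:<seqno>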

This indicates that Galera is unable to apply writes on the local node.  This by itself does not indicate Flow Control, but Flow Control is likely if the pause lasts too long.  Once the lock is released, we get a message that Galera is at work again, something like:
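[Note] WSREP: resuming provider at <seqno>
[Note] WSREP: Provider resumed.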

During this interval (9 seconds in this case), the wsrep_local_recv_queue was backing up on this node and could cause Flow Control, depending on how the fc_limit and other settings are configured.  I talk about how to tune Flow Control in my other post, but what we really want is for Flow Control not to be in effect on this one specific node for the duration of our backup.
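You can watch that queue grow on the node holding the lock:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';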

Astute Galera users know that a Donor during SST does not trigger Flow Control, even though it may fall far behind the rest of the cluster.  What if we could manually make a node act like a Donor for the purposes of a backup?  It turns out we now can.

Starting with PXC 5.5.33, a new variable called ‘wsrep_desync’ has been added.  This allows us to manually toggle a node into and out of the ‘Donor/Desynced’ state.  The Donor/Desynced state is nothing magical: it really just turns off Flow Control and allows the node to fall arbitrarily far behind the rest of the cluster, but only when something forces it to.  That could be an FTWRL, but also anything else that makes the node lag, such as heavy disk utilization.

So, I can set Desync like this:
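mysql> SET GLOBAL wsrep_desync=ON;
Query OK, 0 rows affected (0.00 sec)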

When I do that, I can see the node drop into the ‘Donor/Desynced’ state:
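mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
+---------------------------+----------------+
| Variable_name             | Value          |
+---------------------------+----------------+
| wsrep_local_state_comment | Donor/Desynced |
+---------------------------+----------------+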

However, notice that my wsrep_local_recv_queue is still empty, and Flow Control is apparently not in effect; myq_status agrees with this.  The raw counters tell the same story:
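mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| wsrep_local_recv_queue | 0     |
+------------------------+-------+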

Moving to the Donor/Desynced state does not force the node to fall behind; it just allows it to without triggering Flow Control.  Now, let’s take an FTWRL on node3 and observe:
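mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.00 sec)

(while the lock is held, wsrep_local_recv_queue on this node climbs steadily)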

My FC settings on this node are the defaults (on this release, gcs.fc_limit = 16 and gcs.fc_factor = 1.0; check wsrep_provider_options on yours):
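mysql> SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G
*************************** 1. row ***************************
Variable_name: wsrep_provider_options
        Value: ...; gcs.fc_factor = 1.0; ...; gcs.fc_limit = 16; ...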

and yet Flow Control has not kicked in across the cluster.

So if I am taking my backup on this node, I know Flow Control should not kick in because of it.  FTWRL may not be the only reason replication lags here, either; the resource utilization of taking the backup itself could also allow the queue to grow high enough to cause FC.

Either way, once I’m done, I release the lock and I can immediately see the queue start to drop:
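mysql> UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';

(repeat the SHOW and watch the value fall back toward 0)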

But, what about wsrep_desync? Should I turn it off immediately, or wait for the queue to drop?

No need to wait!  Turning off wsrep_desync keeps the node in the Donor/Desynced state until it drops back down below the FC limit.  This means you can turn it off right away and Galera will do the right thing, letting the node catch up first before moving it back to ‘Synced’ and allowing FC to be active again.  Turning it back off is simply:
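mysql> SET GLOBAL wsrep_desync=OFF;
Query OK, 0 rows affected (0.00 sec)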

EDIT: Actually, turning off wsrep_desync will move the node to the JOINED state; note you can see that happen at time 13:51:11 above, though in reality it happened as soon as we toggled wsrep_desync and the event had to wait in the queue.  In this state the node will send Flow Control messages in a limited fashion to help it catch up faster.  If you want to let the node catch up naturally without causing Flow Control, then leave wsrep_desync ON until wsrep_local_recv_queue is back down to 0.

This is all well and good, but how can your HA solution deal with a node in this state?  Well, it should do the same thing you do when a node is a regular Donor: take it out of rotation!  Every SST method currently has some FTWRL, and a node in the Donor/Desynced state may (or may not) be far behind, so it’s safest to ensure these nodes are taken out of rotation.  Note that the clustercheck script that ships with the PXC server gives you an option to report the node as ‘down’ when it is in a Donor state.  This should allow you to easily integrate this feature with HAProxy or similar HA solutions for Percona XtraDB Cluster.
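For example, in the versions of the stock clustercheck I have seen, the third argument is AVAILABLE_WHEN_DONOR and it defaults to 0:

/usr/bin/clustercheck clustercheckuser clustercheckpassword! 0

With that argument at 0, a node in the Donor/Desynced state answers the HTTP check with a 503 and HAProxy takes it out of the pool.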

In summary, this should make it easy to write your backup scripts: just turn on wsrep_desync at the start, turn it off when you are done (or, per the EDIT above, once the queue drains), and Galera will handle the rest.  Happy backups!
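For instance, here is a minimal sketch of such a wrapper.  It assumes the mysql client can authenticate without prompting, uses innobackupex as a stand-in for whatever backup tool you prefer, and follows the EDIT above by waiting for the queue to drain before resyncing:

#!/bin/bash
# Sketch of a desync-aware backup wrapper -- adapt before using in production.
mysql -e "SET GLOBAL wsrep_desync=ON"
# Guard against forgetting to turn it back off (see Alex's comment below)
trap 'mysql -e "SET GLOBAL wsrep_desync=OFF"' EXIT

innobackupex /backups

# Let the node catch up naturally before re-enabling Flow Control
until [ "$(mysql -BN -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue'" | awk '{print $2}')" -eq 0 ]; do
    sleep 1
done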

4 Comments
Alex

Hi Jay!
One thing to note is that wsrep_desync is a global variable, and a number of considerations didn’t allow us to make it “thread-safe”, meaning that the first client to set wsrep_desync=OFF will disable it globally, no matter how many clients requested it. This is also true for a node that is actually donating an SST at the moment.
Also, a node in this state can’t be chosen as a donor, which may or may not be a good thing, depending on your intentions. Forgetting to unset it may lead to undesirable situations, so this option should be used with care. Misuse won’t cause data loss, but it may cause unexpected cluster stalls.

Amol

Wow, I found this link a day too late… we were in a tricky situation: the cluster had recovered, but all the slave replication was broken on node3, and this trick would have saved us what we ended up doing manually. Basically, we shut down node3, leaving node1 and node2 in the cluster, and before shutting down node3 we took a “show master status” on it for replication. Once it was shut down, we moved/rsynced the entire mysql folder to a slave machine, brought up the database there, and then started node3 back up… (as node3 was shut down cleanly and kept its grastate, it just started with a small amount of IST)
but we were able to get the slaves up and running without the downtime that xtrabackup or mysqldump would have caused on the cluster… this variable would definitely have helped..

Thanks once again for the link

Moody

Great article Jay, worked perfectly for me

alam

Here is a situation…
I have a 3-node PXC 5.7 cluster running with writes going to a single node at a time. I am taking backups from node3 and making sure the wsrep_desync value is adjusted before and after the script runs, while also watching the queue. All good so far.

Now my master crashes due to a power failure, and HAProxy does a good job moving writes to node2; so far so good. But then my backup script kicks in on node3, sets wsrep_desync to ON, and wsrep_local_state_comment says it is Donor/Desynced. I was expecting the cluster to crash, as I am not sure quorum should exist in such a situation, given there were 20k queries running on the cluster while all of this happened.

But in my test the cluster kept functioning, and after a while I set desync back to OFF and node3 synced with the group.

My question is: how is quorum maintained in this situation? I was expecting the cluster to crash as soon as the backup kicked in.