July 31, 2014

Finding a good IST donor in Percona XtraDB Cluster 5.6

Gcache and IST

The Gcache is a memory-based cache of recent Galera transactions that is local to each node in a cluster.  If a node leaves and rejoins the cluster, it can use the gcache from another node that stayed in the cluster (i.e., its donor node) to fetch the transactions it missed (IST) as opposed to doing a full state snapshot transfer (SST).  However, there are a few nuances that are not obvious to the beginner:

  • The Gcache is lost when a node restarts
  • The Gcache is a fixed size and implemented as an LRU.  Once it is full, older transactions roll off (see the config sketch after this list).
  • Donor selection is made regardless of the gcache state
  • If the given donor for a restarting node doesn’t have all transactions needed, a full SST (read: full backup) is done instead
  • Until recent developments, there was no way to tell what, precisely, was in the Gcache.
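
A note on that fixed size: the gcache is sized through the wsrep_provider_options setting (by default it is only 128M).  A minimal my.cnf sketch, where the 1G figure is purely illustrative:

    # /etc/my.cnf on each node -- 1G is an example value, not a recommendation;
    # size the gcache to cover the longest node outage you want to survive via IST
    [mysqld]
    wsrep_provider_options = "gcache.size=1G"

(If you already set other provider options, they all go into this one quoted, semicolon-separated string.)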

So, with (somewhat) arbitrary donor selection, it was hard to be certain that a node restart would not trigger a SST.  For example:

  • A node crashed over night or was otherwise down for some length of time.  How do you know if the gcache on any node is big enough to contain all the transactions necessary for IST?
  • If you brought two nodes in your cluster back up at around the same time, the second one you restart might select the first one (whose gcache was just wiped) as its donor and be forced to SST.

Along comes PXC 5.6.15 RC1

Astute readers of the PXC 5.6.15 release notes will have noticed this little tidbit:

New wsrep_local_cached_downto status variable has been introduced. This variable shows the lowest sequence number in gcache. This information can be helpful with determining IST and/or SST.

Until this release there was no visibility into any node’s Gcache or what was likely to happen when restarting a node.  You could make some assumptions, but now it is a bit easier to:

  1. Tell if a given node would be a suitable donor
  2. And hence select a donor manually using wsrep_sst_donor instead of leaving it to chance (a rough sketch of this check follows).
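
As a rough sketch of that check (the status variables below are real wsrep ones; the comparison rule is my reading of when IST is possible, not something the server reports directly):

    -- On each candidate donor:
    SHOW GLOBAL STATUS LIKE 'wsrep_local_cached_downto';

    -- On the joiner (if it is still running; otherwise see the grastate.dat /
    -- --wsrep_recover discussion further down):
    SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';

    -- Rule of thumb: a donor can serve IST when
    --     donor's wsrep_local_cached_downto  <=  joiner's seqno + 1
    -- otherwise that donor forces a full SST.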

 

What it looks like

Suppose I have a 3 node cluster where load is hitting node1.  I execute the following in sequence:

  1. Shut down node2
  2. Shut down node3
  3. Restart node2

At step 3, node1 is the only viable donor for node2.  Because our restart was quick, we can have some reasonable assurance that node2 will IST correctly (and it does).

However, before we restart node3, let’s check the oldest transaction in the gcache on nodes 1 and 2:
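
(The original output is not reproduced here; using the seqnos from the table further down, the check looked roughly like this.)

    -- on node1:
    SHOW GLOBAL STATUS LIKE 'wsrep_local_cached_downto';
    --   => wsrep_local_cached_downto: 889703

    -- on node2:
    SHOW GLOBAL STATUS LIKE 'wsrep_local_cached_downto';
    --   => wsrep_local_cached_downto: 1050151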

So we can see that node1 has a much more “complete” gcache than node2 does (i.e., a much lower oldest cached seqno). Node2’s gcache was wiped when it restarted, so it only has transactions from after its restart.

To check node3’s GTID, we can either check the grastate.dat, or (if it has crashed and the grastate is zeroed) use --wsrep_recover:
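
For illustration (the seqno matches the table below; the uuid and datadir path are placeholders):

    # Option 1: read node3's last committed seqno from its grastate.dat
    cat /var/lib/mysql/grastate.dat
        # GALERA saved state
        version: 2.1
        uuid:    <cluster-state-uuid>
        seqno:   1039191

    # Option 2: if node3 crashed and the seqno above shows -1, recover the
    # position instead (the recovered GTID is written to the mysqld error log)
    mysqld_safe --wsrep-recover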

So, armed with this information, we can tell what would happen to node3, depending on which donor was selected:

Donor selected | Donor’s gcache oldest seqno | Node3’s seqno | Result for node3
node2          | 1050151                     | 1039191       | SST
node1          | 889703                      | 1039191       | IST

So, we can instruct node3 to use node1 as its donor on restart with wsrep_sst_donor:
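
Something along these lines, assuming node1 here is the donor’s wsrep_node_name:

    # one-off at startup (RPM packages only -- see the note below):
    service mysql start --wsrep_sst_donor=node1

    # or persistently, in node3's my.cnf:
    [mysqld]
    wsrep_sst_donor = node1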

Note that passing mysqld options on the command line like this is only supported by the RPM packages; on Debian you must put that setting in your my.cnf.  We can see from node3’s log that it does properly IST.

Sometime in the future, this may be handled automatically during donor selection, but for now it is very useful that we can at least see the status of the gcache.

About Jay Janssen

Jay joined Percona in 2011 after 7 years at Yahoo working in a variety of fields including High Availability architectures, MySQL training, tool building, global server load balancing, multi-datacenter environments, operationalization, and monitoring. He holds a B.S. in Computer Science from Rochester Institute of Technology.

Comments

  1. Jay,

    Great information.

    I wonder about another topic: if we have multiple nodes starting SST at the same time, are they going to use a single node or multiple nodes in the cluster as donors, or is there no guarantee?

  2. Peter,
    The rules as I understand them are thus:

    - A given donor can only donate to one joiner at a time.
    - Multiple joiner/donor pairs can exist in the same cluster as long as there are available donors (i.e., two joining nodes can SST from two different existing nodes simultaneously)
    - If there are no available donors (i.e., they are all busy doing other donations), the joiner blocks until one becomes available. There may be some timeout in play here.

  3. I believe the above rules hold true for both IST and SST.

  4. Alex says:

    Yes, for the moment it holds for both SST and IST. Although for IST we should be able to relax that.

  5. Rick James says:

    Perhaps Galera could (should) do the steps you suggest automagically?

    For the messy case you mentioned, do something like this… For each possible donor, estimate how long it would take to finish the IST/SST.
    * Donor available and IST possible: estimate amount to transfer.
    * Donor available and SST required: estimate how long SST would take.
    * Machine is busy: Estimate how long before it will finish with the current transfer, then add on what it would take (IST/SST) to do the desired transfer.
    Then pick the one that would finish the task fastest. So, in your example… if the IST is deemed significantly faster than SST, it should decide to wait for node1 to finish the first IST, then do a second IST.

  6. Alex says:

    He-he, “estimate”… ;)

    However, in the great majority of cases IST is way better than SST, if for nothing else then for having the least impact on the donor. And several concurrent ISTs from a single donor are possible, whereas there can be only one SST at a time. So we are working on it.
