April 23, 2014

Useful MySQL 5.6 features you get for free in PXC 5.6

I get a lot of questions about Percona XtraDB Cluster 5.6 (PXC 5.6), specifically about whether a given MySQL 5.6 Community Edition feature is in PXC 5.6.  The short answer is: yes, all features in community MySQL 5.6 are in Percona Server 5.6 and, in turn, are in PXC 5.6.  Whether a given feature is useful in PXC 5.6 really depends on how well it works with Galera in general.

I thought it would be useful to highlight a few features and try to show them working:

Innodb Fulltext Indexes

Yes, FTS works in Innodb in 5.6, so why wouldn’t it work in PXC 5.6?  To test this I used the Sakila database, which contains a single table (film_text) with a FULLTEXT index.  In the sakila-schema.sql file, it is still designated a MyISAM table:
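For reference, the FULLTEXT table in question looks roughly like this in sakila-schema.sql (abbreviated here; triggers and comments omitted):

```sql
-- film_text is the only FULLTEXT table in the Sakila schema.
CREATE TABLE film_text (
  film_id SMALLINT NOT NULL,
  title VARCHAR(255) NOT NULL,
  description TEXT,
  PRIMARY KEY (film_id),
  FULLTEXT KEY idx_title_description (title, description)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```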

I edited that file to change MyISAM to Innodb, then loaded the schema and data into my 3-node cluster:
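A sketch of the edit and load, assuming the Sakila files are local and node1 is a member of the cluster (host names are placeholders):

```shell
# Switch the engine in place, then load schema and data through one node;
# Galera replicates the DDL and data to the other nodes.
sed -i 's/ENGINE=MyISAM/ENGINE=InnoDB/g' sakila-schema.sql
mysql -h node1 < sakila-schema.sql
mysql -h node1 < sakila-data.sql
```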

and it works seamlessly:

Sure enough, I can run this query on any node and it works fine:
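The query is along these lines (search terms are illustrative):

```sql
-- Works identically on any node in the cluster.
SELECT film_id, title
FROM sakila.film_text
WHERE MATCH(title, description) AGAINST('drama teacher');
```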

There might be a few caveats and differences between how FTS works in Innodb and MyISAM, but it is there.

Minimal replication images

Galera relies heavily on row-based replication (RBR) events, but until 5.6 those events contained entire row copies, even if you only changed a single column in the table. In 5.6 you can change this to send only the updated data by setting binlog_row_image=minimal.
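The setting is dynamic, but for it to survive restarts it belongs in the config on every node, something like:

```ini
# my.cnf on each node; can also be applied live with
# SET GLOBAL binlog_row_image = 'minimal';
[mysqld]
binlog_row_image = minimal
```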

Using a simple sysbench update test for 1 minute, I can determine the baseline size of the replicated data:
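One way to measure this is Galera's wsrep_replicated_bytes status counter before and after the run. A sketch with sysbench 0.4-style options (the exact options here are illustrative, not the ones I used; adjust for your sysbench version):

```shell
# Baseline counter before the run
mysql -h node1 -e "SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes'"

# 1 minute of simple UPDATEs against one node
sysbench --test=oltp --oltp-test-mode=nontrx --oltp-nontrx-mode=update_key \
         --mysql-host=node1 --mysql-db=test \
         --max-time=60 --max-requests=0 run

# Read the counter again; the delta is the replicated volume
mysql -h node1 -e "SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes'"
```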

This results in 62.3 MB of data replicated in this test.

If I set binlog_row_image=minimal on all nodes and do a rolling restart, I can see how this changes:

This yields a mere 13.4MB, roughly 80% smaller: quite a savings!  This benefit, of course, depends entirely on the types of workloads you are doing.

Durable Memcache Cluster

It turns out this feature does not work properly with Galera, see below for an explanation:

5.6 introduces a memcached interface for Innodb.  This means any standard memcache client can talk to our PXC nodes via the memcache protocol, and the data is:

  • Replicated to all nodes
  • Durable across the cluster
  • Highly available
  • Easy to hash memcache clients across all servers for better cache coherency

To set this up, we simply need to load the innodb_memcache schema from the example and restart the daemon to get a listening memcached port:
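The setup is roughly as follows on each node (the path to the example schema varies by distribution):

```shell
# Load the innodb_memcache demo schema shipped with the server,
# then enable the plugin, which starts memcached listening on port 11211.
mysql < /usr/share/mysql/innodb_memcached_config.sql
mysql -e "INSTALL PLUGIN daemon_memcached SONAME 'libmemcached.so'"
```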

This all appears to work and I can fetch the sample AA row from all the nodes with the memcached interface:
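The demo schema seeds a single row keyed AA, so a check like this against each node should return it (host names are placeholders):

```shell
# Fetch the sample row over the memcached protocol from every node.
printf 'get AA\r\nquit\r\n' | nc node1 11211
printf 'get AA\r\nquit\r\n' | nc node2 11211
printf 'get AA\r\nquit\r\n' | nc node3 11211
```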

However, if I try to update a row, it does not seem to replicate (even if I set innodb_api_enable_binlog):

So unfortunately the memcached plugin must use some backdoor to Innodb that Galera is unaware of. I’ve filed a bug on the issue, but it’s not clear if there will be an easy solution or if a whole lot of code will be necessary to make this work properly.

In the short-term, however, you can at least read data from all nodes with the memcached plugin as long as data is only written using the standard SQL interface.

Async replication GTID Integration

Async GTIDs were introduced in 5.6 in order to make CHANGE MASTER easier.  You have always been able to use async replication from any cluster node, but now with this new GTID support, it is much easier to failover to another node in the cluster as a new master.

We take one node out of our cluster to be a slave and enable GTID binary logging on the other two by adding these settings:
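The settings in question are along these lines (server_id must be unique per node; in 5.6, gtid_mode=ON also requires binary logging, log_slave_updates, and enforce_gtid_consistency):

```ini
# my.cnf additions on the two remaining cluster nodes
[mysqld]
server_id                = 1          # unique per node
log_bin                  = mysql-bin
log_slave_updates        = 1
gtid_mode                = ON
enforce_gtid_consistency = 1
```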

If I generate some writes on the cluster, I can see GTIDs are working:
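The check is simply:

```sql
-- On each binlogging node.  Executed_Gtid_Set should show the same
-- transaction count on both, even if File/Position differ.
SHOW MASTER STATUS\G
```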

Notice that we’re at GTID 1505 on both nodes, even though the binary log position happens to be different.

I set up my slave to replicate from node1 (.70.2):
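A sketch of the slave setup, assuming a 192.168.70.x network (as the .70.2 shorthand suggests) and hypothetical replication credentials; MASTER_AUTO_POSITION=1 is what the new GTID support buys us:

```sql
-- On the slave; no binlog file/position needed with GTIDs.
CHANGE MASTER TO
  MASTER_HOST = '192.168.70.2',
  MASTER_USER = 'repl',           -- hypothetical user
  MASTER_PASSWORD = 'replpass',   -- hypothetical password
  MASTER_AUTO_POSITION = 1;
START SLAVE;
```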

And it’s all caught up.  If I put some load on the cluster, I can easily switch to node2 as my master without needing to stop writes:
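The failover itself is just a repoint; GTID auto-positioning works out the correct place in node2's binary logs (the address is a placeholder consistent with the network above):

```sql
-- On the slave, while writes continue on the cluster.
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST = '192.168.70.3';  -- node2
START SLAVE;
```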

So this seems to work pretty well. It turns out there is a bit of a bug, but it’s actually in Xtrabackup — currently the binary logs are not copied during an Xtrabackup SST, and this can cause GTID inconsistencies between nodes in the cluster.  I would expect this to get fixed relatively quickly.


MySQL 5.6 introduces a lot of new interesting features that are even more compelling in the PXC/Galera world.  If you want to experiment for yourself, I pushed the Vagrant environment I used to Github at: https://github.com/jayjanssen/pxc_56_features

About Jay Janssen

Jay joined Percona in 2011 after 7 years at Yahoo working in a variety of fields including High Availability architectures, MySQL training, tool building, global server load balancing, multi-datacenter environments, operationalization, and monitoring. He holds a B.S. of Computer Science from Rochester Institute of Technology.


