I get a lot of questions about Percona XtraDB Cluster 5.6 (PXC 5.6), specifically about whether a given MySQL 5.6 Community Edition feature is available in PXC 5.6.  The short answer is: yes, all features in community MySQL 5.6 are in Percona Server 5.6 and, in turn, are in PXC 5.6.  Whether a given feature is useful in PXC 5.6 really depends on how useful it is with Galera in general.

I thought it would be useful to highlight a few features and try to show them working:

Innodb Fulltext Indexes

Yes, FTS works in Innodb in 5.6, so why wouldn’t it work in PXC 5.6?  To test this I used the Sakila database, which contains a single table with a FULLTEXT index.  In the sakila-schema.sql file, it is still designated a MyISAM table:
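For reference, this is the relevant definition as it ships in sakila-schema.sql, abridged to the film_text table:

CREATE TABLE film_text (
  film_id SMALLINT NOT NULL,
  title VARCHAR(255) NOT NULL,
  description TEXT,
  PRIMARY KEY (film_id),
  FULLTEXT KEY idx_title_description (title,description)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;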

I edited that file to change MyISAM to Innodb, loaded the schema and data into my 3 node cluster:
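The edit and load amount to something like this (connection options and paths are from my environment; adjust to taste):

$ sed -i 's/ENGINE=MyISAM/ENGINE=InnoDB/' sakila-schema.sql
$ mysql -u root < sakila-schema.sql
$ mysql -u root < sakila-data.sql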

and it works seamlessly:
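Checking any other node, the table comes back as Innodb with the FULLTEXT index intact (output trimmed):

node3 mysql> SHOW CREATE TABLE sakila.film_text\G
*************************** 1. row ***************************
       Table: film_text
Create Table: CREATE TABLE `film_text` (
  `film_id` smallint(6) NOT NULL,
  `title` varchar(255) NOT NULL,
  `description` text,
  PRIMARY KEY (`film_id`),
  FULLTEXT KEY `idx_title_description` (`title`,`description`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8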

Sure enough, I can run this query on any node and it works fine:
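A full-text search along these lines (the search term is arbitrary) returns matching films no matter which node I run it on:

mysql> SELECT film_id, title
    ->   FROM sakila.film_text
    ->  WHERE MATCH(title, description) AGAINST ('drama');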

There might be a few caveats and differences between how FTS works in Innodb and how it works in MyISAM, but the feature is there.

Minimal replication images

Galera relies heavily on RBR events, but until 5.6 those events carried full copies of every modified row, even if you only changed a single column in the table. In 5.6 you can change this to send only the updated data by setting the variable binlog_row_image=minimal.

Using a simple sysbench update test for 1 minute, I can determine the baseline size of the replicated data:
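The measurement looks roughly like this; the sysbench parameters are illustrative, and the interesting number is the delta in wsrep_replicated_bytes across the run:

# record the starting counter on one node
$ mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes'"
# run key-based updates for 60 seconds (assumes a prior 'sysbench ... prepare')
$ sysbench --test=oltp --oltp-test-mode=nontrx --oltp-nontrx-mode=update_key \
    --mysql-db=test --num-threads=4 --max-time=60 --max-requests=0 run
# record the counter again and subtract
$ mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes'"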

This results in 62.3 MB of data replicated in this test.

If I set binlog_row_image=minimal on all nodes and do a rolling restart, I can see how this changes:
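The change itself is a single line in my.cnf on each node, applied via the rolling restart:

[mysqld]
binlog_row_image = minimal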

This yields a mere 13.4 MB, which is roughly 80% smaller: quite a savings!  This benefit, of course, depends entirely on the type of workload you are running.

Durable Memcache Cluster

It turns out this feature does not work properly with Galera; see below for an explanation.

5.6 introduces a memcached interface for Innodb.  This means any standard memcache client can talk to our PXC nodes with the memcache protocol, and the data is:

  • Replicated to all nodes
  • Durable across the cluster
  • Highly available
  • Easy to hash memcache clients across all servers for better cache coherency

To set this up, we need to simply load the innodb_memcache schema from the example and restart the daemon to get a listening memcached port:
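The setup amounts to something like the following; the path to the config script varies by package layout:

$ mysql -u root < /usr/share/mysql/innodb_memcached_config.sql
$ mysql -u root -e "INSTALL PLUGIN daemon_memcached SONAME 'libmemcached.so'"
$ service mysql restart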

This all appears to work and I can fetch the sample AA row from all the nodes with the memcached interface:
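For example, using the sample demo_test table that the config script creates (node2 here is a hostname from my environment):

$ telnet node2 11211
get AA
VALUE AA 8 12
HELLO, HELLO
END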

However, if I try to update a row, it does not seem to replicate (even if I set innodb_api_enable_binlog):
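Illustratively, a write through memcached on node1 sticks locally but never arrives on the other nodes (key BB is hypothetical):

# on node1, via the memcached port:
set BB 0 0 5
HELLO
STORED

# on node2, via SQL:
mysql> SELECT c1, c2 FROM test.demo_test WHERE c1 = 'BB';
Empty set (0.00 sec)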

So unfortunately the memcached plugin must use some backdoor to Innodb that Galera is unaware of. I’ve filed a bug on the issue, but it’s not clear if there will be an easy solution or if a whole lot of code will be necessary to make this work properly.

In the short-term, however, you can at least read data from all nodes with the memcached plugin as long as data is only written using the standard SQL interface.

Async replication GTID Integration

Async GTIDs were introduced in 5.6 in order to make CHANGE MASTER easier.  You have always been able to use async replication from any cluster node, but now with this new GTID support, it is much easier to fail over to another node in the cluster as a new master.

We can take one node out of our cluster to be an async slave and enable GTID binary logging on the other two by adding these settings:
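These are the standard 5.6 GTID prerequisites in my.cnf (server-id must be unique on each node):

[mysqld]
server-id = 1
log-bin = mysql-bin
log-slave-updates
gtid-mode = ON
enforce-gtid-consistency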

If I generate some writes on the cluster, I can see GTIDs are working:
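SHOW MASTER STATUS on the two nodes reports something along these lines (the UUID placeholder and file positions here are illustrative):

node1 mysql> SHOW MASTER STATUS\G
             File: mysql-bin.000001
         Position: 573556
Executed_Gtid_Set: <cluster-uuid>:1-1505

node2 mysql> SHOW MASTER STATUS\G
             File: mysql-bin.000001
         Position: 560011
Executed_Gtid_Set: <cluster-uuid>:1-1505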

Notice that we’re at GTID 1505 on both nodes, even though the binary log position happens to be different.

I set up my slave to replicate from node1 (.70.2):
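With GTIDs, MASTER_AUTO_POSITION replaces hunting for binary log coordinates; the credentials below are hypothetical, and the address assumes the 192.168.70.x scheme from my Vagrant environment:

node3 mysql> CHANGE MASTER TO MASTER_HOST='192.168.70.2',
    ->   MASTER_USER='repl', MASTER_PASSWORD='replpass',
    ->   MASTER_AUTO_POSITION=1;
node3 mysql> START SLAVE;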

And it’s all caught up.  If I put some load on the cluster, I can easily change to node2 as my master without needing to stop writes:
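Since the slave tracks GTIDs rather than file/position pairs, the failover is just a matter of pointing at the new host:

node3 mysql> STOP SLAVE;
node3 mysql> CHANGE MASTER TO MASTER_HOST='192.168.70.3', MASTER_AUTO_POSITION=1;
node3 mysql> START SLAVE;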

So this seems to work pretty well. It does turn out there is a bit of a bug, but it’s actually with Xtrabackup: currently the binary logs are not copied during an Xtrabackup SST, and this can cause GTID inconsistencies between nodes in the cluster.  I would expect this to get fixed relatively quickly.

Conclusion

MySQL 5.6 introduces a lot of new interesting features that are even more compelling in the PXC/Galera world.  If you want to experiment for yourself, I pushed the Vagrant environment I used to Github at: https://github.com/jayjanssen/pxc_56_features

Comments
gpfeng.cs

I have read all your blog posts about PXC on this website; they are really helpful, thanks!

Benjamin Vetter

Do you know more about the GTID part for async replication in combination with PXC?

Executed_Gtid_Set: e941e026-ac70-ee1c-6dc9-40f8d3b5db3f:1-3712

I don’t find the uuid (e941e026-ac70-ee1c-6dc9-40f8d3b5db3f) anywhere in SHOW VARIABLES, SHOW STATUS, etc.
Usually (i.e. in vanilla MySQL 5.6) the server_uuid is used (show variables like 'server_uuid'). For PXC, what is used?
It must be some kind of global UUID. However, wsrep_cluster_state_uuid is not used, … so what is this uuid all about?

Thanks

Benjamin Vetter

Hm, thanks, but can you explain this:

a) on PXC node, in 3 node wsrep cluster:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| mysql-bin.000028 |     2410 |              |                  | 1e220986-f466-ee1b-6956-71785eb310d7:1-51 |
+------------------+----------+--------------+------------------+-------------------------------------------+
1 row in set (0.00 sec)

mysql> insert into cities(name, locale) values('Hamburg', 'de');
Query OK, 1 row affected, 2 warnings (0.01 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| mysql-bin.000028 |     2696 |              |                  | 1e220986-f466-ee1b-6956-71785eb310d7:1-52 |
+------------------+----------+--------------+------------------+-------------------------------------------+
1 row in set (0.00 sec)

mysql> show variables like 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                |
+---------------+--------------------------------------+
| server_uuid   | 27e15552-0b94-11e4-93b2-00163ce26fed |
+---------------+--------------------------------------+
1 row in set (0.00 sec)

b) on NON-PXC node:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      151 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.01 sec)

mysql> insert into cities(name, locale) values('Hamburg', 'de');
Query OK, 1 row affected, 2 warnings (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+----------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                      |
+------------------+----------+--------------+------------------+----------------------------------------+
| mysql-bin.000001 |      432 |              |                  | bb2b80de-10b9-11e4-b541-00163c315a4f:1 |
+------------------+----------+--------------+------------------+----------------------------------------+
1 row in set (0.00 sec)

mysql> show variables like 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                |
+---------------+--------------------------------------+
| server_uuid   | bb2b80de-10b9-11e4-b541-00163c315a4f |
+---------------+--------------------------------------+
1 row in set (0.00 sec)

I’m using percona-xtradb-cluster-56, 5.6.15-25.5-759.wheezy

Thanks

Benjamin Vetter

Exactly, if every server used its own server_uuid, CHANGE MASTER TO … MASTER_AUTO_POSITION=1 would not be easy anymore. I’d simply like to know how this cluster-wide UUID for async replication is built and how it is stored, as, like already mentioned, I don’t find it anywhere in SHOW STATUS or SHOW VARIABLES, and it seems to be important if you’re using PXC together with async replication. However, thanks for your help. I’d be happy to know more in case you find out.