August 22, 2014

Actively monitoring replication connectivity with MySQL’s heartbeat

Until MySQL 5.5, the only variable used to identify a network connectivity problem between master and slave was slave-net-timeout. This variable specifies the number of seconds to wait for more binary log events from the master before aborting the connection and establishing it again. With a default value of 3600, this has been a historically […]
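
A minimal sketch of the active approach, run on the slave; the 2- and 10-second values are illustrative choices, not taken from the post:

    STOP SLAVE;
    -- Ask the master to send a heartbeat every 2 seconds when it has no binlog events to send
    CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 2;
    -- Give up on a silent connection after 10 seconds instead of the 3600-second default
    SET GLOBAL slave_net_timeout = 10;
    START SLAVE;
    -- Heartbeats received so far can be watched from the status counters
    SHOW GLOBAL STATUS LIKE 'Slave_received_heartbeats';

The heartbeat period should stay well below slave_net_timeout so that a quiet but healthy connection is never mistaken for a broken one.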

How to use MySQL Global Transaction IDs (GTIDs) in production

Reconfiguring replication has always been a challenge with MySQL. Each time the replication topology has to be changed, the process is tedious and error-prone because finding the correct binlog position is not straightforward at all. Global Transaction IDs (GTIDs), introduced in MySQL 5.6, aim to solve this annoying issue. The idea is quite simple: each […]
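
With GTIDs enabled, a slave can be repointed at a new master without hunting for a binlog file and position. A minimal sketch for MySQL 5.6; the host and account names are placeholders, not from the post:

    # my.cnf on every server in the topology (MySQL 5.6)
    gtid-mode = ON
    enforce-gtid-consistency
    log-bin
    log-slave-updates

    -- Then, on the slave, point replication at the new master with no binlog coordinates
    CHANGE MASTER TO
      MASTER_HOST = 'new-master.example.com',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'repl_password',
      MASTER_AUTO_POSITION = 1;
    START SLAVE;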

Q&A: Putting MySQL Fabric to use

Martin Arrieta and I gave an online presentation last week on “Putting MySQL Fabric To Use.” If you missed it, you can find a recording and the slides here, and the vagrant environment we used plus a transcript of the commands we ran here (be sure to check out the ‘sharding’ branch, as that’s what we used […]

Paris OpenStack Summit Voting – Percona Submits 16 MySQL Talks

MySQL plays a critical role in OpenStack. It serves as the host database supporting most components such as Nova, Glance, and Keystone and is the most mature guest database in Trove. Many OpenStack operators use Percona open source software including the MySQL drop-in compatible Percona Server and Galera-based Percona XtraDB Cluster as well as tools such as Percona XtraBackup and Percona Toolkit. […]

Monitoring MySQL flow control in Percona XtraDB Cluster 5.6

Monitoring flow control in a Galera cluster is very important. If you do not, you will not understand why writes may sometimes be stalled. Percona XtraDB Cluster 5.6 provides two status variables for this kind of monitoring: wsrep_flow_control_paused and wsrep_flow_control_paused_ns. Which one should you use? What is flow control? Flow control does not exist with regular MySQL replication, […]
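
Both counters can be read on any node with SHOW STATUS; the rough difference is that one is a fraction of time since the last status reset while the other is a cumulative total:

    -- Fraction of time spent paused by flow control since the last FLUSH STATUS
    SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';
    -- Total time spent paused, in nanoseconds, since the node started
    SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused_ns';
    -- Reset the fraction-based counter between measurement intervals
    FLUSH STATUS;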

Percona Monitoring Plugins 1.1.4 release

Percona is glad to announce the release of Percona Monitoring Plugins 1.1.4. Changelog:
* Added login-path support to Nagios plugins with MySQL client 5.6 (bug 1338549)
* Added a new threshold option for delayed slaves to pmp-check-mysql-replication-delay (bug 1318280)
* Added delayed slave support to pmp-check-mysql-replication-running (bug 1332082)
* Updated Nagios plugins and Cacti script […]

Managing shards of MySQL databases with MySQL Fabric

This is the fourth post in our MySQL Fabric series. In case you’re joining us now, we started with an introductory post, and then discussed High Availability (HA) using MySQL Fabric here (Part 1) and here (Part 2). Today we will talk about how MySQL Fabric can help you scale out MySQL databases with sharding. Introduction At the […]
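
As a rough idea of what sharding with Fabric looks like, it is driven by the same mysqlfabric client used for HA groups. The commands below are a sketch only: the group names, table, sharding column and range boundary are invented for illustration, and exact arguments vary between Fabric releases:

    # Define a RANGE sharding scheme whose global (non-sharded) writes go to 'global-group'
    mysqlfabric sharding create_definition RANGE global-group
    # Attach a table to the shard mapping created above (mapping id 1 here), keyed on column 'id'
    mysqlfabric sharding add_table 1 mydb.customers id
    # Map key ranges to shard groups: ids from 0 go to shard1-group, ids from 1000000 to shard2-group
    mysqlfabric sharding add_shard 1 "shard1-group/0, shard2-group/1000000" --state=ENABLED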

Check for MySQL slave lag with Percona Toolkit plugin for Tungsten Replicator

A while back, I made some changes to the plugin interface for pt-online-schema-change which allows custom replication checks to be written. As I was adding this functionality, I also added the --plugin option to pt-table-checksum. This was released in Percona Toolkit 2.2.8. With these additions, I spent some time writing a plugin that allows Percona […]
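
Once such a plugin module exists, wiring it into a checksum run is just a matter of the --plugin option. A hedged sketch of an invocation, with the plugin path, DSN and database name purely illustrative:

    # Run a checksum, letting a custom plugin decide how replica lag is measured
    pt-table-checksum h=master.example.com,u=checksum_user,p=secret \
        --plugin=/usr/local/lib/pt-plugins/TungstenReplicator.pm \
        --databases=mydb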

Failover with the MySQL Utilities: Part 2 – mysqlfailover

In the previous post of this series we saw how you could use mysqlrpladmin to perform manual failover/switchover when GTID replication is enabled in MySQL 5.6. Now we will review mysqlfailover (version 1.4.3), another tool from the MySQL Utilities that can be used for automatic failover. Summary mysqlfailover can perform automatic failover if MySQL 5.6's […]
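
Started in its default mode, mysqlfailover keeps polling the master and promotes a slave when the master becomes unreachable. A sketch of a typical invocation; credentials, host names and the interval are placeholders rather than values from the post:

    # Check the master every 10 seconds and fail over automatically when it goes away
    mysqlfailover --master=root:pass@master1.example.com:3306 \
        --discover-slaves-login=root:pass \
        --failover-mode=auto \
        --interval=10 \
        --log=/var/log/mysqlfailover.log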

High Availability with MySQL Fabric: Part I

In our previous post, we introduced the MySQL Fabric utility and said we would dig deeper into it. This post is the first part of our test of MySQL Fabric’s High Availability (HA) functionality. Today, we’ll review MySQL Fabric’s HA concepts, and then walk you through the setup of a 3-node cluster with one Primary and two […]
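
For orientation, a group of that shape is normally built with the mysqlfabric command-line client. The sketch below assumes a Fabric node is already running with its backing store, and uses placeholder group and host names:

    # Create an HA group and register the three MySQL servers in it
    mysqlfabric group create mycluster
    mysqlfabric group add mycluster node1.example.com:3306
    mysqlfabric group add mycluster node2.example.com:3306
    mysqlfabric group add mycluster node3.example.com:3306
    # Elect one server as PRIMARY and start automatic failure monitoring of the group
    mysqlfabric group promote mycluster
    mysqlfabric group activate mycluster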