In the previous post of this series we saw how you could use mysqlrpladmin to perform manual failover/switchover when GTID replication is enabled in MySQL 5.6. Now we will review mysqlfailover (version 1.4.3), another tool from the MySQL Utilities that can be used for automatic failover.

Summary

  • mysqlfailover can perform automatic failover if MySQL 5.6’s GTID replication is enabled.
  • All slaves must use --master-info-repository=TABLE.
  • The monitoring node is a single point of failure: don’t forget to monitor it!
  • Detection of errant transactions works well, but you have to use the --pedantic option to make sure failover will never happen if there is an errant transaction.
  • There are a few limitations, such as the inability to fail over only once (and then exit) and excessive CPU utilization, but they are probably not showstoppers for most setups.

Setup

We will use the same setup as last time: one master and two slaves, all using GTID replication. We can see the topology using mysqlfailover with the health command.
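
A minimal invocation could look like the following; the hostname, port and credentials are placeholders for your own setup:

$ mysqlfailover --master=root@localhost:13001 --discover-slaves-login=root health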

Note that --master-info-repository=TABLE needs to be configured on all slaves, or the tool will exit with an error message.
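
If it is not already set, the corresponding fragment in each slave’s my.cnf/my.ini is simply:

[mysqld]
master-info-repository = TABLE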

Failover

You can use two commands to trigger automatic failover:

  • auto: the tool tries to find a candidate in the list of servers specified with --candidates; if no good server is found in this list, it will look at the other slaves to see if one of them can be a good candidate. This is the default command.
  • elect: same as auto, but if no good candidate is found in the list of candidates, other slaves will not be checked and the tool will exit with an error.

Let’s start the tool with the auto command.
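
A possible invocation, reusing the placeholder connection settings from above and listing a preferred candidate explicitly with --candidates:

$ mysqlfailover --master=root@localhost:13001 \
      --discover-slaves-login=root \
      --candidates=root@localhost:13002 \
      auto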

The monitoring console is visible and is refreshed every --interval seconds (default: 15). Its output is similar to what you get when using the health command.

Then let’s kill -9 the master to see what happens once it is detected as down.
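
One way to simulate the crash on the master host (assuming mysqld is not supervised by something that restarts it automatically):

$ kill -9 $(pidof mysqld)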

The failover works as expected, and the tool is then ready to fail over to another slave if the new master becomes unavailable.

You can also run custom scripts at several points of execution with the --exec-before, --exec-after, --exec-fail-check, --exec-post-failover options.
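
For example, a script moving a virtual IP to the new master could be hooked in after the promotion; the script path below is hypothetical:

$ mysqlfailover --master=root@localhost:13001 \
      --discover-slaves-login=root \
      --exec-post-failover=/usr/local/bin/move_vip.sh \
      auto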

However, it would be great to have a --failover-and-exit option to avoid flapping: the tool would detect the master failure, promote one of the slaves, reconfigure replication and then exit (this is what MHA does, for instance).

Tool registration

When the tool is started, it registers itself on the master by writing a few things to a dedicated table.
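
You can see the registration by querying the master; in MySQL Utilities 1.4 the console records its host and port in the mysql.failover_console table (check the table name against your version):

$ mysql -h 127.0.0.1 -P 13001 -u root -e "SELECT * FROM mysql.failover_console"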

This is nice as it prevents you from starting several instances of mysqlfailover to monitor the same master: if you try, the second instance exits with an error.

With the fail command, mysqlfailover will monitor replication health and exit in the case of a master failure, without actually performing failover.
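
The invocation is the same as before, only the command changes (again with placeholder connection settings):

$ mysqlfailover --master=root@localhost:13001 --discover-slaves-login=root fail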

Running in the background

In all previous examples, mysqlfailover was running in the foreground. This is very good for a demo, but in a production environment you are likely to prefer running it in the background. This can be done with the --daemon option.
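
A sketch of what starting it as a daemon could look like; paths and credentials are placeholders, and --log keeps the output somewhere useful since there is no console anymore:

$ mysqlfailover --master=root@localhost:13001 \
      --discover-slaves-login=root \
      --daemon=start --log=/var/log/mysqlfailover.log \
      auto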

It can later be stopped with the same option.
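
For example (again a sketch; depending on your setup you may need to pass the same connection options you used at start):

$ mysqlfailover --master=root@localhost:13001 --daemon=stop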

Errant transactions

If we create an errant transaction on one of the slaves, it will be detected.
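
For instance, a write executed directly on a slave while its binary logging is enabled creates a transaction that exists nowhere else in the topology; the database name below is just an illustration:

$ mysql -h 127.0.0.1 -P 13002 -u root -e "CREATE DATABASE errant_test"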

However, this does not prevent failover from occurring! You have to use --pedantic.
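
The flag is simply added to the normal invocation (placeholders again):

$ mysqlfailover --master=root@localhost:13001 \
      --discover-slaves-login=root \
      --pedantic \
      auto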

Limitations

  • As with mysqlrpladmin, the slave election process is not very sophisticated and it cannot be tuned.
  • The server on which mysqlfailover is running is a single point of failure.
  • Excessive CPU utilization: once it is running, mysqlfailover hogs one core. This is quite surprising.

Conclusion

mysqlfailover is a good tool to automate failover in clusters using GTID replication. It is flexible and looks reliable. Its main drawback is that there is no easy way to make it highly available itself: if mysqlfailover crashes, you will have to manually restart it.

Comments
Daniël van Eeden

The mysqlfailover process is not a SPOF. If it fails the system as a whole continues to run. I think the same is true for MHA. This and also the manual restart issue can be fixed by running mysqlfailover with Solaris SMF, systemd, etc. or by running it on a cluster.

Gurbrinder Singh

Hi

Thanks for such a lovely explanation.
Really useful!
Can you please elaborate on another layer we could add on top of mysqlfailover, which, although it adds complexity, would give stronger guarantees?

Many thanks!

Bhavesh

What are the steps to bring the original master back in service as a slave?

Gurbrinder Singh

Hi

Thanks a ton!
We use a VIP, so is there any coding or mechanism by which the VIP also fails over at the same time the mysqlrpladmin command does its switchover magic?

Javier Bautista

Hello

First of all, thank you for your post. It is very useful. I have a MySQL cluster with GTID replication, and my problem is that when a slave loses sync it automatically becomes a master server. This is a problem for us because we have a load balancer that redirects queries based on each server’s role, so we can end up with two masters at the same time. Is there any way to make a slave that loses sync keep its role instead of becoming a master?

Thank you in advance

Joe Dunn

Thanks for posting this article. Very helpful.

I was wondering what your thoughts are on running mysqlfailover on each of the slave hosts. Because of the conflict resolution built into the tool, it seems as though only one instance can run at a time. Of course we will try testing it, but I was wondering what your thoughts are on doing so.

Thanks, again.

abhishek rai

Hi, please help me with this error.

[root@ip-172-31-6-140 ~]# mysqlfailover --master=slave2 --slaves=slave1 health
# Checking privileges.
2015-01-22 09:35:50 AM CRITICAL Query failed. 1694 (HY000): Cannot modify @@session.sql_log_bin inside a transaction
ERROR: Query failed. 1694 (HY000): Cannot modify @@session.sql_log_bin inside a transaction

Warisara

Hello, abhishek rai

How do you fix this problem?
I have the same problem as you. Please help me.

chan

Hello,
ERROR: Query failed. 1694 (HY000): Cannot modify @@session.sql_log_bin inside a transaction

How do you fix this problem?
Please help me.

Aneesha KA

Hi,

I have set up replication with one master and one slave. Replication is working perfectly. Then I tried to execute the mysqlfailover command, but it does not list the slave. I got the following result:

MySQL Replication Failover Utility
Failover Mode = auto Next Interval = Tue May

Master Information
——————
Binary Log File Position Binlog_Do_DB Binlog
mysql-bin.000016 9568

GTID Executed Set
8fe8b710-cd34-11e4-824d-fa163e52e544:1-1143

Replication Health Status
0 Rows Found.
Q-quit R-refresh H-health G-GTID Lists U-UUIDs U

When I try to execute mysqlrplcheck and mysqlrplshow, they list my slave and master.

Can anyone help me?

I am very new to this. I have one doubt: where do we need to execute the mysqlfailover command (slave or master)?

Aneesha KA

Configuration files.

Master my.ini

[mysqld]
server-id=7
expire_logs_days = 30
log-bin = "C:/logmysql/mysql-bin.log"
binlog-format=ROW
log-slave-updates=true
gtid-mode=on
enforce-gtid-consistency=true
master-info-repository=TABLE
relay-log-info-repository=TABLE
sync-master-info=1
binlog-checksum=CRC32
master-verify-checksum=1
report-host=10.24.184.12
report-port=3306
port=3306

Slave my.ini

sync_relay_log_info=10000
binlog_format=ROW
log-slave-updates=true
log-bin=C:\logs\mysql-bin.log
gtid-mode=ON
enforce-gtid-consistency=true
server-id=8
report-host=10.24.184.13
report-port=3306
master-info-repository=TABLE
relay-log-info-repository=TABLE
sync-master-info=1
port=3306

Sebastiano Favaro

Thanks for this article!
I have set up replication with 1 master and 4 slaves (5 different physical servers).
When I stop the master database (but the physical server stays up), the failover takes about 15 seconds.
But when the server is powered off, the failover process takes about 8 minutes!

Karan

How do you configure the ‘elect’ setting? Can anyone give an example? I am running a traditional one-master, two-slave topology and would only want one slave to be considered for master promotion in case of a master failure.