April 25, 2014

High availability for MySQL on Amazon EC2 – Part 2 – Setting up the initial instances

This post is the second of a series that started here.

The first step in building the HA solution is to create two working instances, configure them to be EBS based and create a security group for them. A third instance, the client, will be discussed in part 7. Since this will be a proof of concept, I'll be using m1.small type instances, whereas in production the MySQL host would normally be much larger. Using another type is trivial. I will assume you are using the command-line API tools; on Ubuntu, install the "ec2-api-tools" package. These tools make commands easier to express than the web-based console does.

Create the security group

The instances involved in the MySQL HA setup will need to be inside the same security group, both for networking purposes and to help identify them. To create a security group, simply run this command:
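The command itself did not survive in the post; with the ec2-api-tools it would look something like this (the group name "hamysql" comes from the rest of the post, the description is my own):

```shell
# Create the security group used by both HA instances
ec2-add-group hamysql -d "MySQL HA proof of concept"
```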

From now on, I'll always assume the EC2_CERT and EC2_PRIVATE_KEY environment variables are set up in your shell. Next, we need to authorize some communications for the security group. I'll authorize 3306/tcp (MySQL) from hamysql, 694/udp (Heartbeat) from hamysql and 22/tcp (SSH) from everywhere. You can be more restrictive for SSH if you want to.
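The authorization commands were also lost; a sketch with the ec2-api-tools (YOUR_AWS_ACCOUNT_ID is a placeholder you must replace, since group-to-group rules require the owning account id):

```shell
# MySQL and Heartbeat traffic only from members of the hamysql group
ec2-authorize hamysql -P tcp -p 3306 -o hamysql -u YOUR_AWS_ACCOUNT_ID
ec2-authorize hamysql -P udp -p 694  -o hamysql -u YOUR_AWS_ACCOUNT_ID
# SSH from anywhere; narrow the CIDR if you want to be more restrictive
ec2-authorize hamysql -P tcp -p 22 -s 0.0.0.0/0
```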

Launch the instances

Now we can start creating our instances. Since this is only a proof of concept, I'll build 2 m1.small instances; feel free to use other types. At the time I wrote this, the following AMI seemed OK.

So, launching 2 of these,
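The launch command was not preserved; it would look roughly like this (ami-XXXXXXXX stands for the AMI you picked, and the key pair name is a placeholder):

```shell
# Start 2 m1.small instances in the hamysql security group
ec2-run-instances ami-XXXXXXXX -n 2 -t m1.small -g hamysql -k my-keypair
```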

I don’t know about you, but I don’t like multi-line output, so I wrote a small filter script that prints, on a single line, the parameters I need, separated by a delimiter.
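The original filter was a small Perl script (filtre_instance.pl, mentioned in the comments below) that was not preserved. Here is a rough shell/awk equivalent; the field positions are an assumption about the tab-separated output of ec2-describe-instances and may differ between tool versions:

```shell
# Print "instance-id|public-dns|state", one instance per line
ec2-describe-instances | awk -F '\t' '
  $1 == "INSTANCE" { print $2 "|" $4 "|" $6 }
'
```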

and now we have

which is, in my opinion, easier to manipulate.

Configuring Heartbeat

Now, let’s configure Heartbeat. The first thing to do is to set the hostname on both hosts. Heartbeat identifies the host on which it is running by its hostname, so this is a mandatory step.

First host:

Second host:
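The commands themselves were lost; the node names "monitor" and "hamysql" below come from the ha.cf discussion later in the post:

```shell
# First host:
sudo hostname monitor
# Second host:
sudo hostname hamysql
```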

We don’t really need to set /etc/hostname since it is overwritten when the instance is started, even when using an EBS-based AMI. The next step is to install Heartbeat and Pacemaker on both hosts; with Ubuntu 10.04, it is very straightforward:
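The apt-get line did not survive; on Lucid it should be along these lines:

```shell
sudo apt-get install heartbeat pacemaker
```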

Then we can proceed and configure Heartbeat; Pacemaker will come later. Heartbeat needs 2 configuration files: /etc/ha.d/authkeys for cluster authentication and /etc/ha.d/ha.cf, which is the configuration file per se. The chosen key in the authkeys file must be identical on both hosts, and a good way to generate a unique one is to run “date | md5sum” and grab a substring from the output.
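A sketch of generating the key and writing /etc/ha.d/authkeys (the 16-character substring length is an arbitrary choice; remember that the resulting file must be identical on both hosts):

```shell
# Generate a pseudo-random shared key from "date | md5sum"
KEY=$(date | md5sum | cut -c 1-16)
# Write the authkeys file; copy the SAME file to both hosts
printf 'auth 1\n1 sha1 %s\n' "$KEY" | sudo tee /etc/ha.d/authkeys
```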

Also, don’t forget to restrict the access rights on the file like:
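The exact command was lost, but Heartbeat requires the file to be readable by root only, so:

```shell
sudo chmod 600 /etc/ha.d/authkeys
```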

For the /etc/ha.d/ha.cf file, since EC2 supports neither broadcast nor multicast within the security group, we need to use unicast (ucast), so the two files will not be identical. The ucast entry on one host will contain the IP address, on the internal network, of the other host. On the monitor host, we will have:
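The file itself did not survive; this is a reconstruction from the description below, with a placeholder for the internal IP and illustrative timing values:

```
autojoin none
ucast eth0 <internal IP of hamysql>
warntime 5
deadtime 15
initdead 60
crm respawn
node monitor
node hamysql
```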

and on the hamysql host:
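the same reconstructed file, with only the ucast destination changed:

```
autojoin none
ucast eth0 <internal IP of monitor>
warntime 5
deadtime 15
initdead 60
crm respawn
node monitor
node hamysql
```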

Let’s review the configuration file briefly. First, we have set “autojoin none”, which means no host not explicitly listed in the configuration file can join the cluster, so we know we have at most 2 members, “monitor” and “hamysql”. Next is the ucast communication channel to reach the other node, followed by the timing parameters. “warntime” is a soft timeout, in seconds, after which a warning is logged that the other node is late, while “deadtime” is the hard limit after which Heartbeat will consider the other node dead and start actions to restore the service. “initdead” is just a startup delay to allow the hosts to fully boot before attempting any action, and “crm respawn” starts the Pacemaker resource manager. Finally, we have the two “node” declarations for the cluster members.

So we are done with the configuration; time to see if it works. On both hosts, run:
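The start command was lost; on Lucid, with the init script shipped by the package, it would be:

```shell
sudo /etc/init.d/heartbeat start
```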

and if everything is right, after at most a minute, you should be able to see both Heartbeat processes chatting over the network.
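One way to watch that traffic (also suggested in the comments below) is to capture the Heartbeat port with tcpdump:

```shell
sudo tcpdump -i eth0 -n udp port 694
```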

We can also use the “crm” tool to query the cluster status.
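For example (run it as root, as noted in the comments below):

```shell
sudo crm status
# or, for a view that refreshes continuously:
sudo crm_mon
```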

Install MySQL

For the sake of simplicity, we will just install the MySQL version from the Ubuntu repository by doing:
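On Lucid that amounts to:

```shell
sudo apt-get install mysql-server
```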

The MySQL package installs an automatic startup script controlled by init (new in Lucid). That’s fine; it may surprise you, but Pacemaker will not manage MySQL itself, just the host running it. I’ll also skip the RAID configuration of multiple EBS volumes since it is not the main purpose of this blog series.

EBS based AMI

Others have produced excellent articles on how to create an EBS-based AMI, so I will not reinvent the wheel. I followed this one: http://www.capsunlock.net/2009/12/create-ebs-boot-ami.html

Upcoming in part 3, the configuration of the HA resources.

About Yves Trudeau

Yves is a Principal Consultant at Percona, specializing in technologies such as MySQL Cluster, Pacemaker and DRBD. He was previously a senior consultant for MySQL and Sun Microsystems. He holds a Ph.D. in Experimental Physics.

Comments

  1. Yves Trudeau says:

    Luke,

which version of the tool are you using and could you send a sample output of ec2-describe-instances?

  2. Luke says:

Cleared the print line statement. Now when running, the line is just printed in a loop. It seems that for some reason the ec2-describe-instances output isn’t being handled by the perl script, unless I’m mistaken.

  3. Yves Trudeau says:

    Luke,
remove the “#” in #print "Processing: $_\n"; and look at the output. Maybe we don’t use the same version of the tool and that causes the regexp to fail.

  4. john says:

    You can set the hostname and have it stick after reboot on ec2.

    vi /etc/rc.local
    then add a line at the bottom
    hostname hamysql

  5. Luke says:

Great article! It has really been informative.

However, I am having a problem with the filtre_instances.pl script: whenever it is run, the command line hangs and then does nothing. Any ideas? As you know, this script is important in order to get the killing script working correctly.

    failing command
    ec2-describe-instances -K /usr/local/bin/pk-******.pem -C /usr/local/bin/cert-******.pem | /usr/local/bin/filtre_instance.pl

  6. Morgan says:

    It’s important to keep in mind that EC2 network performance degrades significantly if you attempt to cross availability zones (e.g. going from us-east-1c to us-east-1b). You’ve chosen correctly to put the HA servers in the same zone. This goes for any internal communication that might take place.

  7. Running heartbeat with non-identical ha.cf files is a bad idea. You can just use identical ha.cf files with two ucast lines in there. Any heartbeat node will happily ignore any ucast line that matches a locally configured IP address. Check out the ucast entry in the ha.cf man page (http://www.linux-ha.org/doc/re-hacf.html).

  8. ap1285 says:

    Dear Yves Trudeau,

    Thank You very much for the article.

I’m trying to set up heartbeat and pacemaker on two Ubuntu 10.04 servers. I followed your steps. I can see in the output of “tcpdump -i eth0 port 694” that msgs are sent and received. But “crm status” gives me the error “Connection to cluster failed: connection failed”.

    Any Idea, I’m quite new to heartbeat and pacemaker

    Thanks in advance

  9. Dimitri says:

    ap1285,

    Run the crm status command as root.
