July 30, 2014

How to start a Percona XtraDB Cluster

Before version 5.5.28 of Percona XtraDB Cluster, the easiest way to join the cluster was to use wsrep_urls in the [mysqld_safe] section of my.cnf.

So with a cluster of 3 nodes like this:

node1 =
node2 =
node3 =

we defined the setting like this:
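Using placeholder IPs (192.168.70.2–4, chosen here purely for illustration), a typical wsrep_urls definition looks like this:

```ini
[mysqld_safe]
wsrep_urls=gcomm://192.168.70.2:4567,gcomm://192.168.70.3:4567,gcomm://192.168.70.4:4567
```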

With that line above in my.cnf on each node, when PXC (mysqld) was started, the node tried to join the cluster on the first IP. If no node was running on that IP, the next IP was tried, and so on, until the node managed to join the cluster; if no running cluster node was found on any of the IPs, mysqld failed to start.
To avoid this, when all nodes were down and you wanted to start the cluster, it was possible to define wsrep_urls like this:
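With an empty gcomm:// appended at the end of the list (placeholder IPs again, for illustration only), the last URL told mysqld to bootstrap a new cluster when none of the listed peers answered:

```ini
[mysqld_safe]
wsrep_urls=gcomm://192.168.70.2:4567,gcomm://192.168.70.3:4567,gcomm://192.168.70.4:4567,gcomm://
```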

That was a nice feature, especially for people who didn't want to modify my.cnf after starting the first node that initialized the cluster, or for people automating their deployment with a configuration management system.

Now that wsrep_urls is deprecated as of version 5.5.28, what is the better option to start the cluster?

In my.cnf, in the [mysqld] section this time, you can use wsrep_cluster_address with the following syntax:
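With placeholder IPs (192.168.70.2–4, used here only for illustration), the new syntax is a single gcomm:// prefix followed by a comma-separated list of nodes:

```ini
[mysqld]
wsrep_cluster_address=gcomm://192.168.70.2,192.168.70.3,192.168.70.4
```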

As you can see, the port is not needed and gcomm:// is specified only once.

Note: on Debian and Ubuntu, the IP of the node itself cannot be present in that variable, due to a glibc error:

130129 17:03:45 [Note] WSREP: gcomm: connecting to group 'testPXC', peer ',,'
17:03:45 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

So what can be done to initialize the cluster when all nodes are down? There are two options:

  • modify my.cnf and set wsrep_cluster_address=gcomm://, then change it back once the node is started; this is not my favourite option.
  • start mysql using the following syntax (out of the box, this works only on RedHat and CentOS):
    /etc/init.d/mysqld start --wsrep-cluster-address="gcomm://"
    As there is no need to modify my.cnf, this is how I recommend doing it.
About Frederic Descamps

Frédéric joined Percona in June 2011. He is an experienced open source consultant with expertise in infrastructure projects as well as in development and database administration.

Frédéric is a believer in the devops culture.


  1. Patrik says:


    Just so I understand this right: you still have your 'wsrep_cluster_address=gcomm://,,' in the my.cnf file?

  2. Hi Patrik,

    Yes indeed, I don't modify my.cnf and I keep the same value on all the nodes, which is easier for large or automated deployments.
    That's why I prefer that method to initialize my cluster.

  3. mig5 says:

    Sorry, a few things aren’t clear to me.

    Question 1) if you only specify gcomm:// once now in wsrep_cluster_address, how do you define the 'fallback', i.e. starting a brand new cluster (resetting the node) if the other two nodes aren't available?

    e.g, if I have:


    wouldn't I want to make it like so:

    node1: wsrep_cluster_address=gcomm://node2,node3,gcomm://
    node2: wsrep_cluster_address=gcomm://node1,node3,gcomm://
    node3: wsrep_cluster_address=gcomm://node1,node2,gcomm://

    Will that work with an extra gcomm:// at the end?

    Question 2) What happens if node1 goes down and node2 receives writes (e.g. failover in haproxy to write to node2 instead of node1)… when node1 comes up, does node2 become the first machine it replicates from? Will node1 replicate from node2 the changes it missed during its downtime?


  4. Hi mig5,

    For question 1, wsrep_cluster_address doesn't allow that syntax: you cannot have an empty gcomm:// at the end (this is also not supported: wsrep_cluster_address=gcomm://node1,node2,node3, [see the last comma]).

    Also, in my example, when all your nodes are running you should not have any wsrep_cluster_address=gcomm:// in your my.cnf; you should have the list of all the nodes.

    gcomm:// (empty) is used only to initialize the cluster.

    The downside is that if all nodes crash and restart, the cluster will never be initialized… but if all nodes crash, do you really want them back automatically? A manual intervention is required to start the first node (it can be any node of the cluster): either modify my.cnf to set wsrep_cluster_address=gcomm://, or force the value using the init script as described in the post.

    2) If, when all your nodes are running, each node's my.cnf contains the line wsrep_cluster_address=gcomm://node1,node2,node3, then when node1 restarts it will join the cluster using node2 if that node is running; if not, it will try to join using node3; and if neither of the two is available, mysqld will just stop.

    I hope I clarified it.


  5. UPDATE: The latest version of PXC (http://www.mysqlperformanceblog.com/2013/01/30/announcing-percona-xtradb-cluster-5-5-29-23-7-1/) fixes the glibc-related issue on Debian and Ubuntu :-D

  6. Patrik says:


    I can't get this to work on RHEL6.
    I have 2 nodes and 1 garbd. In my.cnf on both nodes I have the same 'wsrep_cluster_address':
    wsrep_cluster_address = gcomm://,
    I start the first node as a standalone cluster with --wsrep-cluster-address="gcomm://"
    Then I get the uuid and seqno, create the grastate.dat on the second node, and try to start it. It crashes with the message:
    gcomm: connecting to group 'my_cluster', peer ','
    09:41:58 UTC - mysqld got signal 11 ;
    This could be because you hit a bug.

    If I remove the IP that belongs to the node I'm trying to start from the 'wsrep_cluster_address', it works. Don't know if this is the same as the Ubuntu and Debian thing.

    Tested with binary version:

  7. Adrian says:


    And if you want to automatically start the cluster if all machines go down?


  8. zx1986 says:

    here is the my.cnf file for my 3 nodes:

    wsrep_sst_method=xtrabackup # default is mysqldump, mysqldump/rsync/xtrabackup

    I could run /etc/init.d/mysql start --wsrep-cluster-address="gcomm://" on
    but how could I start the cluster?

    run /etc/init.d/mysql start --wsrep-cluster-address="gcomm://" on the other 2 nodes?
    or just run /etc/init.d/mysql start on the first node?

    I read the doc and googled for a while, but I didn't get any specific instructions…

  9. zx1986,

    You need to use the extra --wsrep-cluster-address="gcomm://" only on the first node, the one that bootstraps the cluster. Once one node of the cluster is already running, you start the other ones with just /etc/init.d/mysql start.

    Also, I recommend you set an SST method in your my.cnf, like this:
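    For example, a minimal sketch (xtrabackup here is an assumption; mysqldump and rsync are the other supported methods, and the credentials are hypothetical):

```ini
[mysqld]
wsrep_sst_method=xtrabackup
wsrep_sst_auth=sstuser:s3cret  # hypothetical MySQL user/password for the SST
```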





  10. Pieter Immelman says:

    This had me stumped for too long, so hopefully it will help someone out there. The /etc/init.d/mysql script on Debian can be edited to support the --wsrep-cluster-address="gcomm://" option, or anything else you want to send along to mysqld. I needed this to bootstrap my cluster, so I don't have to edit files and remember to remove the edits afterwards. Add "$@" to the parameters of the mysqld_safe call, and insert a new line containing the single word shift directly above that call. The "shift" gets rid of the $1 value "start", so anything you give after the start parameter is then sent to mysqld via $@. This is how the mysqld_safe line in your /etc/init.d/mysql should look after editing:

    "${PERCONA_PREFIX}"/bin/mysqld_safe "$@" > /dev/null 2>&1 &

    This was done on 5.5.30, but should work for other versions too.


  11. There’s a typo in one of the commands at the very end of the article:
    ‘/etc/init.d/myslqd’ has mysql misspelled :)

  12. Ashish says:

    Hi Frederick,

    I have 3 nodes in a Percona cluster and have defined them as you suggested, as below:




    In case my first node (node1), which bootstrapped the cluster, goes down for some reason, will I have to restart each node to restart/revive the cluster? How can I take one node out of the cluster rotation to avoid any downtime?


  13. Hi Ashish,

    If your node1 goes out of the cluster for any reason, the other two nodes still communicate together and won't bring down the cluster.

    wsrep_cluster_address is used ONLY when the node is started (not bootstrapped); this setting just tells the node which other nodes it should try to contact to join a cluster.

    So if you have on nodeX: wsrep_cluster_address=gcomm://node1,node2,node3, then nodeX will first try to connect to the cluster using node1; if node1 is not responding, it will try node2; and if node2 is down, it tries node3… if that one is also unavailable, the node won't start.


  14. Hi Frederic,

    Thanks for your tip /etc/init.d/mysql start --wsrep-cluster-address="gcomm://", it's very very useful :D

  15. jonny says:

    hi, can anyone tell me why? What is the problem?

    FreeBSD FreeBSD91 9.1-RELEASE-p13 FreeBSD 9.1-RELEASE-p13 #0

    2014-05-30 08:50:57 2044 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.16-64.1 started; log sequence number 1626067
    2014-05-30 08:50:57 2044 [ERROR] /usr/local/libexec/mysqld: unknown variable 'wsrep_provider=/usr/local/lib/libgalera_smm.so'
    2014-05-30 08:50:57 2044 [ERROR] Aborting

    My my.cnf is standard:
    port = 3306
    socket = /var/db/mysql/mysql.sock
    user = mysql
    default-storage-engine = InnoDB
    socket = /var/db/mysql/mysql.sock
    pid-file = /var/db/mysql/mysql.pid
    datadir = /var/db/mysql/mysql/
    log-bin = /var/db/mysql/mysql-bin
    expire-logs-days = 14
    sync-binlog = 1
    log-error = /var/db/mysql/mysql-error.log
    log-queries-not-using-indexes = 1
    slow-query-log = 1
    slow-query-log-file = /var/db/mysql/mysql-slow.log
    wsrep_sst_method=mysqldump # default is mysqldump, mysqldump/rsync/xtrabackup

    ls -al /usr/local/lib/libgalera_smm.so
    -r--r--r-- 1 root wheel 23672582 May 30 08:26 /usr/local/lib/libgalera_smm.so


  16. Hi Jonny, this is an old post so not the best place to ask your question. I suggest sharing on the Percona discussion forums… here’s the specific url to the Percona XtraDB Cluster board: http://www.percona.com/forums/questions-discussions/percona-xtradb-cluster

  17. jonny says:

    Thx ;-)
