Project Titanicarus: Part 6 – Building the MySQL Cluster

Before I’d dealt with the filers, I had written this week’s tasks up as being the most difficult part of the project.

I have a bunch of experience working with standard MySQL servers using replication, but I’ve never played with MySQL Cluster before. Learning how to make it work was made difficult by a lack of packages in the Ubuntu repositories, and I also struggled to find documentation that was simple enough to understand and complete the task without having to fill in blanks left by those documenting their learnings.

I’ve decided to write up the process I used to build a two-node MySQL Cluster; hopefully I can fill in the gaps for others trying to make this kind of project happen for themselves. I’m building one cluster per island on a pair of servers. Inter-island replication is something I’m going to have to experiment with, as the MySQL Cluster documentation seems to say that it gets cranky when asked to replicate over the internet.

This is the design I want to deliver at the end of the process: [diagram: MySQL Cluster] Looks a little complicated, doesn’t it? I was ploughing through configs and layering services on top of each other, hacking my way into a working MySQL Cluster setup…

But then…

After spending about three days playing with databases and compiling things from source, I stumbled upon something that I nearly missed because it looked like something commercial that I wasn’t interested in paying for: severalnines.com. OMG. Could these guys have made my life any easier? I honestly don’t think so. The amazing people at Severalnines specialise in building high-availability database systems and the management interfaces you need to drive them. These guys are freaks of nature; they have taken a nightmare process and turned it into something that is so easy to do, a Windows system administrator could do it.

Severalnines have an online configurator wizard that lets you set up a MySQL cluster with pretty much no effort. While I was looking at it, I found something I’d never seen before: MySQL Galera. After reading up on it, I realised this was very definitely what I wanted, but there were a couple of tradeoffs:

  1. I had to move all databases to InnoDB (big whoop)
  2. I had to have a minimum of 4 servers in the cluster (1 x Controller, 3 x Database)

In return for those tradeoffs, we get true multi-master replication that can function over a WAN. This is the new model: [diagram: MySQL Cluster] Nodes 1 & 2 are going to live in the first island, and I’m keeping node 3 on a separate LAN segment that emulates another island until I’ve got the second island built. The cluster control node will live in island #1. So how do I make this magic? EEEEEEASY!

  1. Go to http://www.severalnines.com/galera-configurator/ and select the “Codership” option on the first page.
  2. Fill in the next page with your site-specific details. [screenshot: Galera MySQL Config Page 2]
  3. More site-specific info. [screenshot: Galera MySQL Config Page 3]
  4. More info (easy, isn’t it!). [screenshot: Galera MySQL Config Page 4]
  5. Put your email address in and press generate scripts. [screenshot: Galera MySQL Config Page 5]
  6. Download the scripty business onto your cluster controller box. You will also receive an email with this info in it. [screenshot: Galera MySQL Config Page 6]
  7. Before you run the installer, you need to add the “OS User” you defined back on page 2 to each server and set up SSH key auth for it between each of the servers. To set up password-less SSH you can run the script:
    ./s9s-galera-2.2.0/mysql/scripts/install/shared-ssh-keys.sh

    You will be prompted for a password to log in to the servers, and it will set up shared keys between this server and all nodes in your cluster.

  8. To set up password-less sudo, copy the file:
    ./s9s-galera-2.2.0/mysql/scripts/install/s9s_sudo
    to /etc/sudoers.d/ on each node, then chmod it to 0400 and chown it to root:root (a sketch of this follows the list).
  9. Now run the installer as Severalnines instruct:
    tar xvfz s9s-galera-2.2.0.tar.gz
    cd s9s-galera-2.2.0/mysql/scripts/install
    bash ./deploy.sh 2>&1 | tee cc.log
  10. Once the installer has finished, you should be able to access the web GUI at http://10.0.0.39/cmonapi to register your cluster.
  11. Log in with your email address and the password “admin”.
  12. Enter the API key you were emailed.
  13. You should now be able to access a pretty impressive-looking cluster control GUI at http://10.0.0.39/clustercontrol. [screenshot: Cluster Control]
  14. If you’re that kinda guy, you can stick phpMyAdmin on the Cluster Control server. Just make sure you lock it down and don’t let it face the outside world. phpMyAdmin can be a nightmare security-wise; I only allow it to be accessed behind htaccess files or VPN (a lockdown sketch follows below).
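For step 8, a loop along these lines does the copying (a sketch: the hostnames are placeholders, and it assumes the OS user from step 7 can already SSH to each node):

    # copy the sudoers fragment to every database node (hostnames are placeholders)
    for node in db1 db2 db3; do
        scp ./s9s-galera-2.2.0/mysql/scripts/install/s9s_sudo $node:/tmp/s9s_sudo
        # install sets the mode and ownership in one hit; sudo will prompt, hence ssh -t
        ssh -t $node "sudo install -m 0400 -o root -g root /tmp/s9s_sudo /etc/sudoers.d/s9s_sudo"
    done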

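And for step 14, this is the sort of htaccess lockdown I mean (an Apache 2.2-style sketch; the paths, subnet, and username are all assumptions for your own setup):

    # create the password file (username is a placeholder)
    sudo htpasswd -c /etc/apache2/.htpasswd dbadmin

    # require both a LAN/VPN source address and a valid login
    sudo tee /etc/apache2/conf.d/phpmyadmin-lockdown <<'EOF'
    <Location /phpmyadmin>
        AuthType Basic
        AuthName "Private"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
        Order deny,allow
        Deny from all
        Allow from 10.0.0.0/24
        Satisfy all
    </Location>
    EOF
    sudo service apache2 reload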
That’s pretty much it! You’ve now got a working MySQL Galera cluster. When things break, the easiest way to fix them is to restart the broken node; if that fails, restart the cmon and mysql processes on the cluster control box.
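In practice that recovery drill looks something like this (service names are what I’d expect from this deployment on Ubuntu; treat them as assumptions):

    # on the broken database node: restart mysql so it rejoins the cluster
    sudo service mysql restart

    # if that doesn't fix it, bounce the controller processes
    # on the cluster control box:
    sudo service cmon restart
    sudo service mysql restart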

Now it’s time to fiddle with knobs…

MySQL tuning is definitely not the thing I am best at in the world, but here are a few things I’ve done that appear to make things run much faster.

  1. Download tuning-primer.sh and use it to check for suggested config improvements. (It can be conservative in a few areas.)
  2. Download mysqltuner.pl and use it to look for performance improvements as well; it often points out stuff you might not notice as an issue when using tuning-primer.sh.
  3. Make MySQL’s temp files live in RAM to combat the performance hit of temporary tables being written to disk: set “tmpdir=/tmp” in my.cnf, add “tmpfs /tmp tmpfs defaults,noexec,nosuid 0 0” to the end of your /etc/fstab, and run “mount -a”.
  4. Feed MySQL as much memory as you can; if you’ve got as much RAM as your database, you’re doing it right. (The whole tuning pass is sketched below.)
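Here’s roughly what that tuning pass looks like on one node (a sketch: the my.cnf placement and the buffer pool figure are assumptions; size the pool to your own data):

    # 1 & 2: run the advisor scripts against the local server
    bash ./tuning-primer.sh
    perl ./mysqltuner.pl

    # 3: put MySQL's temp files on a RAM-backed /tmp
    #    (in my.cnf, under [mysqld]:  tmpdir = /tmp)
    echo "tmpfs /tmp tmpfs defaults,noexec,nosuid 0 0" | sudo tee -a /etc/fstab
    sudo mount -a

    # 4: give InnoDB a big slice of RAM
    #    (in my.cnf, under [mysqld]:  innodb_buffer_pool_size = 4G  -- placeholder figure)
    sudo service mysql restart    # pick up the my.cnf changes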

Important things…

Remember that you need to convert your databases to InnoDB. This caught me out and caused some REALLY weird behaviour that was hard as hell to work out, until I realised that transactions were being part-handled by multiple DB servers and MyISAM was doing odd things with them. Also, when you do upgrades to apps, some app developers love to make changes to the database that create MyISAM tables, so you kinda need to keep checking on this stuff.
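A quick way to keep an eye on it (a sketch; adjust the credentials to your setup, and the names in the ALTER are hypothetical):

    # list any MyISAM tables that have crept into your application schemas
    mysql -u root -p -e "SELECT table_schema, table_name FROM information_schema.tables WHERE engine='MyISAM' AND table_schema NOT IN ('mysql','information_schema','performance_schema');"

    # convert an offender back to InnoDB (database/table names here are hypothetical)
    mysql -u root -p -e "ALTER TABLE mydb.offending_table ENGINE=InnoDB;"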

Have a great week! Next week we’re building web servers. I’ve learned about some cool (to me) new stuff that we’re going to use, and I’m kinda excited to see the results in production.