So we have a bunch of WordPress sites for different markets, which we would ideally like hosted in their home market. BUT, we want to be able to fail them over to a different country, should the servers in their home country go down. Failover in a MySQL master-slave relationship is always a bit of a pain (as it is with any DB engine) – once you've failed over, you really can't 'go back' to the original master until you've re-synced it all. Which isn't overly easy when you only have a one-hour maintenance window per 24-hr period, across all the markets your company operates in.
Enter MySQL multi-master replication. Make a change on one server? It appears on the other. Make a change on the other server? It appears on the first!
The way this works is that each MySQL server can be both a master AND a slave. Take two servers, A and B. Any changes made on A are replayed via logs on the B server. Similarly, any changes on B are pushed to A. Well, actually it's a little more than that: Server A will also log the updates it receives from Server B and send them on down the chain – which, with only two servers, means straight back to Server B. Why does it do this?
Well, we might have six masters! Going Master A -> Master B -> Master C -> Master D -> Master E -> Master F, with Master F feeding Master A. All a nice big circle. So when you make a change on Master B, it propagates to C, D, E, F, then to A, AND back to B. But B recognises its own server-id, knows not to replicate its own changes again, and they stop there.
It’s easiest to set this all up with fresh, clean, servers.
I installed mysql-server on two clean Debian VMs, one in Australia, one in Ireland.
Configuration
Server A – /etc/my.cnf
Add the following:
[mysqld]
# ... other configuration, tuning, etc ...
server-id = 10
# Make sure this partition has space to log bin, relay and whatever else!
log-bin = /var/lib/mysql/bin.log
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
# Creating some room between pk ids, we can always manually insert if need be.
auto_increment_increment = 10
auto_increment_offset = 1
# This is the default, but let's be safe and ensure it's on
replicate-same-server-id = FALSE
# Want more slaves in the future with writes going to both masters?
log-slave-updates = TRUE
# If there's a reboot, let's not auto-start replication – we need to make sure of where we are, and start it manually.
skip-slave-start = TRUE
Server B – /etc/my.cnf
[mysqld]
# ... other configuration, tuning, etc ...
server-id = 11
log-bin = /var/lib/mysql/bin.log
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
auto_increment_increment = 10
auto_increment_offset = 2
replicate-same-server-id = FALSE
log-slave-updates = TRUE
skip-slave-start = TRUE
You could add multiple more servers here – just give each one a unique server-id and auto_increment_offset.
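To see why the increment/offset settings matter, here's a sketch of how the IDs interleave with the values above (the table and data are just hypothetical, to illustrate the spacing):

```sql
-- With auto_increment_increment = 10:
--   Server A (offset 1) generates ids 1, 11, 21, 31, ...
--   Server B (offset 2) generates ids 2, 12, 22, 32, ...
-- so concurrent inserts on both masters can never collide on a primary key.
CREATE TABLE demo (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(50));
INSERT INTO demo (note) VALUES ('from A');  -- gets id 1 if run on Server A
INSERT INTO demo (note) VALUES ('from B');  -- gets id 2 if run on Server B
```

The gaps between IDs are also what lets you manually insert rows with specific IDs if you ever need to.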
Starting Replication
To start replication, we first need to create a replication user on both servers, then set up the replication attributes.
First, create the replication user on both servers:
Server A
# mysql -u root -p
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO repluser@'serverb.ip.address' IDENTIFIED BY 'replpassword';
Server B
# mysql -u root -p
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO repluser@'servera.ip.address' IDENTIFIED BY 'replpassword';
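Before going further, it's worth a quick sanity check that the replication user can actually connect from the opposite box – firewalls between countries have a habit of eating port 3306. Something like this (hostnames are placeholders for your real IPs):

```shell
# From Server B, connect back to Server A as the replication user
mysql -u repluser -preplpassword -h servera.ip.address -e "SELECT 1;"
```

If that hangs or is refused, fix the network before blaming replication.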
Find the master info on Server B:
mysql> show master status;
+------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------+----------+--------------+------------------+
| bin.000001 | 294 | | |
+------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Now we want to start the replication on Server A, using the info from Server B:
mysql> CHANGE MASTER TO
MASTER_HOST='92.1.1.1',
MASTER_USER='repluser',
MASTER_PASSWORD='replpassword',
MASTER_LOG_FILE='bin.000001',
MASTER_LOG_POS=294;
mysql> start slave;
mysql> show slave status\G
You may need to run show slave status\G a few times before the slave drops into the standard 'Waiting for master to send event' state.
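For reference, the key fields to watch in that output on a healthy slave look roughly like this:

```
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
Last_Error:
```

If either of the Running fields says No, check Last_Error – it's usually a connectivity or grants problem at this stage.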
Once this is done, you can then work on repeating this process to start Server B slaving from Server A.
Find the master info on Server A:
mysql> show master status;
+------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------+----------+--------------+------------------+
| bin.000001 | 293 | | |
+------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Now we want to start the replication on Server B, using the info from Server A:
mysql> CHANGE MASTER TO
MASTER_HOST='202.62.1.1',
MASTER_USER='repluser',
MASTER_PASSWORD='replpassword',
MASTER_LOG_FILE='bin.000001',
MASTER_LOG_POS=293;
mysql> start slave;
mysql> show slave status\G
Exporting/Importing the Data
Now you want to create any databases, users, and grants, and then import any data you want. Keep an eye on show slave status\G on the server opposite to where you're doing all this, to make sure it is replicating correctly :)
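If you're moving an existing site in, a plain mysqldump imported into one master is enough – the statements replicate across to the other side on their own. A rough sketch (database name and filenames are placeholders):

```shell
# On the old server: dump one site's database
mysqldump -u root -p --single-transaction sitedb > sitedb.sql

# On Server A: create the database and pull the dump in,
# then watch it appear on Server B
mysql -u root -p -e "CREATE DATABASE sitedb;"
mysql -u root -p sitedb < sitedb.sql
```

Big dumps mean a big burst of replication traffic between countries, so it's worth doing this inside your maintenance window.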
And you’re done!
What a crazy weekend
So a Classic Industries ‘mini tourer’ popped up on ebay last week for $1200, Buy It Now. It was an ’86 model, so fairly old shape, but I still couldn’t resist – that was basically what I had budgeted for building a trailer! So I phoned the guy, had a good chat, and bought it. I’ll ride down on the weekend to pick it up, thinks I – it’s in Canberra, but that’s only 4 hours each way. Do it on the bike, and it’ll cost 80 bucks, max.
7am Saturday morning rolls around, and off I ride. 30 mins down the road, all my auxiliary power devices shut down. iPod, GPS, Autocom, etc. Check fuses, nope, those are all good. Ended up with the tank raised, inside what I call the 'brain box' – where all the relays live on the K100. I have NO idea what I was thinking when I first wired the bike up, but I had a daisy-chain-cum-star nest of earth wires to all the relays and various circuit feeds. And this was iffy. Messed with it for a while, thinking I'll fix it properly tomorrow over a relaxing day, and got everything working. Back on the road again, 30 mins behind.
Roll off the throttle 15km before Goulburn, and the bike dies. UGH! Pull over, she starts fine. Next time I roll off, same thing! Babied it into Goulburn, removed the fuel filter, up-ended it onto the driveway, and wow, that's a lot of water and other crap!
Continue to Canberra, no probs. It’s way too hot, so I strap my jacket across the back seat, and continue on in my tshirt. I haven’t burnt in over 10 years, so I’m not scared of the sun. I remember back to my Summernats days, the Maori guys would be lathering up with sunscreen, I never bothered, I just get more brown, if anything..
Arrive in Canberra at the sellers house, and bugger me, I’m going red! UV must be through the roof today! Trailer is a little beauty though! Went across to Duffy to catch up with my best man for a while, then headed off northbound.
Wow, the trailer moves around! The seller, Allan, did say you need to have a certain amount of weight in it. Apparently the bag of soil he put in for me wasn't enough :) 'Well, I have an esky', I think, though it isn't currently mounted – it's sitting inside the trailer. I buy three bags of ice at the next servo (and some drinks), and the trailer now tows perfectly!
20km short of Goulburn, the bike starts playing up again – the same old dying on power-off problem as on the way down! I manage to baby it to Goulburn, empty the filter, yup it’s got water again! I go buy a 20l jerry can (cost me 60 bucks!!), remove the tank, and drain the tank into it – wow there was a lot of gunk! Reassemble the bike, and off I go again. By this time, it’s 8pm! I get 10km down the road, and the problem re-appears! I managed to baby it another 15km, then it gave up totally, and would not fire at all.
Ended up getting towed to Goulburn; the mrs drove down and picked me up, and towed the trailer home, then I did another trip down on the Sunday to get the bike. Time to look at the bike during the week!
All up, I was on the road for 34hr continuously – not a good weekend!