Working with MySQL Multi-master Replication – Keeping a True Hot Standby


This article is a continuation of a replication/failover series.
Knowing and working with master-master replication and keeping a true hot
standby is very important, especially when your business requires a fast
failover with the least amount of downtime.

Last month I wrote an article
on two-node manual failovers. Today we are diving into the
wonderful world of multi-master replication. Knowing how to set up and implement
multi-master replication is the next step to faster, automated failover.

There are many reasons why you would want to set up multi-master replication in your production environments. Some of those reasons are streamlining large maintenance operations, like ALTER TABLE statements, and full dump-and-reload of data. Basically, we are getting closer to faster bilateral failovers in the case of a planned or unplanned outage. Multi-master can be defined in a couple of different ways.

A true multi-master setup is one in which an application is writing data to, and reading data from, both servers. This is not an optimal setup for a number of reasons. We are not going to get into them now, but just think about how to recover from a split-brain data situation without an arbitrator. I'm sure you get my point!

The multi-master setup we will cover in this article is more specifically called master-master replication. This type of replication is the same as normal replication; however, with master-master replication you can easily fail over both ways, not just one way. Running a two-node cluster with master-master replication can aid in:

  • Planned schema changes (large ALTER statement(s))
  • Full dump out, destroy and reload
  • Some hardware outages
    • Depending on the hardware failure and the level of service you provide to your customer base under your service level agreement
    • For the most part, master-master replication will provide peace of mind in a non-catastrophic hardware failure and, more than likely, in catastrophic ones as well

In a master-master replicated pair, one of the servers takes both the read and write load while the second is considered a "hot standby" server. Typically, the "hot standby" is used for non-essential purposes, like running one-time, long-running queries or data dumps for QA or testing. The hot-standby server can also be used for more essential purposes like serving read load, taking backups and, of course, replication (replication is a given with this setup!).

In either case, whether the "hot standby" is being used for essential purposes or not, what resides in its caches will not be identical to what is in the master server's caches; that is, unless you use a couple of simple techniques. First, let's go through the simple steps for master-master replication setup.

The setup

In this example, I already have MySQL running on two separate hosts. The first host is called testhost1, running on 192.168.1.101. The second host is called testhost2, running on 192.168.1.102. Our main goal is to get these two servers replicating to each other.

The Steps

1.  Ensure that log-bin is enabled on each of the servers

2.  Ensure that each server has its own unique server-id

a.  For this example, the server-id on testhost1 is 1 and on testhost2 it is 2

b.  Ensure that read-only is enabled on testhost2. This is optional, but it might save you from headaches down the line; plus, it's not hard to run "SET GLOBAL read_only = OFF" during a failover situation. A sample configuration covering these first two steps follows below.

3.  Run SHOW MASTER STATUS on testhost1 (192.168.1.101)

mysql> show master status\G
*************************** 1. row ***************************
File: binary-logs.000002
Position: 107
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)

4.  Now create your CHANGE MASTER TO statement

	CHANGE MASTER TO
	MASTER_HOST='192.168.1.101',
	MASTER_USER='repluser',
	MASTER_PASSWORD='bigsecret',
	MASTER_PORT=3306,
	MASTER_LOG_FILE='binary-logs.000002',
	MASTER_LOG_POS=107,
	MASTER_CONNECT_RETRY=10;

5.  Run the CHANGE MASTER TO statement on testhost2 and then run START SLAVE

6.  Run SHOW SLAVE STATUS on testhost2 to ensure that replication is caught up and running correctly
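For steps 5 and 6, the commands on testhost2 look something like the following. The abbreviated output is a sketch rather than a captured session; the three fields shown are the ones to watch.

	mysql> START SLAVE;
	mysql> SHOW SLAVE STATUS\G
	...
	     Slave_IO_Running: Yes
	    Slave_SQL_Running: Yes
	Seconds_Behind_Master: 0
	...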

Note: you should now have replication set up from testhost1 to testhost2

7.  Repeat steps 3 through 6, this time creating a replication stream from testhost2 to testhost1 (a sketch of the reverse direction follows the note below)

Note: you should now have replication flowing in both directions between testhost1 and testhost2
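For the reverse stream, run SHOW MASTER STATUS on testhost2 (192.168.1.102), then issue a CHANGE MASTER TO on testhost1 and follow it with START SLAVE. The statement below is a sketch; the MASTER_LOG_FILE and MASTER_LOG_POS values are placeholders you must replace with whatever testhost2 actually reports.

	CHANGE MASTER TO
	MASTER_HOST='192.168.1.102',
	MASTER_USER='repluser',
	MASTER_PASSWORD='bigsecret',
	MASTER_PORT=3306,
	MASTER_LOG_FILE='binary-logs.000001',
	MASTER_LOG_POS=107,
	MASTER_CONNECT_RETRY=10;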

A "hot standby" is a server that can take over for the other node with minimal to no warm-up time. As a side note, you have to ensure that, in the event of a node failure, one database node will be able to handle ALL application traffic (reads and writes). In other words, be careful not to disguise throughput capacity issues by splitting your read activity evenly across the two nodes. Make sure you test the production load, or something close to it, on one node before you split the load between two nodes.

There are a few ways to make sure your hot standby server is warm enough. You can write your own script based on what MySQLSniffer does, or you can save a boatload of time and just use a tool from Maatkit, specifically mk-query-digest.

I would like to remind anyone attempting this technique to read the documentation and test before adding this to your production environment. Straight from the mk-query-digest perl documentation comes the key to accomplishing a true hot standby, or at least getting really close to it.

	shell> perldoc /usr/bin/mk-query-digest

Watch a server’s SHOW FULL
PROCESSLIST, filter out everything but SELECT queries, and replay the queries
against another server, then use the timings from replaying them to analyze
their performance:

mk-query-digest --processlist h=host1 --execute h=another_server --filter '$event->{fingerprint} =~ m/^select/'

Additional flags that you should consider for master-master replication and automation are --mirror and --daemonize. Remember the optional read-only setting during the setup… well, if you didn't enable it on the hot-standby server, then here comes a small headache! Restart your hot standby with read-only enabled.
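Putting those pieces together, the invocation driving the hot standby could look something like the line below. Treat it as a sketch: the host names and the one-second --mirror check interval are assumptions, so confirm how --mirror and --daemonize behave in the perldoc for the version you have installed before relying on it.

mk-query-digest --processlist h=testhost1 --execute h=testhost2 --filter '$event->{fingerprint} =~ m/^select/' --mirror 1 --daemonize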

Some conclusions

The term "hot standby" is used loosely by the different MySQL administrators I have spoken with. To ensure that you have the closest representation of production load on the hot standby server, use something like mk-query-digest because it's already there and ready to go. If you want a different solution, try MySQL Proxy, MySQLSniffer or a homegrown solution; sure, it's more of a challenge, but it could be fun! Be careful with master-master replication failovers while backups are running on the hot standby.

As always, mileage may vary…
TEST, TEST, TEST then TEST some more!

Chris Schneider

Certified MySQL professional, actively involved in the MySQL community for the better part of a decade. I have designed, implemented and maintained small to large scale MySQL installations while training and mentoring teams of DBAs. This includes building architectures from the ground up and improving on those that are currently in place while emphasizing scalability, performance and ease of use. Most of the work I am involved with now is in the Web 2.0 arena. I also run a community site, EverythingMySQL.com, that promotes collaboration within the open source community, specifically, MySQL.
