Setting up multiple MySQL slaves on one server using Docker Compose
This article is going to go through the steps to set up multiple MySQL slaves on one server using Docker Compose.
These slaves will be used only for backup purposes, and MySQL slaves work by pulling the binary logs from the master, so no ports need to be exposed to the Internet. Additionally, all interaction with the MySQL instances will be done using docker exec, not the network.
I'm going to assume that these slaves are going to be configured from a mysqldump with --master-data specified, but they could also be configured from a binary copy, such as an LVM snapshot of the master or rsync from another replica.
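For reference, the kind of dump I'm assuming could be taken on each master with something along these lines (the exact options are up to you; --single-transaction assumes InnoDB tables):

mysqldump --defaults-file=/etc/mysql/debian.cnf --master-data --single-transaction --all-databases > client1.sql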
In my case, I use LVM volumes for the persistent volumes so I can control disk space per replica, but I'm not going to cover that here - if you want to do something like that, just mount it at /persistent/client1 (for example) before starting.
Set up the basic structure:
mkdir /persistent/
mkdir -p /etc/docker/slave
I'm going to be importing from databases which still use the /etc/mysql/debian.cnf file with a debian-sys-maint user, so I'm going to use the same passwords and files on the slave, configured in /etc/docker/slave/.env:
CLIENT1_PASSWORD=vaveeneil6ohcaiXooGh
CLIENT2_PASSWORD=Iu7iewohquo0YuQu6ahl
CLIENT3_PASSWORD=ThaiLai0Aira4mahngoo
CLIENT4_PASSWORD=Nev3gi2vaixu5Ioth2oa
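The passwords above are whatever each master already has for debian-sys-maint; if you need to look them up, something like this on each master will print the one in use:

grep -m1 '^password' /etc/mysql/debian.cnf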
I'm going to create one default MySQL config file for all of them which enables the servers as slaves. It will be overwritten by a per-host config file (client1.cnf for example) if it exists.
Create /etc/docker/slave/my.cnf:
[mysqld]
innodb_strict_mode
innodb_file_per_table = ON
# Need binlogging for PITR and for a slave
# Server ID needs to be unique per cluster.
# If these four clients are all in separate clusters it's fine for them all to share the same ID
server-id = 2000
log_bin = /var/lib/mysql/mysql-bin.log
binlog_format = row
max_binlog_size = 100M
expire_logs_days = 5
relay-log = mysql-slave
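Because the per-host file gets copied over the same 50-server.cnf, it replaces this default rather than extending it, so a per-client file should be a full copy of the above with whatever that client needs changed. A purely illustrative /etc/docker/slave/client1.cnf might look like:

[mysqld]
innodb_strict_mode
innodb_file_per_table = ON
# A different ID for this client's cluster
server-id = 2001
log_bin = /var/lib/mysql/mysql-bin.log
binlog_format = row
max_binlog_size = 100M
expire_logs_days = 5
relay-log = mysql-slave
# Client-specific tuning - value is just an example
innodb_buffer_pool_size = 1G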
Next I create /etc/docker/slave/docker-compose.yaml with each of the services defined:
services:
  client1:
    build:
      context: .
      dockerfile_inline: |
        FROM mariadb:10.6
        COPY my.cnf client1.cn[f] /etc/mysql/mariadb.conf.d/50-server.cnf
        RUN echo "[client]\nhost = localhost\nuser = debian-sys-maint\npassword = ${CLIENT1_PASSWORD}" > /etc/mysql/debian.cnf
    environment:
      - "MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=True" # This can be deleted once the data has been imported, we're overwriting mysql.*
    volumes:
      - /srv/drbd/client1-slave-10.6:/var/lib/mysql
    networks:
      mysql_slaves:
    restart: unless-stopped
  client2:
    build:
      context: .
      dockerfile_inline: |
        FROM mariadb:10.6
        COPY my.cnf client2.cn[f] /etc/mysql/mariadb.conf.d/50-server.cnf
        RUN echo "[client]\nhost = localhost\nuser = debian-sys-maint\npassword = ${CLIENT2_PASSWORD}" > /etc/mysql/debian.cnf
    environment:
      - "MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=True"
    volumes:
      - /srv/drbd/client2-slave-10.6:/var/lib/mysql
    networks:
      mysql_slaves:
    restart: unless-stopped
  client3:
    build:
      context: .
      dockerfile_inline: |
        FROM mariadb:10.6
        COPY my.cnf client3.cn[f] /etc/mysql/mariadb.conf.d/50-server.cnf
        RUN echo "[client]\nhost = localhost\nuser = debian-sys-maint\npassword = ${CLIENT3_PASSWORD}" > /etc/mysql/debian.cnf
    environment:
      - "MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=True"
    volumes:
      - /srv/drbd/client3-slave-10.6:/var/lib/mysql
    networks:
      mysql_slaves:
    restart: unless-stopped
  client4:
    build:
      context: .
      dockerfile_inline: |
        FROM mariadb:10.6
        COPY my.cnf client4.cn[f] /etc/mysql/mariadb.conf.d/50-server.cnf
        RUN echo "[client]\nhost = localhost\nuser = debian-sys-maint\npassword = ${CLIENT4_PASSWORD}" > /etc/mysql/debian.cnf
    environment:
      - "MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=True"
    volumes:
      - /srv/drbd/client4-slave-10.6:/var/lib/mysql
    networks:
      mysql_slaves:
    restart: unless-stopped
networks:
  mysql_slaves:
    driver: bridge
I've inlined the Dockerfile to keep it simple while allowing me to specify per-client configuration files if I want to. If the file exists (e.g. /etc/docker/slave/client1.cnf), it will overwrite the mysqld config in the container.
There is nothing stopping you from using different versions of MySQL for each slave here, but you might need to COPY the configuration file to a different place (for instance, at least the Percona Server image looks in /etc/mysql/docker.cnf).
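As a rough sketch (untested here - the image tag is an assumption, and the config path is the one noted above), such a service only needs a different FROM line and COPY destination, with the rest of the block written the same way as the MariaDB ones (swapping the MARIADB_* environment variable for its MySQL equivalent):

  client5:
    build:
      context: .
      dockerfile_inline: |
        FROM percona:8.0
        COPY my.cnf client5.cn[f] /etc/mysql/docker.cnf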
Also in this instance, I've put each container on the same bridge network. You could put them on their own network to isolate them from each other.
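If you wanted that isolation, the change is to give each service its own network and define them all at the bottom; a sketch for one client (the network names here are illustrative):

services:
  client1:
    # ... as above, but attached to its own network
    networks:
      client1_net:
networks:
  client1_net:
    driver: bridge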
Bring them up and verify they are all running:
cd /etc/docker/slave
docker compose up -d
docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED         STATUS        PORTS      NAMES
8007437880f9   slave-client1   "docker-entrypoint.s…"   3 seconds ago   Up 1 second   3306/tcp   slave-client1-1
f895faeff9eb   slave-client2   "docker-entrypoint.s…"   3 seconds ago   Up 1 second   3306/tcp   slave-client2-1
85ce2696ae57   slave-client3   "docker-entrypoint.s…"   3 seconds ago   Up 1 second   3306/tcp   slave-client3-1
1e1d04a7885b   slave-client4   "docker-entrypoint.s…"   3 seconds ago   Up 1 second   3306/tcp   slave-client4-1
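If any of them doesn't stay up, the MariaDB startup output for a given service is available with:

docker compose logs client1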
Each of the instances has an empty root password set. Now we can import the mysqldump for each, and overwrite the root password at the same time - the dumps include the mysql.* tables, so importing them restores each master's users and passwords. This could be done automatically, but I prefer to configure the data manually.
docker exec -i slave-client1-1 mysql < ~/mysql_imports/client1.sql
docker exec -i slave-client2-1 mysql < ~/mysql_imports/client2.sql
docker exec -i slave-client3-1 mysql < ~/mysql_imports/client3.sql
docker exec -i slave-client4-1 mysql < ~/mysql_imports/client4.sql
From each one of those files, grep out the replication settings you're going to use; in my case I'm using the log file and log position:
grep -m1 'CHANGE MASTER' ~/mysql_imports/*.sql
client1.sql:CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=543;
client2.sql:-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=1534;
client3.sql:CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=543;
client4.sql:CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=543;
Then log in to each slave and configure the master settings:
docker exec -it slave-client1-1 mysql --defaults-file=/etc/mysql/debian.cnf
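Inside each one, something along these lines sets up and starts replication (the host, user and password are placeholders for that client's master and its replication credentials; the log file and position come from the grep above):

-- placeholders: point this at the real master with a user that has REPLICATION SLAVE
CHANGE MASTER TO
  MASTER_HOST='master1.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='replication-password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=543;
START SLAVE;
-- then check that Slave_IO_Running and Slave_SQL_Running are both Yes
SHOW SLAVE STATUS\G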
Once you've verified that each slave is running and replicating, the next step is to configure backups and some form of monitoring. I'll cover this in another post.