Docker Swarm with interlock


Docker is a software layer that makes it possible to run Linux applications inside isolated containers on a single shared operating system.

This type of virtualization is lighter, portable, scalable and easy to manage: it is a good alternative to classical virtualization approaches, like xen, kvm or vmware, where every virtual machine runs with its own kernel and operating system.

In my opinion there is no single best virtualization approach; it depends on what you need to do. Certainly the docker project provides, in a simple way, the possibility to deliver a highly scalable and available architecture.

In this context, the goal of this article is to show a scalable and highly available solution for a web application, running on a docker cluster with two hosts, proxied by an nginx that, automatically configured by interlock, communicates with a MySQL database.

The high availability is provided by docker swarm and the scalability by interlock.

The reference architecture is the following:

docker swarm with interlock

The architecture above has the following features.

  1. The PHP web application is hosted in an Apache container and started by docker-compose.
  2. The Apache containers are run on both nodes of the cluster by the Swarm manager.
  3. The Apache containers communicate with the database over an overlay network attached to both docker nodes.
  4. An nginx instance is used as reverse proxy and is configured automatically by the interlock container when the Apache containers are created.
  5. The nginx is run on one of the two systems by the Swarm manager: a virtual server in front of the nginx, managed by pfsense or keepalived, should be present. The configuration of this system is out of scope.
  6. All the cluster information (number and IP addresses of the nodes, overlay network information) is saved in a key-value store called consul.

Before starting, let's introduce the swarm docker cluster.

Swarm docker cluster

The Swarm feature described here has since been added natively to docker (swarm mode); you can skip this paragraph and read about that new approach instead.

Docker swarm is native clustering for docker. All the docker hosts are managed by a swarm manager as a single virtual docker host.

Every docker host runs a swarm agent that registers the IP address and port of its docker daemon in a discovery container, consul in this case.

The swarm manager discovers all the docker daemons by consulting the discovery container and, through the docker API, runs the containers on multiple hosts. Since docker 1.12 it is possible to control what the Swarm scheduler does with containers when the nodes they are running on fail.

The containers spread across different hosts need to communicate with each other, and this is done through an overlay network. The information about that network is stored by the docker daemon in the consul server.

The PHP web application, hosted in an Apache container, communicates with the database over this overlay network. It does not matter on which host the container is running: the solution is fully scalable.

For simplicity, the following assumptions are made:

  1. There is only one swarm manager. Configuring a multi-master swarm manager is possible but not covered here.
  2. The communication between the swarm agents and the consul server is in clear text. No TLS protocol is used.
  3. The MySQL database runs only on the first node.

Let's start by installing docker-engine and docker-compose.

Docker Installation

Docker-engine is very easy to install.

On both nodes, called swarm-01 and swarm-02, running CentOS 7.2:

[root@swarm-01 ~]# vi /etc/yum.repos.d/docker-main.repo
name=Docker main Repository
[root@swarm-01 ~]# yum install docker-engine
[root@swarm-01 ~]# systemctl enable docker

Next, let's install docker-compose, a tool for automating the running and management of containers:

[root@swarm-01 ~]# curl -L`uname -s``uname -m` > /usr/local/bin/docker-compose
[root@swarm-01 ~]# chmod +x /usr/local/bin/docker-compose

Now we are going to create a YAML file with all the services of our architecture. The first service is the consul container, which stores information about the nodes participating in the cluster and about the overlay network that permits containers running on different nodes to communicate with each other.

The complete docker-compose file is available for download.

[root@swarm-01 ~]# vi env.yml
myconsul:
  image: progrium/consul
  restart: always
  hostname: consul
  ports:
    - 8500:8500
  command: "-server -bootstrap"

This consul container will be running on the first node.

[root@swarm-01 ~]#docker-compose -f env.yml up -d myconsul

The parameter "restart: always" ensures the automatic restart of the container when docker-engine comes up. The consul server is listening on port 8500 inside the container, and this port is NATted to port 8500 on the host.

[root@swarm-01 docker]# docker ps -a
b5db9442ac8a progrium/consul “/bin/start -server -” 16 seconds ago Up 14 seconds 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp,>8500/tcp docker_myconsul_1
[root@swarm-01 docker]# netstat -anp |grep 8500
tcp6 0 0 :::8500 :::* LISTEN 21663/docker-proxy
[root@swarm-01 docker]# iptables -t nat --list |grep 8500
DNAT tcp — anywhere anywhere tcp dpt:fmtp to:
[root@swarm-01 docker]# docker inspect b5db9442ac8a|grep
“IPAddress”: “”,
“IPAddress”: “”,
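Consul exposes its key-value store through a simple HTTP API on port 8500 (GET http://consul-host:8500/v1/kv/key), which is handy for inspecting what docker writes there. The sketch below only decodes a sample response of the shape the KV API returns; the key and value are invented for illustration, not read from a live cluster.

```python
import base64
import json

# The Consul KV API returns a JSON array of entries whose "Value"
# field is base64-encoded. This sample mimics that shape; the key
# and value are illustrative only.
sample_response = json.dumps([
    {"Key": "example/key",
     "Value": base64.b64encode(b"hello").decode()}
])

def decode_kv(body: str) -> dict:
    """Map each Consul KV key to its base64-decoded value."""
    return {entry["Key"]: base64.b64decode(entry["Value"]).decode()
            for entry in json.loads(body)}

print(decode_kv(sample_response))  # {'example/key': 'hello'}
```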

Now the docker daemon must be restarted with a changed start configuration, for these reasons:

  1. To permit connecting to the consul server for storing information about the overlay network.
  2. To permit listening on a TCP port (2375 in this case) for receiving connections from a swarm manager that can run on a different host.

The new start configuration:

[root@swarm-01 docker]# vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --label storage=ssd --cluster-advertise=enp0s3:2375 --cluster-store=consul:// -H tcp:// -H fd://

The docker daemon is listening on TCP port 2375 and on a Unix socket. The storage=ssd label should be present only on the swarm-01 node: this will permit the Swarm manager to start the database container only on this node.

The option --cluster-advertise advertises the docker address to the other manager nodes in the swarm: not really needed in this case because only one swarm manager is running.

We are using the option fd://, which forces the docker daemon to receive its socket via systemd socket activation (an alternative to the default Unix socket listening), so we have to enable socket activation in this way:

[root@swarm-01 docker]# vi /usr/lib/systemd/system/docker.socket
Description=Docker Socket for the API
[root@swarm-01 docker]# systemctl enable docker.socket
[root@swarm-01 docker]# systemctl start docker.socket

If docker.service is down, starting the docker.socket unit is enough to start docker: when a client connects to the Unix socket (/var/run/docker.sock), systemd automatically starts the docker.service unit. It works like the old xinetd daemon.
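For reference, this is roughly what the stock docker.socket unit looks like on these docker versions (a sketch of the standard unit, not necessarily the exact file shipped with this package):

```ini
[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
# systemd listens here and starts docker.service on the first connection.
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
```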

Now let's install the swarm docker cluster.

Swarm docker cluster Installation

The Swarm manager is the central point of the cluster. It finds the nodes of the cluster by consulting the discovery server and orchestrates the running of containers on them.

For starting the swarm manager, this section must be added inside the env.yml:

[root@swarm-01 docker]# vi env.yml
swarm_manager:
  image: swarm
  restart: always
  hostname: swarm_manager
  ports:
    - 8333:2375
  command: "manage consul://"
[root@swarm-01 docker]# docker-compose -f env.yml up -d swarm_manager
[root@swarm-01 docker]# docker ps -a
0e651157a2ca swarm “/swarm manage consul” About an hour ago Up About an hour>2375/tcp swarm-01/docker_swarm_manager_1
1ebf704bc289 progrium/consul “/bin/start -server -” About an hour ago Up About an hour 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp,>8500/tcp swarm-01/docker_myconsul_1

The swarm manager is listening on port 8333, NATted to container port 2375, and it connects to the consul server on port 8500.

Now the swarm agent can be started on both nodes. This section must be added inside the env.yml file:

[root@swarm-01 docker]# vi env.yml
swarm_join:
  image: swarm
  restart: always
  hostname: swarm_join
  command: "join --addr= consul://"
[root@swarm-01 docker]# docker-compose -f env.yml up -d swarm_join
Creating docker_swarm_join_1
[root@swarm-01 docker]# docker ps -a |grep join
f6e124aa5e41        swarm               "/swarm join --addr=1"   Less than a second ago   Up Less than a second   2375/tcp
[root@swarm-01 docker]# docker logs f6e124aa5e41
time="2016-09-17T22:47:57Z" level=info msg="Registering on the discovery service every 20s…" addr="" discovery="consul://"

The cluster status is shown in this way: the healthy state confirms that the node is registered correctly in the consul server.

[root@swarm-01 docker]# docker -H tcp:// info
Containers: 5
Running: 4
Paused: 0
Stopped: 1
Images: 13
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
└ Status: Healthy
└ Containers: 3
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.022 GiB
└ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storage=ssd, storagedriver=devicemapper
└ UpdatedAt: 2016-09-18T20:38:03Z
└ ServerVersion: 1.12.1
└ Status: Healthy
└ Containers: 2
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 2.022 GiB
└ Labels: executiondriver=, kernelversion=3.10.0-327.28.3.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
└ UpdatedAt: 2016-09-18T2

From now on, every container is started by the swarm manager, which runs the containers by connecting directly to the docker engine listening on TCP port 2375 of each host.
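The scheduling strategy reported by the info output above is spread: the manager prefers the node running the fewest containers. A rough illustration of the idea, not Swarm's actual implementation:

```python
def pick_node(containers_per_node: dict) -> str:
    """Pick the node running the fewest containers, mimicking the
    'spread' strategy reported by `docker info`."""
    return min(containers_per_node, key=containers_per_node.get)

# Hypothetical container counts for the two nodes:
cluster = {"swarm-01": 3, "swarm-02": 2}
print(pick_node(cluster))  # swarm-02
```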

Let's show an example with hello-world, where the container is started on the second node.

[root@swarm-01 docker]# docker  -H tcp:// run hello-world
[root@swarm-01 docker]# docker -H tcp:// ps -a|grep hello-world
44dfe0d41c70 hello-world “/hello” Less than a second ago Exited (0) Less than a second ago

The swarm cluster has been configured; let's now configure the interlock container for scaling the Apache containers automatically behind nginx.

Interlock and nginx

Interlock is a container that interacts with the other Docker containers: when docker events are received from the swarm manager, it configures the extension, nginx in this scenario, and then reloads it.
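The docker remote API exposes these events as a stream of JSON objects on the /events endpoint, and interlock reacts only to the ones that matter for the proxy (container start/stop). The snippet below parses a few sample events of that general shape; the ids and image names are invented for the example:

```python
import json

# Three events in the general shape of the Docker /events stream
# (one JSON object per line); ids and image names are made up.
stream = "\n".join([
    json.dumps({"status": "start", "id": "aaaa", "from": "web"}),
    json.dumps({"status": "die", "id": "bbbb", "from": "web"}),
    json.dumps({"status": "pull", "id": "cccc", "from": "nginx"}),
])

def proxy_events(raw: str, wanted=("start", "die")):
    """Keep only the container events a proxy reconfigurator reacts to."""
    events = (json.loads(line) for line in raw.splitlines())
    return [e for e in events if e.get("status") in wanted]

for e in proxy_events(stream):
    print(e["status"], e["id"])
```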

Interlock too is started with docker-compose, adding this configuration in the env.yml:

[root@docker-test-01 interlock]# more env.yml
interlock:
  image: ehazlett/interlock:master
  command: -D run -c /etc/interlock/config.toml
  tty: true
  ports:
    - 8080
  volumes:
    - /var/lib/boot2docker:/var/lib/boot2docker:ro
nginx:
  image: nginx:latest
  entrypoint: nginx
  command: -g "daemon off;" -c /etc/nginx/nginx.conf
  ports:
    - 80:80
  labels:
    - ""

And the interlock configuration file referenced by the command:

[root@docker-test-01 interlock]# more config.toml
ListenAddr = ":8080"
DockerURL = ""
[[Extensions]]
Name = "nginx"
ConfigPath = "/etc/nginx/nginx.conf"
PidPath = "/var/run/"
TemplatePath = ""
MaxConn = 1024
Port = 80

Interlock works in this way: every container labelled with interlock.hostname and interlock.domain is added automatically to the nginx configuration as a virtual host hostname.domain.

The important field is DockerURL, which points to the swarm manager, in order to listen for events about the creation and stopping of containers labelled with the interlock fields. The Extensions section contains the container to configure, nginx, with its configuration file, nginx.conf.

Then the nginx and interlock containers are started with docker-compose:

[root@swarm-01 interlock]# docker-compose -H tcp:// up -d nginx
Creating interlock_nginx_1
[root@swarm-01 interlock]# docker-compose -H tcp:// up -d interlock
Creating interlock_interlock_1
[root@swarm-01 interlock]# docker -H tcp:// ps -a |grep “interlock\|nginx”
028ff5dbad1d ehazlett/interlock:master “/bin/interlock -D ru” About a minute ago Up About a minute>8080/tcp
1cd18bd5d7cc nginx:latest “nginx -g ‘daemon off” About a minute ago Up About a minute>80/tcp, 443/tcp

The nginx is started on swarm-02 and, automatically configured by interlock, it will balance the HTTP requests to the Apache web servers.

Before testing the architecture, the PHP Apache and MySQL containers must be started. Let's see how to do that.

Apache and mysql installation

The web application balanced by nginx is written in PHP. It's very simple: a short PHP script that prints ok or an error depending on whether the connection to the database completed successfully. In real life it would be more complex.

The example is taken from an external site.

The first thing to do is to create a docker image using a Dockerfile based on Ubuntu, where apache2, PHP and the MySQL libraries are installed:

[root@swarm-01 docker]# cd apache/
[root@swarm-01 apache]# ls -tlr
total 12
-rw-r--r--. 1 root root 1232 Sep 5 12:35 Dockerfile
-rw-r--r--. 1 root root 345 Sep 5 12:35 apache-config.conf
drwxr-xr-x. 2 root root 4096 Sep 13 13:03 www
[root@swarm-01 apache]# more Dockerfile
FROM ubuntu:latest
MAINTAINER Dan Pupius <>
# Install apache, PHP, and supplementary programs. openssh-server, curl, and lynx-cur are for debugging the container.
RUN apt-get update && apt-get -y upgrade && DEBIAN_FRONTEND=noninteractive apt-get -y install \
apache2 php7.0 php7.0-mysql libapache2-mod-php7.0 curl lynx-cur
# Enable apache mods.
RUN a2enmod php7.0
RUN a2enmod rewrite
# Update the PHP.ini file, enable <? ?> tags and quieten logging.
RUN sed -i "s/short_open_tag = Off/short_open_tag = On/" /etc/php/7.0/apache2/php.ini
RUN sed -i "s/error_reporting = .*$/error_reporting = E_ERROR | E_WARNING | E_PARSE/" /etc/php/7.0/apache2/php.ini
# Manually set up the apache environment variables
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
# Expose apache.
EXPOSE 80
# Copy this repo into place.
ADD www /var/www/site
# Update the default apache site with the config we created.
ADD apache-config.conf /etc/apache2/sites-enabled/000-default.conf
# By default start up apache in the foreground, override with /bin/bash for interactive use.
CMD /usr/sbin/apache2ctl -D FOREGROUND
[root@swarm-01 apache]# more www/index.php
<?php
mysqli_connect("db", "root", "wordpress") or die(mysqli_connect_error());
echo "Connected to MySQL<br />";
[root@swarm-01 apache]# docker build -t web .
[root@swarm-01 apache]#docker images |grep web
web latest a14de3d8bf95 5 days ago 285.7 MB

The web image has been created and is ready for starting containers from it. Since I don't use a docker registry as image repository, it's necessary to build it on both nodes.

As usual, I will use docker-compose, adding the following to our env.yml file:

[root@swarm-01 interlock]# vi env.yml
web:
  image: web
  restart: always
  ports:
    - 80
  labels:
    - "interlock.hostname=test"
    - "interlock.domain=local"
  net: "my-multi-host-network"

The Apache containers have a new interface in the network my-multi-host-network: this overlay network spans both hosts and permits containers on different hosts to communicate with each other.

The information about the overlay network is saved in the consul server, and this is the reason why the docker engine is started with a parameter set to the consul server address.

Here is how to create the overlay network:

[root@swarm-01 interlock]# docker network create --driver overlay my-multi-host-network
[root@swarm-01 interlock]# docker -H tcp:// network ls|grep my-multi-host-network
4ae778a320f9        my-multi-host-network                      overlay             global
[root@swarm-01apache]# docker -H tcp:// network inspect my-multi-host-network |grep Subnet
“Subnet”: “”,
[root@swarm-01apache]# docker -H tcp:// network inspect my-multi-host-network |grep Gateway
“Gateway”: “”

We could have chosen a different subnet for the overlay network: I preferred to use the default. The suggestion is to avoid overlaps with the local subnets.

Let's now start the db container. For this purpose, the following configuration must be added to the env.yml:

[root@swarm-01 interlock]# vi env.yml
db:
  image: mysql:5.7
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
  restart: always
  container_name: db
  environment:
    - "constraint:storage==ssd"
    - "MYSQL_ROOT_PASSWORD=wordpress"
  net: "my-multi-host-network"

The container_name is db: this permits the Apache containers to contact the database using the container name as host address, which is resolved by an internal DNS inside the container to the overlay network IP assigned to the database.
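This name resolution can be thought of as a per-network name-to-IP table maintained by docker's embedded DNS. A toy model (the addresses are illustrative, in the overlay range used by this cluster):

```python
# Toy model of Docker's embedded DNS for one overlay network:
# each container name maps to its overlay IP (addresses invented).
overlay_dns = {
    "db": "",
    "web": "",
}

def resolve(name: str) -> str:
    """Resolve a container name the way the embedded DNS would."""
    try:
        return overlay_dns[name]
    except KeyError:
        raise LookupError(f"unknown host: {name}") from None

print(resolve("db"))  # the Apache containers reach MySQL this way
```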

The constraint storage==ssd forces the swarm manager to start the container on the host with this label, the swarm-01 node in our example.

Before starting the db container, the directory /var/lib/mysql must be created on the first node; then the Apache and database containers are started and the communication between them is checked:

[root@swarm-01 interlock]# docker-compose up -d db
Creating db
[root@swarm-01 interlock]# docker-compose up -d web
Creating web
[root@swarm-01 interlock]# docker -H tcp:// ps -a|grep web
f9a505c55325 web “/bin/sh -c ‘/usr/sbi” 11 hours ago Up 11 hours>80/tcp
[root@swarm-01 interlock]# docker -H tcp:// ps -a|grep mysql
e082d0ca8574 mysql:5.7 “” 11 seconds ago Up 11 seconds 3306/tcp
[root@swarm-01 interlock]# docker -H tcp:// exec -it f9a505c55325 bash
root@f9a505c55325:/# ping db
PING db ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=64 time=0.313 ms
64 bytes from ( icmp_seq=2 ttl=64 time=0.238 ms
--- db ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.238/0.275/0.313/0.041 ms

The db and web containers are in the overlay network. A container with an interface in an overlay network has two network interfaces: one belongs to the overlay network and the other is connected to docker_gwbridge.

The docker_gwbridge bridge permits the containers to communicate with the outside world. This is how to show it:

[root@docker-test-01 ns]# docker network ls
cfa487bf2680 bridge bridge local
4ae778a320f9 my-multi-host-network   overlay             global
f580e9407fe5 docker_gwbridge bridge local
[root@docker-test-01 ns]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242734a6c93 no veth0b56880
docker_gwbridge 8000.0242baad0f37 no veth6a38bf9 vethc0001cf

The docker_gwbridge has two virtual interfaces mapped to the virtual interfaces (eth1) of the db and web containers: this is how network namespaces work.

Connecting inside the db container, it's possible to see the two interfaces; eth1 is connected to veth6a38bf9 for outside communication:

[root@docker-test-01 ns]# ethtool -S veth6a38bf9
NIC statistics:
peer_ifindex: 32
[root@docker-test-01 ns]# docker exec -it 510dca529cec bash
root@510dca529cec:/# ip addr show
29: eth0@if62: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
inet scope global eth0
31: eth1@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
inet scope global eth1

The eth0 is connected by a vxlan tunnel to the other cluster hosts: note the MTU set to 1450, because the other 50 bytes are used by the tunnelling protocol.
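The 50 bytes of overhead can be broken down explicitly; with a physical MTU of 1500 this leaves 1450 bytes for the inner frame:

```python
# VXLAN encapsulation overhead on an IPv4 underlay:
inner_ethernet = 14   # inner Ethernet header carried inside the tunnel
outer_ip = 20         # outer IPv4 header
outer_udp = 8         # outer UDP header
vxlan_header = 8      # VXLAN header (flags + VNI)

overhead = inner_ethernet + outer_ip + outer_udp + vxlan_header
physical_mtu = 1500
overlay_mtu = physical_mtu - overhead

print(overhead, overlay_mtu)  # 50 1450
```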

The eth0 is connected to veth10 of another, hidden namespace; this veth10 is attached to a hidden bridge br0. In this hidden namespace the vxlan tunnel to the other system is configured.

Let's clarify it with the following:

[root@swarm-01 ~]# docker exec -it 88bff740cff5 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
63: eth0@if64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
inet scope global eth0
67: eth1@if68: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
inet scope global eth1
[root@swarm-01 ~]# ln -s /var/run/docker/netns /var/run/netns
[root@swarm-01 ~]# ip netns show
de1ea001fd1e (id: 13)
017181936ab8 (id: 14)
3a9d2f8b8dc5 (id: 11)
e69bdca3a643 (id: 10)
99052090ef09 (id: 8)
814bc2404046 (id: 12)
21478bde0ffb (id: 9)
e1029a36e75b (id: 7)
4fe9b1fdf422 (id: 5)
e74b35a3536b (id: 2)
1-4ae778a320 (id: 1)
3241459fbe5b (id: 0)
[root@swarm-01 ~]# ip netns exec 1-4ae778a320 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
inet scope global br0
20: vxlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN
4: veth3@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP
40: veth5@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP
44: veth6@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP
4: veth7@if53: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP
58: veth8@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP
62: veth9@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP
64: veth10@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP
[root@swarm-01 ~]# ip netns exec 1-4ae778a320 bridge fdb show dev vxlan1
02:42:0a:00:01:09 dst link-netnsid 0 self permanent
02:42:0a:00:01:0a dst link-netnsid 0 self permanent
02:42:0a:00:01:0b dst link-netnsid 0 self permanent

This picture explains the networking of a container with an interface attached to an overlay network:



For the details of the vxlan protocol itself, please refer to a dedicated article.

After understanding how the overlay network works, let's test the architecture.

Swarm docker cluster with interlock

In this section we are going to test the architecture just created. With docker-compose it is possible to scale the web application in an easy way:

[root@swarm-01 interlock]# docker-compose -H tcp:// scale apache=4
Creating and starting interlock_apache_2 … done
Creating and starting interlock_apache_3 … done
Creating and starting interlock_apache_4 … done

Interlock gets events from the swarm manager in order to understand how many web servers are started and how to reach them:

[root@swarm-01 interlock]# docker -H tcp:// ps -a |grep web
b9990879e0d6 web “/bin/sh -c ‘/usr/sbi” About a minute ago Up About a minute>80/tcp
982d05d4954b web “/bin/sh -c ‘/usr/sbi” About a minute ago Up About a minute>80/tcp
9480590d2dfa web “/bin/sh -c ‘/usr/sbi” About a minute ago Up About a minute>80/tcp
819dd22c172c web “/bin/sh -c ‘/usr/sbi” About a minute ago Up About a minute>80/tcp

In this case four Apache containers have been created: three running on the second node, one on the first. The destination NAT from a random port to HTTP port 80 permits nginx to reach the web servers directly, even if they are running on different hosts.

The event logs have been received correctly by interlock:

[root@swarm-01 interlock]# docker -H tcp:// ps -a| grep /bin/interlock
566fa4af5181 ehazlett/interlock:master “/bin/interlock -D ru” 4 minutes ago Up 4 minutes>8080/tcp
[root@swarm-01 interlock]# docker -H tcp:// logs 566fa4af5181|grep test.local
INFO[0030] test.local: upstream= ext=nginx
INFO[0030] test.local: upstream= ext=nginx
INFO[0030] test.local: upstream= ext=nginx
INFO[0030] test.local: upstream= ext=nginx

Let's check the nginx configuration:

[root@swarm-01 interlock]# docker -H tcp:// ps -a| grep nginx
1cd18bd5d7cc nginx:latest “nginx -g ‘daemon off” 16 hours ago Up 16 hours>80/tcp, 443/tcp swarm-02.test.netinterlock_nginx_1
[root@swarm-01 interlock]# docker -H tcp:// exec -it 1cd18bd5d7cc bash
root@1cd18bd5d7cc:/# more /etc/nginx/nginx.conf | grep -A 10 test.local
upstream test.local {
    zone test.local_backend 64k;
server {
    listen 80;
    server_name test.local;
    location / {
        proxy_pass http://test.local;
    }
}

The balancing performed by nginx is correct: it can be tested by contacting the application server several times with curl.

[root@swarm-01 ~]# curl -v http://test.local/index.php
* About to connect() to test.local port 80 (#0)
* Trying…
* Connected to test.local ( port 80 (#0)
> GET /index.php HTTP/1.1
> User-Agent: curl/7.29.0
> Host: test.local
> Accept: */*
< HTTP/1.1 200 OK
< Server: nginx/1.11.1
< Date: Mon, 19 Sep 2016 13:23:28 GMT
< Content-Type: text/html; charset=UTF-8
< Content-Length: 24
< Connection: keep-alive
* Connection #0 to host test.local left intact
Connected to MySQL<br />
[root@swarm-01 ~]# docker -H tcp:// exec -it 9480590d2dfa tail -10 /var/log/apache2/access.log
- - [19/Sep/2016:13:23:27 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"
[root@swarm-01 ~]# docker -H tcp:// exec -it 819dd22c172c tail -10 /var/log/apache2/access.log
- - [19/Sep/2016:13:23:20 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"
[root@swarm-01 ~]# docker -H tcp:// exec -it 982d05d4954b tail -10 /var/log/apache2/access.log
- - [19/Sep/2016:13:16:18 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"
- - [19/Sep/2016:13:16:20 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"
- - [19/Sep/2016:13:23:27 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"
[root@swarm-01 ~]# docker -H tcp:// exec -it b9990879e0d6 tail -10 /var/log/apache2/access.log
- - [19/Sep/2016:13:16:15 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"
- - [19/Sep/2016:13:16:20 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"
- - [19/Sep/2016:13:23:26 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"
- - [19/Sep/2016:13:23:28 +0000] "GET /index.php HTTP/1.0" 200 191 "-" "curl/7.29.0"

For contacting the nginx, I configured the mapping of test.local to the nginx address in the hosts file of the system.

In general, don't forget to configure, in a balancer like keepalived, haproxy or pfsense, a virtual server in active-standby mode for balancing the traffic to the active nginx. I wrote an article about that.

The last thing to test is the high availability provided by the swarm cluster and the docker engine.

If a container goes down, it's restarted immediately by the docker engine because the parameter "restart: always" is used:

[root@swarm-02 ~]# docker ps -a|grep apache_2
b9990879e0d6 web “/bin/sh -c ‘/usr/sbi” 5 days ago Up 14 minutes>80/tcp interlock_apache_2
[root@swarm-02 ~]# docker inspect -f '{{.State.Pid}}' b9990879e0d6
[root@swarm-02 ~]# kill -9 10236
[root@swarm-02 ~]# docker ps -a|grep apache_2
b9990879e0d6 web “/bin/sh -c ‘/usr/sbi” 5 days ago Up 1 seconds>80/tcp interlock_apache_2

If a node goes down, the swarm manager restarts all its containers on another node only if swarm rescheduling is enabled. It can be enabled by setting the on-node-failure policy with a reschedule environment variable, adding the following section in the env.yml file for the interlock, nginx and apache containers:

[root@swarm-01 interlock]# more env.yml
environment:
  - reschedule=on-node-failure



The solution shown is fully scalable, but it should be improved for a production environment where security requirements and higher availability are needed.

For these reasons, the following should be applied in a production context.

  1. Communication between the swarm manager and the docker daemons over TLS.
  2. Communication between swarm and consul over TLS.
  3. Multiple consul servers and swarm managers, in order to remove the single points of failure.
  4. Configuration of the database in active-standby, using DRBD and Heartbeat for example.
  5. A docker registry configured as a central repository for all the images.

Don’t hesitate to contact me for any suggestion or issue about this article.