Deploy a stack to a swarm cluster


Starting from Docker version 1.12, swarm cluster functionality is natively included in Docker.

It is now possible to easily create a Docker swarm cluster on which to run application container stacks. Every stack is a set of related services, and every service is a Docker container running on any node of the cluster.

Every stack is accessible from outside through its published ports, and its services communicate with each other over an overlay network (http://www.securityandit.com/network/inside-docker-overlay-network/) using service names resolved automatically by an internal Docker DNS. You can find more about the Docker swarm cluster at https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/.

In this article I will explain how to create WordPress stacks (each formed by an Apache and a MySQL container) exposed on different published ports. This solution can be used to manage different virtual hosts in well-isolated environments.

The swarm cluster will be composed of 3 Linux nodes running on Amazon AWS, with an Elastic Load Balancer in front balancing the traffic from the internet to the stacks. An Elastic File System will provide shared storage to the cluster nodes for storing the MySQL data and the static WordPress files. If you don't use AWS, you can use GlusterFS; for that, please read my article http://www.securityandit.com/system/gluster-file-system-in-geo-replication/. Instead of Elastic Load Balancing, you can use nginx or Apache to manage the different virtual hosts balanced to the container stacks; for that, read http://www.securityandit.com/network/nginx-haproxy-and-keepalived/.

The reference architecture of this setup is the following:
Container stack with swarm

Before starting to prepare the environment, someone could ask: why not use the EC2 Container Service or a Google Kubernetes cluster to build a Docker cluster? The answer is that I want to explain how the cluster works, and I don't want the additional costs of the AWS or Google container services. Only in this way is it possible to really understand the Docker swarm cluster, and to save money.

Docker cluster installation and configuration

The first step is to create 3 VMs on the Amazon AWS cloud using AWS CLI commands. The three VMs will be created in three different availability zones; this is a best practice that keeps the cluster nodes physically well distributed:

[root@nikto ~]# aws ec2 run-instances --image-id ami-5e7f5f3b --count 1 --instance-type t2.micro --key-name key-01 --security-group-ids sg-d36782bb --subnet-id subnet-b7f42cfa
[root@nikto ~]# aws ec2 run-instances --image-id ami-5e7f5f3b --count 1 --instance-type t2.micro --key-name key-01 --security-group-ids sg-d36782bb --subnet-id subnet-9e6afce5
[root@nikto ~]# aws ec2 run-instances --image-id ami-5e7f5f3b --count 1 --instance-type t2.micro --key-name key-01 --security-group-ids sg-d36782bb --subnet-id subnet-be8ad3d7
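The three commands differ only in the subnet ID, so they can be driven by a small loop. This is just a sketch: DRY_RUN=1 only prints the commands, and the AMI, key pair and security group IDs are the ones used in this article.

```shell
# Launch one t2.micro per subnet, each subnet in a different availability zone.
# DRY_RUN=1 only prints the commands; unset it to really call the AWS CLI.
DRY_RUN=${DRY_RUN:-1}
launched=0
for subnet in subnet-b7f42cfa subnet-9e6afce5 subnet-be8ad3d7; do
  cmd="aws ec2 run-instances --image-id ami-5e7f5f3b --count 1 \
--instance-type t2.micro --key-name key-01 \
--security-group-ids sg-d36782bb --subnet-id $subnet"
  if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else $cmd; fi
  launched=$((launched + 1))
done
```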

To reach the VMs, it's necessary to associate a public IP with each instance. For the first VM:

[root@nikto ~]# aws ec2 associate-address --instance-id i-03621cfa9bb98f5cf --public-ip 13.59.200.246

On each virtual machine it's necessary to install Docker and nfs-utils. To avoid charges on AWS I used virtual machines with Red Hat installed, where docker-ce is not supported, so I installed docker-ee with a trial license (I suggest using CentOS with docker-ce; I had to use Red Hat on AWS in order not to be charged). These are the commands to execute (I show only the commands executed on the first VM); remember to start and enable the Docker service afterwards (systemctl enable --now docker):

[root@nikto ~]# ssh ec2-user@13.59.200.246
Last login: Thu Aug 3 10:23:13 2017 from 215.ip-164-132-193.eu
[ec2-user@ip-192-168-50-98 ~]$ sudo su -
[root@ip-192-168-50-108 yum.repos.d]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@ip-192-168-50-108 yum.repos.d]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@ip-192-168-50-108 yum.repos.d]# yum -y install docker-ee
[root@ip-192-168-50-108 yum.repos.d]# yum -y install nfs-utils

Now I created on the AWS cloud an Elastic File System with 3 mount targets in different availability zones. The role of this file system is to store the WordPress static and dynamic content and share it between the cluster nodes.

I did it through the GUI; this is the result:

[root@nikto ~]# aws efs describe-file-systems
{
    "FileSystems": [
        {
            "SizeInBytes": {
                "Timestamp": 1502377199.0,
                "Value": 18432
            },
            "CreationToken": "console-d4f71ee8-fbb2-46bf-a6e4-f90a51df5048",
            "CreationTime": 1501767731.0,
            "PerformanceMode": "generalPurpose",
            "FileSystemId": "fs-f1d33c88",
            "NumberOfMountTargets": 3,
            "LifeCycleState": "available",
            "OwnerId": "770550247028"
        }
    ]
}
[root@nikto ~]# aws efs describe-mount-targets --file-system-id fs-f1d33c88
{
    "MountTargets": [
        {
            "MountTargetId": "fsmt-7606e90f",
            "NetworkInterfaceId": "eni-28a8ec00",
            "FileSystemId": "fs-f1d33c88",
            "LifeCycleState": "available",
            "SubnetId": "subnet-b7f42cfa",
            "OwnerId": "770550247028",
            "IpAddress": "192.168.10.210"
        },
        {
            "MountTargetId": "fsmt-7706e90e",
            "NetworkInterfaceId": "eni-1f9ed54d",
            "FileSystemId": "fs-f1d33c88",
            "LifeCycleState": "available",
            "SubnetId": "subnet-9e6afce5",
            "OwnerId": "770550247028",
            "IpAddress": "192.168.50.225"
        },
        {
            "MountTargetId": "fsmt-9fb45ae6",
            "NetworkInterfaceId": "eni-1eb49c42",
            "FileSystemId": "fs-f1d33c88",
            "LifeCycleState": "available",
            "SubnetId": "subnet-be8ad3d7",
            "OwnerId": "770550247028",
            "IpAddress": "192.168.60.252"
        }
    ]
}

There are three mount targets, one for each subnet where the virtual machines are running. The file system is mounted via the NFS protocol, each node using the mount target of its own subnet:

[root@ip-192-168-50-108 ~]# vi /etc/fstab
192.168.50.225:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0
[root@ip-192-168-50-108 ~]# mount /efs
[root@ip-192-168-60-39 ~]# vi /etc/fstab
192.168.60.252:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0
[root@ip-192-168-60-39 ~]# mount /efs
[root@ip-192-168-10-5 ~]# vi /etc/fstab
192.168.10.210:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0
[root@ip-192-168-10-5 ~]# mount /efs

The file system is shared by all three cluster nodes; on one node only I will create the directories for storing the WordPress content, both static files and database data:

[root@ip-192-168-10-5 ~]# mkdir -p /efs/stack-01/www
[root@ip-192-168-10-5 ~]# mkdir -p /efs/stack-01/wp-data
[root@ip-192-168-10-5 ~]# mkdir -p /efs/stack-02/www
[root@ip-192-168-10-5 ~]# mkdir -p /efs/stack-02/wp-data
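The same per-stack layout, written as a small loop. This is a sketch: EFS_ROOT defaults to /tmp/efs here so it can be tried anywhere, while the shared mount point in this article is /efs.

```shell
# Create the per-stack content directories on the shared file system.
# EFS_ROOT is an assumption for this sketch; the article uses /efs.
EFS_ROOT=${EFS_ROOT:-/tmp/efs}
for stack in stack-01 stack-02; do
  mkdir -p "$EFS_ROOT/$stack/www" "$EFS_ROOT/$stack/wp-data"
done
```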

We are now ready to create the cluster, initializing it in this way:

[root@ip-192-168-60-39 yum.repos.d]# docker swarm init --advertise-addr 192.168.60.39
Swarm initialized: current node (ttwmajvlvd3qdlctgjhmjf6v9) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3od43kg8k6ovyyeqyqxikym3e4iv1r4rrl1bbet6goyzkgo18c-8ylmj1khswoagqyebso1l529y 192.168.60.39:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@ip-192-168-60-39 yum.repos.d]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3od43kg8k6ovyyeqyqxikym3e4iv1r4rrl1bbet6goyzkgo18c-6h19geayd68dambpagnmqfsvj 192.168.60.39:2377

I initialized the cluster, and with the next command I asked how to add another manager to it. On the other nodes:

[root@ip-192-168-50-108 efs]# docker swarm join --token SWMTKN-1-3od43kg8k6ovyyeqyqxikym3e4iv1r4rrl1bbet6goyzkgo18c-6h19geayd68dambpagnmqfsvj 192.168.60.39:2377
This node joined a swarm as a manager.
[root@ip-192-168-10-5 efs]# docker swarm join --token SWMTKN-1-3od43kg8k6ovyyeqyqxikym3e4iv1r4rrl1bbet6goyzkgo18c-6h19geayd68dambpagnmqfsvj 192.168.60.39:2377
This node joined a swarm as a manager.

The cluster was created with three managers. By default managers also work as workers, which means they can run the containers orchestrated and scheduled by the manager leader.

To show the state of all the cluster nodes:

[root@ip-192-168-10-5 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
5ufx65l487m9xsc01xo7datb6 * ip-192-168-10-5.us-east-2.compute.internal Ready Active Reachable
skiifso8cuwfdz0sghrbjbhpk ip-192-168-50-108.us-east-2.compute.internal Ready Active Reachable
ttwmajvlvd3qdlctgjhmjf6v9 ip-192-168-60-39.us-east-2.compute.internal Ready Active Leader

Let's now discuss how to create the two WordPress stacks.

Swarm stack configuration

A stack is a set of related microservices that work together to provide a service. Every service is a container defined in a YAML file that communicates with the other services of the same stack over an overlay network automatically spanned across all the cluster nodes by the swarm.

An overlay network is implemented by a UDP tunnel (VXLAN) that encapsulates layer 2 frames. For more information about overlay networks, you can read my article http://www.securityandit.com/network/inside-docker-overlay-network/.

Every stack is reachable on a published port defined in the YAML file. A proxy load balancer managed by the swarm (the routing mesh) balances the traffic arriving on the published port, which listens on every node of the cluster, to the node where the container is running. The communication from the proxy load balancer to the container happens over the overlay network.

Every service can be scaled. For example, with three nodes it's possible to scale the Apache service of WordPress to three replicas: the orchestrator will run an instance of the container on each node of the cluster, and the load balancer will spread the traffic across all of them. This is well explained in the Docker documentation.
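As a sketch, the number of replicas can also be pinned directly in a version 3 compose file (the stacks in this article run one replica per service; the deploy key is honored by "docker stack deploy" and ignored by plain docker-compose):

```yaml
services:
  wordpress:
    image: wordpress:latest
    deploy:
      # Three tasks, typically spread over the three nodes
      # (the scheduler does not strictly guarantee one per node).
      replicas: 3
```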

Every service of the stack references the other services by the service name defined in the YAML file. The name is resolved by an internal DNS to the IP address of the container on the overlay network.

In our scope, the YAML file containing the stack definition declares two services: an Apache server (the wordpress image) and a MySQL database. The files for the two stacks are the following:

[root@ip-192-168-10-5 ~]# vi /efs/compose/stack-01.yml
version: '3'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    volumes:
      - /efs/stack-01/www:/var/www/html # Full wordpress project
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: password
  db:
    image: mysql:latest
    volumes:
      - /efs/stack-01/wp-data:/docker-entrypoint-initdb.d
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: password
[root@ip-192-168-10-5 ~]# vi /efs/compose/stack-02.yml
version: '3'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8081:80"
    volumes:
      - /efs/stack-02/www:/var/www/html # Full wordpress project
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: password
  db:
    image: mysql:latest
    volumes:
      - /efs/stack-02/wp-data:/docker-entrypoint-initdb.d
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: password

The two stacks listen on two different published ports: 8080 for the first stack, 8081 for the second. Every node of the cluster listens on these ports, and the traffic is balanced to the node where WordPress (Apache + PHP) is running.

Every stack mounts a different local directory, shared across all the cluster nodes by Amazon EFS. The directories contain the WordPress static content and the database initialization data (the wp-data directory is mounted on /docker-entrypoint-initdb.d). As mentioned, this can also be implemented with a simple Gluster file system.
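The two stack files differ only in the stack name and the published port, so they can be generated from one template. This is a sketch: it writes under /tmp/compose so it can be tried anywhere, while the article keeps the files under /efs/compose.

```shell
# Generate a stack compose file from a template: $1 = stack name, $2 = published port.
gen_stack() {
  name=$1 port=$2
  mkdir -p /tmp/compose
  cat > "/tmp/compose/${name}.yml" <<EOF
version: '3'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "${port}:80"
    volumes:
      - /efs/${name}/www:/var/www/html # Full wordpress project
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: password
  db:
    image: mysql:latest
    volumes:
      - /efs/${name}/wp-data:/docker-entrypoint-initdb.d
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: password
EOF
}
gen_stack stack-01 8080
gen_stack stack-02 8081
```

Adding a third virtual host then only means calling gen_stack with a new name and a free port.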

The two stacks are started in this way:

[root@ip-192-168-50-108 compose]# docker stack deploy --compose-file /efs/compose/stack-01.yml stack-01
Creating network stack-01_default
Creating service stack-01_wordpress
Creating service stack-01_db
[root@ip-192-168-50-108 compose]# docker stack ps stack-01
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
lbdiqwo92x9d stack-01_db.1 mysql:latest ip-192-168-10-5.us-east-2.compute.internal Running Running 3 minutes ago
r6n2kg2bmize stack-01_wordpress.1 wordpress:latest ip-192-168-60-39.us-east-2.compute.internal Running Running 3 minutes ago
[root@ip-192-168-50-108 compose]# docker stack deploy --compose-file /efs/compose/stack-02.yml stack-02
Creating network stack-02_default
Creating service stack-02_wordpress
Creating service stack-02_db
[root@ip-192-168-50-108 compose]# docker stack ps stack-02
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
kn1rxhl265ux stack-02_db.1 mysql:latest ip-192-168-50-108.us-east-2.compute.internal Running Running 5 seconds ago
ev0jfs3fehg7 stack-02_wordpress.1 wordpress:latest ip-192-168-10-5.us-east-2.compute.internal Running Running 5 seconds ago

As noted in the output of "docker stack ps", the WordPress service of stack-01 is running on the ip-192-168-60-39 host, but the service is reachable from any node of the cluster on the published port:

[root@ip-192-168-50-108 compose]# curl -v --silent http://192.168.50.108:8080 2>&1 |grep HTTP
> GET / HTTP/1.1
< HTTP/1.1 302 Found
[root@ip-192-168-50-108 compose]# curl -v --silent http://192.168.10.5:8080 2>&1 |grep HTTP
> GET / HTTP/1.1
< HTTP/1.1 302 Found
[root@ip-192-168-50-108 compose]# curl -v --silent http://192.168.60.39:8080 2>&1 |grep HTTP
> GET / HTTP/1.1
< HTTP/1.1 302 Found

The same for the stack-02 on the port 8081:

[root@ip-192-168-50-108 compose]# curl -v --silent http://192.168.50.108:8081 2>&1 |grep HTTP
> GET / HTTP/1.1
< HTTP/1.1 302 Found
[root@ip-192-168-50-108 compose]# curl -v --silent http://192.168.10.5:8081 2>&1 |grep HTTP
> GET / HTTP/1.1
< HTTP/1.1 302 Found
[root@ip-192-168-50-108 compose]# curl -v --silent http://192.168.60.39:8081 2>&1 |grep HTTP
> GET / HTTP/1.1
< HTTP/1.1 302 Found

If you inspect the wordpress container of stack-01 with "ip addr show", you will find three network interfaces:

[root@ip-192-168-60-39 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
136c5e24bdcd wordpress:latest "docker-entrypoint…" 8 minutes ago Up 8 minutes 80/tcp stack-01_wordpress.1.r6n2kg2bmizegdmbstyc2cigl
[root@ip-192-168-60-39 ~]# docker exec -it 136c5e24bdcd bash
root@136c5e24bdcd:/var/www/html# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
32: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:08 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.8/16 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.7/32 scope global eth0
valid_lft forever preferred_lft forever
34: eth1@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.3/16 scope global eth1
valid_lft forever preferred_lft forever
37: eth2@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:01:03 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.3/24 scope global eth2
valid_lft forever preferred_lft forever
inet 10.0.1.2/32 scope global eth2
valid_lft forever preferred_lft forever

The eth0 interface is used for the communication with the proxy load balancer (the ingress network) running on every node. The second interface, eth1, is used for the external network: it's the classic interface that, through an external bridge (the docker_gwbridge), permits reaching the internet. The third interface, eth2, is for internal communication: the database is reached through this interface using the service name defined in the YAML file.

For stack-01, the database service db is reached at the 10.0.1.4 IP address:

root@136c5e24bdcd:/var/www/html# ping db
PING db (10.0.1.4): 56 data bytes
64 bytes from 10.0.1.4: icmp_seq=0 ttl=64 time=0.061 ms

For stack-02, the db service is reached on the 10.0.0.0/24 network:

[root@ip-192-168-10-5 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3c572b56186 mysql@sha256:96edf37370df96d2a4ee1715cc5c7820a0ec6286551a927981ed50f0273d9b43 "docker-entrypoint…" 18 minutes ago Up 18 minutes 3306/tcp stack-01_db.1.lbdiqwo92x9dzsyui1ribvhv0
7d1767c5aa72 wordpress@sha256:b7a64fd24470dd3e2786b1d029bed5814e9339c6324cfcf082fd4c5b647ed8b3 "docker-entrypoint…" 22 minutes ago Up 22 minutes 80/tcp stack-02_wordpress.1.ev0jfs3fehg70j5b3dn6f9tdb
[root@ip-192-168-10-5 ~]# docker exec -it 7d1767c5aa72 bash
root@7d1767c5aa72:/var/www/html# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:06 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.6/16 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.5/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:feff:6/64 scope link
valid_lft forever preferred_lft forever
24: eth1@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.3/16 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:3/64 scope link
valid_lft forever preferred_lft forever
27: eth2@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 scope global eth2
valid_lft forever preferred_lft forever
inet 10.0.0.2/32 scope global eth2
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe00:3/64 scope link
valid_lft forever preferred_lft forever
root@7d1767c5aa72:/var/www/html# ping db
PING db (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: icmp_seq=0 ttl=64 time=0.041 ms
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms

The default route of the containers points to the eth1 interface, which is connected to the external bridge:

root@7d1767c5aa72:/var/www/html# ip route show
default via 172.18.0.1 dev eth1
10.0.0.0/24 dev eth2 proto kernel scope link src 10.0.0.3
10.255.0.0/16 dev eth0 proto kernel scope link src 10.255.0.6
172.18.0.0/16 dev eth1 proto kernel scope link src 172.18.0.3

A secure, scalable and available infrastructure for managing virtual hosts has been created. Every WordPress and database pair is well isolated, and if the WordPress of one stack is compromised, there is no impact on the other stacks.

To complete the job, it's necessary to have a reverse proxy in front of the cluster for routing the traffic to the right application stack. For this type of job I usually prefer nginx, but in this case I will use the AWS Elastic Load Balancer.
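For reference, the nginx alternative could look like this sketch: one upstream per stack, pointing at the three swarm nodes on the stack's published port, and one virtual host per domain (it assumes nginx running on a front node reachable from the internet; the node IPs and ports are the ones used in this article):

```nginx
upstream stack-01 {
    server 192.168.10.5:8080;
    server 192.168.50.108:8080;
    server 192.168.60.39:8080;
}
upstream stack-02 {
    server 192.168.10.5:8081;
    server 192.168.50.108:8081;
    server 192.168.60.39:8081;
}

server {
    listen 80;
    server_name stack01.sysandnetsecurity.com;
    location / {
        proxy_pass http://stack-01;
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name stack02.sysandnetsecurity.com;
    location / {
        proxy_pass http://stack-02;
        proxy_set_header Host $host;
    }
}
```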

Through the AWS GUI, I created the load balancer across the 3 availability zones:

[root@nikto ~]# aws elbv2 describe-load-balancers
{
    "LoadBalancers": [
        {
            "IpAddressType": "ipv4",
            "VpcId": "vpc-c61862af",
            "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-2:770550247028:loadbalancer/app/test-lb-01/2546066b9e0a7703",
            "State": {
                "Code": "active"
            },
            "DNSName": "test-lb-01-1912982270.us-east-2.elb.amazonaws.com",
            "SecurityGroups": [
                "sg-3550b75d"
            ],
            "LoadBalancerName": "test-lb-01",
            "CreatedTime": "2017-07-23T21:52:34.430Z",
            "Scheme": "internet-facing",
            "Type": "application",
            "CanonicalHostedZoneId": "Z3AADJGX6KTTL2",
            "AvailabilityZones": [
                {
                    "SubnetId": "subnet-9e6afce5",
                    "ZoneName": "us-east-2b"
                },
                {
                    "SubnetId": "subnet-b7f42cfa",
                    "ZoneName": "us-east-2c"
                },
                {
                    "SubnetId": "subnet-be8ad3d7",
                    "ZoneName": "us-east-2a"
                }
            ]
        }
    ]
}

The ELB above balances the traffic according to these rules:

[root@nikto ~]# aws elbv2 describe-rules --listener-arn arn:aws:elasticloadbalancing:us-east-2:770550247028:listener/app/test-lb-01/2546066b9e0a7703/75217cb9b6d52851
{
    "Rules": [
        {
            "Priority": "1",
            "Conditions": [
                {
                    "Field": "host-header",
                    "Values": [
                        "stack02.sysandnetsecurity.com"
                    ]
                }
            ],
            "RuleArn": "arn:aws:elasticloadbalancing:us-east-2:770550247028:listener-rule/app/test-lb-01/2546066b9e0a7703/75217cb9b6d52851/573417146c3f77b3",
            "IsDefault": false,
            "Actions": [
                {
                    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-2:770550247028:targetgroup/stack-02/1a9f497980082d68",
                    "Type": "forward"
                }
            ]
        },
        {
            "Priority": "2",
            "Conditions": [
                {
                    "Field": "host-header",
                    "Values": [
                        "stack01.sysandnetsecurity.com"
                    ]
                }
            ],
            "RuleArn": "arn:aws:elasticloadbalancing:us-east-2:770550247028:listener-rule/app/test-lb-01/2546066b9e0a7703/75217cb9b6d52851/eb70ed4ad8fde029",
            "IsDefault": false,
            "Actions": [
                {
                    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-2:770550247028:targetgroup/stack-01/3283078ffec698f2",
                    "Type": "forward"
                }
            ]
        },
        {
            "Priority": "default",
            "Conditions": [],
            "RuleArn": "arn:aws:elasticloadbalancing:us-east-2:770550247028:listener-rule/app/test-lb-01/2546066b9e0a7703/75217cb9b6d52851/0cf98cf502e26aa9",
            "IsDefault": true,
            "Actions": [
                {
                    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-2:770550247028:targetgroup/stack-01/3283078ffec698f2",
                    "Type": "forward"
                }
            ]
        }
    ]
}

The traffic is balanced based on the Host header: stack01.sysandnetsecurity.com is forwarded to stack-01, and stack02.sysandnetsecurity.com is forwarded to stack-02.

To make the AWS ELB reachable, I created these two CNAME records in my sysandnetsecurity.com domain:

[root@nikto ~]# dig -t CNAME stack01.sysandnetsecurity.com|grep test
stack01.sysandnetsecurity.com. 3580 IN CNAME test-lb-01-1912982270.us-east-2.elb.amazonaws.com.
[root@nikto ~]# dig -t CNAME stack02.sysandnetsecurity.com|grep test
stack02.sysandnetsecurity.com. 3580 IN CNAME test-lb-01-1912982270.us-east-2.elb.amazonaws.com.

Everything works. The two WordPress stacks can be reached from the internet:

wordpress stack

Conclusions

I showed how to create a swarm cluster for deploying and scaling a WordPress stack in an easy way. The same approach can be used for any other application.

The benefits in security, scalability and ease of management and deployment compared to a classic distributed application are evident, and they make this approach increasingly popular and widespread.

I used Amazon AWS only as a laboratory, in order to avoid the excessive resource consumption of a virtualization system like VirtualBox or KVM, and to save the time of installing and configuring a reverse proxy and a distributed file system.

Don't hesitate to contact me with any questions or doubts.

 
