Docker and orchestration: creating a production environment with SwarmKit


Starting with release 1.12, Docker provides a built-in orchestration system that lets you join and manage multiple Docker Engines as a cluster. This new feature allows us to distribute containers across multiple nodes and thus manage a production environment with Docker.

There are other orchestration frameworks that use Docker as a container runtime: Kubernetes, Swarm (meaning docker/swarm, a Docker project but not built into the engine), Mesos and many more.

This new feature is based on a project called SwarmKit: a set of primitives for managing distributed applications, such as node discovery, a Raft-based consensus store (built on the coreos/etcd Raft implementation), and more.

From Docker version 1.12 onwards, we can notice some new commands:

<pre class="brush: php; html-script: true">
docker swarm
docker node
docker service
</pre>

The first command lets us initialize and manage our cluster, the second inspect our nodes, and the third start, manage and scale our applications.

We can start right away with a practical example, using docker-machine to spin up 3 new servers on our local machine with VirtualBox. For this you need to verify that you have VirtualBox and docker-machine installed, or the whole stack, which you can easily install by following the directions on the Docker Toolbox website.

To create the 3 nodes we use the commands:

<pre class="brush: php; html-script: true">
docker-machine create -d virtualbox sw1
docker-machine create -d virtualbox sw2
docker-machine create -d virtualbox sw3
</pre>

At this point we can log into the first machine, sw1, which will be our master. In a Swarm cluster there are 2 actors: masters and workers. The master nodes store the information about our containers and their distribution: they are the brain of the cluster. In case of failover, the Raft algorithm elects a new leader among the masters still active, so in a production context it is important to have a multi-master architecture, avoiding single points of failure.

<pre class="brush: php; html-script: true">
docker-machine ssh sw1
</pre>

Once inside we can initialize our cluster with the command:

<pre class="brush: php; html-script: true">
docker swarm init --advertise-addr 192.168.99.100
</pre>

--advertise-addr is an option that tells Swarm which network interface node-to-node communication should take place on; to find the IP to use, run the ifconfig command inside the machine.
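Since the machines were created with docker-machine, we can also read this IP from our local terminal without running ifconfig inside the VM. A quick sketch (192.168.99.100 is the address VirtualBox typically assigns to the first machine):

```shell
# Print the IP of the sw1 VM from the host;
# this is the value to pass to --advertise-addr
docker-machine ip sw1
```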

The output of this command contains the essential information for adding nodes to the cluster; in our example:

<pre class="brush: php; html-script: true">
docker swarm join \
--token SWMTKN-1-4bl65z15zd6nt4y0xc0mf639z4hhwphqn5l523bo6ws0yp230v-b96hecfns0k8ey3pasvwcp7gt \
192.168.99.100:2377
</pre>

This is the command to run on our future nodes in order to add them to the Swarm cluster.

We then exit the machine, log into sw2 and sw3, and execute exactly the command that Docker suggested. Next we verify that our cluster is composed of 3 nodes. To do so we leave the machine, go back to our local terminal, and use docker-machine to point the local Docker client at our master, with the command:
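The two joins can also be driven entirely from our local terminal through docker-machine ssh, instead of logging into each VM interactively. A sketch, reusing the example token printed by docker swarm init above:

```shell
# Run the join command on sw2 and sw3 directly from the host.
# The token is the example one printed by `docker swarm init` on sw1.
JOIN="docker swarm join --token SWMTKN-1-4bl65z15zd6nt4y0xc0mf639z4hhwphqn5l523bo6ws0yp230v-b96hecfns0k8ey3pasvwcp7gt 192.168.99.100:2377"
docker-machine ssh sw2 "$JOIN"
docker-machine ssh sw3 "$JOIN"
```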

eval $(docker-machine env sw1)

At this point our client is communicating with our master and we can perform:

<pre class="brush: php; html-script: true">
docker node ls
</pre>

The output should show our 3 nodes, ready to be used. Now we can proceed with deploying our application.
For our example we use gianarb/micro:1.0.0, an application written in Go, taken from GitHub. It is a simple HTTP server that exposes port 8000 and has a homepage at the route / that shows the IP of the container in which the application is running. This small service is very useful because what we expect to see is the same application distributed across multiple containers, on different nodes, and therefore with a different IP depending on the container.
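Before deploying to the cluster, we can optionally try the image on a single Docker host to see what it returns. A sketch, assuming the image behaves as described above (HTTP server on port 8000, the route / printing the container IP):

```shell
# Start one container from the image, publishing port 8000 on the host
docker run -d --name micro-test -p 8000:8000 gianarb/micro:1.0.0
# The homepage should print the container's internal IP
curl http://localhost:8000/
# Remove the test container afterwards
docker rm -fv micro-test
```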

First of all, let’s create a network:

docker network create -d overlay micro-net

All the containers deployed within this network will be able to communicate with each other. In this specific example we do not actually need two containers to communicate, because our application has no dependencies. If instead we used, for example, a MySQL database, we would then attach the container hosting MySQL to this same network.
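To illustrate that last point, this is roughly how a MySQL container would be attached to the same overlay network; a sketch, where the service name db and the root password are hypothetical:

```shell
# Hypothetical example: a MySQL service attached to micro-net.
# Other services on the same network can reach it by the name "db".
docker service create --name db \
  --network micro-net \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7
```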

Docker supports several network drivers and leaves us free to create plugins that extend networking functionality. In our case we use the overlay driver, which allows our containers to communicate even when they are not on the same node.

Create a service

At this point we create our first service:

docker service create --name micro -p 8000 --replicas 1 --network micro-net gianarb/micro:1.0.0
Parameter | Description
--name | the name of the service
-p | the port our HTTP server exposes
--replicas | the number of tasks to start, in our case 1
--network | the name of the network created above

The final argument, gianarb/micro:1.0.0, is the name of the image to deploy.

 

Let us now try to understand how to call our service and inspect the work done by Swarm.

docker service ls

This lists all our services. With the command:

docker service ps &lt;service_id&gt;

we get a clearer idea of the number of tasks and their status (in our case, 1). The output of this command is very similar to what we normally see when running Docker commands in single-node mode.

This is because a service is a logical entity that lets us manage a pool of containers, which Swarm calls tasks, and that scales our application. In our example we have a single task, i.e. one container. From the output of the last command we can also see on which node the task is running: in our example, node sw1.

As a demonstration of this, we can log into the server on which the container is running and execute:

docker ps

Here we can see that, under the hood, Swarm manages our tasks like normal containers. Now let's go back to our master and run the following command:

docker service inspect &lt;service_id&gt;

This command returns information about our service: networking, volumes, name. What we are looking for is how to connect to our running application.

As we will remember, when creating our service we specified the parameter -p 8000. Docker Swarm does not expose the requested port directly, but creates a proxy that redirects traffic to the requested service. If we look in the response for the Endpoint field, inside it we will find this:

"Ports": [
               {
                   "Protocol": "tcp",
                   "TargetPort": 8000,
                   "PublishedPort": 30000
               }
           ],

Port 8000 of our service is exposed by Swarm as port 30000. This means we can take any of our cluster's IPs on port 30000, in our example 192.168.99.100:30000. By visiting this URL you can see our application!
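Rather than scanning the whole JSON by eye, the published port can be extracted directly with a Go template through --format; a sketch based on the Endpoint structure shown above:

```shell
# Print only the published port of the micro service
docker service inspect --format '{{ (index .Endpoint.Ports 0).PublishedPort }}' micro
# Then contact the service on any cluster node at that port
curl http://192.168.99.100:30000/
```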

Routing Mesh

One of the goals of Docker Swarm is to let us perceive our cluster as a single machine; for this reason it does not expose the service's requested port directly, but a port chosen by Swarm instead.

This feature is called Routing Mesh. In our example the tasks live inside node sw1, but if we change IP and use that of machine sw2, still on port 30000, we can reach our container anyway, because Swarm takes care of redirecting the traffic to a server that hosts a task of our service.

All services have a built-in load balancer that redirects incoming traffic on port 30000 to one of the active tasks.

It's time to scale

At this point it is time to scale our containers and see what happens:

docker service scale micro=7

We have just told Swarm to create 7 tasks for our micro service. The command:

docker service ps micro

will show us 7 tasks instead of the single one we had before, and we can see how the tasks are distributed among our nodes.

We can also refresh the page several times in the browser: you will notice that the IP changes, because the load balancer is rotating traffic round-robin across the different containers.
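The same rotation can be observed from the terminal; a small sketch that hits the published port a few times in a row:

```shell
# Each response should report the IP of a (possibly) different container,
# as the built-in load balancer rotates across the active tasks
for i in 1 2 3 4 5; do
  curl -s http://192.168.99.100:30000/
done
```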

We can scale up as we just did, but also scale down: we simply vary the number of desired tasks.
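Scaling down is the same command with a smaller number; for example:

```shell
# Scale the service back down to 3 tasks; Swarm stops the extra containers
docker service scale micro=3
# Verify the new task count and placement
docker service ps micro
```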

Docker Swarm replaces containers that for some reason are no longer in a running state, and tries to distribute containers among the nodes intelligently by evaluating their resources. It also leaves us free to place containers using labels, for cases in which, say, two services must live on the same node to reduce network latency.

For example, we can try to remove a container. Let's log into server 1 and pick a task running on this node, for example ap7i6m0zwz3q1q6e3w46lpxoc. The task ID is not the same as the container ID; to identify our container we can run the following command:

docker inspect ap7i6m0zwz3q1q6e3w46lpxoc

The Status field contains the information we are looking for:

"Status": {
            "Timestamp": "2016-10-06T14:32:57.054417164Z",
            "State": "running",
            "Message": "started",
            "ContainerStatus": {
                "ContainerID": "0f1f67e04e296a183311e9904217f24109fb84e5203a267cb513b4415bf3a130",
                "PID": 3538
            }
        },

The ID of our container is 0f1f67e04e296a183311e9904217f24109fb84e5203a267cb513b4415bf3a130, and we can remove it forcibly:

docker rm -fv 0f1f67e04e296a183311e9904217f24109fb84e5203a267cb513b4415bf3a130

Running docker service ls, we will still see the REPLICAS value at 7/7: Swarm has already started a replacement task.

Extract the id of a container starting from a task

This is essential when performing analysis after unexpected failures. Once we have the container ID, we can log into the node hosting it and use all the tools Docker already provides, such as logs, export, attach and exec.
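The Status/ContainerStatus lookup shown above can also be collapsed into a one-liner with a Go template; a sketch, using the example task ID from before:

```shell
# Extract only the container ID from a task
docker inspect --format '{{ .Status.ContainerStatus.ContainerID }}' ap7i6m0zwz3q1q6e3w46lpxoc
```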

The same failover mechanism kicks in on the temporary or permanent failure of a node. We can understand better how it works by removing a node with docker-machine from our terminal:

docker-machine rm sw3

By executing docker service ps micro we can see how Swarm restarts containers on the remaining nodes, bringing the number of tasks back to 7 as required.

Security

When we talk about clusters, networks and production, one of the fundamental and recurring words is security. We have not yet mentioned it in this article, but it is a very important aspect of Docker Swarm. For this reason, all communications within the cluster are encrypted by default.

The philosophy behind this implementation is that a system with overly complex security protocols and configuration simply leads developers not to use those features; for this reason Swarm is secure by default, and this functionality cannot be disabled.

We saw a hint of this at the beginning of the article, when we ran the join command on a node and had to use the --token option. This token is a security key used within the cluster; the traffic itself is encrypted with TLS certificates by default. Periodic key rotation is also managed automatically, further reducing the window in which an attacker who obtained our private key could decrypt the traffic.
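The rotation can also be triggered manually with the join-token subcommand; a sketch:

```shell
# Print the current worker join token
docker swarm join-token worker
# Invalidate it and generate a new one; already-joined nodes are unaffected
docker swarm join-token --rotate worker
```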
