Quick intro to Docker Swarm mode

Docker is cool. It is a great tool to pack your application into a set of containers, throw them onto a host, and they’ll just work. However, when it all happens within a single host, the app cannot really scale much: there’s a fixed number of containers the host can accommodate. Moreover, when the host dies, everything dies with it. Of course, we could add more hosts and join them with an overlay network, so more containers could coexist and still be able to talk to each other.

However, maintaining such a cluster would be a pain. How do we detect that a host went down? Which containers are missing now? What’s the best place to recreate them?

Starting with version 1.12.0, Docker can work in Swarm mode and handle all of those tasks and more.

What’s Docker Swarm mode

Docker Swarm mode is simply Docker Engine working in a cluster. In addition to treating cluster hosts as shared container space, Swarm mode brings a few new Docker commands (e.g. docker node or docker service) and the concept of services.

A service is one more level of abstraction over individual containers. Like a container, it has a name, an image to create containers from, published ports and volumes. Unlike a container, it can declare constraints on the hosts it is allowed to run on. A service can also scale: at creation time one can specify how many replicas of the underlying container it should run.

It’s important to understand that the docker service create command itself doesn’t create any containers. It declares the desired state of the service, and it’s the Swarm manager’s job to satisfy it: find appropriate hosts, create as many containers as the service needs, and make sure the desired state is fulfilled at all times, even when a container crashes or a whole host goes down. Sometimes it won’t even be possible to fulfill the desired state, e.g. when we simply run out of hosts. In that situation Swarm keeps the service in pending state until something changes.
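For instance, a minimal service declaration might look like this (the name and replica count here are purely illustrative – the real services come later in the post):

    # declare that three replicas of nginx should be running somewhere in the cluster
    docker service create --name=demo --replicas=3 nginx

From that moment on, the Swarm manager will do whatever it takes to keep three nginx containers running somewhere in the cluster.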

Plan for today

Let’s explore! It’s really, really easy to create a swarm, and right out of the box it can demonstrate some cool features.

We’ll create three virtual machines to be part of the swarm, deploy a visualization service to see graphically what the cluster looks like, and then deploy a replicated nginx service to see how it scales and how it survives when the underlying host goes down.

Prerequisites

To follow along you’ll need Docker v1.12.0 or above, docker-machine and VirtualBox. The first two usually come together with Docker Toolbox for Mac or Windows. Installing them on Linux is a bit trickier, but not much. Like Docker, VirtualBox works on all platforms and is trivial to install.

I’ll be using Docker 17.03.1-ce for Mac and VirtualBox 5.1.20.

Step 0. Creating three VMs

Creating VMs with docker-machine has one huge advantage over other approaches: they immediately come with Docker installed. We’ll need three hosts: one for the swarm manager and two regular worker hosts. Creating those VMs is painfully trivial:
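Something along these lines does the trick (using the VirtualBox driver):

    # the future swarm manager
    docker-machine create -d virtualbox sw-master

    # two regular worker hosts
    docker-machine create -d virtualbox sw-worker-1
    docker-machine create -d virtualbox sw-worker-2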

Step 1. Creating a Swarm

There are two Docker commands that can turn an individual Docker Engine into Swarm mode: docker swarm init if you’re creating a new cluster, and docker swarm join if you’re joining an existing one. We don’t have a cluster yet, so let’s start by creating one on sw-master:
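Roughly like this (the manager’s IP address is the one we’ll meet again later, 192.168.99.101):

    # get a shell on the future manager VM
    docker-machine ssh sw-master

    # inside sw-master: initialize a new swarm
    docker swarm init --advertise-addr 192.168.99.101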

Because sw-master is the first node in the cluster, it automatically became its manager – exactly what we wanted. The swarm init command was even nice enough to provide us with the command to execute on the worker nodes to add them to the newly created cluster. The --token value is a secret string which also encodes the role other nodes will join the cluster in. If you lose it, or decide to add other nodes as managers, docker swarm join-token worker (or manager) will get you another one.

Btw, because each of the VirtualBox VMs I created has two network interfaces and therefore two IP addresses, I had to use the --advertise-addr parameter to tell Docker which one to use.

Now, if we exit sw-master and execute the swarm join command we got from the previous step on the remaining two hosts, we’ll get ourselves a ready-to-use Docker cluster! Head back to the host OS, connect the local Docker client to the manager’s Docker Engine with the eval $(docker-machine.. command, and we can try some of the new Docker commands that became available, e.g. docker node ls:
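Spelled out, the whole thing looks something like this (the join token below is a placeholder – use the one your own swarm init printed):

    # on each of the two worker VMs, run the join command from `swarm init`
    docker-machine ssh sw-worker-1 \
      "docker swarm join --token <worker-token> 192.168.99.101:2377"
    docker-machine ssh sw-worker-2 \
      "docker swarm join --token <worker-token> 192.168.99.101:2377"

    # back on the host OS: point the local Docker client at the manager...
    eval $(docker-machine env sw-master)

    # ...and try one of the new Swarm commands
    docker node ls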

Step 2. Deploying Swarm visualization service

We can use Docker’s public visualizer image to demonstrate how to create a Swarm service and also to get a nice picture of what’s happening inside the cluster. Here’s the command that will do that, and I’ll explain in a moment how and why it works:
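It goes roughly like this (the --mount value is the usual bind mount of the Docker socket, spelled out in full):

    docker service create \
      --name=viz \
      --publish=8080:8080 \
      --constraint=node.role==manager \
      --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
      dockersamples/visualizer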

Here’s what it does, line by line:

  1. docker service create – obviously, creates the service. It’s very similar to docker run and in most cases it will actually lead to starting a container.
  2. --name=viz – the service name. It is the same parameter that docker run accepts.
  3. --publish=8080:8080 – despite the name, you actually know this one. It’s the good old -p from docker run that binds container ports to host ports, just in its full form.
  4. --constraint=node.role==manager – specifies requirements for the nodes where this service can run. In our case, the service can run on manager nodes only, as the visualizer needs to talk to the manager node directly in order to get the full picture.
  5. --mount=... – even though this line looks cryptic, it simply binds the docker.sock socket into the container’s file system, so the visualizer can talk to the Docker Engine. This parameter is very similar to the -v (volume) parameter of docker run.
  6. dockersamples/visualizer – the image to use.

When the command finishes, it will take some time for the swarm scheduler to find a proper host for the service and deploy the container, but we can monitor the process with the docker service ls and docker service ps commands and know when it’s time to try the service:
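For example:

    # list all services and how many of their replicas are up
    docker service ls

    # show where the viz service's container was scheduled
    docker service ps viz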

The manager node’s IP is 192.168.99.101 (docker-machine ip sw-master), so let’s check what’s there:

[Screenshot: visualizer showing one service]

Neat.

Step 3. Benchmarking Swarm load balancer

By default, Swarm tries to put service containers onto the node that is currently the emptiest, so if you execute docker service create --name=web --publish=80:80 nginx to start a service called web with nginx inside, Swarm will put it on one of the worker nodes:

[Screenshot: nginx service on a worker node]

But that’s not the interesting part. The interesting part is that you can still access that service using sw-master’s IP address – 192.168.99.101. In fact, you can use any IP address within the cluster to access any service published in it.

[Screenshot: nginx answering at the master’s IP]

Unlike in single-host mode, publishing a port publishes it across the whole cluster. This comes in handy when your service runs replicated containers for higher availability: Swarm also acts as a load balancer, and when a request comes in, it routes it to one of the containers that can accept it. Let’s check how we can exploit this feature.

First, we need to scale up the service. I think two replicas will be enough for now.
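Scaling is a one-liner (docker service update --replicas 2 web would work just as well):

    # ask Swarm to keep two replicas of the web service running
    docker service scale web=2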

[Screenshot: web service with two replicas]

Assuming Swarm acts as a load balancer, the replicated web service should handle roughly twice as many requests as the non-replicated one. It’s quite easy to test. I have Apache Bench on my machine, so I’m going to hammer web with 10000 requests in 50 concurrent threads, then scale the service down, rerun the test, and see what difference it makes in the number of requests handled per second.
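The experiment itself looks something like this (the IP is the manager’s address from before):

    # 10000 requests, 50 at a time, against the cluster-published port 80
    ab -n 10000 -c 50 http://192.168.99.101/

    # scale back down to a single replica and repeat the test
    docker service scale web=1
    ab -n 10000 -c 50 http://192.168.99.101/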

[Screenshot: benchmark results with web scaled down]

Apparently, the scaled web service was able to handle more than twice as many requests per second as the single-container configuration. Of course, proper benchmarking should involve more samples, but the numbers do line up with expectations.

Step 4. Testing service failover

It’s been an awfully long post, but I can’t end it without this quick step. We left the remaining web service container running on sw-worker-2. What will happen if that host goes away? Let’s find out.
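One way to simulate a host failure is to simply stop the VM (the exact method doesn’t really matter):

    # take down the worker that currently runs the web container
    docker-machine stop sw-worker-2

    # a few seconds later, check where the service's task ended up
    docker service ps web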

It wasn’t much of a surprise that several seconds later the web service from the fallen sw-worker-2 was resurrected on sw-worker-1. The Swarm manager realized that the service’s desired state was no longer fulfilled and took action.

[Screenshot: nginx resurrected on sw-worker-1]

Conclusion

If Docker itself was cool, Docker in Swarm mode is simply great. The cluster is easy to create, and out of the box it comes with load balancing, service scalability and failover recovery. It also has a bazillion other features I didn’t even touch. Personally, I’m going to use it even for single-host configurations.
