Multi-host Docker network without Swarm

Docker has several types of networks, but one of them is particularly interesting. An overlay network can span host boundaries, so your web application container on HostA can easily talk to a database container on HostB by its name. It doesn’t even have to know where that container is.

Unfortunately, you can’t just create an overlay network and hope that it magically finds out about all participating hosts. One more component is needed to make that happen.

Of course, we could use Docker in Swarm mode and the problem’s solved. But we don’t have to. Configuring a multi-host Docker network without Swarm is actually quite easy.

Prerequisites

As we’ll have to deal with several hosts, here are some additional tools we’ll use:

  1. VirtualBox – to run virtual hosts.
  2. docker-machine – to create and provision those hosts. If you’re running Docker on Mac or Windows, most likely it’s already installed. If it isn’t, the installation instructions are short and simple.

The plan

As individual Docker hosts don’t know about each other, it would be tricky for them to share anything, especially something as complex as a network. But if we introduced a special service, whose sole job would be keeping a list of participating hosts, as well as network configuration in general, and then told Docker engines to use it, that would probably do the trick.

Out of the box Docker can work with several discovery services. I’ll use Consul, but ZooKeeper and etcd would also work. After it’s up and running, we’ll create a few Docker hosts and configure them to use the one with Consul as the cluster configuration store. Then it will be trivial to create an overlay network that spans those hosts.

Installing discovery service

So we need a host with Consul installed on it. Easy peasy. First, let’s create the host itself:
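A single docker-machine command does the job. The host name keyvalue matches the one used throughout the post; the exact invocation below is a sketch of what was likely run:

```shell
# Create a VirtualBox VM named "keyvalue" with a Docker engine inside
docker-machine create -d virtualbox keyvalue
```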

It basically tells docker-machine to create a host named keyvalue using virtualbox as a driver. The host it creates will have a fully configured Docker engine, so we can use it to pull and run a Consul image.

One more piece of magic: as the docker executable itself is just a client to a Docker engine, we can tell it to connect to the engine on another machine and send our local commands there! Of course, we could just docker-machine ssh keyvalue and do everything directly there, but c’mon, it’s not nearly as cool.

docker-machine config keyvalue provides the settings for connecting to the Docker engine inside the newly created host, so all we need to do is pass those to the docker client:
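For example, something like the following: the connection flags printed by docker-machine config go straight into the docker command. The exact Consul image and arguments here are an assumption; any single-node Consul server listening on port 8500 will do:

```shell
# Run Consul on the keyvalue host via the local docker client.
# "consul agent -server -bootstrap -client=0.0.0.0" starts a
# one-node Consul server that accepts requests on all interfaces.
docker $(docker-machine config keyvalue) run -d \
    -p 8500:8500 \
    --name consul \
    consul agent -server -bootstrap -client=0.0.0.0
```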

Having keyvalue‘s IP address, we can actually point a browser at port 8500 and see if anything responds:
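docker-machine can tell us that IP address, and Consul’s HTTP API gives a quick liveness check even without a browser (a small sketch; the /v1/status/leader endpoint is a standard Consul API route):

```shell
# Print the VM's IP, then ask Consul who the cluster leader is
docker-machine ip keyvalue
curl "http://$(docker-machine ip keyvalue):8500/v1/status/leader"
```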

Consul at :8500

Configuring hosts with Docker engines

Now we’ll need to create two hosts with regular Docker engines that know about the discovery service we just created. The Docker engine has two properties for cluster mode:

  1. cluster-store, which points to the discovery service, and
  2. cluster-advertise, which specifies an entry point to the current Docker engine for the others. For docker-machine created hosts it’s usually eth1:2376.

In a common scenario we’d go to the Docker configuration file to set those, but as we’re using docker-machine, we can actually specify those settings during host creation:
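Roughly like this, assuming the node is called node-0 (the name used later in the post) and the keyvalue host from the previous step is still running:

```shell
# node-0: a regular Docker host wired to the Consul-backed cluster store
docker-machine create -d virtualbox \
    --engine-opt="cluster-store=consul://$(docker-machine ip keyvalue):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    node-0
```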

And for the second node:
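The same command with only the host name changed (again a sketch, mirroring the node-1 name used below):

```shell
# node-1: identical engine options, different host name
docker-machine create -d virtualbox \
    --engine-opt="cluster-store=consul://$(docker-machine ip keyvalue):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    node-1
```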

The magic: part 1

Now let’s SSH into the first host and create an overlay network in it. The one that’s supposed to span host boundaries, remember?
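Assuming, as the later steps suggest, that “the first host” here is node-1, that looks roughly like this:

```shell
docker-machine ssh node-1
# ...and then, inside the VM:
docker network create -d overlay multi-host-net
```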

Now that we’ve created a network called multi-host-net, exit that host, head to the second one, and check which networks it knows about:
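For instance:

```shell
docker-machine ssh node-0
# Inside node-0, list the networks this engine can see;
# multi-host-net should show up with the "overlay" driver
docker network ls
```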

Behold! The mighty gods of Docker made the multi-host-net network available on that host as well.

The magic: part 2

Let’s try one more thing. While we’re still on node-1, let’s start an nginx container attached to the overlay network we just created:
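Something along these lines; publishing port 80 to the host is my assumption, so that the curl localhost check in the next step has something to hit:

```shell
# Still inside node-1: nginx attached to the overlay network,
# named "webserver" so other containers can reach it by that name
docker run -d --name webserver --net multi-host-net -p 80:80 nginx
```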

I called it webserver so it’s easier to refer to it over the network. Just out of curiosity, let’s type curl localhost to confirm the server is up and running, and then head back to node-0:

I’m going to start a regular ubuntu container in it and launch a command-line browser to see if I can access webserver:
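A sketch of those steps (installing elinks inside the container first, since the stock ubuntu image doesn’t ship it):

```shell
# Back on node-0: an interactive ubuntu container on the same overlay network
docker run -it --net multi-host-net ubuntu bash

# ...then, inside the container:
apt-get update && apt-get install -y elinks
elinks http://webserver
```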

elinks: welcome

elinks: navigate

A drumroll…

webserver container accessed via overlay network

Voilà! A container running on node-0 was able to reach the webserver container on node-1 just by its name.

Summary

Today we checked out the simplest way to build a multi-host Docker network without involving Swarm mode. Surprisingly, that was very easy. First, we need a discovery service, which can be set up with just a few keystrokes, no configuration involved. Second, we need to tell the participating Docker hosts to use it. Piece of cake. After that you can create an overlay network, attach a few containers to it, and they will be able to talk to each other without any idea of, or need to know, where they physically are. Pure magic.
