Multi-host Docker network without Swarm

Docker has several types of networks, and one of them is particularly interesting. An overlay network can span host boundaries, so your web application container on HostA can easily talk to the database container on HostB by its name. It doesn’t even have to know where that container is.

Unfortunately, you can’t just create an overlay network and hope that it magically finds out about all participating hosts. One more component is needed to make that happen.

Of course, we could use Docker in Swarm mode and the problem would be solved. But we don’t have to. Configuring a multi-host Docker network without Swarm is actually quite easy.

Prerequisites

As we’ll have to deal with several hosts, here are some additional tools we’ll use:

  1. VirtualBox – to run virtual hosts.
  2. docker-machine – to create and provision those hosts. If you’re running Docker on Mac or Windows, most likely it’s already installed. If it isn’t, the installation instructions are short and simple.

The plan

As individual Docker hosts don’t know about each other, it would be tricky for them to share anything, especially something as complex as a network. But if we introduced a special service whose sole job is keeping a list of participating hosts, along with the network configuration in general, and then told the Docker engines to use it, that would probably do the trick.

Out of the box Docker can work with several discovery services. I’ll use Consul, but ZooKeeper and etcd would also work. After it’s up and running, we’ll create a few Docker hosts and configure them to use the one with Consul as cluster configuration storage. Then it will be trivial to create an overlay network that spans these hosts.

Installing discovery service

So we need to create a host with Consul installed on it. Easy peasy. First, let’s create a host for that:
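The command itself was not preserved here, but judging from the description that follows (a host named keyvalue, created with the virtualbox driver), it would be something like:

```shell
# Create a VirtualBox VM named "keyvalue" with a fully configured Docker engine
docker-machine create -d virtualbox keyvalue
```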

It basically tells docker-machine to create a host named keyvalue using virtualbox as a driver. The host it creates will have a fully configured Docker engine, so we can use it to pull and run a Consul image.

One more piece of magic: as the docker executable itself is just a client to a Docker engine, we can tell it to connect to the engine on another machine and send our local commands there! Of course, we could just docker-machine ssh keyvalue and do everything directly there, but c’mon, it’s not nearly as cool.

docker-machine config keyvalue provides the settings for connecting to the Docker engine inside the newly created host, so all we need to do is pass those to the docker client:
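A sketch of that step; the exact Consul image isn’t preserved in the text, so the commonly used progrium/consul image and its -server -bootstrap flags are an assumption:

```shell
# Point the local docker client at the engine inside the "keyvalue" VM
# and start Consul there, exposing its HTTP API on port 8500
# (progrium/consul image is an assumption, not stated in the original text)
docker $(docker-machine config keyvalue) run -d \
    -p 8500:8500 \
    -h consul \
    progrium/consul -server -bootstrap
```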

Having keyvalue’s IP address, we can actually point a browser at port 8500 and see if anything responds:
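The IP address itself comes from docker-machine:

```shell
# Print the VM's IP address (for VirtualBox hosts, typically something
# like 192.168.99.100)
docker-machine ip keyvalue
```

Then open http://&lt;that-ip&gt;:8500 in a browser.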

Consul at :8500

Configuring hosts with Docker engines

Now we’ll need to create two hosts with regular Docker engines that know about the discovery service we just created. Docker engine has two properties for cluster mode:

  1. cluster-store to point to the discovery service, and
  2. cluster-advertise to specify the address at which other engines can reach the current Docker engine. For docker-machine created hosts it’s usually eth1:2376.

In a common scenario we’d go to the Docker configuration file to set those, but as we’re using docker-machine, we can actually specify these settings during host creation:
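Using the two engine options named above, host creation could look like this (the node name node-0 matches the nodes used later in the post):

```shell
# Create a Docker host whose engine uses the Consul VM as its cluster store
# and advertises itself to other engines on eth1:2376
docker-machine create -d virtualbox \
    --engine-opt="cluster-store=consul://$(docker-machine ip keyvalue):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    node-0
```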

And for the second node:
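Presumably the same command with only the host name changed:

```shell
# Second host, identically configured to use the same cluster store
docker-machine create -d virtualbox \
    --engine-opt="cluster-store=consul://$(docker-machine ip keyvalue):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    node-1
```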

The magic: part 1

Now let’s SSH into the first host and create an overlay network in it. The one that’s supposed to span across host boundaries, remember?
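Judging by the later steps, the host in question is node-1, and the step would look roughly like:

```shell
docker-machine ssh node-1
# inside node-1: create an overlay network named multi-host-net
docker network create -d overlay multi-host-net
```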

Now that we’ve created a network called multi-host-net, exit that host, head to the second one, and check out what networks it knows about:
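Something like:

```shell
docker-machine ssh node-0
# inside node-0: list known networks; multi-host-net should appear
docker network ls
```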

Behold! The mighty gods of Docker made the multi-host-net network available on that host as well.

The magic: part 2

Let’s try one more thing. While we’re still at node-1, let’s start an nginx container attached to the overlay network we just created:
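A sketch of that step, with the container name and network taken from the surrounding text:

```shell
# on node-1: start nginx, named "webserver", attached to the overlay network
docker run -d --name=webserver --network=multi-host-net nginx
```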

I called it webserver so it’s easier to refer to over the network. Just out of curiosity, let’s type curl localhost to confirm the server is up and running, and then head back to node-0:

I’m going to start a regular ubuntu container in it and launch a command-line browser to see if I can access webserver:
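Based on the screenshots that follow, the browser is elinks, and the step might look like:

```shell
# on node-0: start an interactive ubuntu container on the same overlay network
docker run -it --network=multi-host-net ubuntu

# inside the container: install elinks and browse to the webserver by name
apt-get update && apt-get install -y elinks
elinks http://webserver
```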

elinks: welcome

elinks: navigate

A drumroll…

webserver container accessed via overlay network

Voilà! A container running on node-0 was able to reach the webserver container on node-1 just by its name.

Summary

Today we checked out the simplest way to build a multi-host Docker network without involving Swarm mode. Surprisingly, that was very easy. Firstly, we need a discovery service, which can be set up with just a few keystrokes, no configuration involved. Secondly, we need to tell the participating Docker hosts to use it. Piece of cake. After that you can create an overlay network, attach a few containers to it, and they will be able to talk to each other with no idea of, or need to know, where the others physically are. Pure magic.

6 thoughts on “Multi-host Docker network without Swarm”

    1. It looks like the IP address of your keyvalue store never reached the Docker engine settings. Assuming you created the keyvalue and Docker engine hosts with docker-machine, did docker-machine ip keyvalue return an IP for you?
      If you find the Docker engine process on the problematic VM with something like ps aux | grep dockerd, does dockerd contain a proper (ip+port) --cluster-store value?

  1. Thanks for such a nice blog; it fulfilled my requirement of migrating a legacy application of two physical nodes with different applications communicating with each other. But I still have some questions:

    1- For the mentioned scenario of web-app (node1) and DB (node2), we could use the expose-port options, so why create an overlay network?

    2- By using swarm mode with replicas=1 we can achieve the same, so what advantage do we get from the above-mentioned methodology?

    3- If the node on which Consul is installed goes down, our whole application stops working. But if we use the swarm-mode option with web-app (node1) and DB (node2) and a node goes down, my understanding is that swarm will launch both containers on an available host? Please correct me if my understanding is wrong.

    1. Hey,
      If I understood your questions right:
      1) Yes, we can expose the ports and let services talk to each other via IP:port pairs. However, applications somehow need to know what those addresses are, and we will be responsible for updating them when hosts move. Plus, in this case there will be at least three logical networks involved, which is kind of not a big deal, but still three times more than we could have. With an overlay network we’re dealing with only one network, and applications need to know only the service names – Docker will do name resolution for us.
      2) The point of the blog post is that we can do an overlay network without swarm, not that we should 🙂 But if you specifically don’t want to bring Swarm into this, using just the network is useful. For instance, I’m not sure if that’s changed, but in swarm mode I found it inconvenient that I can’t start a container and simply let it die. I don’t need a service for that, I simply need a one-off container.
      3) A single-node Consul configuration is a single point of failure for the network, sure. A cluster of Consuls is not 🙂 You’re right, swarm will restore services from a failed node on another one. But again, sometimes you don’t need that and the whole container orchestration thing. Sometimes you might simply want to keep Docker hosts separate, but have an option for their containers to share the network.

  2. OK, but no one wants to do this with VirtualBox. Can you give instructions using a real host or a ‘real’ ESXi/QEMU VM? Docker-machine is confusing.
