Using private registry in Docker Swarm

In one of my previous posts, about Docker health checks, I ended up building a Dockerfile and running it as a service in Docker Swarm mode closer to the end of the post. To be honest, I'm a little surprised that Docker allowed me to do that. That Swarm cluster could've had more than one host. What if the service landed somewhere the underlying image didn't exist? A Swarm node wouldn't copy the image to the node that needs it, right? Or would it?

Let's try replicating our service, based on a custom image, across all hosts of a multi-host Swarm cluster and see how that goes (spoiler: we'll need a private registry for that to work).
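
To jump ahead a little, here's a minimal sketch of the approach; the image name, service name and port are placeholders, not the exact commands from the post:

```
# Run a registry as a Swarm service so that every node can reach it
docker service create --name registry --publish 5000:5000 registry:2

# Tag the locally built image with the registry address and push it there
docker build -t 127.0.0.1:5000/webserver:latest .
docker push 127.0.0.1:5000/webserver:latest

# Now replicas can be scheduled on any node: each node pulls the image from the registry
docker service create --name webserver --replicas 3 127.0.0.1:5000/webserver:latest
```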

Continue reading “Using private registry in Docker Swarm”

What exactly is Kubernetes

Kubernetes (or K8s) is another tool for orchestrating containerized apps in a cluster. Its job is to find the right place for a container, maintain its desired state (e.g. "running, 5 replicas"), provide a network, an internal IP and, possibly, access from the outside, apply updates, etc. Originally developed by Google, Kubernetes is now open source. Continue reading "What exactly is Kubernetes"
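
As a hedged illustration of that "desired state" idea (the deployment name and image are placeholders, not examples from the post):

```
# Declare a deployment and scale it to the desired number of replicas
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=5

# Expose it so it gets a stable internal IP inside the cluster
kubectl expose deployment web --port=80

# Kubernetes keeps reconciling reality with the declared state
kubectl get deployments
```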

Docker health checks

Somehow I missed the news that starting from version 1.12 Docker containers support health checks. Such checks don't just test whether the container itself is running, but whether it's actually doing its job right. For instance, a check can ping a containerized web server to see if it responds to incoming requests, or measure memory consumption and see if it's reasonable. Since a Docker health check is just a shell command, it can test virtually anything.

When the check fails a few times in a row, the problematic container gets into an "unhealthy" state, which makes no difference in standalone mode (except for the health_status event it triggers), but causes the container to be restarted in Swarm mode. Continue reading "Docker health checks"
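
As a rough sketch of the idea (the image and the check command are just an example, not the one from the post):

```
# Start a web server with a health check that probes it over HTTP every 30 seconds
docker run -d --name web \
  --health-cmd='wget -q --spider http://localhost/ || exit 1' \
  --health-interval=30s --health-timeout=3s --health-retries=3 \
  nginx:alpine

# Watch the status move between "starting", "healthy" and "unhealthy"
docker inspect --format '{{.State.Health.Status}}' web
```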

docker-compose for Swarm: docker stack

Imagine you've configured your new shiny Docker cluster and are now ready to fill it with dockerized applications. How exactly are you going to do that? Not by manually typing docker service create for every app, right? Especially since an average application that needs a cluster will contain more than one service.

In standalone Docker we had the docker-compose tool, which allowed us to describe all of an app's containers in a single docker-compose.yml file and then start them with docker-compose up. Can we use the same for Swarm? Absolutely. Continue reading "docker-compose for Swarm: docker stack"
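
As a taste of what's coming, a minimal sketch (the service, image and stack names are made up for illustration):

```
# A compose file in version 3 format, which is what Swarm understands
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
EOF

# Deploy the whole thing to the Swarm as a single "stack"
docker stack deploy -c docker-compose.yml myapp
docker stack services myapp
```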

Quick intro to Docker Swarm mode

Docker is cool. It's a great tool for packing your application into a set of containers: throw them onto a host and they'll just work. However, when it's all happening within a single host, the app can't really scale much: there's a fixed amount of containers the host can accommodate. Moreover, when the host dies, everything dies with it. Of course, we could add more hosts and join them with an overlay network, so more containers can coexist and still talk to each other.

However, maintaining such a cluster would be a pain. How do we detect that a host went down? Which containers are missing now? What's the best place to recreate them?

Starting from version 1.12.0, Docker can work in Swarm mode and handle all of those tasks, and even more. Continue reading "Quick intro to Docker Swarm mode"
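
For the impatient, the whole thing starts with just a couple of commands (the IP address below is a placeholder):

```
# On the first host: initialize the Swarm and make this host its manager
docker swarm init --advertise-addr 192.168.0.10
# It prints a `docker swarm join --token ...` command — run it on the other hosts

# Back on the manager: check that all nodes have joined
docker node ls
```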

Building VM image with Packer

Quite often building a VM from scratch is not very wise. Unless the server configuration is trivial, provisioning it might take a significant amount of time. For example, creating an instance of a build server for my current project takes about 40 minutes. That includes installing updates, various SDKs and other dependencies. How is it possible, then, that I can add a new build server to the cluster in about two or three minutes?

The secret is that most of the software is baked into a VM image, so I never start from scratch. A new VM still needs a few steps, like final configuration and registering within the cluster, but that's fast. The slowest part is allocating resources for the VM, not provisioning it.

Historically, we've been using our own tools for that, but there are also free and open source ones. Like Packer. Continue reading "Building VM image with Packer"
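
To give a rough idea: a Packer template describes a builder (where the temporary machine comes from) and provisioners (what to install before the image is captured). This is a hedged sketch using the amazon-ebs builder; the AMI ID, region and packages are placeholders, and AWS credentials are assumed to be in the environment:

```
cat > build-server.json <<'EOF'
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "build-server-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update", "sudo apt-get install -y build-essential"]
  }]
}
EOF

packer build build-server.json
```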

Using Vagrant for Windows VMs provisioning

Using Vagrant to create a Consul cluster on Linux was probably fun. But what about Windows hosts? Believe it or not, more than half of developers actually use Windows, so for most folks watching Vagrant create Linux VMs is pretty useless.

However, you can create and provision Windows VMs with Vagrant with little to no trouble. In fact, Windows support has been around for years. There are just a few things to keep in mind. Continue reading "Using Vagrant for Windows VMs provisioning"
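
The gist of it, as a hedged sketch (the box name is just an example of a community Windows box; inline shell provisioners on Windows guests run through PowerShell):

```
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "gusztavvargadr/windows-server"  # example box; any Windows box works
  config.vm.communicator = "winrm"                 # Windows guests talk over WinRM, not SSH
  config.vm.provision "shell", inline: "Write-Host 'Hello from Windows'"
end
```

After that, vagrant up brings the machine up and runs the provisioner.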

How to use Vagrant to create Consul cluster

The last two articles about Consul service discovery involved one simple but extremely boring manual task: creating and configuring a cluster. In fact, I had to do it twice. I had to create three virtual machines, download and unpack Consul on them, find out their IP addresses, add configuration files and finally launch the binaries.

It's dull. It's boring. Humans shouldn't do that kind of thing by hand. Seeing how easily we can automate the creation of Docker containers with a Dockerfile and docker-compose makes me wonder if we can do the same for hosts. Continue reading "How to use Vagrant to create Consul cluster"
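
Spoiler: a single Vagrantfile can describe the whole cluster. A minimal sketch (the box, IP range and provisioning script are illustrative, not the ones from the post):

```
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  (1..3).each do |i|
    config.vm.define "consul-#{i}" do |node|
      node.vm.network "private_network", ip: "192.168.99.#{10 + i}"
      node.vm.provision "shell", path: "install-consul.sh"  # hypothetical script that installs and starts Consul
    end
  end
end
```

One vagrant up, and all three machines come up provisioned.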

Checking service health status with Consul

In the previous post we created a small Consul cluster that kept track of 4 services: two web services and two db's. However, we didn't tell the Consul agents how to monitor those services, so they completely missed the fact that none of the services actually existed. So today we're going to take a closer look at Consul's health checks and see what effect they have on service discoverability. Continue reading "Checking service health status with Consul"
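
As a preview, a health check is declared right in the service definition the agent reads. A hedged example (the service name, port and endpoint are placeholders):

```
cat > web.json <<'EOF'
{
  "service": {
    "name": "web",
    "port": 80,
    "check": {
      "http": "http://localhost:80/",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
EOF

# Drop the file into the agent's config directory and reload it
consul reload
```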

Using Consul for Service Discovery

Imagine your distributed app has two kinds of services: web and db. Both of them are replicated for higher availability, live on different hosts, and go online and offline whenever they like. So, here's a question: how do the web's find the db's?

The obvious solution would be to come up with some sort of reliable key-value store, and whenever a service comes online, it would register its address there. But what happens when a service goes offline? It could probably notify the store just before that, but c'mon, it's the internet: things go offline without any warning. OK, then we could implement some sort of service health checks to make sure the services are still available… By the way, did you notice how quickly the simple idea of using an external store for service discovery started to turn into a reasonably large infrastructure project?

Service discovery is very hard to do well. But we don't have to do it ourselves – there are tools for that, and Consul is one of them. Continue reading "Using Consul for Service Discovery"
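
To give away a little of the answer: once services register with a Consul agent, finding a healthy db is one DNS or HTTP query away (the addresses and ports below are Consul's defaults):

```
# Ask the local Consul agent's DNS interface for healthy instances of the db service
dig @127.0.0.1 -p 8600 db.service.consul

# Or use the HTTP API, filtering for instances whose checks pass
curl 'http://127.0.0.1:8500/v1/health/service/db?passing'
```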