Autoscaling build servers with GitLab CI

I’ve been using GitLab CI for a while now, and up to a certain point it worked really well. We had three build servers (GitLab runners) in the beginning, and when the number of teammates or build steps — and therefore commits and build jobs — increased, I’d just add one more server to handle the extra load and consider the problem solved.

Not for long. When the number of servers climbed past ten, it became obvious that simply adding them one by one doesn’t work anymore. Keeping all of them running all the time was expensive, yet it still wasn’t enough to handle occasional spikes of commits. Not to mention that during nights and weekends those servers were doing absolutely nothing.

The whole thing needs to be dynamic, and fortunately GitLab CI supports autoscaling out of the box. The documentation is a little bit confusing, but in reality it’s very easy to get started. So here’s the plan: let’s try it!
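To give a rough idea of where this is heading: autoscaling is driven by the runner’s config.toml and the docker+machine executor. Here’s a minimal sketch of what such a config might look like — the DigitalOcean driver, token values and numbers are placeholder assumptions, not the exact setup from the post.

```
# Hypothetical /etc/gitlab-runner/config.toml for an autoscaling runner
# using the docker+machine executor; token and driver options are placeholders.
sudo tee /etc/gitlab-runner/config.toml <<'EOF'
concurrent = 10                          # jobs allowed across all spawned machines

[[runners]]
  name     = "autoscale-runner"
  url      = "https://gitlab.example.com/"
  token    = "RUNNER_TOKEN"
  executor = "docker+machine"
  limit    = 10
  [runners.docker]
    image = "alpine:latest"
  [runners.machine]
    IdleCount     = 1                    # machines kept warm, waiting for jobs
    IdleTime      = 1800                 # seconds before an idle machine is destroyed
    MaxBuilds     = 100                  # recycle a machine after this many builds
    MachineDriver = "digitalocean"
    MachineName   = "ci-runner-%s"
    MachineOptions = [
      "digitalocean-image=coreos-stable",
      "digitalocean-region=nyc1",
      "digitalocean-access-token=DO_TOKEN",
    ]
EOF
```

With something like this in place, new machines are created only when jobs queue up, and idle ones are destroyed after IdleTime seconds.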

Continue reading “Autoscaling build servers with GitLab CI”

Easy continuous integration and deployment with GitLab CI

Last month we finally finished migrating from our previous CI/CD system to GitLab CE, and that makes me extremely happy. It’s just so much easier to maintain our CI/CD monster when the repository, build configurations, build results, test results and even that “Approve” button that publishes a build to the release repository all live in the same place.

And what I particularly love about GitLab is how simple it is to configure all of that. So simple, in fact, that today I’ll show you how to set up a fully functional CI/CD pipeline for a demo project, starting with installing GitLab and finishing with a successful commit landing on the “production” server. So, without further ado, let’s begin.
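As a teaser, the whole pipeline ends up being described by a single .gitlab-ci.yml at the repository root. Here’s a minimal hypothetical example — stage names, commands and the manual deploy step are illustrative, not the demo project’s actual config:

```
# Hypothetical .gitlab-ci.yml: three stages and a manual deploy job.
# Commands and the deploy script are placeholders.
cat > .gitlab-ci.yml <<'EOF'
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - make build

test:
  stage: test
  script:
    - make test

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production   # placeholder deploy step
  when: manual                 # the "press to release" button in the pipeline
  only:
    - master
EOF
```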

Continue reading “Easy continuous integration and deployment with GitLab CI”

Move existing WordPress site into Docker

I’ve been running two WordPress blogs for some time, and my biggest regret is that they are not running in Docker containers. If I had done the right thing from the beginning, I wouldn’t have to worry about whether a server upgrade will be safe, or whether I’ll be able to recall the server configuration when the time to migrate comes. I’d also be able to spin up a local replica of a blog, run some experiments on it (new settings, features or a design change) and decide whether or not I want to move that change into ‘production’.

However, it’s never too late. I’m reluctant to make a big change on the real server without prior tests, so today I’ll try to create a local Docker replica of one of my blogs and see how that goes. Continue reading “Move existing WordPress site into Docker”
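Roughly, the replica boils down to the official mysql and wordpress images plus a database dump and the wp-content directory copied from the real server. A sketch of what that might look like, with placeholder paths, names and passwords:

```
# Sketch of a local replica: official mysql and wordpress images, fed with a
# database dump and wp-content copied from the real server. Placeholders throughout.
docker network create wp-local

docker run -d --name wp-db --network wp-local \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=wordpress \
  -v "$PWD/db-dump":/docker-entrypoint-initdb.d \
  mysql:5.7

docker run -d --name wp-blog --network wp-local \
  -e WORDPRESS_DB_HOST=wp-db \
  -e WORDPRESS_DB_PASSWORD=secret \
  -v "$PWD/wp-content":/var/www/html/wp-content \
  -p 8080:80 \
  wordpress
```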

Using private registry in Docker Swarm

In one of my previous posts, about Docker health checks, closer to the end I managed to build a Dockerfile and run it as a service in Docker Swarm mode. To be honest, I’m a little bit surprised that Docker allowed me to do that. That Swarm cluster could’ve had more than one host. What if the service ended up somewhere where the underlying image didn’t exist? A Swarm node wouldn’t copy the image to the node that needs it, right? Or would it?

Let’s try replicating our service based on a custom image across all hosts of a multi-host Swarm cluster and see how that goes (spoiler: we’ll need a private registry in order for that to work).
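For a taste of where this ends up: the usual approach is to run a registry inside the Swarm itself, push the custom image there, and create the service from the registry-hosted image. A hedged sketch with made-up image and service names:

```
# Sketch: a registry running inside the Swarm, published on every node,
# so any node can pull the custom image. Names and tags are made up.
docker service create --name registry --publish 5000:5000 registry:2

docker build -t 127.0.0.1:5000/my-web:1.0 .
docker push 127.0.0.1:5000/my-web:1.0

# Now the image reference resolves on every node, so replicas can go anywhere:
docker service create --name my-web --replicas 3 127.0.0.1:5000/my-web:1.0
```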

Continue reading “Using private registry in Docker Swarm”

What exactly is Kubernetes

Kubernetes (or K8s) is another tool for orchestrating containerized apps in a cluster. Its job is to find the right place for a container, fulfill its desired state (e.g. “running, 5 replicas”), provide a network, an internal IP and possibly access from the outside, apply updates, and so on. Originally developed by Google, Kubernetes is now open source. Continue reading “What exactly is Kubernetes”
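To make “desired state” a little more concrete, here’s a minimal hypothetical Deployment expressing exactly “running, 5 replicas”; the names and image are arbitrary:

```
# A minimal Deployment expressing "running, 5 replicas"; names and image are arbitrary.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
EOF

kubectl get deployment web   # Kubernetes keeps 5 replicas running, wherever they fit
```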

Docker health checks

Somehow I missed the news that starting from version 1.12 Docker containers support health checks. Such checks don’t just test whether the container itself is running, but rather whether it’s doing its job right. For instance, a check can ping a containerized web server to see if it responds to incoming requests, or measure memory consumption and see if it’s reasonable. As a Docker health check is just a shell command, it can test virtually anything.

When the check fails a few times in a row, the problematic container gets into an “unhealthy” state, which makes no difference in standalone mode (except for a triggered health_status event), but causes the container to be restarted in Swarm mode. Continue reading “Docker health checks”
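For illustration, the same kind of check can also be attached at docker run time; the flags below are standard health-check options, though the actual checks in the post may differ:

```
# Hypothetical example: wget against a containerized nginx;
# after 3 consecutive failures the container is marked "unhealthy".
docker run -d --name web \
  --health-cmd='wget -q -O /dev/null http://localhost/ || exit 1' \
  --health-interval=10s \
  --health-timeout=3s \
  --health-retries=3 \
  nginx:alpine

docker inspect --format '{{.State.Health.Status}}' web   # starting / healthy / unhealthy
docker events --filter event=health_status               # the health_status events mentioned above
```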

docker-compose for Swarm: docker stack

Imagine you’ve configured your shiny new Docker cluster and are now ready to fill it with dockerized applications. How exactly are you going to do that? Not by manually typing docker service create for every app, right? Especially when the average application that requires a cluster contains more than one service.

In standalone Docker we had the docker-compose tool, which allowed us to describe all of an app’s containers in a single docker-compose.yml file and then start them with docker-compose up. Can we use the same approach for Swarm? Absolutely. Continue reading “docker-compose for Swarm: docker stack”
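A quick sketch of the idea, with placeholder services, before the full walkthrough:

```
# Sketch: a compose-style file with placeholder services, deployed as a Swarm stack.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3
  cache:
    image: redis:alpine
EOF

docker stack deploy -c docker-compose.yml myapp   # instead of docker-compose up
docker stack services myapp                       # the services it created
```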

Quick intro to Docker Swarm mode

Docker is cool. It’s a great tool for packing your application into a set of containers: throw them onto a host and they’ll just work. However, when it’s all happening within a single host, the app cannot really scale much: there’s a fixed number of containers the host can accommodate. Moreover, when the host dies, everything dies with it. Of course, we could add more hosts and join them with an overlay network, so more containers can coexist and still talk to each other.

However, maintaining such a cluster would be a pain. How do we detect that a host went down? Which containers are missing now? What’s the best place to recreate them?

Starting from version 1.12.0, Docker can work in Swarm mode and handle all of those tasks and more. Continue reading “Quick intro to Docker Swarm mode”
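For a sense of how little it takes, here’s a rough sketch; IP addresses and the join token are placeholders:

```
# Sketch: a manager and workers; addresses and the join token are placeholders.
docker swarm init --advertise-addr 192.168.0.10                 # on the first host
docker swarm join --token <WORKER_TOKEN> 192.168.0.10:2377      # on every other host

# A replicated service: Swarm picks the nodes and recreates lost containers.
docker service create --name web --replicas 3 -p 80:80 nginx:alpine
docker service ls
```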

Persistent data in Docker volumes

As Docker containers are supposed to be small, single-process and easily replaceable instances, it’s not particularly clear how persistent data fits into that picture. Imagine you have a MySQL container that you’ve decided to upgrade. What will you do with its database files? In the container world “upgrade” means “nuke the old one, start a new one”, and your data would turn into radioactive ash along with the rest of the container’s file system.

Fortunately, along with the problem Docker also provides a solution: Docker volumes.
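The gist, sketched with placeholder names: put the database files on a named volume, and the container itself becomes disposable.

```
# Sketch with placeholder names: database files live in a named volume,
# so the container itself can be nuked and recreated at will.
docker volume create mysql-data

docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql-data:/var/lib/mysql \
  mysql:5.7

# "Upgrade": remove the old container, start a new one on the same volume.
docker rm -f db
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql-data:/var/lib/mysql \
  mysql:5.7
```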

Continue reading “Persistent data in Docker volumes”

Multi-host Docker network without Swarm

Docker has several types of networks, but one of them is particularly interesting. An overlay network can span host boundaries, so your web application container on HostA can easily talk to a database container on HostB by its name. It doesn’t even have to know where that container is.

Unfortunately, you can’t just create an overlay network and hope that it magically finds out about all participating hosts. One more component is needed to make that happen.

Of course, we could use Docker in Swarm mode and the problem’s solved. But we don’t have to. Configuring a multi-host Docker network without Swarm is actually quite easy. Continue reading “Multi-host Docker network without Swarm”
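As a preview of the moving parts: the missing component is an external key-value store that every Docker daemon is pointed at. The sketch below assumes Consul; addresses and interface names are placeholders, and the exact store used in the post may differ.

```
# 1. A key-value store reachable by every host (Consul here, on 192.168.0.100):
docker run -d --name consul -p 8500:8500 \
  consul agent -server -bootstrap -client 0.0.0.0

# 2. Point every Docker daemon at it, e.g. via /etc/docker/daemon.json,
#    then restart the daemon:
#      { "cluster-store": "consul://192.168.0.100:8500",
#        "cluster-advertise": "eth0:2376" }

# 3. An overlay network created on one host shows up on all of them:
docker network create -d overlay my-overlay
docker network ls
```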