Last month we finally finished migrating from our previous CI/CD system to GitLab CE, and that's something that makes me extremely happy. It's just so much easier to maintain our CI/CD monster when the repository, build configurations, build results, test results and even that "Approve" button that publishes the build to the release repository are all in the same place.
And what I particularly love about GitLab is how simple it is to configure all of that. So simple that today I'll show you how to set up fully functional CI/CD for a demo project, starting from installing GitLab and finishing with a successful commit landing at the "production" server. So, without further ado, let's begin.
Step 0. Demo project
I have a silly TypeScript web project that simply changes the message on a web page as soon as it's loaded. As this project requires some sort of compilation, has room for testing, and can be deployed after the first two steps succeed, we can set up a CI/CD pipeline for it with "build", "test" and "deploy" stages. But we'll get to that later. For now, this is our guinea pig – index.html and index.ts:
```html
<!doctype html>
<html>
  <head>
    <script src="index.js"></script>
  </head>
  <body>Waiting for JS to fire</body>
</html>
```
```typescript
window.addEventListener('load',
  e => document.querySelector('BODY').textContent = 'JS worked',
  false
);
```
Apart from the web content, the project also needs a .gitignore file, so we don't commit generated JS files, and a README.md, so the project looks cooler than it is.
```
*.js
```
```
GitLab-CI demo project
======================
```
And that's it. Assuming the TypeScript compiler is installed locally (e.g. via sudo npm install -g typescript), tsc index.ts will do the compilation, and any web server serving the current directory will be able to display a remarkably unremarkable page:
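If you don't have a web server handy, one simple option is the http-server npm package – purely an example, any static file server works just as well:

```sh
# after compiling with `tsc index.ts`, serve the current directory
npm install -g http-server    # any static file server would do; this is just an example
http-server -p 8000 .         # then open http://127.0.0.1:8000
```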
Now, let's run a few more commands to create a local repo and we're good to go to the next step: installing GitLab.
```sh
git init .
git add .
git commit -m "Initial commit"
```
Step 1. Installing GitLab
A cool thing about GitLab is that it can be deployed as a Docker container, and there's even an image for that. In fact, dockerized GitLab has been running on my home server for months with absolutely zero problems.
As I'm going to deal with more than one container, instead of typing docker run ... I'll use docker-compose, so we can add more containers as we go.
The initial version of the docker-compose.yml file is trivial:
```yaml
version: '2'

services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    ports:
      - '80:80'
```
docker-compose up -d gitlab begins and concludes the installation, and after a minute or so we can configure our root account.
OK, we're in. The next thing is adding our demo project to it. The "Add" button is right at the top:
When that's done, we can use the git remote add and git push -u origin master commands nicely suggested by GitLab to import the demo project, and then move on to the first stage of our GitLab CI/CD pipeline: the build stage.
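For reference, those commands look roughly like this – the repository URL below is hypothetical, so copy the real one from your project page:

```sh
# the URL is made up; GitLab shows the exact one for your project
git remote add origin http://localhost/root/gitlab-ci-demo.git
git push -u origin master
```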
Step 2. Configuring “Build” stage
Telling GitLab to do things automatically when code gets pushed to it is quite simple. It requires a .gitlab-ci.yml instructions file in the project repository and some sort of build server (or container, or VM) called a "runner" to perform those instructions. Let's start with the first one.
.gitlab-ci.yml
Our build stage simply means invoking the TypeScript compiler against the index.ts file on any build runner that has TypeScript installed. Here's how I'd put that into .gitlab-ci.yml:
```yaml
stages:
  - build

Compile:
  stage: build
  tags:
    - typescript
  script:
    - tsc index.ts
```
Pretty straightforward, right? stages describes what stages we have, stage: build tells which stage the current build job ("Compile") belongs to, and tags makes GitLab look for build runners that have those tags assigned, so we can send build jobs to specific machines. It's up to us to come up with the list of tags and assign them to both the runner and the build job. Personally, I tag runners with the features they support (compilers, versions, tools) and then mention those tags in individual build jobs as a requirements list.
If we commit this file and push it to origin, something slightly cool will happen: we can go to the "Pipelines" page and see something like this:
Apparently, GitLab noticed the new commit, recognized the CI configuration and tried to launch a build job for it. But it's stuck, as there's no capable runner to perform that job. Don't worry, we'll create one in a second.
Configuring the gitlab-runner service
What converts a regular host into a build runner? An installed gitlab-runner service and nothing more. Basically, we need to find a container or host, apt-get or download gitlab-runner onto it, configure it to work with the GitLab server, assign a tag or two to describe what it's capable of, and we're done.
For simplicity I'll install a runner in a Docker container and add it to docker-compose.yml, so the runner and GitLab will be on the same network. This is what a Dockerfile for such a runner could look like:
```dockerfile
FROM node

RUN wget -O /usr/local/bin/gitlab-runner https://gitlab-ci-multi-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-ci-multi-runner-linux-amd64 && \
    chmod +x /usr/local/bin/gitlab-runner && \
    useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash && \
    npm install -g typescript

CMD gitlab-runner register \
        -u http://gitlab/ci \
        -r rSyUTfHxLL_qP7nYSfvA \
        -n \
        --executor shell \
        --tag-list "typescript" \
        --name "TypeScript Runner" && \
    gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner && \
    gitlab-runner start && \
    tail -f /var/log/dpkg.log
```
I used the Node.js base image, so it can later be used for installing TypeScript. Most of the code I copy-pasted from the official guide. The RUN section installs all the dependencies we need and the CMD section actually registers and starts the runner. Ugly, but it will work.
There are four particularly interesting parameters in gitlab-runner register:
- -u – specifies where the GitLab server is. As the container is going to run in the same Docker network as the gitlab container, I can use the container name as a host name.
- -r – the secret token that establishes trust between GitLab and its runner. You can find yours on the "GitLab -> Settings -> CI/CD settings" page.
- --executor – specifies how to interpret commands from the .gitlab-ci.yml file. In our case it's shell – regular shell commands. It could also have been powershell for a Windows machine, or even docker.
- --tag-list – what features the current gitlab-runner is going to support. For now, it only supports typescript, which was installed a few lines above.
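Once the runner container is up, you can sanity-check the registration from inside it – an optional step, shown here only as a quick diagnostic:

```sh
# both commands are part of the gitlab-runner CLI
gitlab-runner list     # shows the runners configured on this host
gitlab-runner verify   # asks the GitLab server whether they are still valid
```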
When I add this Dockerfile as the runner-ts service to docker-compose.yml and start it with docker-compose up -d runner-ts, that pending pipeline will continue and succeed.
```yaml
# ...
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    ports:
      - '80:80'

  runner-ts:
    build: runner-ts/.  # Dockerfile in the runner-ts folder
```
Breaking the build
Out of curiosity, what will happen if I try to commit an invalid index.ts? It should fail, right? Let's insert an unwanted whitespace inside of document.
```typescript
  e => doc ument.querySelector('BODY').textContent = 'JS worked',
```
Without much of a surprise, the pipeline for the broken commit fails as well:
I can click on any of the red icons and see more details about the failure.
Obviously, fixing the commit will fix the build as well:
That’s the power of CI!
Step 3. Configuring “Test” stage
I have no intention of installing and configuring a testing framework, especially when the only thing I can think of to test is the rendered content, but we can install tslint to check the code style. It's just one more section in .gitlab-ci.yml and one more runner to add. We also could've installed tslint on the existing runner, but I prefer to keep existing stuff immutable.
First, let's add a new test job and its stage into .gitlab-ci.yml:
```yaml
stages:
  - build
  - test

Compile:
  stage: build
  #....

Test:
  stage: test
  tags:
    - typescript
    - tslint
  script:
    - tslint -c tslint.json index.ts
```
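One assumption hidden in the Test job: tslint -c tslint.json index.ts expects a tslint.json in the repository root. A minimal one that just pulls in the recommended ruleset could look like this:

```json
{
  "extends": "tslint:recommended"
}
```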
And then copy-paste most of the Dockerfile for runner-ts into runner-tslint (if anyone asks, I've never heard of code reuse):
```dockerfile
FROM node

RUN wget -O /usr/local/bin/gitlab-runner https://gitlab-ci-multi-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-ci-multi-runner-linux-amd64 && \
    chmod +x /usr/local/bin/gitlab-runner && \
    useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

CMD npm install -g typescript && \
    npm install -g tslint && \
    gitlab-runner register \
        -u http://gitlab/ci \
        -r rSyUTfHxLL_qP7nYSfvA \
        -n \
        --executor shell \
        --tag-list "typescript,tslint" \
        --name "TSLint Runner" && \
    gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner && \
    gitlab-runner start && \
    tail -f /var/log/dpkg.log
```
Because the copy-pasted typescript installation is also still there, our runner supports both typescript and tslint build jobs, and we can clearly indicate that with the proper tags.
Finally, let's add the new runner to docker-compose.yml and create it with docker-compose up -d runner-tslint. After a commit with the updated .gitlab-ci.yml finds its way to the remote origin, we'll have ourselves two consecutive CI stages: Build and Test:
Apparently, tslint didn't like my index.ts style and the build failed.
However, one more commit with the fix will make it happy again.
Step 4. Adding “Deploy” stage
CI/CD has a "D" in it for a reason. Successfully built and tested code should be deployed somewhere, and this is exactly what we're going to do now.
Quite often new builds are deployed to two environments: staging and production. The first one is safer to deploy to, as it's supposed to be a testing environment anyway. Installing new code in production, on the other hand, requires more thinking and prayers. Having said that, let's add two more build jobs – deployment jobs. One will automatically deploy a successful build into the staging environment as soon as the build is ready. Installing the build in production, however, will remain a manual job.
Deployment runner
I was puzzled at first about how to emulate two different servers, but eventually decided to start two nginx containers with their html folders connected to a third container – another GitLab runner. One more copy-pasting exercise, and we've got ourselves one more Dockerfile, for the deployment runner:
```dockerfile
FROM ubuntu

RUN apt-get update && apt-get install -y wget && \
    apt-get install -y git && \
    wget -O /usr/local/bin/gitlab-runner https://gitlab-ci-multi-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-ci-multi-runner-linux-amd64 && \
    chmod +x /usr/local/bin/gitlab-runner

CMD gitlab-runner register \
        -u http://gitlab/ci \
        -r rSyUTfHxLL_qP7nYSfvA \
        -n \
        --executor shell \
        --tag-list "deploy,staging,production" \
        --name "Deploy Runner" && \
    gitlab-runner install --user=root --working-directory=/root && \
    gitlab-runner start && \
    tail -f /var/log/dpkg.log
```
This time it's based on the ubuntu image and, unlike the first two times, gitlab-runner will run under the root account. Otherwise it won't have enough permissions to write to the shared volumes (and ubuntu:16.04 doesn't have sudo in it).
Sharing the volumes between containers is quite straightforward:
```yaml
# ...
  runner-deploy:
    build: runner-deploy/.
    volumes:
      - staging:/www-staging
      - production:/www-production

  staging:
    image: 'nginx'
    ports:
      - '8081:80'
    volumes:
      - staging:/usr/share/nginx/html

  production:
    image: 'nginx'
    ports:
      - '8082:80'
    volumes:
      - production:/usr/share/nginx/html

volumes:
  staging:
  production:
```
Launching the new containers works without a glitch, so we can move on to .gitlab-ci.yml.
```sh
docker-compose up -d runner-deploy
docker-compose up -d staging
docker-compose up -d production
```
Deployment jobs in .gitlab-ci.yml
First of all, as we're going to deploy compiled JS files, and those were produced at the "Build" stage, we need to make sure that the "Deploy" stage, which runs on a separate host, can access them. Making the compilation result an "artifact" will make that happen:
```yaml
#...
Compile:
  stage: build
  artifacts:
    name: "CompiledJS"
    paths:
      - ./*.js
  tags:
    - typescript
  script:
    - tsc index.ts
#...
```
Now to the deployment jobs. I'll put the whole, final version of the .gitlab-ci.yml file right here and explain what's changed:
```yaml
stages:
  - build
  - test
  - deploy

Compile:
  stage: build
  artifacts:
    name: "CompiledTS"
    paths:
      - ./*.js
  tags:
    - typescript
  script:
    - tsc index.ts

Test:
  stage: test
  tags:
    - typescript
    - tslint
  script:
    - tslint -c tslint.json index.ts

Deploy-Staging:
  stage: deploy
  tags:
    - deploy
    - staging
  environment:
    name: Staging
    url: http://127.0.0.1:8081
  script:
    - cp -f ./{index.js,index.html} /www-staging/

Deploy-Production:
  stage: deploy
  tags:
    - deploy
    - production
  environment:
    name: Production
    url: http://127.0.0.1:8082
  script:
    - cp -f ./{index.js,index.html} /www-production/
  when: manual
```
Obviously, we've got ourselves a new stage (deploy) at the top and two build jobs at the bottom (Deploy-Staging and Deploy-Production). The only thing that distinguishes a regular build job from a deployment job is the environment section. Everything else is identical.
The environment name is absolutely arbitrary (and can even be generated dynamically), so you can choose whatever suits your project best.
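For example – a sketch that's not part of the demo project – a per-branch review environment could use one of GitLab's predefined variables in its name (on older GitLab versions the variable is called CI_BUILD_REF_NAME):

```yaml
Deploy-Review:
  stage: deploy
  tags:
    - deploy
    - staging
  environment:
    name: review/$CI_COMMIT_REF_NAME   # one environment per branch
    url: http://127.0.0.1:8081
  script:
    - cp -f ./{index.js,index.html} /www-staging/
```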
The when: manual at the end of Deploy-Production is what really makes that job manual – no magic here.
After I commit the changes to the .gitlab-ci.yml file and push them to the remote origin (and fix a few errors along the way), here's what happens:
We've got three green "check" marks – three successful stages. Clicking on the pipeline details, however, shows that only one of the two deployment jobs has actually run:
As configured, deploying to production is a manual job, and we can start it by clicking the "Play" button.
Finally, we can see our deployment history on the "Environments" page. The link buttons on the left-hand side point to the staging and production containers (127.0.0.1:8081 and 127.0.0.1:8082) with the newest content deployed in them:
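If you prefer the command line, the same check is a couple of curl calls away:

```sh
curl http://127.0.0.1:8081   # staging
curl http://127.0.0.1:8082   # production
```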
Conclusion
Just think about what we've just done. We've configured a fully functional CI/CD system, in which any push to the repository will get compiled, tested and potentially deployed to a testing server. And it's all 100% automatic. If the new code has a mistake, you'll get notified about it right away. If the code is correct, your shiny new feature will get to the testing server as soon as possible. That's huge.
The CI/CD configuration looks trivial for a hello-world app, but I assure you, for large projects it's not much harder: more build jobs, more runners, but the same principles.
How would this deployment to staging work out if I made a Docker image in the build process and wanted to make that image run in staging on each commit?
Off the top of my head, I'd build and push the Docker image to GitLab's built-in Docker registry (or any other registry) and then tell the staging server that runs the container to update (pull the new image, stop and remove the current container, start it again).
"Telling the staging server" can take different forms. E.g. if there's SSH access, that would be something like ssh mystaging.server.com "docker pull.. docker stop.. docker rm.. docker run..". If you can connect to the Docker daemon directly, then the Docker client installed on the GitLab runner can be configured to talk to the remote Docker engine (e.g. by setting env variables, like docker-machine does – https://docs.docker.com/machine/reference/env/), so the whole ssh mystaging.. prefix can be skipped. As another option, the Docker image can be exported into a tarball, uploaded to the remote machine via scp, and then again: import, stop, rm, run.
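As a very rough sketch of the SSH variant – every image name, host and registry address below is made up, and the job assumes the runner has Docker installed and SSH access to the staging host already configured:

```yaml
Deploy-Staging:
  stage: deploy
  tags:
    - deploy
  script:
    # hypothetical registry and host names
    - docker build -t registry.example.com/demo:latest .
    - docker push registry.example.com/demo:latest
    - ssh deploy@staging.example.com "docker pull registry.example.com/demo:latest && (docker rm -f demo || true) && docker run -d --name demo -p 80:80 registry.example.com/demo:latest"
```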