Automating GCP Infrastructure with Deployment Manager

Deployment Manager

I don’t know how or why, but even though for the last couple of years I’ve been spending at least a few hours a week doing something with Google Cloud Platform, I never managed to notice that they have their own tool for automating infrastructure creation. You know, creating VMs, networks, storage, accounts and other resources. But it’s there, right in the main menu.

The tool is called Deployment Manager and it can build and provision virtually everything that Google Cloud Platform can provide. All in one command. Like any other tool from Google, it has a slightly mind-bending learning curve and not-always-up-to-date documentation, but it works and gets the job done. Most of the time I was automating everything from the host and up, using Vagrant, Ansible, docker-compose or kubectl. But automating everything from the host and down – the actual infrastructure – that’s going to be interesting. Continue reading “Automating GCP Infrastructure with Deployment Manager”
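To give a feel for it, here’s a minimal sketch of what a Deployment Manager config looks like (the resource name, zone and image below are made up for illustration, not taken from a real deployment):

```yaml
# deployment.yaml: declares a single f1-micro VM
resources:
- name: demo-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
```

With that file in place, `gcloud deployment-manager deployments create demo --config deployment.yaml` should cover the “all in one command” part.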

Drones and possible blog theme shift

blog theme

When I started this blog my initial impulse was to write about things that I usually work with, or at least have a relation to. I even came up with a list of 50 topics or so. Half of them were about JavaScript, as that was my main focus area back then. And the other half was about NoSQL, as… well, that was the book I was reading.

However, almost immediately I ended up writing about weirdly unrelated stuff: micro-services, distributed apps and DevOps. It had some crossovers with what I do for a living, and sometimes blog topics did become related to my work. But most of the time I simply stumbled upon an interesting name or concept in the realm of distributed applications, learned about it and came up with a blog post afterwards. DevOps and distributed apps became a hobby, and therefore it was relatively easy to sacrifice a noticeable amount of sleep hours to it. Continue reading “Drones and possible blog theme shift”

Quick intro to helm – a package manager for Kubernetes

I suddenly realized that I haven’t blogged about Kubernetes for quite a while. But there’s so much happening in that area! For instance, even though creating Kubernetes objects from YAML configuration was the true way, it never felt particularly convenient. So here’s the solution – use helm, the package manager for Kubernetes. Continue reading “Quick intro to helm – a package manager for Kubernetes”
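For a quick taste, a typical helm (v2-era) session might look roughly like this (the release and chart names are examples, not from the post):

```shell
helm init                                          # install Tiller into the cluster (helm v2)
helm search nginx                                  # find a chart in the configured repos
helm install --name my-web stable/nginx-ingress    # deploy the chart as a named release
helm ls                                            # list installed releases
helm delete my-web --purge                         # tear the release down completely
```

One command per lifecycle step, instead of a pile of hand-written YAML files fed to kubectl.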

The mystery of package downgrade issue

In the last six or so weeks Microsoft managed to release a whole bunch of .NET Core 2.1 SDKs (Preview 2, Release Candidate 1, Early Access, RTM) and we tried all of them. By the end of it my cluster of CI servers looked like a zoo. As everything was done in a hurry, there were servers with RC1 pretending to be Early Access ones. EA servers pretended to be RTM compatible, and the only RTM host we had was pretending to support everything. Don’t look at me funny. It happens.

The problem happened when I tried to clean up the mess: I removed P2, RC1 and EA SDK tags from release branches, deleted prerelease servers, forced the remaining servers to tell exactly who they are and finally rolled out new VMs with the latest and greatest .NET Core SDK 2.1 installed. Naturally, the very first build failed.

Continue reading “The mystery of package downgrade issue”

Service mesh implemented via iptables

Imaginary distributed app with services plugged into the service mesh

So last time I mentioned that another Kubernetes-compatible service mesh – Conduit – has chosen a different approach to solving the problem. Instead of enabling the mesh at machine level via e.g. the http_proxy env variable, it connects k8s pods or deployments to it one by one. I really like the kind of ideas that make a 180° turn on a problem, so naturally I wanted to see how exactly they did that. Continue reading “Service mesh implemented via iptables”
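The underlying mechanism can be sketched with a couple of iptables rules in the pod’s network namespace (the port numbers and the proxy’s UID below are illustrative – in the real thing they’re set up by an init container):

```shell
# Redirect all inbound TCP traffic to the sidecar proxy's inbound port
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 4143

# Redirect the app's outbound TCP traffic to the proxy's outbound port,
# skipping packets produced by the proxy itself (matched by its UID)
# to avoid an infinite redirect loop
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner 2102 -j REDIRECT --to-port 4140
```

Because the rules live inside the pod’s own network namespace, nothing else on the host is affected – which is exactly what makes the per-pod approach possible.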

Playing with a service mesh

Imaginary distributed app with service mesh node per host

I was looking for something new to play with the other day and somehow ended up with a thing called a service mesh. A pretty interesting concept, I can tell you. Not game-changing or world-peace-bringing, but still a nice intellectual concept with several scenarios where it can make life much simpler. Let’s have a look. Continue reading “Playing with a service mesh”

Debugging .NET Core app from a command line on Linux

command line debugging

A million years ago, way before the ice age, I was preparing a small C++ project for a “Unix Programming” university course and at some point had to debug it from the command line. That was mind-blowing. And surprisingly productive. Apparently, when nothing stands in the way, especially a UI, debugging can become incredibly focused.

Since .NET Framework got its cross-platform twin brother .NET Core, I was looking forward to repeating the trick and debugging a .NET Core app on Ubuntu from the command line. A few days ago it finally happened, and even though it wasn’t a smooth ride, it was quite an interesting experience. So, let’s have a look.

Continue reading “Debugging .NET Core app from a command line on Linux”
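For the impatient, the rough shape of such a session with lldb and the SOS plugin looks like this (the paths, port of the runtime version and the file:line are examples and depend on what’s actually installed):

```shell
dotnet build -c Debug                    # build the app with debug symbols
lldb -- dotnet bin/Debug/netcoreapp2.1/app.dll
# inside lldb:
#   plugin load /usr/share/dotnet/shared/Microsoft.NETCore.App/2.1.0/libsosplugin.so
#   bpmd Program.cs:10                   # managed breakpoint at file:line
#   process launch                       # run until the breakpoint hits
#   clrstack                             # print the managed call stack
```

No windows, no panels – just you and the call stack.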

Sending proactive messages with Microsoft Bot Framework

Futurama rebellion

I was thinking again about that bot that will supposedly monitor unreliable tests for me, and suddenly realized one thing. All the examples I dealt with were dialog based. You know, the user sends the first message, the bot responds, etc. But the bot I’m thinking about is different. The initial conversation indeed starts like a dialog. But once the bot starts monitoring unit test statistics and finds something that I should take a look at, he needs to talk first! Microsoft calls such a scenario sending proactive messages, and there are a few tricks to make that possible. Continue reading “Sending proactive messages with Microsoft Bot Framework”

Playing with Microsoft Bot Framework

Little Bender

Part of my job description is our CI/CD, and it kind of implies that I’m interested in keeping the build green. It doesn’t mean that I immediately jump in whenever some unit test fails, but I’m definitely keeping an eye on unreliable ones.

Whenever the master branch stays red long enough, this is what starts to happen to each failed test in it:

  1. Look up the test’s failure history in Google BigQuery (select Name, Result, count(*)...).
  2. If the test behaves like a random results generator, create a case for it.
  3. Skip the test in the master branch and put the case number as the reason.
  4. Find out who created the test (git blame) and assign it back to the author.

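The query in step 1 is roughly of this shape (the dataset, table and test names here are hypothetical stand-ins, not the real schema):

```sql
SELECT Name, Result, COUNT(*) AS runs
FROM ci.test_results
WHERE Name = 'SomeFlakyTest'
GROUP BY Name, Result
ORDER BY runs DESC
```

If the counts split more or less evenly between passed and failed, the test is the random results generator from step 2.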
Pretty simple. And boring. I could automate that, but it’s not always clear who the author of the test is. After all, people resign, update each other’s tests, refactor and destroy git history on special occasions. I was thinking about doing something with machine learning to solve that, but it feels like overkill. Creating a bot, on the other hand, that would ask me to double-check when it’s uncertain, sounds more interesting and actually doable. Even if I’m never going to finish it.

However, I’ve never written any bots before, so for starters I’d like to check what it actually feels like. Continue reading “Playing with Microsoft Bot Framework”

Caveman’s brief look into modern front-end

modern front-end

Well, it might seem surprising, given what this blog is usually about, but during most of my career my main focus was… front-end development. Yup, JavaScript and friends. It wasn’t the only thing I did, but it was definitely the biggest one. After moving to Canada the focus shifted a little: I still do occasional front-end tasks for our web project, which started back in 2009, but for the last two years I’ve basically been on the server side. Continue reading “Caveman’s brief look into modern front-end”