Extending Deployment Manager with Type Providers

It’s been just a few weeks since I complained that Google’s Deployment Manager (DM) doesn’t support its own latest Cloud Functions API, and then I accidentally found an alternative way to use it. The thing is, if you have a RESTful, CRUD-like API and an OpenAPI specification for it, you can register it as a DM type provider and use it almost like any other type from inside a YAML configuration. Cloud Functions API v1 does have such a specification, so in fact I can use it with DM.
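
Just to show the shape of it, here’s a rough sketch of what calling such a registered type from a config might look like. The project, type provider name, collection and property names below are all made up for illustration; the exact properties depend on the OpenAPI document you registered.

```yaml
# A resource backed by a registered type provider (all names here are illustrative).
# The type reference is roughly {project}/{type-provider}:{collection path of the API}.
resources:
- name: my-function
  type: my-project/cloudfunctions-type:projects.locations.functions
  properties:
    # roughly the create-request parameters of the underlying API;
    # exact property names come from the OpenAPI specification
    parent: projects/my-project/locations/us-central1
    sourceArchiveUrl: gs://my-bucket/function.zip
    entryPoint: handler
    httpsTrigger: {}
```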

Continue reading “Extending Deployment Manager with Type Providers”

Few bugs (or features?) I managed to find in Google Cloud Platform so far

Working closely with GCP’s Deployment Manager recently, it was really hard not to notice that Google sometimes… ships bugs. Seriously. Not that many (I definitely introduced more myself), but still enough to stumble across them now and then. Within a month I found about four of the most obvious ones, and so did the other members of my team, so bugs in GCP are not that uncommon. So, let’s have a look at a few.

Continue reading “Few bugs (or features?) I managed to find in Google Cloud Platform so far”

Python templates in GCP Deployment Manager

Imagine we have a Deployment Manager config file that creates a virtual machine from a certain image and assigns an ephemeral public IP address to it. Something like this:
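
(A minimal sketch of that kind of config; the name, zone and image below are placeholders rather than the post’s actual values.)

```yaml
resources:
- name: my-vm                      # placeholder VM name
  type: compute.v1.instance
  properties:
    zone: europe-west1-b
    machineType: zones/europe-west1-b/machineTypes/f1-micro
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:               # an access config with no fixed address
      - name: External NAT         # gives the VM an ephemeral public IP
        type: ONE_TO_ONE_NAT
```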

If I decided to create five more VMs similar to this one, I’d probably have to copy-paste the config, changing just the tiny pieces: the name, and probably the zone and the image.

However, Deployment Manager supports Jinja and Python templates, so we can move the repetitive blocks into those, leaving only the customizable parts on the surface. Let’s see how it works for Python. Continue reading “Python templates in GCP Deployment Manager”
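
To give a flavour of what that looks like, here’s a rough sketch of such a Python template. The file name, property names and resource shape are assumptions for illustration, not the post’s actual template; what is real is the GenerateConfig(context) entry point, context.properties and context.env that Deployment Manager provides.

```python
# vm_template.py - a minimal sketch of a Deployment Manager Python template
def GenerateConfig(context):
    """Builds a single VM resource from the properties passed in by the config."""
    zone = context.properties['zone']
    resources = [{
        'name': context.env['name'],   # resource name comes from the calling config
        'type': 'compute.v1.instance',
        'properties': {
            'zone': zone,
            'machineType': 'zones/%s/machineTypes/f1-micro' % zone,
            'disks': [{
                'deviceName': 'boot',
                'boot': True,
                'autoDelete': True,
                'initializeParams': {
                    'sourceImage': context.properties['image'],
                },
            }],
            'networkInterfaces': [{
                'network': 'global/networks/default',
                'accessConfigs': [{
                    'name': 'External NAT',
                    'type': 'ONE_TO_ONE_NAT',
                }],
            }],
        },
    }]
    return {'resources': resources}
```

The top-level config then just imports this file and declares a handful of resources with type: vm_template.py, each passing its own zone and image as properties.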

Automating GCP Infrastructure with Deployment Manager

Deployment Manager

I don’t know how or why, but even though for the last couple of years I’ve been spending at least a few hours a week doing something with Google Cloud Platform, I never managed to notice that it has its own tool for automating infrastructure creation. You know, creating VMs, networks, storage, accounts and other resources. But it’s there, right in the main menu.

The tool is called Deployment Manager, and it can build and provision virtually everything that Google Cloud Platform has to offer. All in one command. Like any other tool from Google, it has a slightly mind-bending learning curve and not-always-up-to-date documentation, but it works and gets the job done. Most of the time I’ve been automating everything from the host and up, using Vagrant, Ansible, docker-compose or kubectl. But automating everything from the host and down, the actual infrastructure, that’s going to be interesting. Continue reading “Automating GCP Infrastructure with Deployment Manager”
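
That “one command” is a single gcloud call against a config file; the deployment and file names below are just placeholders.

```sh
# create the whole stack described in config.yaml in one go...
gcloud deployment-manager deployments create my-deployment --config config.yaml

# ...and later evolve it in place with the same config
gcloud deployment-manager deployments update my-deployment --config config.yaml
```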

Quick intro to helm – a package manager for Kubernetes

I suddenly realized that I haven’t blogged about Kubernetes for quite a while. But there’s so much happening in that area! For instance, even though creating Kubernetes objects from YAML configuration has always been the true way, it never felt all that convenient. So here’s the solution: use helm, the package manager for Kubernetes. Continue reading “Quick intro to helm – a package manager for Kubernetes”
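
To give a rough idea of what that buys you, the day-to-day workflow boils down to a handful of commands like these (Helm 2-era syntax; the chart and release names are just examples):

```sh
helm init                                   # install Tiller, Helm 2's in-cluster component
helm search redis                           # look up a chart in the configured repos
helm install stable/redis --name my-redis   # deploy the chart as a named release
helm ls                                     # list installed releases
helm delete my-redis                        # tear the release down
```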

The mystery of package downgrade issue

In the last six or so weeks Microsoft managed to release a whole bunch of .NET Core 2.1 SDKs (Preview 2, Release Candidate 1, Early Access, RTM), and we tried all of them. By the end of those weeks my cluster of CI servers looked like a zoo. As everything was done in a hurry, there were servers with RC1 pretending to be Early Access ones, EA servers pretending to be RTM compatible, and the only RTM host we had pretending to support everything. Don’t look at me funny. It happens.

The problem happened when I tried to clean up the mess: I removed the P2, RC1 and EA SDK tags from release branches, deleted the prerelease servers, forced the remaining servers to tell exactly who they are, and finally rolled out new VMs with the latest and greatest .NET Core SDK 2.1 installed. Naturally, the very first build failed.

Continue reading “The mystery of package downgrade issue”

Service mesh implemented via iptables

Imaginary distributed app with services plugged into the service mesh

So last time I mentioned that another Kubernetes-compatible service mesh, Conduit, has chosen a different approach to solving the problem. Instead of enabling the mesh at the machine level via e.g. the http_proxy environment variable, it connects k8s pods or deployments to it one by one. I really like ideas that make a 180° turn on a problem, so naturally I wanted to see how exactly they did that. Continue reading “Service mesh implemented via iptables”
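
Without spoiling the details, the per-pod interception trick itself can be sketched with a few NAT rules along these lines (the ports and the proxy’s UID here are illustrative, not necessarily the ones Conduit actually uses):

```sh
# Inside the pod's network namespace: hand all TCP traffic to the sidecar proxy
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports 4143   # inbound  -> proxy inbound port
iptables -t nat -A OUTPUT     -p tcp -j REDIRECT --to-ports 4140   # outbound -> proxy outbound port

# ...but let the proxy's own traffic out untouched, so it doesn't redirect into itself
iptables -t nat -I OUTPUT -m owner --uid-owner 2102 -j RETURN
```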

Playing with a service mesh

Imaginary distributed app with service mesh node per host

I was looking for something new to play with the other day and somehow ended up with the thing called a service mesh. Pretty interesting concept, I can tell you. Not game changing or world peace bringing, but still a nice intellectual concept with several scenarios where it can make life much simpler. Let’s have a look. Continue reading “Playing with a service mesh”

Debugging .NET Core app from a command line on Linux

command line debugging

A million years ago, way before the ice age, I was preparing a small C++ project for a “Unix Programming” university course and at some point had to debug it from the command line. That was mind-blowing. And surprisingly productive. Apparently, when nothing stands in the way, especially a UI, debugging becomes incredibly focused.

Since .NET Framework got its cross-platform twin brother, .NET Core, I was looking forward to repeating the trick and debugging a .NET Core app on Ubuntu from the command line. A few days ago it finally happened, and even though it wasn’t a smooth ride, it was quite an interesting experience. So, let’s have a look.
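
I won’t retell the whole post here, but one possible route is lldb with the SOS plugin that ships with the runtime; the process name, runtime version and plugin path below are assumptions that will differ per machine.

```sh
# attach lldb to a running .NET Core process
lldb -p $(pidof dotnet)

# inside lldb: load SOS from the runtime's folder (path and version vary), then poke around
(lldb) plugin load /usr/share/dotnet/shared/Microsoft.NETCore.App/2.1.0/libsosplugin.so
(lldb) clrthreads    # list managed threads
(lldb) clrstack      # managed call stack of the selected thread
```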

Continue reading “Debugging .NET Core app from a command line on Linux”

Sending proactive messages with Microsoft Bot Framework

Futurama rebellion

I was thinking again about that bot, the one that’s supposed to monitor unreliable tests for me, and suddenly realized one thing. All the examples I’ve dealt with were dialog based. You know, the user sends the first message, the bot responds, and so on. But the bot I’m thinking about is different. The initial conversation indeed starts like a dialog. But once the bot starts monitoring unit test statistics and finds something I should take a look at, he needs to talk first! Microsoft calls such a scenario sending proactive messages, and there are a few tricks to making that possible. Continue reading “Sending proactive messages with Microsoft Bot Framework”