Well, as I promised last time (a long, long time ago), let’s have a look at GCP’s external load balancer now. While it shares some features with the internal load balancer, it has a few unique traits as well:
An ELB is meant to be accessed from the outside, and “outside” is kind of global, so an ELB tends to use global as well as regional building blocks.
It knows about the existence of HTTP(S) and can use that knowledge to route traffic to more than one backend service, using a URL map.
It also acts as a proxy, so if e.g. an SSL ELB is used, it terminates the SSL session well before traffic hits the actual instances.
At the moment of writing, GCP supports four breeds of ELB: HTTP, HTTPS, SSL Proxy and TCP Proxy. The one that seems to be the most complex is HTTPS, so for today’s dissecting session let’s pick that one over the others.
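Before dissecting it piece by piece, here’s a rough sketch of how the HTTPS flavour’s building blocks chain together, written as a Deployment Manager style config. The resource names are mine, and the instance group and SSL certificate it refers to are not shown, so treat it as an illustration rather than a deployable file:

```yaml
# Rough sketch of the HTTPS ELB building blocks:
# forwarding rule -> target HTTPS proxy -> URL map -> backend service.
# Resource names are made up; web-instance-group and web-ssl-cert
# would have to be defined elsewhere.
resources:
- name: web-health-check
  type: compute.v1.httpHealthCheck
  properties:
    port: 80
- name: web-backend                  # backend service fed by an instance group
  type: compute.v1.backendService
  properties:
    protocol: HTTP
    healthChecks:
    - $(ref.web-health-check.selfLink)
    backends:
    - group: $(ref.web-instance-group.selfLink)
- name: web-url-map                  # maps URLs to backend services
  type: compute.v1.urlMap
  properties:
    defaultService: $(ref.web-backend.selfLink)
- name: web-https-proxy              # terminates SSL, then consults the URL map
  type: compute.v1.targetHttpsProxy
  properties:
    urlMap: $(ref.web-url-map.selfLink)
    sslCertificates:
    - $(ref.web-ssl-cert.selfLink)
- name: web-forwarding-rule          # the global entry point: external IP, port 443
  type: compute.v1.globalForwardingRule
  properties:
    target: $(ref.web-https-proxy.selfLink)
    portRange: 443
```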
It’s interesting how some tools that try to look simpler and more user friendly actually make things way more complex. Back in the day it was like this with git, when I had to read the ‘Pro Git’ book and switch to the command line before GUI clients finally started to make sense. It was the same with Kubernetes, when it took switching to kubectl apply and YAML configurations in order to make sense of kubectl run and kubectl expose.
It’s been just a few weeks since I complained that Google’s Deployment Manager (DM) doesn’t support its own latest Cloud Functions API, when I accidentally found an alternative way to use it. The thing is, if you have a RESTful CRUD-like API and an OpenAPI specification for it, you can register it as a DM type provider and use it almost like any other type from inside a YAML configuration. Cloud Functions API v1 does have such a specification, so in fact I could use it with DM.
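The end result looks roughly like the sketch below. It’s not a verbatim working config: in practice the type provider is usually registered as a separate step first, auth input mappings are omitted, and the project id, bucket and function names are made up.

```yaml
# Sketch: register the Cloud Functions v1 API descriptor as a type provider,
# then use it as if it were a native DM type. Names and properties here are
# illustrative assumptions, not a verbatim config.
resources:
- name: cloudfunctions-v1
  type: deploymentmanager.v2beta.typeProvider
  properties:
    descriptorUrl: https://cloudfunctions.googleapis.com/$discovery/rest?version=v1
- name: hello-function
  # <project>/<type-provider>:<collection path from the API spec>
  type: my-project/cloudfunctions-v1:projects.locations.functions
  properties:
    location: projects/my-project/locations/us-central1
    name: projects/my-project/locations/us-central1/functions/hello
    sourceArchiveUrl: gs://my-bucket/hello.zip
    entryPoint: helloWorld
    httpsTrigger: {}
```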
Working closely with GCP’s Deployment Manager recently, it was really hard not to notice that Google sometimes… makes bugs. Seriously. Not that many (I definitely introduced more), but it’s still enough to stumble across them now and then. I think within a month I found like four of the most obvious ones, and so did the other members of my team, so bugs in GCP are not something uncommon. So, let’s have a look at a few.
I don’t know how or why, but even though for the last couple of years I’ve been spending at least a few hours a week doing something with Google Cloud Platform, I never managed to notice that it has its own tool for automating infrastructure creation. You know, creating VMs, networks, storage, accounts and other resources. But it’s there, right in the main menu.
The tool is called Deployment Manager and it can build and provision virtually everything that Google Cloud Platform can provide. All in one command. Like any other tool from Google, it has a slightly mind-bending learning curve and not always up-to-date documentation, but it works and gets the job done. Most of the time I’ve been automating everything from the host and up, using Vagrant, Ansible, docker-compose or kubectl. But automating everything from the host and down – the actual infrastructure – that’s going to be interesting. Continue reading “Automating GCP Infrastructure with Deployment Manager”
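To give a feel for what “all in one command” means, a minimal config that brings a single VM to life could look roughly like this (the names, zone and image below are just example values I picked for the sketch):

```yaml
# vm.yaml - a minimal Deployment Manager config that creates one VM.
# Resource name, zone, machine type and image are example values.
resources:
- name: playground-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - type: ONE_TO_ONE_NAT      # gives the VM an ephemeral external IP
        name: External NAT
```

Deploying it is a one-liner along the lines of `gcloud deployment-manager deployments create playground --config vm.yaml`.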
However, almost immediately I ended up writing about weirdly unrelated stuff: microservices, distributed apps and DevOps. It had some crossovers with what I do for a living, and sometimes blog topics did become related to my work. But most of the time I simply stumbled upon an interesting name or concept in the realm of distributed applications, learned about it and came up with a blog post afterwards. DevOps and distributed apps became a hobby, and therefore it was relatively easy to sacrifice a noticeable amount of sleep hours to it. Continue reading “Drones and possible blog theme shift”
In the last six or so weeks Microsoft managed to release a whole bunch of .NET Core 2.1 SDKs (Preview 2, Release Candidate 1, Early Access, RTM) and we tried all of them. By the end of those weeks my cluster of CI servers looked like a zoo. As everything was done in a hurry, there were servers with RC1 pretending to be Early Access ones. EA servers pretended to be RTM compatible, and the only RTM host we had was pretending to support everything. Don’t look at me funny. It happens.
The problem happened when I tried to clean up the mess: I removed the P2, RC1 and EA SDK tags from release branches, deleted the prerelease servers, forced the remaining servers to tell exactly who they are, and finally rolled out new VMs with the latest and greatest .NET Core SDK 2.1 installed. Naturally, the very first build failed.
So last time I mentioned that another Kubernetes-compatible service mesh – Conduit – has chosen a different approach to solve the problem. Instead of enabling the mesh at the machine level via e.g. the http_proxy env variable, it connects k8s pods or deployments to it one by one. I really like ideas that make a 180° turn on solving a problem, so naturally I wanted to see how exactly they did that. Continue reading “Service mesh implemented via iptables”
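My understanding of the trick, in a heavily simplified sketch: the mesh’s “inject” step adds a proxy sidecar to the pod, plus an init container that rewrites the pod’s own iptables rules so that all inbound and outbound TCP traffic is transparently redirected to that proxy. The image names, ports, UID and exact iptables arguments below are illustrative assumptions, not the real injected manifest:

```yaml
# Heavily simplified sketch of what a mesh "inject" step adds to a pod spec.
# Image names, ports and iptables details are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  initContainers:
  - name: proxy-init                 # runs once, before the app starts
    image: example/proxy-init:latest
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]           # needed to rewrite iptables in the pod netns
    command: ["sh", "-c"]
    args:
    - |
      # redirect all inbound TCP to the proxy's inbound port
      iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 4143
      # redirect the app's outbound TCP to the proxy's outbound port,
      # skipping traffic that comes from the proxy itself (matched by UID)
      iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner 2102 -j REDIRECT --to-port 4140
  containers:
  - name: app                        # the original, unmodified application
    image: example/demo-app:latest
  - name: proxy                      # the sidecar that actually is the mesh
    image: example/proxy:latest
    securityContext:
      runAsUser: 2102                # matches the uid-owner exclusion above
```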