Configuring Internal Load Balancer with Deployment Manager

It’s interesting how some tools that try to look simpler and more user friendly actually make things way more complex. Back in the day it was like this with Git, when I had to read the ‘Pro Git’ book and switch to the command line before GUI clients finally started to make sense. Then it was Kubernetes, where it took switching to kubectl apply and YAML configurations to make sense of kubectl run and kubectl expose.

And the same thing is happening now with GCP’s load balancers. Putting aside the question of why there are so many of them, it’s really hard to see what they are made of. All these wizards and checkboxes completely hide the picture of what exactly those load balancers will consist of and why.

Create load balancer

On the other hand, as soon as you go a step further and try to create them manually via Deployment Manager, their internal architecture and logic suddenly start to make sense. Take the Internal Load Balancer, for example.

What is Internal Load Balancer (ILB)

The idea behind the ILB is very simple: we’ll be distributing requests between multiple instances, but both the requests and the instances must stay within the same network. Hence, internal. It’s convenient when one micro-service needs to talk to another, load-balanced one, and they don’t need the ‘external’ internet for that.

Here is how an extremely simplified diagram of an ILB would look:

Internal Load Balancer's magic

It’s more or less clear that the IP address is one resource and the instances are another, but what is the magic part made of?

Unveiling the magic.

Well, building an ILB is actually quite simple. The secret ingredients are the following:

  1. a forwarding rule,
  2. a backend service with a health check,
  3. a managed instance group and an instance template.

Plus, there might be a static IP address in front of all of that. In order for things to make more sense, let’s have a look at the components in reverse order.

Managed instance group and an instance template.

A managed instance group is a resource that maintains a number of instances by creating them from a template. E.g. if you need 10 identical Apache web servers, you don’t need to create and configure 10 instances by hand. All you need is an instance template, describing how exactly an instance should be configured, and a managed instance group (MIG) that points to the template and whose targetSize property is set to 10. It’s the MIG that’s going to be connected to the subsequent parts of the load balancer, not the instances.

We could also have used an unmanaged instance group, but in that scenario we’d indeed have had to create all those instances manually, and unlike a MIG, such a group can’t be autoscaled (autoscaling is controlled by yet another resource, an autoscaler).

So here it is: a managed instance group and an instance template in Deployment Manager:
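A minimal sketch of how that part could look (the names instance-template and managed-instance-group are the ones referenced later on; the us-central1 region, the f1-micro machine type, the Debian image and the tiny Apache startup script are illustrative assumptions, not necessarily the original values):

    resources:
      # The template every instance in the group will be created from
      - name: instance-template
        type: compute.v1.instanceTemplate
        properties:
          properties:
            machineType: f1-micro
            disks:
              - deviceName: boot
                boot: true
                autoDelete: true
                initializeParams:
                  sourceImage: projects/debian-cloud/global/images/family/debian-11
            networkInterfaces:
              - network: global/networks/default
            metadata:
              items:
                - key: startup-script
                  value: |
                    #!/bin/bash
                    apt-get update && apt-get install -y apache2

      # The MIG keeps targetSize instances alive, built from the template
      - name: managed-instance-group
        type: compute.v1.regionInstanceGroupManager
        properties:
          region: us-central1
          baseInstanceName: ilb-instance
          instanceTemplate: $(ref.instance-template.selfLink)
          targetSize: 2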

One more thing: internal load balancing works across a region, which is why the resource is a regionInstanceGroupManager. And it’s an Instance Group Manager, not just an Instance Group, because… well, it seems you can’t create the latter directly. Both IGMs and IGs do exist, though, and they are, in fact, different resources.

A backend service with a health check

A backend service is a logical grouping of multiple instance groups that serve the same purpose: they provide the same service. This is also the place where you configure how load balancing of the traffic behaves: whether it is INTERNAL or EXTERNAL, what the protocol is, how exactly the load should be distributed, and so on.

A backend service also needs a health check, so that it knows whether the target instances are still capable of receiving traffic. The sad, subtle part is that default firewall settings will block incoming health check traffic, so we’ll need an extra firewall rule to let it through.

Our load balancer is INTERNAL and the traffic protocol is TCP (one of the only two protocols supported by the ILB), so let’s throw more YAML into our Deployment Manager configuration:
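Here’s a sketch of that part, continuing the same resources list (the names health-check, backend-service and allow-health-check, as well as port 80, are assumptions; 130.211.0.0/22 and 35.191.0.0/16 are the ranges GCP health check probes come from):

      # A simple TCP health check on the port the instances serve
      - name: health-check
        type: compute.v1.healthCheck
        properties:
          type: TCP
          tcpHealthCheck:
            port: 80

      # The backend service that groups the MIG behind one logical service
      - name: backend-service
        type: compute.v1.regionBackendService
        properties:
          region: us-central1
          protocol: TCP
          loadBalancingScheme: INTERNAL
          healthChecks:
            - $(ref.health-check.selfLink)
          backends:
            - group: $(ref.managed-instance-group.instanceGroup)

      # Default firewall settings block the probes, so let them through
      - name: allow-health-check
        type: compute.v1.firewall
        properties:
          network: global/networks/default
          sourceRanges:
            - 130.211.0.0/22
            - 35.191.0.0/16
          allowed:
            - IPProtocol: tcp
              ports:
                - "80"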

Like the regionInstanceGroupManager before, it’s a regionBackendService, and it resides in the same region. The health check properties and the load balancing configuration are left at their defaults, so the YAML at least looks readable. And the firewall rule at the end will allow the health checks to talk to the instances.

A forwarding rule with IP address

A forwarding rule is the thing that connects an IP address on one end with a receiver, such as a backend service, on the other. It’s probably the easiest thing to set up and configure, so let’s also give it a static IP address, so that the configuration looks a little bit more realistic:
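A sketch of those two resources (the names internal-address and forwarding-rule, as well as the default network and subnetwork in us-central1, are assumptions):

      # Reserve a static internal IP in the same subnetwork
      - name: internal-address
        type: compute.v1.address
        properties:
          region: us-central1
          addressType: INTERNAL
          subnetwork: regions/us-central1/subnetworks/default

      # Point the reserved address at the backend service
      - name: forwarding-rule
        type: compute.v1.forwardingRule
        properties:
          region: us-central1
          loadBalancingScheme: INTERNAL
          IPProtocol: TCP
          ports:
            - "80"
          network: global/networks/default
          subnetwork: regions/us-central1/subnetworks/default
          IPAddress: $(ref.internal-address.address)
          backendService: $(ref.backend-service.selfLink)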

The final look

This is how our internal load balancer looks in the end:

Internal Load Balancer Magic explained

It still might look confusing, but there’s much less magic in it now. We can deploy the resulting YAML via Deployment Manager, create a test instance inside the default network (where the ILB is configured to live), and try to run a few HTTP requests against the IP address we reserved in the last step (gcloud compute addresses list):

Deploy:
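Assuming the configuration above was saved as ilb.yaml and the deployment is called internal-lb (both names are just placeholders):

    gcloud deployment-manager deployments create internal-lb --config ilb.yaml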

Get internal IP:
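The address reserved above (internal-address in this sketch) will show up in the list along with its internal IP:

    gcloud compute addresses list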

Fire a few requests from the test VM:
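Assuming the reserved address came out as, say, 10.128.0.5 (the actual value will differ), a few curl calls from the test VM should be answered by the instances behind the ILB:

    # replace 10.128.0.5 with the address from the previous step
    curl http://10.128.0.5/
    curl http://10.128.0.5/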

Yup, it all works. The GCP Console also likes this load balancer a lot:

GCP Internal Load Balancer

Believe it or not, that’s one of the simplest load balancers GCP has. I think next time we’ll take a look at the external HTTP load balancer, and that will be lots of fun. Slightly disturbing, but fun.

2 thoughts on “Configuring Internal Load Balancer with Deployment Manager”

  1. Hi Pav,

    I am getting errors for the changes below from your article. Please let me know if this is expected to work.
    Since backends: is an array of instance groups, how is this supposed to work with the name of the instance group in the managed-instance-group?

    - name: backend-service
      type: compute.v1.regionBackendService
      properties:
        backends:
          - group: $(ref.managed-instance-group.instanceGroup)

    1. Hi dsrini,
      sorry, I’m not following what exactly doesn’t work for you. What kind of error are you getting and what was the change you made to the configuration from the article?
