Configuring External Load Balancer with Deployment Manager

Well, as I promised last time (a long, long time ago), let’s have a look at GCP’s external load balancer now. While it shares some features with the internal load balancer, it has a few unique traits as well:

  1. An ELB is meant to be accessed from the outside, and “outside” is effectively global, so an ELB tends to use global as well as regional building blocks.
  2. It knows about the existence of HTTP(S) and can use that knowledge to route traffic to more than one backend service, using the URL as a map.
  3. It also acts as a proxy, so if e.g. an SSL ELB is used, it terminates the SSL session well before traffic hits the actual instances.

At the time of writing, GCP supports four breeds of ELBs: HTTP, HTTPS, SSL Proxy and TCP Proxy. The one that seems the most complex is HTTPS, so for today’s dissecting session let’s pick that one over the others.

What an HTTPS ELB is made of

If you recall, the internal load balancer looked like this:

Internal Load Balancer Magic explained

Structurally, an HTTPS external load balancer looks pretty much the same as an internal one, but with a few extra components in the middle (the target HTTPS proxy, SSL certificate and URL map):

  1. Global forwarding rule
  2. Target HTTPS proxy
  3. SSL Certificate
  4. URL Map
  5. Backend service with a health check
  6. Regional managed instance group and an instance template.

Something like this:

external load balancer

In a common scenario we also need a few helper resources. The first one is a firewall rule to allow health check traffic in. The other one is an external IP address – not strictly required, since otherwise Google will auto-assign one.

There’s one more interesting fact about the external load balancer. Even though it might serve HTTPS traffic at its public end, internally the traffic can remain unencrypted. Such a configuration sounds cool enough, so this is exactly what we’re going to build today.

As usual, let’s review the building blocks one by one, starting from the bottom – a managed instance group.

Managed instance group and an instance template

There’s nothing unusual about the instance template – it’s a regular Ubuntu instance with an Apache web server on it. Though the instance template declares an external IP address, strictly speaking it’s not required for external load balancing. However, unless we set up some sort of NAT gateway, an external IP address is required for internet access, and our apt-get install instructions will definitely need one.
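To make that concrete, here’s a minimal sketch of what such a template could look like. The resource name, the default network, the Ubuntu image family, the machine type and the startup script are all my assumptions for illustration, not the exact original config:

- name: elb-instance-template
  type: compute.v1.instanceTemplate
  properties:
    properties:
      machineType: f1-micro                # assumed machine type
      disks:
        - deviceName: boot
          boot: true
          autoDelete: true
          initializeParams:
            sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts
      networkInterfaces:
        - network: global/networks/default
          # External IP: not needed by the ELB itself, but needed for apt-get to reach the internet
          accessConfigs:
            - name: External NAT
              type: ONE_TO_ONE_NAT
      metadata:
        items:
          - key: startup-script
            value: |
              #!/bin/bash
              apt-get update
              apt-get install -y apache2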

What’s interesting is that, unlike last time, the managed instance group (MIG) is regional, and we’ll be creating two instances in the chosen region. Not only will that make load balancing look like actual balancing, it also introduces high-availability features. The MIG will try to spread its instances across the whole region, so if one of its zones fails, we’ll still have surviving instances in another.

Finally, we tell the MIG through its settings that its instances expose port 80, hereafter named http.
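A regional MIG referring to that template might look roughly like this – the region, resource names and target size of two instances are assumptions of mine:

- name: elb-mig
  type: compute.v1.regionInstanceGroupManager
  properties:
    region: europe-west1                    # assumed region
    baseInstanceName: elb-instance
    instanceTemplate: $(ref.elb-instance-template.selfLink)
    targetSize: 2
    namedPorts:
      - name: http                          # "http" maps to port 80 on the instances
        port: 80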

Backend Service

A group of instances will make us a service, so we’ll declare it as one. In fact, a backend service can refer to more than one MIG, e.g. one in each region, giving yet another level of high availability. But who needs that now.

The last three lines of the configuration define how we are going to use this service. Here it’s designed for EXTERNAL load balancing, which affects the choices of session affinity and of backend service and instance group locations. We also specify which portName will be exposed, and its protocol.
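As a sketch, such a backend service could be declared like this, with the last three lines being the ones discussed above. Resource names are assumed, and the health check it refers to is defined in the next snippet:

- name: elb-backend-service
  type: compute.v1.backendService
  properties:
    healthChecks:
      - $(ref.elb-health-check.selfLink)
    backends:
      - group: $(ref.elb-mig.instanceGroup)   # the regional MIG created earlier
    portName: http
    protocol: HTTP
    loadBalancingScheme: EXTERNAL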

Because the backend service requires a health check, and Google’s default firewall settings (ingress: deny all) will block health check traffic, we have to add both an HTTP health check resource and a firewall rule for it.
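A sketch of those two helpers follows. The resource names and network are mine; 130.211.0.0/22 and 35.191.0.0/16 are Google’s documented source ranges for load balancer health checks:

- name: elb-health-check
  type: compute.v1.httpHealthCheck
  properties:
    port: 80
    requestPath: /

- name: elb-health-check-firewall
  type: compute.v1.firewall
  properties:
    network: global/networks/default        # assumed network
    sourceRanges:
      - 130.211.0.0/22
      - 35.191.0.0/16
    allowed:
      - IPProtocol: tcp
        ports:
          - '80'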

URL Map

Now we’re getting to something new. A URL map behaves like a router between incoming traffic and the actual backend service that’s going to handle it. Because the URL map knows what HTTP is, it can extract the path component of the URL and choose which of the backend services should handle the request. For instance, /media requests can go to one service, /api to another, etc.

As we have only one backend, we’ll just specify defaultService and route all traffic to it, regardless of the URL.
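In Deployment Manager terms that boils down to a one-property resource, something along these lines (names are assumed, matching the earlier snippets):

- name: elb-url-map
  type: compute.v1.urlMap
  properties:
    defaultService: $(ref.elb-backend-service.selfLink)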

HTTPS Target Proxy and SSL Certificate

We’re almost there. The targetHttpsProxy resource is the one that terminates HTTPS traffic using the sslCertificate provided. All it needs are just two resources: an sslCertificate and a urlMap that will take the traffic from there.

The sslCertificate resource is just another Deployment Manager resource which holds, you guessed it, an SSL certificate. I simply generated a self-signed certificate and copy-pasted its private key and certificate into this YAML:
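The original keys are obviously not reproduced here; the snippet below is only a sketch of the shape of those two resources, with placeholder PEM bodies and assumed names:

- name: elb-ssl-certificate
  type: compute.v1.sslCertificate
  properties:
    certificate: |
      -----BEGIN CERTIFICATE-----
      ...placeholder: paste the self-signed certificate here...
      -----END CERTIFICATE-----
    privateKey: |
      -----BEGIN PRIVATE KEY-----
      ...placeholder: paste the private key here...
      -----END PRIVATE KEY-----

- name: elb-https-proxy
  type: compute.v1.targetHttpsProxy
  properties:
    urlMap: $(ref.elb-url-map.selfLink)
    sslCertificates:
      - $(ref.elb-ssl-certificate.selfLink)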

Global forwarding rule and external IP address

Finally, the entry point to all of this beauty – a global forwarding rule and an external IP address for it, either predefined or auto-assigned.

It takes TCP traffic on port 443 from the outside world and passes it to the targetHttpsProxy and below, thus turning the resources we’ve created so far into a load balancer.
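A sketch of such a forwarding rule, assuming the proxy and address names used in the other snippets:

- name: elb-forwarding-rule
  type: compute.v1.globalForwardingRule
  properties:
    IPAddress: $(ref.elb-address.address)   # the static address declared below
    IPProtocol: TCP
    portRange: '443'
    target: $(ref.elb-https-proxy.selfLink)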

The external static IP address is a tiny two-liner with no particular magic in it. I’m not even choosing which IP address to use – whatever Google picks is fine, I just want it to have a name.
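Which, as a sketch, is literally this (only the name is my choice):

- name: elb-address
  type: compute.v1.globalAddress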

And finally…

After putting all the building blocks into elb_https.yaml, we can deploy the resources and wait. The deployment command executes pretty fast, but it might take five or even more minutes until our external load balancer responds with HTTP 200.
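For reference, the deployment boils down to a single gcloud call; the deployment name here is an arbitrary choice of mine:

gcloud deployment-manager deployments create elb-https --config elb_https.yaml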

Eventually it happens. Health checks turn green:

ELB settings

MIG recognizes backing instances as healthy:

More ELB settings

It all starts to work:

ELB is working

Conclusion

I was going to say something like “You see, that was easy”, but who am I kidding – it wasn’t. However, the external load balancer’s building blocks, their order and purpose kind of make sense, and that’s good enough. I’m definitely keeping this demo ELB configuration as a template in case I need to create one with Deployment Manager in the future.

2 thoughts on “Configuring External Load Balancer with Deployment Manager”

  1. Thank you so much, this helped me a lot. I would like to ask you a follow-up question.

    Let’s say we want to use Google certificates and automatically associate the static IP with a subdomain also managed by Google. Is it possible to achieve this in a clean way, or could you point me to the right resource?

    1. It’s been a while since I last touched Deployment Manager, and the set of types it supports constantly changes, but it should be possible to achieve what you want without too many hacks. I’ve checked the gcloud deployment-manager types list output, and there are no types for domains or DNS record sets. However, Google does have APIs for those, and you can register them via type providers (https://codeblog.dotsandbrackets.com/type-providers/) and use the newly imported types like any other type – instance, sslCertificate, etc. I quickly found the DNS API reference – https://cloud.google.com/dns/docs/reference/v1 – and that’s the entry point to the API descriptor URL, which a type provider needs, as well as the reference for all existing types and their parameters. Basically, you’d come up with the list of API calls you’d need to execute in order to create the needed resources, and then turn that list of calls into a declarative list of DM resources, importing missing resource types as type providers along the way.
      Unfortunately, I can’t be more specific without actually trying to implement it myself, so this is the best I can do for now.
