I don’t know how or why, but even though for the last couple of years I’ve been spending at least a few hours a week doing something with Google Cloud Platform, I never noticed that it has its own tool for automating infrastructure creation. You know, creating VMs, networks, storage, accounts and other resources. But it’s there, right in the main menu.
The tool is called Deployment Manager and it can build and provision virtually everything that Google Cloud Platform can provide. All in one command. Like any other tool from Google, it has a slightly mind-bending learning curve and not-always-up-to-date documentation, but it works and gets the job done. Most of the time I was automating everything from the host and up, using Vagrant, Ansible, docker-compose or kubectl. But automating everything from the host and down – the actual infrastructure – that’s going to be interesting.
How it works
I find it somewhat similar to docker-compose and Ansible. We describe the desired state of the infrastructure – VMs, networks, firewalls, etc. – in a file and then tell DM to make that happen. DM treats the whole configuration as a single deployable unit, so we can check its status later, update it or even delete the whole thing.
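Just to make that lifecycle concrete, this is roughly how working with a deployment looks from the command line (names like my-deployment and config.yaml are, of course, placeholders):

```sh
# Create a deployment from a configuration file
gcloud deployment-manager deployments create my-deployment --config config.yaml

# Check its status and the manifest it was created from
gcloud deployment-manager deployments describe my-deployment

# Push an updated configuration to the same deployment
gcloud deployment-manager deployments update my-deployment --config config.yaml

# Tear down the deployment and the resources it owns
gcloud deployment-manager deployments delete my-deployment
```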
An interesting, though not surprising, quirk of DM is that it won’t create a requested resource – a virtual machine, for example – if it already exists. That’s the default behaviour; it’s configurable and plays well with the idea of desired state configuration. However, it can also be confusing at times, when, for instance, one deployment accidentally acquires another one’s VM just because they use an identical host name. That actually happens.
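If I remember correctly, this behaviour is controlled by the deployment’s create policy, something along these lines (treat it as a sketch and double-check with `gcloud deployment-manager deployments create --help` for the exact flag values):

```sh
# Default: reuse an existing resource with the same name, or create it
gcloud deployment-manager deployments create my-deployment \
    --config config.yaml --create-policy CREATE_OR_ACQUIRE

# Only acquire resources that already exist, never create new ones
gcloud deployment-manager deployments create my-deployment \
    --config config.yaml --create-policy ACQUIRE
```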
Simple configuration example
Here’s what declaring a single virtual machine looks like:
```yaml
resources:
- name: tiny-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/ubuntu-os-cloud/global/images/family/ubuntu-1804-lts
    networkInterfaces:
    - network: global/networks/default
```
Then, assuming you have your Cloud SDK installed and Deployment Manager API enabled (as far as I remember, it’s enabled by default), here’s how we can deploy this configuration:
```sh
gcloud deployment-manager deployments \
    create my-deployment \
    --config simple-config.yaml
# ...
# Create operation operation-1533697749330-572e3d58aa851-2623ddda-996fe75b completed successfully.
# NAME     TYPE                 STATE      ERRORS  INTENT
# tiny-vm  compute.v1.instance  COMPLETED  []
```
View results
Now we can see that the deployment was indeed created:
```sh
gcloud deployment-manager deployments list
# NAME           LAST_OPERATION_TYPE  STATUS  DESCRIPTION  MANIFEST                ERRORS
# my-deployment  insert               DONE                 manifest-1533697749667  []
```
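If we want per-resource detail rather than the deployment-level summary, DM can also list the resources that belong to a particular deployment (a quick sketch):

```sh
# List the resources owned by this deployment
gcloud deployment-manager resources list --deployment my-deployment

# Describe one of them, e.g. our VM
gcloud deployment-manager resources describe tiny-vm --deployment my-deployment
```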
Plus, the virtual machine it was supposed to create is also there:
```sh
gcloud compute instances list
# NAME     ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
# tiny-vm  us-central1-a  f1-micro                   10.128.0.2   35.184.94.132  RUNNING
```
Small improvement: set ephemeral external IP
There’s a minor improvement we can make. The current configuration creates a VM without an external IP address, which, among other things, means you won’t be able to SSH into it. It’s very easy to fix, though. If we add the following accessConfigs block to the networkInterfaces section, all firewall-permitted connections from the outside world become possible. As we’re using the ‘default’ network, that includes SSH.
```yaml
# ...
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
```
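Once the deployment is updated with this change and the VM gets its external address, connecting to it should be as simple as this (a sketch, assuming the zone from the configuration above):

```sh
# Push the updated configuration to the existing deployment...
gcloud deployment-manager deployments update my-deployment --config simple-config.yaml

# ...and SSH into the VM via its freshly assigned external IP
gcloud compute ssh tiny-vm --zone us-central1-a
```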
More complex scenario
Creating a single virtual machine from a configuration file is a nice example for a blog post, but in real life things will be more complex. For instance, a configuration file will contain multiple resources, some of which depend on others and therefore should be created after their dependencies. By default Deployment Manager will try to create everything in parallel, so we need an extra trick to introduce some sort of resource hierarchy.
There are at least two ways to do that: through metadata and through references.
Explicit dependency through metadata
That’s both the easiest and, to my taste, a slightly suspicious way to set up a resource hierarchy. For instance, if our configuration has two resources – a VM and a network – and the VM must be created after the network, we could do something like this:
```yaml
resources:
- name: not-that-tiny-network
  type: compute.v1.network
  # ...
- name: tiny-vm
  type: compute.v1.instance
  metadata:
    dependsOn:
    - not-that-tiny-network
  # ...
```
However, if one resource depends on another, it probably also uses some of that resource’s attributes: a link, an IP address or a name. It’s much more logical to reference those attributes directly, which in turn introduces the dependencies and the hierarchy.
Dependency through references
Let’s have a look at the following example. Assume we need a VM connected to a custom network with SSH allowed. In terms of GCP resources this means we need to create a network, a firewall rule and the VM itself. Obviously, the network should come first, and only when it’s ready should we continue with the VM and the rule. We can make sure DM does the right thing by adding references to the network in both the VM and the firewall rule, like here:
```yaml
resources:
- name: tiny-vm               # A VM
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/ubuntu-os-cloud/global/images/family/ubuntu-1804-lts
    networkInterfaces:
    - network: $(ref.my-network.selfLink)
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
- name: my-network            # A network
  type: compute.v1.network
  properties:
    IPv4Range: 10.0.0.1/16
- name: allow-inbound-ssh     # A firewall rule
  type: compute.v1.firewall
  properties:
    network: $(ref.my-network.selfLink)
    sourceRanges:
    - 0.0.0.0/0
    allowed:
    - IPProtocol: tcp
      ports:
      - 22
```
When we deploy it, it’s pretty obvious from the operation output that not everything is created in parallel any more: the network appears first, and only a few moments later the VM and the firewall rule follow.
We can also confirm that the VM is indeed connected to my-network and that the allow-inbound-ssh firewall rule is there.
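One way to double-check that with plain gcloud commands would be something like this (a sketch; the format and filter flags are just for readability):

```sh
# Which network is the VM's first interface attached to?
gcloud compute instances describe tiny-vm --zone us-central1-a \
    --format="value(networkInterfaces[0].network)"

# Is the firewall rule in place, and on which network?
gcloud compute firewall-rules list --filter="name=allow-inbound-ssh"
```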
Conclusion
By the end of experiments like this I usually develop some sort of Stockholm syndrome and start to like the tool, no matter how good it actually is. This time is no exception.
Deployment Manager is fine. It’s slightly hard to get started with, as there aren’t many clear examples out there. Ironically, the ones that do exist are sometimes not compatible with the latest APIs. Plus, it’s not obvious where to look for certain information, like what configuration properties exist, or how exactly a template schema should look. But so far I’ve been able to find almost everything I needed.
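One place to look is DM itself: it can list the resource types it supports, and their properties largely mirror the corresponding REST APIs (a small sketch):

```sh
# List the resource types Deployment Manager knows about
gcloud deployment-manager types list

# Narrow it down, e.g. to Compute Engine types like compute.v1.instance
gcloud deployment-manager types list --filter="name~compute"
```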
I wonder how Deployment Manager compares to its seemingly main competitor – HashiCorp’s Terraform. My gut feeling says that DM is probably less mature, but who knows. I should probably get to know Terraform better to be sure, but that will happen later. For now there are a few more interesting DM features to explore.