Provisioning Vagrant VM with Ansible

I’m still looking for ways to automate host configuration. So far I’ve been using Vagrant + bash/PowerShell for configuring Linux or Windows hosts, but somehow I managed to miss the tool designed specifically for tasks like this – Ansible. It’s been around for the last five years or so and has become almost a synonym for “automatic configuration”. Today I’ll finally give it a try and see what difference it makes compared to provisioning with good old Bash.

What’s Ansible

As I said, Ansible is a tool for configuring machines automagically. It can send simple ad-hoc commands (e.g. reboot) to one or more hosts, or even apply a whole playbook with complex scenarios (e.g. install a service, copy config files, make sure the service is running) to hundreds of them.

It uses a “push” model, so there’s always some sort of control server that sends commands to target hosts via SSH (Linux hosts) or WinRM/PowerShell (Windows). As Ansible is written in Python, the interpreter obviously has to be installed on the control server. What’s more, Linux target hosts need Python as well, as Ansible delivers its commands as .py files and executes them on the target hosts, not locally.

Sounds pretty clear, so let’s try to apply some of this.

The plan

I want to rewrite the Consul server installation I blogged about before. It used to be Vagrant + Bash, but Vagrant + Ansible might be a more interesting combination.

As I recall, Consul server provisioning included the following:

  1. Install unzip.
  2. Download and unzip Consul.
  3. Make it executable.
  4. Put the executable into /usr/local/bin. It’s going to be a service.
  5. Put the Consul service definition file into the /etc/systemd/system/ dir.
  6. Put the Consul configuration file into /etc/systemd/system/consul.d/.
  7. Start the service.

Sounds like something that Ansible should be able to handle very well.

Installation

I’m still going to use Vagrant and VirtualBox for host creation, and installing both of them is still a pleasure. Getting Ansible to run, however, is a bit trickier, at least on a Mac. In my case I had to install pip first, then use pip to install Ansible, and then install sshpass, so that Ansible can use SSH with a login and password. That’s not the “download-and-run” way I’m used to, but it could’ve been worse.
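Roughly, it went something like this – treat it as a sketch, since the exact commands depend on your Python and Homebrew setup:

    sudo easy_install pip      # pip first (using the system Python 2.7 on the Mac)
    sudo pip install ansible   # Ansible itself comes via pip
    # sshpass isn't in the core Homebrew tap, so it has to come from a
    # third-party formula or be built from source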

Step 0. Preparing the host

Before we can configure the host, we need to create it. What’s more, Ansible will need Python and a guest account to use for the SSH connection, so those have to be set up as well. Fortunately, Vagrant makes configuring such things trivial.

First, create a Vagrantfile:
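Something like this will do for a start (the box name is my pick for an Ubuntu 16.04 VM – the original setup might have used a different one):

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"
    end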

And then give it a static IP address, set the existing ubuntu user’s password to ubuntu, and install Python’s bare minimum to make Ansible happy:
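Here’s a sketch of the additions – the IP is the one used later in the post, and the shell provisioner is my guess at the minimal setup:

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"
      config.vm.network "private_network", ip: "192.168.99.100"

      config.vm.provision "shell", inline: <<-SHELL
        # set the existing ubuntu user's password to "ubuntu"
        # (depending on the box, password SSH auth may also need enabling in sshd_config)
        echo "ubuntu:ubuntu" | chpasswd
        # the bare minimum of Python so Ansible modules can run
        apt-get update && apt-get install -y python-minimal
      SHELL
    end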

Piece of cake. Now, vagrant up, and we can test that the SSH connection is working:
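For instance, like this (the password is the one we’ve just set):

    vagrant up
    ssh ubuntu@192.168.99.100    # accept the host key prompt; password is "ubuntu"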

So far, so good. Testing the SSH connection did one important thing: it added the VM’s fingerprint to the known hosts list, so Ansible won’t complain later. We also could’ve configured Ansible to trust all hosts by default, but for a single-host experiment like ours it isn’t worth it.

OK then, the host is ready, let’s do some Ansible.

Step 1. Creating an inventory file

Ansible stores the hosts it’s going to deal with in inventory files. Those are regular text files in INI format that define host names, IP addresses, groups, credentials, variables and other options. Even a single IP address makes a valid inventory file.

The inventory’s default location is /etc/ansible/hosts, but as I’ll immediately forget about that path, I’d rather put the file locally, next to the Vagrantfile. So here’s my hosts inventory file:
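It looks roughly like this – the host alias and the credentials are the ones the rest of the post relies on:

    consul-server ansible_host=192.168.99.100 ansible_user=ubuntu ansible_ssh_pass=ubuntu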

I know, I know, storing SSH credentials in plain text… Well, I’ll destroy the VM soon after finishing this blog post, so who cares. But in production I’d definitely use either SSH keys or Ansible Vault for keeping secrets.

One more thing: even though the ansible_ssh_pass setting is deprecated and I should use ansible_pass instead, the latter sometimes works and sometimes doesn’t. I don’t know if it’s a Mac thing (I had a few other issues as well) or an effect of the recent solar eclipse, but the legacy setting works 100% of the time.

So, having the inventory set up, let’s try sending some ad-hoc commands with Ansible to see if it’s working:
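For example, the two commands discussed below:

    ansible all -i hosts -m ping
    ansible all -i hosts -a "lsb_release -r"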

It’s definitely working. The first command launched the ping module (-m) at all items in the hosts inventory file (-i). The other one executed a raw shell command – lsb_release -r – which returned the OS version of the VM. If our hosts file had a hundred machines, we’d get a hundred results.

all isn’t the only possible target. I could’ve used a specific VM name, e.g. consul-server, or even a group, if I’d defined any.

Step 2. Creating a playbook

Sending ad-hoc commands one by one doesn’t scale. It’s actually even worse than having all commands in one shell file. When we need to combine multiple Ansible commands in one place, we create a playbook.

A playbook is a regular YAML file which has the same things we passed through the command line – hosts and modules – and even more. And it’s really simple. Let’s create a playbook that performs the first step from the Consul installation list – installing unzip – and you’ll see how simple it is.

Step 2.1 Install unzip

This is the playbook that would do the trick on Debian-like systems:
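Here’s a sketch of it (I’m calling the file provision.yml – the name itself is arbitrary):

    # provision.yml
    - hosts: consul-server
      become: true

      tasks:
        - name: Install unzip
          apt:
            name: unzip
            state: present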

This time I was specific and used a concrete host name: consul-server. The section that comes after that – tasks – contains the list of individual steps to perform on the target host. Right now there’s only one step – apt, which knows how to manage Ubuntu’s apt packages. In our case it basically says “make sure that unzip is present”.

What’s really cool is that it doesn’t say “install”, it says “present”, so if unzip is already there, nothing will happen. Such an effect has a name – idempotence: the same operation can be applied many times, but it will take effect only once. In fact, Ansible was built with idempotence in mind, and all modules, even command (with some help), can be idempotent. In the article about provisioning with Vagrant I had to make an extra effort to achieve this with regular shell, yet here it comes for free.

And I almost forgot about become: true. Under the strange name lurks a simple idea: you have to become root if you want to install something. No more, no less.

But enough chitchat, we have a playbook to execute:
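Again, provision.yml is just the file name I picked for the playbook:

    ansible-playbook -i hosts provision.yml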

Apparently, the Ubuntu VM didn’t have unzip installed, so the “Install unzip” task reported that it changed something. But if you execute the same playbook one more time, it won’t install anything. Trust me, I already tried that.

Step 2.2 – 2.4 Download and install Consul

What used to be three separate steps in the regular shell provisioner is a single task in Ansible: the unarchive module can download anything from the web, unpack it, put the result at the desired location and change file attributes on top of that. If only it could brew beer…

This adds one more task to the playbook:
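The playbook now looks roughly like this – the Consul version and the download URL pattern are my assumptions, so pick whatever release you actually need:

    # provision.yml
    - hosts: consul-server
      become: true

      vars:
        consul_version: 0.9.2   # an assumption; use the release you need

      tasks:
        - name: Install unzip
          apt:
            name: unzip
            state: present

        - name: Install Consul
          unarchive:
            src: https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_linux_amd64.zip
            dest: /usr/local/bin
            remote_src: true
            creates: /usr/local/bin/consul
            mode: "0755"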

I had to add a few blank lines for readability (or lack thereof) and use one more playbook feature: variables. But generally the “Install Consul” task is pretty straightforward. The first interesting thing is the creates property, which makes the whole task idempotent: if consul is already there, there’s nothing to download. The other one is mode, which makes the file executable for all users.

After relaunching the playbook we can check whether it did install consul on the target VM:
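An ad-hoc command is enough for a quick check, something like:

    ansible consul-server -i hosts -a "consul --version"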

It definitely did.

Step 2.5 Make Consul a service

Making Consul a service simply means copying the service definition file to the systemd services directory. As I already have a consul.service file from the older post, I just need to copy it locally and add a copy task to the playbook.
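The extra task is a sketch along these lines – it goes under tasks: and assumes consul.service sits next to the playbook:

    - name: Copy consul.service
      copy:
        src: consul.service
        dest: /etc/systemd/system/consul.service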

Step 2.6 Configure Consul

Configuring Consul simply means copying a JSON file with settings to wherever Consul expects it to be – in our case, the /etc/systemd/system/consul.d directory. That would’ve been fairly easy to do, but the configuration file contains the server IP address, which ideally should live in the playbook, not be hardcoded in the configuration. What we can do is turn the Consul config file into a template and pass the server IP into it as a variable. Ansible ‘speaks’ templates out of the box, so there won’t be any problem here.

What’s more, the consul.d directory most likely won’t exist, so in addition to populating the template file and copying it somewhere, there should be a task to make sure that “somewhere” exists. All this leads to the following changes in our playbook:
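Roughly these additions – the variable name is my choice; the first snippet goes into the vars section, the tasks into the tasks list:

    # under vars:
    consul_server_ip: 192.168.99.100

    # under tasks:
    - name: Create Consul configuration directory
      file:
        path: /etc/systemd/system/consul.d
        state: directory

    - name: Copy Consul configuration
      template:
        src: init.json.j2
        dest: /etc/systemd/system/consul.d/init.json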

Along with that comes the template for the configuration file: init.json.j2, where j2 stands for the Jinja2 templating language.
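Here’s a sketch of it – the templated IP is the point; the other settings are placeholders for whatever the real config contained:

    {
      "server": true,
      "bootstrap_expect": 1,
      "bind_addr": "{{ consul_server_ip }}",
      "data_dir": "/tmp/consul",
      "ui": true
    }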

Step 2.7 Starting the service

This is going to be one of the simplest tasks. Ansible’s service module is the one that can make sure a service is running, and the code for that is the following:
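Something like this, added to the same tasks list:

    - name: Start Consul service
      service:
        name: consul
        state: started
        enabled: true   # also start it on boot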

If you got this far and apply the whole playbook to the VM, it will succeed and put something new at the 192.168.99.100:8500 address:

(screenshot: Consul web UI)

Yup, this is Consul.

Step 3. Connecting Vagrant to Ansible

Here’s the thing: Vagrant can use Ansible as a provisioner, so we can launch the playbook directly from the Vagrantfile:
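A sketch of the final Vagrantfile – provision.yml is still the playbook name I picked earlier:

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"
      config.vm.network "private_network", ip: "192.168.99.100"

      # Python is still needed for Ansible modules to run on the guest
      config.vm.provision "shell", inline: <<-SHELL
        apt-get update && apt-get install -y python-minimal
      SHELL

      config.vm.provision "ansible" do |ansible|
        ansible.playbook = "provision.yml"
      end
    end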

I don’t need to configure the ubuntu password anymore, as Vagrant is going to take care of inventories and credentials from now on, so I removed the whole line. If you destroy the existing VM with vagrant destroy -f and recreate it with vagrant up, you’ll clearly see in the output that Vagrant initiates the Ansible provisioner and ends up with a VM identical to the one we had before.

Conclusion

I like it. Even though configuring Ansible on a Mac wasn’t completely flawless, I like how it ended up. The configuration gets much clearer, idempotency comes out of the box, and instead of reinventing the wheel for every bit of configuration I can choose an existing one from a bazillion ready-to-use modules. What’s more, we didn’t touch it here, but Ansible has a concept of roles, e.g. “ftp server”, so we can reuse pieces of existing playbooks over and over. What’s even more, Ansible Galaxy is a hub of such ready-to-use roles and, believe it or not, a “Consul service” role is already there. Lots of them, actually.
