I’m still looking for ways to automate host configuration. So far I’ve been using Vagrant plus bash/PowerShell for configuring Linux and Windows hosts, but somehow I managed to miss the tool designed specifically for tasks like this – Ansible. It’s been around for the last five years or so and has become almost a synonym for “automatic configuration”. Today I’ll finally give it a try and see what difference it makes compared to provisioning with good old Bash.
What’s Ansible
As I said, Ansible is a tool for configuring machines automagically. It can send simple ad-hoc commands (e.g. reboot) to one or more hosts, or apply a whole playbook with a complex scenario (e.g. install a service, copy config files, make sure the service is running) to hundreds of them.
It uses a “push” model, so there’s always some sort of control server that sends commands to target hosts via SSH (for Linux hosts) or WinRM/PowerShell (for Windows). As it’s written in Python, the interpreter obviously has to be installed on the control server. What’s more, target hosts need Python as well, as Ansible delivers its commands as .py files and executes them on the target hosts, not locally.
Sounds pretty clear, so let’s try to apply some of this.
The plan
I want to rewrite the Consul server installation I blogged about before. It used to be Vagrant + Bash, but Vagrant + Ansible might be a more interesting combination.
As I recall, Consul server provisioning included the following:
- Install unzip.
- Download and unzip Consul.
- Make it executable.
- Put the executable into /usr/local/bin.
- Since it’s going to be a service, put the Consul service definition file into the /etc/systemd/system/ dir.
- Put the Consul configuration file into /etc/systemd/system/consul.d/.
- Start the service.
Sounds like something that Ansible should be able to handle very well.
Installation
I’m still going to use Vagrant and VirtualBox for host creation, and installing both of them is still a pleasure. Getting Ansible to run, however, is a bit trickier, at least on a Mac. In my case I had to install pip first, then use pip to install ansible, and then install sshpass so that Ansible can use SSH with a login and password. That’s not the “download-and-run” experience I’m used to, but it could’ve been worse.
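For the record, the whole macOS setup boiled down to something like this (a rough sketch, assuming the system Python; the exact commands may differ on your machine):

sudo easy_install pip       # bootstrap pip for the system Python
sudo pip install ansible    # then Ansible itself
# sshpass is not in the official Homebrew formulas, so it has to come
# from a third-party tap or be built from source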
Step 0. Preparing the host
Before we can configure the host, we need to create it. What’s more, Ansible will need Python and a guest account to use for the SSH connection, so these should be set up as well. Fortunately, Vagrant makes configuring such things trivial.
First, create a Vagrantfile:
[pav@pav-macbookpro]$ vagrant init ubuntu/xenial64 --minimal
Then give it a static IP address, set the existing ubuntu user’s password to ubuntu, and install the bare minimum of Python to make Ansible happy:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  config.vm.define "consul-server" do |machine|
    machine.vm.network "private_network", ip: "192.168.99.100"
    machine.vm.provision "shell", inline: "echo ubuntu:ubuntu | chpasswd"
    machine.vm.provision "shell", inline: "apt-get update && apt-get install -y python-minimal"
  end
end
Piece of cake. Now, vagrant up, and we can test that the SSH connection is working:
[pav@pav-macbookpro]$ vagrant up
# Bringing machine 'consul-server' up with 'virtualbox' provider...
# ...
# ==> consul-server: Setting up python (2.7.11-1) ...

[pav@pav-macbookpro]$ ssh ubuntu@192.168.99.100
# The authenticity of host '192.168.99.100 (192.168.99.100)' can't be established.
# ECDSA key fingerprint is SHA256:Bzxp6dpLcdJPRUMyJKZUVzK8jLN6z4OI1iG1j4Iu77M.
# Are you sure you want to continue connecting (yes/no)? yes
# Warning: Permanently added '192.168.99.100' (ECDSA) to the list of known hosts.
ubuntu@192.168.99.100's password:
# ...
ubuntu@ubuntu-xenial:~$
So far, so good. Testing the SSH connection did one important thing: it added the VM’s fingerprint to the known hosts list, so Ansible won’t complain later. We also could’ve configured Ansible to trust all hosts by default, but for a single-host experiment like ours it isn’t worth it.
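For the record, if you ever do want to skip host key checking (say, for throwaway VMs), Ansible has a setting for that – a minimal ansible.cfg sketch:

[defaults]
host_key_checking = False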
OK then, the host is ready, let’s do some Ansible.
Step 1. Creating an inventory file
Ansible stores the hosts it’s going to deal with in inventory files. These are regular text files in INI format that define host names, IP addresses, groups, credentials, variables and other options. Even a single IP address makes a valid inventory file.
The inventory’s default location is /etc/ansible/hosts, but as I’ll immediately forget that path, I’d rather put the file locally, next to the Vagrantfile. So here’s my hosts inventory file:
consul-server ansible_host=192.168.99.100 ansible_user=ubuntu ansible_ssh_pass=ubuntu
I know, I know, storing SSH credentials in plain text… Well, I’ll destroy the VM soon after finishing this blog post, so who cares. But in production I’d definitely use either SSH certificates or Ansible Vault for keeping secrets.
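Just to sketch the Vault option (the file name here is only an example): the password would go into an encrypted variables file, and the playbook run would then prompt for the vault password:

ansible-vault create secrets.yml              # opens an editor for the encrypted vars, e.g. ansible_ssh_pass
ansible-playbook -i hosts consul.yml --ask-vault-pass

The encrypted file still has to be wired into the playbook (e.g. via vars_files or group_vars), but the secrets themselves never sit on disk in plain text.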
One more thing: even though the ansible_ssh_pass setting is deprecated and I should use ansible_pass instead, the latter sometimes works and sometimes doesn’t. I don’t know if it’s a Mac thing (I had a few other issues as well) or an effect of the recent solar eclipse, but the legacy setting works 100% of the time.
So, having the inventory set up, let’s try sending some ad-hoc commands with Ansible to see if it’s working:
[pav@pav-macbookpro]$ ansible all -i hosts -m ping
# consul-server | SUCCESS => {
#     "changed": false,
#     "ping": "pong"
# }

[pav@pav-macbookpro]$ ansible all -i hosts -m command -a "lsb_release -r"
# consul-server | SUCCESS | rc=0 >>
# Release:        16.04
It’s definitely working. The first command launched the ping module (-m) against all items in the hosts inventory file (-i). The second one executed a raw shell command – lsb_release -r – which returned the OS version of the VM. If our hosts file listed a hundred machines, we’d get a hundred results.
all isn’t the only possible target. I could’ve used a specific VM name, e.g. consul-server, or even a group, if I’d defined any.
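Just to illustrate, a group is nothing more than an INI section header in the inventory – a hypothetical sketch, since I didn’t define any groups here:

[consul]
consul-server ansible_host=192.168.99.100 ansible_user=ubuntu ansible_ssh_pass=ubuntu

With that in place, ansible consul -i hosts -m ping would target every host listed under [consul].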
Step 2. Creating a playbook
Sending ad-hoc commands one by one doesn’t scale. It’s actually even worse than having all commands in one shell file. When we need to combine multiple Ansible commands in one place, we create a playbook.
A playbook is a regular YAML file that contains the same things we passed through the command line – hosts and modules – and more. And it’s really simple. Let’s create a playbook that performs the first step from the Consul installation list – installing unzip – and you’ll see how simple it is.
Step 2.1 Install unzip
This is the playbook that would do the trick on Debian-like systems:
- hosts: consul-server
  tasks:
    - name: Install unzip
      apt: name=unzip state=present
      become: true
This time I was specific and used a concrete host name: consul-server. The section that comes after that – tasks – contains the list of individual steps to perform on the target host. Right now there’s only one step – apt, which knows how to manage Ubuntu’s APT packages. In our case it basically says “make sure that unzip is present”.
What’s really cool is that it doesn’t say “install”, it says “present”, so if unzip is already there, nothing will happen. This effect has a name – idempotence: the same operation can be applied many times, but it takes effect only once. In fact, Ansible was built with idempotence in mind, and all modules, even command (with some help), can be idempotent. If you read the article about provisioning with Vagrant, you’ll remember I had to make an extra effort to achieve this with regular shell, yet here it comes for free.
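That “some help” for the command module usually means telling Ansible what the command produces, so the task can be skipped on later runs. A hypothetical sketch (the script and paths are made up):

- name: Run a one-off setup script
  command: /opt/app/setup.sh
  args:
    creates: /opt/app/.setup-done   # if this file exists, Ansible skips the command

The script itself is expected to create that marker file, so the second run doesn’t execute it again.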
And I almost forgot about become: true. Under the strange name lurks a simple idea: you have to become root if you want to install something. No more, no less.
But enough chitchat, we have a playbook to execute:
[pav@pav-macbookpro]$ ansible-playbook -i hosts consul.yml
#
# PLAY [consul-server] ******************************************************
#
# TASK [Gathering Facts] ****************************************************
# ok: [consul-server]
#
# TASK [Install unzip] ******************************************************
# changed: [consul-server]
#
# PLAY RECAP ****************************************************************
# consul-server : ok=2 changed=1 unreachable=0 failed=0
Apparently, the Ubuntu VM didn’t have unzip installed, so the “Install unzip” task reported that it changed something. But if you execute the same playbook one more time, it won’t install anything. Trust me, I’ve already tried.
Step 2.2 – 2.4 Download and install Consul
What used to be three separate steps in the regular shell provisioner is a single task in Ansible: the unarchive module can download an archive from the web, unpack it, put the result at the desired location and change file attributes at the end. If only it could brew beer…
This adds one more task to the playbook:
- hosts: consul-server
  vars:
    consul_version: 0.9.2

  tasks:
    - name: Install unzip
      apt: name=unzip state=present
      become: true

    - name: Install Consul
      become: true
      unarchive:
        src: https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_linux_amd64.zip
        remote_src: yes
        dest: /usr/local/bin
        creates: /usr/local/bin/consul
        mode: 0555
I had to add a few blank lines for readability (or the lack of it) and use one more playbook feature: variables. But generally the “Install Consul” task is pretty straightforward. The most interesting thing is the creates property, which makes the whole task idempotent: if consul is already there, there’s nothing to download. The other one is mode, which makes the file executable for all users.
After relaunching the playbook we can check whether it did install consul on the target VM:
[pav@pav-macbookpro]$ ansible all -i hosts -m command -a "consul --version"
# consul-server | SUCCESS | rc=0 >>
# Consul v0.9.2
# Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
It definitely did.
Step 2.5 Make Consul a service
Making Consul a service simply means copying a service definition file into the systemd services directory. As I already have a consul.service file from the older post, I just need to copy it locally and add a copy task to the playbook.
[Unit]
Description=consul agent
Requires=network-online.target
After=network-online.target

[Service]
EnvironmentFile=-/etc/sysconfig/consul
Restart=on-failure
ExecStart=/usr/local/bin/consul agent $CONSUL_FLAGS -config-dir=/etc/systemd/system/consul.d
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
#...
  tasks:
    - name: Make Consul a service
      become: true
      copy:
        src: consul.service
        dest: /etc/systemd/system/consul.service
#...
Step 2.6 Configure Consul
Configuring Consul simply means copying a JSON file with settings to wherever Consul expects it to be – in our case, the /etc/systemd/system/consul.d directory. That would’ve been fairly easy to do, but the configuration file contains the server IP address, which ideally should be stored in the playbook, not in the configuration. What we can do is turn the Consul config file into a template and pass the server IP into it as a variable. Ansible ‘speaks’ templates out of the box, so there won’t be any problem here.
What’s more, the consul.d directory most likely won’t exist yet, so in addition to populating the template and copying it somewhere, there should be a task that makes sure that “somewhere” exists. All this leads to the following changes in our playbook:
#...
  vars:
    consul_version: 0.9.2
    consul_server_ip: 192.168.99.100
    consul_config_dir: /etc/systemd/system/consul.d

  tasks:
    #...
    - name: Ensure config directory exists
      become: true
      file:
        path: "{{ consul_config_dir }}"
        state: directory

    - name: Deploy consul config
      become: true
      template:
        src: init.json.j2
        dest: "{{consul_config_dir}}/init.json"
#...
Along with that comes the template for the configuration file – init.json.j2, where j2 stands for the Jinja2 templating language:
{
  "server": true,
  "ui": true,
  "advertise_addr": "{{ consul_server_ip }}",
  "client_addr": "{{ consul_server_ip }}",
  "data_dir": "/tmp/consul",
  "bootstrap_expect": 1
}
Step 2.7 Starting the service
This is going to be one of the simplest tasks: Ansible’s service module can make sure a service is running, and the code for that is the following:
# ...
    - name: Ensure consul's running
      become: true
      service: name=consul state=started
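If I also wanted Consul to come back after a reboot, the same module can enable the unit as well – a small sketch of an extended task (not part of the original playbook):

    - name: Ensure consul's running
      become: true
      service: name=consul state=started enabled=yes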
If you got this far and apply the whole playbook to the VM, it will succeed and put something new at the 192.168.99.100:8500 address:
Yup, this is Consul.
Step 3. Connecting Vagrant to Ansible
Here’s the thing: Vagrant can use Ansible as a provisioner, so we can launch the playbook directly from the Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  config.vm.define "consul-server" do |machine|
    machine.vm.network "private_network", ip: "192.168.99.100"
    machine.vm.provision "shell", inline: "apt-get update && apt-get install -y python-minimal"
    machine.vm.provision "ansible", playbook: "consul.yml"
  end
end
I don’t need to set the ubuntu password anymore, as Vagrant is going to take care of inventories and credentials from now on, so I removed that line. If you destroy the existing VM with vagrant destroy -f and recreate it with vagrant up, you can clearly see in the output that Vagrant initiates the Ansible provisioner and ends up with a VM identical to the one before.
[pav@pav-macbookpro]$ vagrant up
# ...
# ==> consul-server: Running provisioner: ansible...
#     consul-server: Running ansible-playbook...
#
# PLAY [consul-server] ***********************************************************
# ...
# consul-server : ok=7 changed=6 unreachable=0 failed=0
Conclusion
I like it. Even though configuring Ansible on a Mac wasn’t completely flawless, I like how it ended up. The configuration gets much clearer, idempotency comes out of the box, and instead of reinventing the wheel for every bit of configuration I can pick an existing module from a bazillion ready-to-use ones. What’s more, we didn’t touch it here, but Ansible has a concept of roles, e.g. “ftp server”, so we can reuse pieces of existing playbooks over and over. What’s even more, Ansible Galaxy is a hub of such ready-to-use roles and, believe it or not, a “Consul service” role is already there. Lots of them, in fact.
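Just to give an idea of how a Galaxy role would be used (the role name below is only a placeholder, not a recommendation), it gets installed with ansible-galaxy:

ansible-galaxy install someauthor.consul

and then referenced from a play instead of spelling out all the tasks by hand:

- hosts: consul-server
  become: true
  roles:
    - someauthor.consul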