Vagrant Provisioning with Ansible

Messing around with Vagrant again, this time using Ansible to automate configuration post-deployment.

Ansible is billed as an automation platform which makes it easier to deploy systems and applications. It does this through a scripting framework which supports a wide range of functionality covering deployment and configuration.

Vagrant Config

To define which Ansible playbooks should be run, the vm.provision config can be used in a Vagrantfile:

config.vm.define "vm1" do |vm1| = "centos/7"
  vm1.vm.hostname = "vm1" "private_network", ip: ""

  vm1.vm.provision "docker", type: "ansible" do |ansible|
    ansible.playbook = "docker-playbook.yml"
  end

  vm1.vm.provision "kubernetes", type: "ansible" do |ansible|
    ansible.playbook = "kube-playbook.yml"
  end
end
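The Ansible provisioner accepts further documented options beyond playbook; for example, verbosity, inventory grouping, and extra variables can all be set from the Vagrantfile. A sketch (the group name and variable values are illustrative, not from the original config):

```ruby
vm1.vm.provision "docker", type: "ansible" do |ansible|
  ansible.playbook = "docker-playbook.yml"
  ansible.verbose  = "v"                      # pass -v through to ansible-playbook
  ansible.groups   = { "docker" => ["vm1"] }  # place vm1 in an inventory group
  ansible.extra_vars = { docker_users: ["vagrant"] }  # illustrative variable
end
```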

A Simple Playbook

Ansible uses the concept of playbooks to define a set of repeatable deployment steps.

This example deploys Docker on a CentOS system (it should be saved as docker-playbook.yml in the same directory as the Vagrantfile):

---
- hosts: all
  become: yes
  tasks:
  - name: install docker dependencies
    yum:
      name: "{{ packages }}"
      state: present
    vars:
      packages:
      - yum-utils
      - device-mapper-persistent-data
      - lvm2

  - name: Add Docker repo
    get_url:
      dest: /etc/yum.repos.d/docker-ce.repo
    become: yes

  - name: install docker
    yum:
      name: "{{ packages }}"
      state: present
    vars:
      packages:
      - docker-ce
      - docker-ce-cli

  - name: Start Docker service
    service:
      name: docker
      state: started
      enabled: yes

  - name: Add user vagrant to docker group
    user:
      name: vagrant
      groups: docker
      append: yes


Running ‘vagrant up’ will cause all configured provision entries to run:

$ vagrant up
Bringing machine 'vm1' up with 'virtualbox' provider…
==> vm1: Configuring and enabling network interfaces…
==> vm1: Rsyncing folder: /home/rich/vagrant_storm/ => /vagrant
==> vm1: Running provisioner: file…
==> vm1: Running provisioner: shell…
vm1: Running: inline script
==> vm1: Running provisioner: docker (ansible)…
vm1: Running ansible-playbook…
PLAY [all] *
TASK [setup] ***
ok: [vm1]
TASK [install docker dependencies] ***
changed: [vm1]
TASK [Add Docker repo] *
changed: [vm1]

Provisioning Again

Provisioning can also be run independently once the VM(s) are up:

vagrant provision

By naming each provision entry, it’s also possible to run a specific item:

$ vagrant provision --provision-with kubernetes
==> vm1: Running provisioner: kubernetes (ansible)…
vm1: Running ansible-playbook…
PLAY [all] *
TASK [setup] ***
ok: [vm1]
TASK [add Kubernetes' YUM repository]
ok: [vm1]
TASK [install kubernetes]
ok: [vm1]
TASK [start kubelet] ***
changed: [vm1]
vm1 : ok=4 changed=1 unreachable=0 failed=0
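The kube-playbook.yml itself isn't shown above; based on the task names in the output, a minimal sketch might look like the following (the repo URL, GPG setting, and package list are assumptions, not the original file):

```yaml
---
- hosts: all
  become: yes
  tasks:
  - name: add Kubernetes' YUM repository
    yum_repository:
      name: kubernetes
      description: Kubernetes repo
      baseurl:  # assumed repo URL
      gpgcheck: yes

  - name: install kubernetes
    yum:
      name: ['kubelet', 'kubeadm', 'kubectl']  # assumed package set
      state: present

  - name: start kubelet
    service:
      name: kubelet
      state: started
      enabled: yes
```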


Making use of Ansible along with Vagrant provides a super-fast, repeatable way to bring up a system with a defined configuration.

This will form the basis of an automation system for end-to-end testing, which future posts will build on.

Vagrant Networking Basics

Vagrant supports three basic types of networking:

  • Port Forwarding / NAT
  • Private Network
  • Public Network

By default, no networking is enabled (outside of Vagrant’s internal management mechanism), so one of these must be configured in the Vagrantfile to make the VM accessible by network.
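For reference, all three types can be declared side by side in a single Vagrantfile (the ports shown are illustrative):

```ruby
Vagrant.configure("2") do |config| = "centos/7"
  # NAT port forward: host port 8080 reaches guest port 80 "forwarded_port", guest: 80, host: 8080
  # Host-only network, address assigned by DHCP "private_network", type: "dhcp"
  # Bridged network, making the guest visible on the LAN "public_network"
end
```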

Port Forwarding / NAT

The most basic network configuration forwards traffic from the host machine to the guest VM only on specific ports. By default only TCP is forwarded; config looks like this:

Vagrant.configure("2") do |config| "forwarded_port", guest: 80, host: 8080
end

Vagrant will also detect configuration conflicts where the same port is in use multiple times, and will prevent deployment of such a config.
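Both of these behaviours can be adjusted with documented forwarded_port options: protocol selects UDP forwarding, and auto_correct lets Vagrant move a colliding host port to a free one rather than refusing to start. The ports below are illustrative:

```ruby
Vagrant.configure("2") do |config| = "centos/7"
  # Forward UDP explicitly; a separate rule is needed per protocol "forwarded_port", guest: 53, host: 5353, protocol: "udp"
  # On a port collision, pick a free host port automatically instead of failing "forwarded_port", guest: 80, host: 8080, auto_correct: true
end
```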

Private Network

Private networks provide host-only access to the guest VM; that is, the networking is not bridged, and the guest will not be accessible from outside the host machine.

Config looks like this when assigning an address via DHCP:

Vagrant.configure("2") do |config| "private_network", type: "dhcp"
end

Public Network

Public networks provide access to the guest VM which is available externally to the host system. Depending on provider, this is achieved through bridging, making the guest VM as public as the host machine is.

By default, DHCP is used for assigning addresses; config looks like this:

Vagrant.configure("2") do |config| "public_network"
end
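When the host has several candidate interfaces, Vagrant will prompt for which one to bridge at boot; the choice can be pinned with the bridge option. The interface name below is an example and is host-specific:

```ruby
Vagrant.configure("2") do |config| = "centos/7" "public_network", bridge: "eth0"  # illustrative host interface name
end
```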

Static IPs

The ‘ip’ config setting can be used to assign specific IPs for both private and public networks: "private_network", ip: ""

Note that there appears to be a bug in Vagrant 1.9.1 that prevents static IPs being applied properly in some RHEL-based images. A workaround is to force the interface to come up by adding an additional provisioning line in the Vagrantfile:

config.vm.provision "shell", inline: "ifup eth1", run: "always"

More Reading

Vagrant provides many more options for networking, with some features varying by provider.

The official Vagrant networking docs cover these options in more detail.

Running a Battlesnake Competition

Battlesnake is a community-run open source AI competition, pitting player-implemented bots against each other in a multiplayer snake arena.

Bots are simple web servers which must respond to a defined API, and so can be implemented in any language. A yearly competition brings people together from all over the world to compete in Victoria, Canada.

In my role as a development manager, we try to run events that push the team outside of their comfort zone a bit, whilst having a little fun. Previously I have successfully run a Vindinium day (a similar AI competition which sadly seems to have disappeared), and discovered Battlesnake whilst looking for a replacement.

Battlesnake is a great project, but has a couple of shortcomings when it comes to my particular use case:

  1. Bots are implemented as web servers, which would require opening ports in the company firewall in order to use the public server. A big no-no from the IT department.
  2. As a team building exercise, it’s desirable to run a private competition so that the devs are competing with each other, instead of internet strangers.


To meet the above needs, I created Battlesnake_ui: a lightweight Battlesnake game server aimed at internal competitions of limited size.

It can be found on GitHub; instructions for use will follow in another post shortly. Key features:


  • Auto match start mode for matchmaking during development / free play
  • Manual start mode to allow full control during competition
  • TV page for showing games on big screens
  • Admin UI for configuration
  • Simple single package deployment

The Plan

The plan for the day is as follows:

  • Intro session to describe the game
  • Forming of teams
  • Free time to develop a bot, with free access to matchmaking for testing
  • Pizza
  • Competition time
    • League
    • Knockout
    • Longest snake
  • Prizes and close

This format has worked well in the past; expect a post-mortem post once the event finishes.

Todo Lists Are Evil; You Should Definitely Use Them

Todo lists are great! You write down everything you need to do, split it into sub-sections and sub-tasks, make it look pretty, and gain a massive sense of achievement.

Look at how productive I’ve been! I’m so organised!

Except all that you have really achieved is putting some stuff in a list.

But You Should Really Use Them

Having said all that, I’m still a big fan of having an up to date todo list:

A Starting Point for the Day

Assuming your todo list is up to date, it acts as a great reminder of ‘where was I?’. This makes it much faster to get back into the flow of things.

Categorise Priorities

The modern workplace is a constant battle against interruptions: email, instant messaging, desk drive-bys; they are all sources of more work to do, so it’s important to categorise what’s important, rather than just jumping to the most recent request.

Keeping a categorised list (I use ‘now’, ‘next’ and ‘future’) means new requests can be slotted in as appropriate, thus maintaining some semblance of flow.

Know what’s next

I find a key part of remaining productive is lowering the barrier to getting started on something. Having the decision made by a well defined list of ‘what’s next’ keeps thinking to a minimum.

Review what’s been achieved

Motivation ebbs and flows. A great motivator is being able to review just how much has been achieved (assuming, that is, you’ve managed to get something done).

Taking a moment to review both the breadth of achievement, as well as key milestones is a great way to remind yourself that you’re on the right track.

Just Do It

So yes, Todo lists can be evil as a source of false sense of achievement, but as an organisational tool I find them to be one of the easiest ways to improve my own productivity.

You don’t need any fancy tools to get started, just open notepad++ and start writing stuff down!

Vagrant Life Cycle

Vagrant is a command line tool for building and managing virtual machine environments with a focus on automation through flexible configuration.

These are just some notes on how to get up and running, and working with the Vagrant life-cycle.

Starting Out

Having installed Vagrant and VirtualBox (on Ubuntu: ‘apt install vagrant virtualbox’), the first thing required is a Vagrantfile.

This file defines the required end state of your virtual environment. Here is a simple example:

Vagrant.configure("2") do |config| = "centos/7"
  config.vm.hostname = "vm2" "public_network"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end
end
‘’ defines the image you wish to create a VM from. Many community-provided images are available from Vagrant Cloud. This example uses a CentOS image.

Create a directory and save the above as ‘Vagrantfile’. Inside that directory, running ‘vagrant up’ will retrieve the named image, create and configure an instance as specified:

rich@thevmserver:~/vagrant$ vagrant up
Bringing machine 'vm1' up with 'virtualbox' provider...
==> vm1: Importing base box 'centos/7'...
==> vm1: Checking if box 'centos/7' is up to date...
==> vm1: Running 'pre-boot' VM customizations...
==> vm1: Booting VM...
==> vm1: Waiting for machine to boot. This may take a few minutes...
<-- snipped some lines for brevity -->
==> vm1: Machine booted and ready!
==> vm1: Setting hostname...
==> vm1: Configuring and enabling network interfaces...
vm1: SSH address:
vm1: SSH username: vagrant
vm1: SSH auth method: private key
==> vm1: Rsyncing folder: /home/rich/vagrant/ => /vagrant
==> vm1: Running provisioner: file...

At this point, a fully functional VM is up and running, which can be accessed using ‘vagrant ssh’ (or externally via SSH if a public network was configured):

rich@thevmserver:~/vagrant$ vagrant ssh vm1
[vagrant@vm1 ~]$ hostname

Life Cycle

Some of the more useful life-cycle commands:

  1. up: creates VM instances as defined in config, and starts them
  2. halt: stops VM instances
  3. destroy: stops and deletes VM instances

Multiple VMs

Vagrant is also capable of managing multiple VMs in a single config. To do this, use ‘config.vm.define’ for each instance:

Vagrant.configure("2") do |config|

  config.vm.define "vm1" do |vm1| = "centos/7"
    vm1.vm.hostname = "vm1"
  end

  config.vm.define "vm2" do |vm2| = "centos/7"
    vm2.vm.hostname = "vm2"
  end "public_network"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end
end
Note that any config in the outer scope (e.g. is applied to all VM instances.
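Since a Vagrantfile is plain Ruby, the define blocks for a larger cluster can also be generated in a loop. A sketch of the pattern (not from the original post):

```ruby
Vagrant.configure("2") do |config|
  # Generate vm1..vm3, each with a matching hostname
  (1..3).each do |i|
    config.vm.define "vm#{i}" do |node| = "centos/7"
      node.vm.hostname = "vm#{i}"
    end
  end
end
```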


SSH Keys

By default, Vagrant generates an SSH key for each VM and uses it to replace the insecure key most images are built with. To use a pre-defined key generated with ssh-keygen, update the Vagrantfile with:

config.ssh.insert_key = false
config.ssh.private_key_path = ["~/.ssh/id_rsa", "~/.vagrant.d/insecure_private_key"]
config.vm.provision "file", source: "~/.ssh/", destination: "~/.ssh/authorized_keys"

Change ~/.ssh/id_rsa and ~/.ssh/ to point at where your keys live.

Docker Container Upgrades on Synology

Synology provide great hardware for home storage, and moving up a tier from their budget offering adds a great deal of functionality, including full Docker support.

Inevitably Docker images will require updating; these are some quick notes on how to do that.

Docker Data Storage

Generally speaking, storing state within a Docker container is discouraged. As the container could go away at any point, persistent data should be stored outside of it to separate the application from its state.

Docker provides various storage drivers to handle different workloads, each with different performance characteristics.

On Synology systems, Docker uses mounted volumes to map container directories onto the host device’s file system, keeping application state on the NAS storage.
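As an illustration of keeping state outside the container, a Compose-style definition might bind a directory on the NAS volume into the container. The image name and paths here are hypothetical:

```yaml
version: "3"
services:
  myapp:
    image: example/myapp:latest   # hypothetical image
    volumes:
      # The container's /data lives on the NAS, so it survives container recreation
      - /volume1/docker/myapp/data:/data
```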

Upgrade Process

This makes the upgrade process very straightforward: simply fire up the Synology Docker application, then:

  1. Go to ‘Registry’, search for the image you wish to upgrade, and download it (this will overwrite the existing version)
  2. Go to ‘Container’, and stop the container you want to upgrade
  3. Clear the container (this recreates it from the new image, keeping its settings)
  4. Start the container back up

That’s it, the container should now be on the latest version of the image, with all its previous state persisted.