
Continuous Delivery Using Docker and Ansible

This article is part of the Continuous Integration, Delivery and Deployment series.

The previous article described several ways to implement Continuous Deployment. Among other things, it showed how to deploy applications as Docker containers and use nginx as the reverse proxy required for the blue-green deployment technique. All of that ran on top of CoreOS, an operating system specifically designed for running Docker containers.

In this article we'll implement the same process using Ansible (an open-source platform for configuring and managing computers). Instead of CoreOS, we'll be using Ubuntu.

Source code used in this article can be found in the GitHub repo vfarcic/provisioning (directory ansible).

Ansible

Ansible is an open-source software platform for configuring and managing computers. It combines multi-node software deployment, ad hoc task execution, and configuration management. It manages nodes over SSH. Modules work over JSON and standard output and can be written in any programming language. The system uses YAML to express reusable descriptions of systems.
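As a taste of the ad hoc side, a single command is enough to run a module against every node in an inventory. The inventory file name hosts below is just an example, not something from this article's repository.

# Ping every host listed in the "hosts" inventory file over SSH
ansible all -i hosts -m ping
# Run an arbitrary command on all of them
ansible all -i hosts -a "uptime"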

The preferred way to work with Ansible is through roles. A role describes a set of tasks that should be run in order to set up something. In our case, we'll use five roles, listed in bdd.yml.

- hosts: all
  remote_user: vagrant
  sudo: yes
  roles:
    - etcd
    - confd
    - docker
    - nginx
    - bdd

The first four roles (etcd, confd, docker and nginx) make sure that the tools we need for blue-green deployment are present. The docker role, for example, is defined as follows:

- name: Docker is present
  apt: name=docker.io state=present
  tags: [docker]
- name: Python-pip is present
  apt: name=python-pip state=present
  tags: [docker]
- name: Docker-py is present
  pip: name=docker-py version=0.4.0 state=present
  tags: [docker]

It installs Docker and python-pip through apt, and docker-py through pip. We need Docker to manage our containers. docker-py is a Python library required by the Ansible docker module that we'll use to run the nginx container.
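The nginx role itself lives in the repository and is not listed in this article. Purely as an illustration, a task that runs an nginx container with the (pre-2.0) Ansible docker module could look along these lines; the container name, image and port mapping are assumptions, not copied from the repo.

- name: nginx container is running
  # Runs the official nginx image and publishes port 80 on the host;
  # names and ports here are illustrative only
  docker: name=nginx image=nginx state=running ports=80:80
  tags: [nginx]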

As you can see, Ansible is very easy to use and understand. Just by reading the YAML files, one can easily see what is going on. Simplicity is one of its main advantages over similar tools like Puppet and Chef. After a very short introduction to Ansible, all we have to do is look for a module that performs the task we need (e.g. apt for installing Debian packages) and describe it as a YAML task.

Deployment

Once we have the tools installed, it's time to take a look at the last Ansible role, bdd. This is where deployment happens. However, before we proceed, let me explain the goals I had in mind before I started working on it.

Deployment should follow the blue-green technique. While the application is running, we deploy a new version in parallel with the old one. The container being deployed has already passed all sorts of unit and functional tests, giving us reasonable confidence that each release works correctly. However, we still need to test it after deployment as a final verification that what we deployed works as expected. Once all post-deployment tests pass, we are ready to make the new release available to the public. We do that by changing the nginx reverse proxy to redirect all requests to the newly deployed release. In other words, we should do the following.

  • Pull the latest version of the application container
  • Run the latest application version without stopping the existing one
  • Run post-deployment tests
  • Notify etcd about the new release (port, name, etc)
  • Change nginx configuration to point to the new release
  • Stop the old release

If we do all of the above, we should accomplish zero-downtime deployment. At any given moment, our application should be available.

On top of the procedure described above, deployment should work both with and without Ansible. While using it helps a lot, all essential elements should be located on the server itself. That means that scripts and configurations should be on the machine we're deploying to and not somewhere on a central server.

The bdd role is as follows.

- name: TOML is present
  copy:
    src=bdd.toml
    dest=/etc/confd/conf.d/bdd.toml
  tags: [bdd]
- name: Config template is present
  copy:
    src=bdd.conf.tmpl
    dest=/etc/confd/templates/bdd.conf.tmpl
  tags: [bdd]
- name: Deployment script is present
  copy:
    src=deploy-bdd.sh
    dest=/usr/local/bin/deploy-bdd.sh
    mode="0755"
  tags: [bdd]
- name: Deployment is run
  shell: deploy-bdd.sh
  tags: [bdd]

The first task makes sure that the template resource bdd.toml is present. confd uses it to specify which template to render, what the destination file is and which command should be executed afterwards (in our case, a restart of the nginx container).
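The actual bdd.toml is in the repository. A minimal sketch of the shape such a confd template resource takes might be the following; the destination path and reload command are illustrative assumptions, while the /bdd/port key matches the one set by the deployment script further down.

[template]
# Template to render and where to put the result
src  = "bdd.conf.tmpl"
dest = "/etc/nginx/sites-enabled/bdd.conf"
# etcd keys the template depends on
keys = [
  "/bdd/port",
]
# Command executed after the destination file changes
reload_cmd = "docker restart nginx"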

The second task makes sure that the confd template bdd.conf.tmpl (https://github.com/vfarcic/provisioning/blob/master/ansible/roles/bdd/files/bdd.conf.tmpl) is present. This template, together with bdd.toml, changes the nginx proxy to point to the new release every time we deploy. That way we'll have no interruption to our service.
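Again, the real template is in the repository; the general idea, with the exact directives here being an assumption, is an nginx configuration whose proxied port is filled in from etcd every time confd renders it.

# bdd.conf.tmpl (illustrative): the port is resolved from etcd at render time
server {
    listen 80;
    location / {
        # 172.17.42.1 is the default docker0 bridge address of the host
        proxy_pass http://172.17.42.1:{{getv "/bdd/port"}};
    }
}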

The third task makes sure that the deployment script deploy-bdd.sh is present, and the last one runs it.

From the Ansible point of view, that's all there is. The real "magic" is in the deployment script itself. Let's go through it.

We start by discovering whether we should do a blue or a green deployment. If the current one is blue, we'll deploy green, and the other way around. Information about the currently deployed "color" is stored in the etcd key /bdd/color.

BLUE_PORT=9001
GREEN_PORT=9002
CURRENT_COLOR=$(etcdctl get /bdd/color)
if [ "$CURRENT_COLOR" = "" ]; then
  CURRENT_COLOR="green"
fi
if [ "$CURRENT_COLOR" = "blue" ]; then
  PORT=$GREEN_PORT
  COLOR="green"
else
  PORT=$BLUE_PORT
  COLOR="blue"
fi

Once the decision is made, we stop and remove any leftover container of that color from a previous deployment. Keep in mind that the current release continues operating and won't be affected until the very end.

docker stop bdd-$COLOR
docker rm bdd-$COLOR

Now we can start the container with the new release and run it in parallel with the existing one. In this particular case, we're deploying the BDD Assistant container vfarcic/bdd.

docker pull vfarcic/bdd
docker run -d --name bdd-$COLOR -p $PORT:9000 vfarcic/bdd

Once the new release is up and running, we can run the final set of tests. This assumes that all tests that do not require deployment were already executed. In the case of the BDD Assistant, unit (Scala and JavaScript) and functional tests (BDD scenarios) are run as part of the container build process described in the Dockerfile. In other words, the container is pushed to the repository only if all tests run during the build pass. However, tests run before deployment are usually not enough. We should verify that the deployed application works as expected. At this stage we usually run integration and stress tests. The tests themselves are also packaged as a container that is run and automatically removed (the --rm argument) once it finishes executing. An important thing to notice is that localhost on the host is, by default, reached through 172.17.42.1 from within a container.

In this particular case, a set of BDD scenarios is run using the PhantomJS headless browser.

docker pull vfarcic/bdd-runner-phantomjs
docker run -t --rm --name bdd-runner-phantomjs vfarcic/bdd-runner-phantomjs \
    --story_path data/stories/tcbdd/stories/storyEditorForm.story \
    --composites_path /opt/bdd/composites/TcBddComposites.groovy \
    -P url=http://172.17.42.1:$PORT \
    -P widthHeight=1024,768

If all tests pass, we store information about the new release in etcd and run confd, which updates our nginx configuration. Until this moment, nginx was redirecting all requests to the old release. Only from this point on, and only if nothing failed earlier in the process, are users of the application redirected to the version we just deployed.

etcdctl set /bdd/color $COLOR
etcdctl set /bdd/port $PORT
etcdctl set /bdd/$COLOR/port $PORT
etcdctl set /bdd/$COLOR/status running
confd -onetime -backend etcd -node 127.0.0.1:4001

Finally, since the new release is deployed, tested and made available to the general public through the nginx reverse proxy, we're ready to stop the old version (it will be removed the next time we deploy that color).

docker stop bdd-$CURRENT_COLOR
etcdctl set /bdd/$CURRENT_COLOR/status stopped

Source code of the deploy-bdd.sh script can be found in the GitHub repo vfarcic/provisioning.

Running it all together

Let's see it in action. I prepared a Vagrantfile that creates an Ubuntu virtual machine and runs the Ansible playbook that installs and configures everything and, finally, deploys the application. Assuming that Git, Vagrant and VirtualBox are installed, run the following.

git clone https://github.com/vfarcic/provisioning.git
cd provisioning/ansible
vagrant up

The first run might take a while since Vagrant and Ansible need to download a lot of things (OS, packages, containers…). Please be patient, especially on a slower connection. The good news is that each subsequent run will be much faster.

To simulate the deployment of a new version, run the following.

vagrant provision        

If you SSH into the VM, you can see that the running version changes from blue (port 9001) to green (port 9002), and the other way around, each time we run vagrant provision.

vagrant ssh
sudo docker ps

Before, during and after deployment, the application will be available without any interruption (zero-downtime). You can check it out by opening http://localhost:8000/ in your favorite browser.
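One quick way to convince yourself there really is no downtime, assuming the port 8000 mapping from the Vagrantfile, is to keep polling the application in one terminal while running vagrant provision in another.

# Print the HTTP status code once per second; it should stay 200
# even while the deployment switches between blue and green
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/
  sleep 1
done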

Summary

As you can see, deployment is greatly simplified with Docker and containers. While, in a more traditional setting, Ansible would need to install a lot of things (JDK, web server, etc.) and keep an ever-increasing number of configuration files properly set, with containers the major role of Ansible is to make sure that the OS is configured, that Docker is installed and that a few other things are properly set. In other words, Ansible continues being useful, while an important part of its work is greatly simplified by containers and the concept of immutable deployments (what we deploy is unchangeable). We're not updating our applications to new versions. Instead, we're deploying completely new containers and removing old ones.

All the code used in this article can be found in the directory ansible inside the GitHub repo vfarcic/provisioning.

Can we put all of this into a Jenkins server and apply the Continuous Deployment process? That's the topic of the article Continuous Integration, Delivery or Deployment with Jenkins, Docker and Ansible.

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

This book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recuperating from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book covers the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We'll go through many practices and even more tools.

