Containerization: Why We Use Docker to Deploy Our Software

Docker has changed the way we deploy software. Ten years ago, we kept text files that contained the commands required to rebuild various servers. We ran into all kinds of issues with inconsistent code and unstable infrastructure. Docker containers changed all that. Now, it’s much easier to build an application because the containers are standardized, spun up with every deployment, and help us maintain continuous integration with zero downtime.

Docker’s Popularity Isn’t an Accident

It’s no surprise that Docker surpassed 12 billion all-time pulls this year. The infrastructure as code movement that began with Ansible, Chef, and Puppet is responsible for a major shift in the way we view software infrastructure and deployment. These tools simplify the process of developing, packaging, and shipping programs. However, Docker takes infrastructure as code to the next level by containerizing apps, rebuilding infrastructure on every deploy, and creating a consistent environment across development teams and across local, staging, and production environments.

The core concept behind Docker is containerization. When we use Docker, we’re creating silos where each service can run independently. Essentially, we spin up all new containers, deploy our software to the containers, and run our test suite on the new containers. If everything is successful, we add the new release to the container orchestrator and remove the old containers from it. The most valuable aspect of this is that we can also roll back instantly if needed, since we keep the old release running in its containers.
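The release-and-rollback flow described above can be sketched with plain Docker CLI commands. This is a hedged, single-host illustration, not our actual process (which drives an orchestrator); the image names, container names, and run-tests.sh script are hypothetical:

```shell
# Build the new release and start it alongside the old one
docker build -t myapp:v2 .
docker run -d --name myapp-v2 myapp:v2

# Run the test suite inside the new container (hypothetical script)
docker exec myapp-v2 ./run-tests.sh

# Tests passed: retire the old release
docker stop myapp-v1 && docker rm myapp-v1

# Rollback is the mirror image: the v1 image is still present,
# so a single `docker run` restores the previous release.
```

The key property is that the old release keeps running, untouched, until the new one has proven itself.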

Containers are also more efficient, allowing more apps to run on the same servers. Docker’s containers create an abstraction layer that reduces the need for server-side virtual machines. It also makes virtual machines more flexible, since the machines themselves can utilize containers. This breakthrough for server infrastructure has many large corporations considering Docker as a way to save cost while accelerating app delivery.

Docker Makes Local Development Easier

Another great advantage of Docker is the ease of local development. All of our developers work locally, but previously they used a mix of tools (Vagrant, Homestead, Valet, rails server, npm run, etc.). This led to discrepancies between our development environment and our production environment.

Docker solves all of this. We store our Dockerfile directly in the repo with our code. Developers simply check out the code and run docker-compose up, and our entire environment is spun up locally using containers. We can even split each application into microservices and have a container for each service with its own resources and auto-scaling configuration. The end result is that we don’t have to worry about code working locally but failing in staging or production.
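As a rough illustration, a minimal docker-compose.yml for a web app with one supporting service might look like the following. The service names, port, and database image are hypothetical, not our actual configuration:

```yaml
version: "3"
services:
  app:
    build: .            # uses the Dockerfile stored in the repo
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
```

With a file like this checked into the repo, docker-compose up gives every developer the same stack locally.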

Our Release Process with Docker

One of the challenges with Docker is deploying to production. There’s not yet an established best practice for Docker images and deployment. We’ve developed a few key criteria for our deployment process:

  1. Ease of use - Deployment needs to be easy so that developers do it often. We try to release software quickly and frequently; if the process were hard or buggy, we’d end up releasing less.
  2. Zero downtime - People expect sites to be up all the time. Downtime is unacceptable for a modern company.
  3. Automated deployment - Automating the deployment process reduces risk, since every update follows the same standard procedure on its way to production.

In practice, those three key criteria led us to create development and production workflows that work like this...

Our Development Workflow

  1. Developers work locally using Docker. The developer uses feature branches for new features, following gitflow procedures.
  2. When a feature is complete, the developer merges the feature branch into the develop branch.
  3. On merge into develop, our continuous integration environment will spin up Docker containers and test that feature.
  4. If the tests pass, then a deploy to staging is triggered automatically.

The staging and deployment process takes place in less than 30 seconds, excluding running of the automated tests.
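A pipeline like the one above can be expressed in most CI systems. As a hedged sketch (not our actual pipeline), here is what it might look like in GitLab CI syntax, where the job names and the deploy.sh script are hypothetical:

```yaml
test:
  stage: test
  script:
    - docker-compose up -d
    - docker-compose exec -T app ./run-tests.sh   # hypothetical test script
  only:
    - develop

deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging   # hypothetical deploy script
  only:
    - develop
```

Because the jobs are restricted to the develop branch, a merged feature branch is all it takes to trigger the test-then-deploy sequence.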

Our Production Workflow

  1. Developer will merge the release branch into the master branch.
  2. Upon merge into master, our continuous integration environment will spin up Docker containers and test the release.
  3. If tests pass, a deploy to production is triggered automatically.
  4. New containers are spun up and tests are run again on this new environment.
  5. If tests on containers pass, the new containers are added to the orchestrator and the old containers are removed from it.
  6. The deploy will never go live if any part fails, and it keeps the old version up until the new one is live.

The production workflow generally takes us less than 60 seconds.
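Since the containers run under an orchestrator such as Kubernetes, the keep-old-until-new-is-live behavior maps naturally onto a rolling deployment. A hedged sketch, where the deployment name and image registry are hypothetical:

```shell
# Point the deployment at the new image; the orchestrator starts new
# pods and only removes old ones once the new ones pass health checks.
kubectl set image deployment/myapp app=registry.example.com/myapp:v2

# Watch the rollout; it never goes live if any health check fails
kubectl rollout status deployment/myapp

# Instant rollback to the previous release if anything is wrong
kubectl rollout undo deployment/myapp
```

The rollout commands mirror the workflow steps: old containers stay in service until the new ones are healthy, and rollback is a single command.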

The speed, redundancy, and automation of this workflow make it a clear winner, especially in cases where we’re managing and working on many services. Using Docker with Kubernetes, we can ensure consistent builds, zero downtime, and secure, automated continuous integration of code.

Article originally published on Encryption.