A Guide to Container Lifecycle Management

Containers have changed the way we develop and maintain applications. One of the main promises of containers is that you’ll be able to ship software faster.

But sometimes how that happens seems a bit obscure. If you want to understand the benefits of containers, you first need to know what container lifecycle management is.

Once you understand that, it’s going to be easier to connect all the dots; then the aha moment will come naturally.

In this guide, I’ll use the Docker container engine—that way it’s easier to understand the lifecycle management behind it. Commands might be different in other container engines, but the concept is still valid.

I’ll start with application development, and I’ll finish with how to ship application changes. A container’s lifecycle only takes minutes to complete, and it’s a reusable process.

1. Everything Should Start With a File Definition

Let’s assume that you’re not using containers. The very first step then is to containerize your application. In Docker, this is as easy as creating a file called Dockerfile.

A Dockerfile is where you define all of an application’s dependencies and how to run it. For example, this is a Dockerfile that you could use for an application written in Go:

# Build stage: compile the Go application into a binary
FROM golang:1.6-alpine

RUN mkdir /app
ADD . /app/
WORKDIR /app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# Final stage: a minimal image that contains only the compiled binary
FROM alpine
COPY --from=0 /app/main /app
EXPOSE 80
CMD ["/app"]

And this is the Go application that you have in an app.go file:

package main

import (
  "fmt"
  "net/http"
  "os"
)

// handler responds with the hostname of the machine (or container) that processed the request.
func handler(w http.ResponseWriter, r *http.Request) {
  var name, _ = os.Hostname()

  fmt.Fprintf(w, "<h1>This request was processed by host: %s</h1>\n", name)
}

func main() {
  fmt.Fprintf(os.Stdout, "Web Server started. Listening on 0.0.0.0:80\n")
  http.HandleFunc("/", handler)
  if err := http.ListenAndServe(":80", nil); err != nil {
    fmt.Fprintf(os.Stderr, "Web Server failed: %s\n", err)
    os.Exit(1)
  }
}

In this simple example, you’ll have two files in the same folder like this:

- Dockerfile
- app.go

As you can see, the Dockerfile contains all the instructions needed to run the application. It also gives visibility to everyone on the team, so it’s easy to make changes.

Without a Dockerfile, you’ll need to run all those instructions in every environment where you want to deploy the application. (Assuming that the environment has Go 1.6 installed.)
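
On a server, those manual steps would look roughly like this (a sketch mirroring the Dockerfile instructions above, and assuming the source code has already been copied to the machine):

# Build and run the application by hand, repeating the Dockerfile instructions
mkdir /app
cp -r . /app/
cd /app
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
./main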

You can use Docker without a Dockerfile by:

  • remotely accessing the running container,
  • making any changes,
  • and then committing the changes.
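
That workflow might look something like this (the container name goapp and the patched tag are just placeholders):

# Open a shell inside the running container and make changes by hand...
docker exec -it goapp sh

# ...then save the modified container as a new image
docker commit goapp christianhxc/goapp:1.0-patched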

But when you do that, you’re using containers as if they were virtual machines. They’re not virtual machines.

A Dockerfile is like source code. And the idea is that (among other benefits) when you fix a problem, there’s a record of exactly what you did to fix it.

2. Generate a Reusable Container Image

What should you do with a Dockerfile aside from documenting the steps to run the application, including its dependencies?

Build a container image that you can run anywhere Docker is installed. This building phase should be done only once; I’ll explain why in a moment.

Let me start by showing you how to build a container image using a Dockerfile. To build a container image you need to run a command like this one:

docker build -t christianhxc/goapp:1.0 .

There are a lot more options that you could use, but you should at least include a proper name for the image and the build context that Docker will use.

In this case, the dot at the end means that Docker will use the current directory as the build context. Now what’s important here is the name, or tag, of the image.

The tag usually consists of the following elements:

  • The Docker registry name. You could specify just a username (as I did) if the registry is the Docker Hub.
  • A slash to indicate where the registry name ends.
  • The name of the application. It could be whatever you want, but it should stay the same across versions.
  • A colon to indicate where the name of the application ends.
  • The version number of the image using the semantic versioning format.
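
Putting those elements together, the image name from the build command above breaks down like this:

christianhxc/goapp:1.0
# christianhxc -> the registry name (a Docker Hub username in this case)
# goapp        -> the name of the application
# 1.0          -> the version number of the image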

The version number will change in every build when you create a new image version of the application. You’ll create a new version when the base image, the dependencies, or the source change.

The source code is the most common reason why you need to create a new version.

An image name is important because it’s how you’ll roll out a new application change, or easily roll back when there’s a problem.
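
For example, after a source code change you’d build a new image with a bumped version number (the 1.1 tag here is just an example):

# A source change means a new build with a new version tag
docker build -t christianhxc/goapp:1.1 .

# The old 1.0 image still exists, so rolling back is just a matter of using the old tag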

3. Run All the Necessary Tests Locally

One of the benefits of using containers is that you can run your application locally in the same way you would in a production environment.

You might have heard that with containers the typical phrase “It works on my machine” will vanish, but that’s not true in all cases. There will still be problems; at times your application will run perfectly on your computer but not in production.

That will mostly be because you’re not configuring the application’s external dependencies properly. But one thing is certain: you reduce the chances of screwing things up by testing locally before continuing with the lifecycle of the container.

To run the application (in any environment) you need to run the following command:

docker run -d -p 80:80 christianhxc/goapp:1.0

The “-d” parameter tells Docker to run the application in the background of your terminal, in detached mode. You also need to tell Docker on which local port you’d like to run the application.

You can see “-p 80:80” in the command, where the first “80” is the host port. The second “80” is the container port, the one defined in the EXPOSE instruction in the Dockerfile.

If everything went smoothly, your application should be running on local port 80. Go to http://localhost/, and you’ll see the app that’s running in the container. If you need to spin up a new container using the same image, you just change the host port.
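
For example, a second container from the same image could map a different host port (8080 here is arbitrary):

# Run another instance of the same image on host port 8080
docker run -d -p 8080:80 christianhxc/goapp:1.0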

4. Push the Container Image to a Centralized Registry

Once you’re OK with the new container image you created, it’s time to make it available to other environments. You need a centralized image registry to push the image to, so that anyone with access will be able to pull it.

In Docker, you can use Docker Hub, which is free, but the images you put there are public. There’s a paid version where you can host your registry privately.

There are other registry options for container images, like Nexus from Sonatype, ECR from AWS, ACR from Azure, or Google’s container registry.

It doesn’t matter where the container images are registered; with Docker, you push them by running the following command:

docker push christianhxc/goapp:1.0

See? All the commands we saw here are pretty similar.

In this case, we indicate the action and the full image name. You need to have signed in to the registry beforehand, even if you chose Docker Hub.
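
Signing in is a single command; for Docker Hub you don’t need any arguments, and for other registries you pass the registry’s address (registry.example.com below is only a placeholder):

# Sign in to Docker Hub (you'll be prompted for credentials)
docker login

# Or sign in to a different registry by passing its address
docker login registry.example.com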

It’s important to mention that even though anyone could run this command on their computer, it’s better if you delegate this part to a centralized integration tool like Jenkins.

By doing so, you enforce that everyone makes changes to the Dockerfile in the code repository. No human has to be responsible for building and pushing the images, because you can automate the process, even the testing part.
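
As a rough sketch, the steps such a tool would run on every change to the repository might look like this (the version tag, container name, and smoke test are only examples):

# Build the image from the Dockerfile in the repository
docker build -t christianhxc/goapp:1.1 .

# Run the container and smoke test it
docker run -d -p 80:80 --name goapp-test christianhxc/goapp:1.1
curl -f http://localhost/
docker rm -f goapp-test

# Push the image to the centralized registry
docker push christianhxc/goapp:1.1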

5. Run a Vulnerability Scan on the Container Image

Docker images are built upon other image layers, like when you define the base image with the FROM instruction in a Dockerfile. You might not know how that base image was built, and if it comes from a public registry, it might contain security vulnerabilities.

Therefore, it’s crucial that you include a security scan in the lifecycle of all the container images you build.

There are a lot of tools out there for the job, but I’ll use one that’s commonly used to scan container images for vulnerabilities.

CoreOS open sourced a tool called Clair a few years ago. Clair is a static analysis tool that finds vulnerabilities in Docker and appc containers. It checks each container layer against well-known vulnerabilities from the CVE database and from the security trackers of Linux distributions like Red Hat, Ubuntu, and Debian.

You can integrate Clair with your image registry (like ECR, Docker Hub, or GCR) using another tool called Klar. You’d use these tools together to run a security scan with a command like the one below:

CLAIR_ADDR=localhost CLAIR_OUTPUT=High CLAIR_THRESHOLD=10 DOCKER_USER=docker DOCKER_PASSWORD=secret klar postgres:9.5.1

The above command sets a few environment variables for the scan, and at the end it calls the “klar” tool with the image to scan.

You’ll receive an exit code of 0 if no vulnerabilities are found, or if the number of vulnerabilities is under the defined threshold. It will return 1 if the image is considered vulnerable, and 2 if there’s an error and the scan couldn’t run.
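
Those exit codes make it easy to stop the pipeline when an image is vulnerable. A minimal sketch in a shell step could look like this (the Clair address and the image being scanned are just examples):

# Fail the pipeline if klar reports vulnerabilities (exit code 1) or an error (exit code 2)
CLAIR_ADDR=localhost CLAIR_OUTPUT=High CLAIR_THRESHOLD=10 klar christianhxc/goapp:1.0
if [ $? -ne 0 ]; then
  echo "Image is vulnerable or the scan failed, aborting."
  exit 1
fi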

6. Deploy at Scale Using an Orchestration Tool

Now it’s time to use the container image outside your local computer. Anyone who has access to the image registry where you pushed the container image and wants to use the image, either to extend it or to instantiate it, has to run the following command:

docker pull christianhxc/goapp:1.0

But if you’re going to use containers at scale, you won’t want to run that command on each server where the application needs to run. If you don’t have a big cluster of servers, running the command yourself isn’t a problem.

But what if a server doesn’t have enough resources to run the container; how would you know? Or what if a running container terminates and you need to spin up a new one again?

After all, it looks like containers are not going to help, right? Well, to run containers at scale you need an orchestrator.

There are many different orchestrators, but the most popular ones are Kubernetes, Docker Swarm, and DC/OS.

Orchestrators are in charge of not just pulling the container image, but also of managing the container workload.
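
For example, if the orchestrator is Kubernetes, running several copies of the image across the cluster takes only a couple of commands (the deployment name and the replica count are arbitrary here):

# Create a deployment that pulls the image and keeps its containers running
kubectl create deployment goapp --image=christianhxc/goapp:1.0

# Scale it out to three replicas; Kubernetes schedules them across the cluster
kubectl scale deployment goapp --replicas=3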

It’s a Repeatable Process

Forget about how you traditionally managed your applications—with containers the lifecycle is different. Once you finish the full lifecycle of the container, you just need to repeat the lifecycle when you want or need to change something in your application.

As you might have noticed, all the commands in Docker are pretty similar to the ones you’d use in Git. So it won’t be a problem for a developer to adapt to this new way of working.

Actually, many developers like this approach more, because there’s nothing like being able to test your own changes in an environment as similar to production as possible.