5 container security tips for advanced adopters

This post was originally published here by Jenks Gibbons.

Last year CloudPassage hit some big container milestones. We launched CloudPassage Container Secure, our security solution for containers, container images, and the hosts they run on. We also served up some helpful content, offering tips for early adopters of Docker to jumpstart their security. This year, we’re ready to dive in deeper.

So if you’re at an organization that tested containers and quickly began integrating them into your various workloads, this guide is for you. Below are our five critical tips for advanced Docker container adopters. And make no mistake: to keep your containers secure, all five of these tips should be employed.

#1. When it comes to Docker images and secrets, don’t bake credentials into the image. Why, you ask? Because images are easy to bundle and ship around: they’re zipped archives that can be stored in a Docker registry, and each image layer represents a filesystem delta from the prior step in the build process.

Picture this: a developer in a hurry copies credentials into a layer, uses them, and removes them in a later layer. If the container is then instantiated and inspected, the secrets file will not appear with an ls. If all the layers are downloaded, however, the file can still be found under /var/lib/docker/overlay2 (issue a tree there to see it). This is because, rather than changing a prior layer, OverlayFS creates a whiteout file so it appears the file was deleted. To truly remove a file in the build process, it either needs to be removed in the same layer that created it, or a multi-stage build needs to be implemented.
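
As a minimal sketch of the multi-stage approach (the token file, URL, and binary name are illustrative, not from the original post), the builder stage can use a credential while the final image copies in only the finished artifact:

    # builder stage: uses a build-time credential to fetch a private artifact
    FROM alpine:3.8 AS builder
    COPY api_token /tmp/api_token
    RUN apk add --no-cache curl \
        && curl -fsS -H "Authorization: Bearer $(cat /tmp/api_token)" \
           -o /app https://artifacts.example.com/app \
        && chmod +x /app

    # final stage: starts from a clean base; the token and every builder
    # layer are discarded, so no whiteout file can leak the secret
    FROM alpine:3.8
    COPY --from=builder /app /app
    CMD ["/app"]

Only the COPY --from=builder layer lands in the shipped image, so there is no earlier layer for the secret to hide in.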

There are several ways to address this:
As noted by “The Twelve-Factor App,” you can pass secrets as environment variables, for example: docker run --name=qa --env="MYSQL_ROOT_PASSWORD=mypassword" mysql
Use Docker secrets or a Kubernetes Secret to store sensitive information, as in the sketch below.
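
As a hedged example of the Docker secrets route (the secret name is illustrative; Docker secrets require swarm mode, and the official mysql image supports reading the password from a file via MYSQL_ROOT_PASSWORD_FILE):

    docker swarm init                                    # secrets are a swarm-mode feature
    printf 'mypassword' | docker secret create mysql_root_password -
    docker service create --name qa \
      --secret source=mysql_root_password,target=mysql_root_password \
      -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password \
      mysql

The secret is mounted in-memory under /run/secrets/ inside the container rather than being baked into an image layer or exposed in the environment.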

Each approach has trade-offs. Some argue for environment variables because each environment is different and values can be passed at runtime. Others counter that environment variables are stored in the clear, where any process with privileges greater than or equal to the running process can read them. Use whichever makes the most sense for your application.

#2. Don’t use images you don’t trust. If the application is critical, you can always build the image from scratch. Always pull base images into your private registry before building anything off of them; this puts your base images within reach of your image analysis tools (useful if you can’t point them at the public registry).
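
A minimal sketch of that workflow, assuming a hypothetical private registry at registry.example.com:

    docker pull mysql:8.0                                     # fetch the public base image once
    docker tag mysql:8.0 registry.example.com/base/mysql:8.0  # retag it for the private registry
    docker push registry.example.com/base/mysql:8.0           # now your scanners can reach it

Downstream Dockerfiles then reference registry.example.com/base/mysql:8.0 in their FROM line instead of the public image.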

#3. Limit the rate of change. If you must use public images, consider pinning your base image (the FROM line in your Dockerfile) to a digest instead of an image tag (https://docs.docker.com/engine/reference/builder/#from): FROM <image>[@<digest>] [AS <name>]. If you can’t point at the public registry, pull base images into your private registry before building anything off of them, to control the rate of change in the base images your applications are built from.
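
For example (the digest below is a placeholder, not a real value), you can resolve a tag to its immutable digest and pin the FROM line to it:

    # resolve the tag to a content-addressable digest
    docker pull mysql:8.0
    docker inspect --format '{{index .RepoDigests 0}}' mysql:8.0
    # prints something like: mysql@sha256:<digest>

    # Dockerfile: the build now fails if the pinned content disappears,
    # instead of silently picking up whatever the tag points at today
    FROM mysql@sha256:<digest> AS base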

#4. Be aware of the configuration. Always take note of the number of layers in any container image and what those layers contain (e.g., CVE-2017-14992). Being aware of users is just as critical. Not too long ago, a team here at CloudPassage conducted independent research into Docker image layers, analyzing more than 4,900 community-built images from Docker Hub, starting with the most popular. 90.8 percent of those images had no user specified, so they will run as root. Around two percent explicitly specified root, so at least the user was considered.
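
Two quick checks along these lines (the image name is just an example): docker history lists every layer and the instruction that created it, and an empty Config.User means the container will run as root:

    docker history mysql:8.0                                 # one row per layer
    docker inspect --format '{{.Config.User}}' mysql:8.0     # empty output = runs as root

The fix for the latter is to add a non-root account in the Dockerfile and switch to it with the USER instruction.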

#5. Execute careful vulnerability management. There’s no room for slack when it comes to vulnerability management. Always scan all images for CVEs and analyze your running containers to see if they have vulnerable packages, are running as root, are writable, or are rogue.
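
The post doesn’t name tooling, so as one illustrative option: Trivy, an open-source scanner, can report known CVEs in an image, and docker inspect can flag running containers whose effective user is root (the registry path reuses the earlier hypothetical example):

    trivy image registry.example.com/base/mysql:8.0           # list known CVEs in the image
    docker ps -q | xargs docker inspect \
      --format '{{.Name}} user="{{.Config.User}}"'            # empty user means root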
