Security restrictions when building Docker images

9/5/2017

The company where I work (a strictly regulated/audited environment) has yet to embrace containers but would like to adopt them for some applications. There is a view that, because the image build process issues commands as root (or as whatever account the user switches to via the USER instruction), building (not running) a container effectively gives a user unfettered root access during the build. This is anathema to them and goes against all manner of company policies. Access to certain commands on our machines is restricted via PowerBroker, i.e. access to those commands requires explicit permission and is logged/subject to audit.

We need to allow container images to be built by a CI/CD system, and ideally also by developers on their local machines. Containers will generally run in Kubernetes, but may be run directly on a VM. I'd like CI build agents to spin up on demand, as there are a lot of developers, so I want to run the build process itself within Kubernetes.

What is the best practice for building Docker containers in this sort of environment, please? Should we look to restrict access to commands within the Dockerfile?

My current thinking for this approach:

CI/CD:

  1. Define a "company-approved" image to act as the build agent within Kubernetes (a sketch of such an image follows this list).
  2. The build-agent image defines a user that the build process runs as (not root).
  3. The build-agent image contains PowerBroker, enabling access to sensitive commands to be locked down.
  4. Scan the Dockerfile for use of the USER instruction and forbid it.
  5. The build agent runs Docker-in-Docker, as described here (https://applatix.com/case-docker-docker-kubernetes-part-2/). This achieves isolation between multiple build instances while ensuring all containers are controlled via Kubernetes.
  6. Images are scanned for security compliance via OpenSCAP or similar. Passing the scan is part of the build process and allows the image to be tagged as compliant and pushed to a registry.

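For illustration, here is a minimal sketch of what such a build-agent image might look like. The registry host, base image, package name and user are all placeholders; how PowerBroker gets installed will depend on your licensing and in-house packaging.

    FROM registry.example.internal/approved/rhel7-base:latest

    # PowerBroker client assumed to be packaged in-house; the package
    # name here is a placeholder.
    RUN yum install -y powerbroker-client && yum clean all

    # Create an unprivileged account and run all build work as it.
    RUN useradd --create-home --uid 1001 builder
    USER builder
    WORKDIR /home/builder
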
I'm uncomfortable with the thinking around (4), as it seems rule-bound (i.e. it's a sort of blacklist approach), and I'm sure there must be a better way.
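
For reference, the check in (4) amounts to little more than the following grep-based sketch. A proper Dockerfile linter such as hadolint parses instructions rather than pattern-matching, but it is still a blacklist at heart:

    #!/bin/sh
    # check-dockerfile: reject any Dockerfile that switches users mid-build.
    # Deliberately naive: it only pattern-matches the USER instruction.
    if grep -qiE '^[[:space:]]*USER[[:space:]]' "${1:-Dockerfile}"; then
        echo "ERROR: the USER instruction is not permitted" >&2
        exit 1
    fi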

Developer's localhost:

  1. Define "company-approved" base images (tagged as such inside a trusted registry).
  2. The base image defines a user that the build process runs as (not root).
  3. The base image contains PowerBroker, enabling access to sensitive commands to be locked down.
  4. Create a wrapper script on the localhost that wraps docker build. There is no direct access to docker build: users must go through the script instead, and access to the script is secured via PowerBroker. The script can also scan the Dockerfile for use of the USER instruction and forbid it (a sketch follows this list).
  5. Pushing images to the registry requires tagging, which requires passing the security compliance scan via OpenSCAP or similar, as above.
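
A minimal sketch of such a wrapper, assuming the USER-instruction check sketched earlier is installed as a separate script; all paths and names are placeholders:

    #!/bin/sh
    # docker-build-wrapper: the only sanctioned route to 'docker build'.
    # Assumes the USER-instruction check above is installed as
    # /usr/local/bin/check-dockerfile.
    set -e
    CONTEXT="${1:?usage: docker-build-wrapper <context-dir> <tag>}"
    TAG="${2:?usage: docker-build-wrapper <context-dir> <tag>}"

    /usr/local/bin/check-dockerfile "$CONTEXT/Dockerfile"
    exec docker build --tag "$TAG" "$CONTEXT"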

I'd like to use the OpenSCAP results plus the CI system to create an audit trail of the images that exist, and similarly for the deploy process. The security team that monitors for CVEs etc. should be able to see which containers exist and have been deployed, trigger rebuilds of images to pick up updated libraries, and flag to developers when containers need to be rebuilt/redeployed. I want to be able to demonstrate that all containers meet a security configuration policy that is itself defined as code.
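
OpenSCAP's oscap-docker front end fits this flow: it can evaluate an image against an SCAP profile and emit a report that can be archived as part of the audit trail. Roughly (the image, profile and data-stream names below are placeholders for whatever your policy-as-code repository defines):

    # Evaluate a built image against a hardening profile; keep the HTML
    # report as an audit artifact.
    oscap-docker image registry.example.internal/app/foo:1.2.3 \
        xccdf eval --profile company_baseline \
        --report foo-1.2.3-report.html \
        company-baseline-ds.xml

    # Separately, check the image's packages against known CVEs.
    oscap-docker image-cve registry.example.internal/app/foo:1.2.3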

Is this a sensible way to go? Is there even a risk in allowing a user to build (but not run) a container image without restriction? If there is not, what's the best way to ensure that a foolish/malicious developer has not undone the best practices baked into the "approved base image", other than manual code review (which will be done anyway, but might miss something)?

By the way, you must assume that all code/images are hosted in-house/on-premises, i.e. nothing is allowed to use a cloud-based product/service.

-- John
docker
dockerfile
kubernetes
security

2 Answers

9/5/2017

There is a view that, because the image build process issues commands as root (or as whatever account the user switches to via the USER instruction), building (not running) a container effectively gives a user unfettered root access during the build.

This view is not correct. When you build an image, all you are doing is creating new Docker layers (files), which are stored under /var/lib/docker/aufs/layers (with the aufs storage driver). There are simply no security concerns when building Docker images.
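
You can see this for yourself: build a trivial image, and each build step shows up as just another layer (the image name here is arbitrary):

    # Create a two-step Dockerfile and build it.
    printf 'FROM alpine:3.6\nRUN echo hello > /greeting\n' > Dockerfile
    docker build -t layer-demo .
    docker history layer-demo    # one row per layer/build step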

There are tools that analyze the security of images you have already built. One example is the security scanning built into Docker Hub.

-- yamenk
Source: StackOverflow

9/5/2017

When docker build runs, each layer executes in the context of a container, so the risks presented by the commands that execute are constrained by what access is available to that container.

Locking down the build environment can be achieved by restricting what the Docker engine instance completing the build is able to do.

Ensuring that user namespaces are used, for example, reduces the risk of a command run inside a container having a wider effect on the host environment.
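
Concretely, with user-namespace remapping enabled on the build host's daemon, root inside a build container maps to an unprivileged UID on the host. A sketch, assuming Docker 1.10+, systemd, and no existing daemon.json to preserve:

    # Enable user-namespace remapping on the build host.
    # Note: restarting the daemon stops running containers.
    echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker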

Of course that doesn't mitigate the risk of a developer curl | bash-ing from an untrusted location, but then what's to stop them doing that outside of Docker? (i.e. what additional risk is introduced by the use of Docker in this scenario?)

If you have a policy of restricting externally hosted code, for example, then one option could be to just restrict access from the Docker build host to the Internet.
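
One concrete lever for that, assuming a Docker release recent enough to support the build --network flag, is to sever the build from the network outright:

    # Build with no network access at all: any RUN step that tries to
    # reach the Internet fails the build.
    docker build --network=none -t myapp .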

If you're making use of Kubernetes for the build process and are concerned about malicious software being executed in containers, it could be worth reviewing the CIS Kubernetes Benchmark and making sure you've locked down your clusters appropriately.
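
If you want those checks automated, one open-source option is Aqua Security's kube-bench, which runs the CIS Kubernetes Benchmark tests. The invocation below is a rough sketch and assumes the binary is installed on each node:

    # Audit node and control-plane settings against the CIS benchmark.
    kube-bench node     # run on worker nodes
    kube-bench master   # run on control-plane nodes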

-- Rory McCune
Source: StackOverflow