Why is my docker container getting a root group when running in Kubernetes?

2/16/2020

I'm in the process of ensuring that none of our containers run as root, but I'm having a bit of trouble with group membership. The short version: when I build a container and run it locally, I get the following:

docker run -it --entrypoint /bin/sh f83823c8ee6c
~ $ id
uid=1000(metadata) gid=1000(metadata)

However, when I run the same container in our Kubernetes clusters, I get the following:

kubectl -n kube-system get pods -l app=metadata | grep -v NAME | awk '{print $1 }' | xargs -I {} kubectl -n kube-system exec  {} -- id 
uid=1000(metadata) gid=1000(metadata) groups=0(root),1000(metadata)

The fact that the container picks up the root group when running in the cluster is what concerns me and what I'm trying to fix. I expected the same output in the k8s cluster as when running with Docker directly.


For some extra background on our deployments and configuration, here is the Dockerfile this container is built with:

# build stage
FROM golang:1.13-alpine AS build-env
RUN apk add --no-cache --update alpine-sdk curl
ENV REPO_PATH=**redacted**
COPY . $REPO_PATH
WORKDIR $REPO_PATH
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN make dep
RUN make build

# final stage
FROM alpine:3.7
WORKDIR /app
COPY --from=build-env **redacted**/bin/server /app/
RUN apk add --no-cache --update ca-certificates && \
  addgroup -g 1000 metadata && \
  adduser -D -g "metadata user" -H -h "/app" -G "metadata" -u 1000 metadata && \
  chown -R metadata:metadata /app
USER 1000:1000
ENTRYPOINT /app/server

Our Kubernetes version is 1.15.6, and I'm aware that RunAsGroup requires a feature gate. Here are the options our kube-controller-manager is running with (we roll our own clusters and do not use any of the cloud providers' managed offerings):

kubectl -n kube-system get pods -l=k8s-app=kube-controller-manager | tail -1 | awk ' { print $1 }' | xargs -I {} kubectl -n kube-system get pod {} -o=json | jq '.spec.containers[].command' 
[
  "/hyperkube",
  "kube-controller-manager",
  "--log-dir=/var/log/kube-controller",
  "--logtostderr=false",
  "--cluster-cidr=172.16.0.0/16",
  "--allocate-node-cidrs=false",
  "--authentication-kubeconfig=/srv/kubernetes/controller/kubeconfig",
  "--authorization-kubeconfig=/srv/kubernetes/controller/kubeconfig",
  "--cloud-provider=aws",
  "--feature-gates=CustomResourceSubresources=true",
  "--feature-gates=ExpandInUsePersistentVolumes=true",
  "--feature-gates=ExpandPersistentVolumes=true",
  "--feature-gates=TaintNodesByCondition=true",
  "--feature-gates=TTLAfterFinished=true",
  "--feature-gates=RunAsGroup=true",
  "--kubeconfig=/srv/kubernetes/controller/kubeconfig",
  "--root-ca-file=/srv/kubernetes/ca.crt",
  "--service-account-private-key-file=/srv/kubernetes/signing.key",
  "--use-service-account-credentials=true"
]

The deployment's pod-level securityContext contains the following:

kubectl -n kube-system get deployment metadata -o=json | jq .spec.template.spec.securityContext
{
  "fsGroup": 1000,
  "runAsGroup": 1000,
  "runAsNonRoot": true,
  "runAsUser": 1000
}
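
For completeness, the same settings rendered in the deployment manifest look roughly like this (the securityContext values are the real ones from above; the container name and other surrounding fields are abbreviated or illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metadata
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: metadata
  template:
    metadata:
      labels:
        app: metadata
    spec:
      securityContext:
        runAsUser: 1000      # matches the uid created in the Dockerfile
        runAsGroup: 1000     # primary gid for the container process
        runAsNonRoot: true   # kubelet refuses to start the container as uid 0
        fsGroup: 1000        # gid applied to mounted volumes; also added to the supplementary groups
      containers:
      - name: metadata       # container name is illustrative
        image: "**redacted**"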
-- Richard Maynard
docker
kubernetes

1 Answer

2/16/2020

You can provide supplementalGroups under the pod-level securityContext section:

securityContext:
  supplementalGroups: [1000]
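
For example, merged with the pod-level securityContext from the question (only supplementalGroups is new), it would look something like this under spec.template.spec of the deployment:

securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  runAsNonRoot: true
  fsGroup: 1000
  supplementalGroups: [1000]   # additional gids attached to the container process

Note that supplementalGroups, like fsGroup, is a pod-level field and is not accepted in a per-container securityContext.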
-- Subramanian Manickam
Source: StackOverflow