AWS ECS 'essential' container equivalent in Kubernetes

7/28/2017

We currently schedule Atlassian Bamboo jobs to ECS and are looking into doing the same on Kubernetes. Each job runs the Bamboo agent container plus 1-n side service containers, depending on what the job needs (database, Docker daemon, Selenium, ...). In ECS we mark the main agent container as 'essential', so when the agent finishes its work and exits, the entire ECS task collapses and all the other side containers exit with it.
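
Roughly, the ECS side of this looks like the following task definition; the image names and memory values here are made up for illustration:

aws ecs register-task-definition \
    --family bamboo-job \
    --container-definitions '[
        {"name": "bamboo-agent", "image": "my-registry/bamboo-agent",   "memory": 2048, "essential": true},
        {"name": "selenium",     "image": "selenium/standalone-chrome", "memory": 1024, "essential": false}
    ]'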

How would we do the same thing in Kubernetes? It seems like our only option is to regularly poll the cluster, look for pods whose bamboo-agent container has terminated, and delete those pods from outside (sketched below). Is there a way to make the pod auto-collapse/terminate when one of its containers dies?
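
The polling workaround we have in mind would look roughly like this; the app=bamboo-agent label and the bamboo-agent container name are placeholders for whatever the pods actually use:

for pod in $(kubectl get pods -l app=bamboo-agent -o jsonpath='{.items[*].metadata.name}'); do
    # Check whether the bamboo-agent container in this pod has terminated.
    reason=$(kubectl get pod "$pod" \
        -o jsonpath='{.status.containerStatuses[?(@.name=="bamboo-agent")].state.terminated.reason}')
    if [ -n "$reason" ]; then
        # Agent is done; delete the pod so the side containers go away too.
        kubectl delete pod "$pod"
    fi
done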

-- mkleint
amazon-ecs
kubernetes

2 Answers

8/9/2017

One way appears to be to kill the pod from inside, using a trap function in the container's entrypoint script. It requires the right permissions on the pod's service account and the ability to make an HTTP request to the Kubernetes API from inside the container.

function kube_cleanup {
    # Only run when inside a Kubernetes pod (the service account volume exists).
    # Assumes KUBE_POD_NAME is passed to this container (e.g. via the downward API).
    if [ -f '/var/run/secrets/kubernetes.io/serviceaccount/namespace' ]; then
        namespace=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
        token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
        kube_url="https://kubernetes.default/api/v1/namespaces/$namespace/pods/$KUBE_POD_NAME"
        # Ask the API server to delete this pod, which tears down all its containers.
        curl -v --tlsv1.2 \
             --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
             -H "Authorization: Bearer $token" \
             -X DELETE "$kube_url"
    fi
}
# Run the cleanup whenever the entrypoint script exits.
trap kube_cleanup EXIT
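
For this to work, the container needs to know its own pod name and the pod's service account needs permission to delete pods. A rough sketch of both follows; the names (bamboo-job, my-registry/bamboo-agent, the default service account) are illustrative assumptions, not anything from the question, so adjust them to your setup:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bamboo-job
spec:
  restartPolicy: Never
  containers:
  - name: bamboo-agent
    image: my-registry/bamboo-agent          # assumed image name
    env:
    - name: KUBE_POD_NAME                    # consumed by the trap function above
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
  - name: selenium
    image: selenium/standalone-chrome        # example side container
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-self-delete
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-self-delete
subjects:
- kind: ServiceAccount
  name: default                              # service account the pod runs as
  namespace: default                         # adjust to the pod's namespace
roleRef:
  kind: Role
  name: pod-self-delete
  apiGroup: rbac.authorization.k8s.io
EOF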
-- mkleint
Source: StackOverflow

7/28/2017
-- Sebastien Goasguen
Source: StackOverflow