Kubernetes stop containers from restarting automatically

11/12/2018

I recently upgraded Docker for Mac to v18.06 and noticed it can run a local k8s cluster. I was excited to try it out and ran one of our services via kubectl apply -f deployments/staging/tom.yml. The YAML manifest does not have a restart policy specified. I then shut the service down using kubectl delete -f .... Since then, every time I start Docker those containers start automatically. Here is the output of docker ps, truncated for brevity:

CONTAINER ID    IMAGE            COMMAND                  CREATED             NAMES
2794eae1f31e    b06778bfe205     "/bin/sh -c 'java -c…"   27 minutes ago      k8s_tom-joiner_tom-joiner-66fcfd84bc...
8dd19dd65486    b06778bfe205     "/bin/sh -c 'java -c…"   27 minutes ago      k8s_tom-loader_tom-loader-6cb9f7f4fb...
...

However, the containers do not appear to be managed by Kubernetes, so I cannot do kubectl delete -f:

kubectl get pods
No resources found.

How do I permanently shut down the containers and prevent them from restarting automatically? I tried docker update --restart=no $(docker ps -a -q), with no luck.

-- mbatchkarov
docker
kubernetes

1 Answer

11/12/2018

This depends on your specific deployment, but the Kubernetes documentation specifies that

A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always. [...]

So, if you don't want your pod to restart under any circumstances, you have to tell it so explicitly. You don't include the contents of your YAML file, so I can't point at the exact place to set it, but see the sketch below for the general shape.
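For illustration, here is a minimal sketch of where that field lives in a bare Pod manifest; the pod name and image are hypothetical placeholders. Note that a Deployment's pod template only accepts restartPolicy: Always, so Never or OnFailure requires a bare Pod (or a Job):

apiVersion: v1
kind: Pod
metadata:
  name: tom-joiner              # hypothetical pod name
spec:
  restartPolicy: Never          # Always (default), OnFailure, or Never
  containers:
    - name: tom-joiner          # hypothetical container name
      image: example/tom:latest # hypothetical image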


Now, for the problem you're facing: your pods are most likely in a namespace other than default, which is why kubectl get pods came back empty. Use

kubectl get namespaces

to see what you get, then search for pods in those namespaces with

kubectl -n <namespace> get pods

Or, if you're impatient, just get it over with:

kubectl get pods --all-namespaces
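Once you've found the namespace, keep in mind that deleting a pod by itself won't keep it down if a controller owns it; the hash suffix in your pod names suggests a Deployment is recreating them. Assuming the manifest from your question still matches what's deployed, deleting through it in the right namespace should stop the restarts:

kubectl -n <namespace> delete -f deployments/staging/tom.yml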

Reference: kubectl cheat sheet

-- rath
Source: StackOverflow