TL;DR: Something in my Kubernetes cluster keeps recreating containers from my image, and I can't figure out what!
I created a deployment (web project) and a service (HTTPS service). The deployment created 3 replicas of my app 'webProject'.
After I ran kubectl create -f webproject.yml, it spun everything up, but then my Docker images got stuck somewhere during the rollout.
So I ran kubectl delete deployments/webproject, which removed my deployment. I also removed the HTTPS service.
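For reference, this is roughly the sequence I ran (the service name webproject below is a stand-in for the actual name in my manifest):
kubectl rollout status deployment/webproject   # this is where the rollout hung
kubectl describe deployment webproject         # events here show why a rollout is stuck
kubectl delete deployments/webproject
kubectl delete service webproject              # stand-in name for the HTTPS service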
kubectl get pods
No resources found.
kubectl get deployments
No resources found.
kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h38m
kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
kubemaster100   Ready    master   3h37m   v1.12.1
As you can see, there are no pods, no deployments, and only the master node is listed. But when I connected to the worker node to troubleshoot the images, I noticed it still had containers running with my deployment name.
After I run:
docker stop 'container'
docker rm 'container'
docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS   NAMES
259a77058d24   39825e5b6bcd   "/usr/sbin/apache2ct…"   22 seconds ago   Up 20 seconds           k8s_webServer_webServer-deployment-7696fdd44c-dcjjd_default_fcf8fde0-d0c6-11e8-9f67-bc305be7abdb_2
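Inspecting the new container's labels confirms it was started by Kubernetes rather than by me. This check assumes the kubelet's Docker integration, which attaches io.kubernetes.* labels to the containers it manages:
docker inspect --format '{{ index .Config.Labels "io.kubernetes.pod.name" }}' 259a77058d24        # pod that owns the container
docker inspect --format '{{ index .Config.Labels "io.kubernetes.container.name" }}' 259a77058d24  # container name from the pod spec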
The containers are instantly recreated. Why?
When you delete a node in Kubernetes, it is only removed from etcd, where Kubernetes keeps its state. However, the kubelet is still running on the node itself and may hold a cache of the pods it was told to run (I'm not 100% sure about that). I would try:
systemctl stop kubelet
or
pkill kubelet
Verify that it is not running:
ps -Af | grep '[k]ubelet' # should not return anything (the [k] keeps grep from matching itself)
Then stop and remove your container like you did initially.
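Once the kubelet is stopped, you can also clear out every kubelet-managed container in one pass instead of one at a time. A minimal sketch, assuming the usual k8s_ name prefix that the kubelet's Docker integration uses (it matches the NAMES column in your docker ps output):
# List all containers (running or not) whose name contains k8s_ and force-remove them.
docker ps -a --filter 'name=k8s_' -q | xargs -r docker rm -f
Note that if the kubelet is started again and the node rejoins the cluster, it will recreate whatever the control plane still schedules there.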