Quarantine pod and create a replacement

4/15/2020

I have an odd environment where I am currently not able to ship the k8s pod logs out to a logging service (Loggly, Sumo, etc.). Occasionally one of the pods starts having issues in prod, and the fix is to delete the pod and let it be replaced, which the Operations team manages (I do not have direct access to do this).

What I would like to do is keep the problem pod around, stop traffic from being routed to it, and have the controller create a replacement. That way I can dig through the pod and its logs to determine what happened.

I looked into Labeling to do this here: https://kubernetes.io/docs/concepts/configuration/overview/

The important section is this:

You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and Services match to Pods using selector labels, removing the relevant labels from a Pod will stop it from being considered by a controller or from being served traffic by a Service. If you remove the labels of an existing Pod, its controller will create a new Pod to take its place. This is a useful way to debug a previously “live” Pod in a “quarantine” environment. To interactively remove or add labels, use kubectl label.
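If I am reading that correctly, the interactive way to do this would be something like the following (the pod and label names are placeholders), where the trailing dash removes a label:

kubectl label pod <pod-name> <label-key>-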

Rather than doing it interactively, I went in and edited the pod directly using:

kubectl edit pod <pod-name>

Then I removed the labels section entirely and saved.

When I do that, though, nothing seems to happen. The pod remains and is not replaced, as far as I can tell from checking:

kubectl get pods
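
I assume the way to double-check that the label edit actually took would be:

kubectl get pods --show-labels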

Is it moved somewhere that I am not seeing? Should removing the labels work the way I expect (i.e., the controller creates a new pod to take its place)?

Any help would be appreciated. Thanks.

Edit 1:

Removing or changing the labels on the pod, as suggested in the post by mdaniel, does not remove the pod or spin up a replacement.

I am deploying with Helm, so there is no run label like in the example, but I do have the Service selector and pod labels set up like so:

Service Selector yaml section:

  selector:
    app.kubernetes.io/instance: foo
    app.kubernetes.io/name: bar

Pod Label section:

  labels:
    app.kubernetes.io/instance: foo
    app.kubernetes.io/name: bar
    controller-revision-hash: some-hash
    statefulset.kubernetes.io/pod-name: pod-name

I change the Pod Label section to:

  labels:
    app.kubernetes.io/instance: i-am-debugging-this-pod
    app.kubernetes.io/name: i-am-debugging-this-pod
    controller-revision-hash: some-hash
    statefulset.kubernetes.io/pod-name: pod-name

And then... nothing happens. The pod just keeps chugging along. It seems like it SHOULD work, but nada.
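
In case it is relevant, I assume the StatefulSet's own selector could be checked with something like this (the StatefulSet name is a placeholder):

kubectl get statefulset <statefulset-name> -o jsonpath='{.spec.selector.matchLabels}'

I would expect that to show the same two app.kubernetes.io labels as the Service selector above.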

-- Aaron L.
kubernetes
kubernetes-pod

1 Answer

4/16/2020

Two answers. One is that you can do this with your readiness probe; once the probe fails, the Service stops routing traffic to the pod without deleting it. Two, logs from the previous container instance are kept around for a little while anyway, so maybe just use those? Use the -p option to kubectl logs.
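
To sketch the first option (the container name, path, port, and timings below are placeholders, not from the question), a readiness probe along these lines will pull the pod out of the Service endpoints as soon as the probe starts failing, without deleting the pod:

  containers:
    - name: app
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
        failureThreshold: 3

For the second option, -p (--previous) prints the logs of the previous instance of the container in the pod, e.g.:

kubectl logs <pod-name> -p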

-- coderanger
Source: StackOverflow