How to apply patch to make pod phase go to Succeeded or Failed?

11/18/2020

This is a follow-up to my last question - How to programmatically modify a running k8s pod status conditions? - after which I realized you can only patch the container spec in the Deployment manifest and let the controller seamlessly roll the changes out to the Pods through the ReplicaSet it creates.

Now my question is how to apply a patch to make the Pod phase go to Succeeded or Failed. I know that, for example, for the phase to reach Succeeded, all containers need to terminate successfully and must not be restarted. My intention is not to modify the original command and arguments from the container image, but to apply a patch that introduces a custom command overriding the one from the image.

So I attempted to run exit 0 as below:

kubectl -n foo-ns patch deployment foo-manager -p '
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "container1",
            "command": [
              "exit",
              "0"
            ]
          },
          {
            "name": "container2",
            "command": [
              "exit",
              "0"
            ]
          }
        ]
      }
    }
  }
}'

But since my container file-system layers are built FROM scratch, there are no native commands available other than the original executable it is supposed to run - not even a shell providing the exit built-in.

What's the best way to make the Pod transition to either of those phases by patching it?
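Presumably the override fails because exit is a shell built-in rather than a binary, so kubectl never finds an exit executable to run. With a shell-equipped image the patch could wrap the command in sh -c - for a FROM scratch image that would also mean swapping the image for one that ships a shell, e.g. busybox (untested sketch; image name is illustrative):

```json
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "container1",
            "image": "busybox",
            "command": ["sh", "-c", "exit 0"]
          }
        ]
      }
    }
  }
}
```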

-- Inian
kubernetes
kubernetes-deployment
kubernetes-python-client

1 Answer

11/18/2020

how to apply patch to make the Pod phase to go to Succeeded or Failed

Pods are intended to be immutable - don't try to change them; instead, replace them with new Pods. You can create a ReplicaSet directly, but mostly you want to work with a Deployment, which replaces the current ReplicaSet on every change to the Pod template.

Basically I'm testing whether one of my custom controllers can catch a pod's phase (and act on it) when it is stuck in a certain state, e.g. Pending

All Pods go through those phases. For testing, you can create Pods directly, with different binaries or arguments.
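For example, a pair of throwaway Pods like these (names and image are illustrative) should land in Succeeded and Failed respectively, since restartPolicy: Never lets the terminal phase stick:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-succeeded
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 0"]   # terminates with 0 -> phase Succeeded
---
apiVersion: v1
kind: Pod
metadata:
  name: test-failed
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]   # terminates non-zero -> phase Failed
```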

To test the Pending phase, you could log the phase in your controller when watching Pods, or you could mock the Pod so that it is in phase Pending.
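One way to make mocking easy is to keep the phase-handling logic in a pure function and feed it stand-in objects shaped like a V1Pod, so no cluster is needed. A minimal sketch (the is_stuck_pending helper and the SimpleNamespace stand-ins are hypothetical, not part of any client library):

```python
import datetime
from types import SimpleNamespace

def is_stuck_pending(pod, now, threshold_seconds=300):
    """Return True if the pod has sat in phase Pending longer than the threshold."""
    if pod.status.phase != "Pending":
        return False
    age = (now - pod.metadata.creation_timestamp).total_seconds()
    return age > threshold_seconds

# Stand-ins exposing only the attributes the helper reads
now = datetime.datetime(2020, 11, 18, 12, 0, 0)
old_pending = SimpleNamespace(
    metadata=SimpleNamespace(creation_timestamp=now - datetime.timedelta(minutes=10)),
    status=SimpleNamespace(phase="Pending"),
)
fresh_running = SimpleNamespace(
    metadata=SimpleNamespace(creation_timestamp=now - datetime.timedelta(minutes=10)),
    status=SimpleNamespace(phase="Running"),
)

print(is_stuck_pending(old_pending, now))   # True: Pending for 10 minutes
print(is_stuck_pending(fresh_running, now)) # False: not Pending at all
```

The same function can later be wired to a real watch on Pods; only the objects it receives change.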

I don't know kubernetes-python-client, but client-go does have fake clients that can work with Pods, including UpdateStatus:

func (c *FakePods) UpdateStatus(ctx context.Context, pod *corev1.Pod, opts v1.UpdateOptions) (*corev1.Pod, error)

Now, looking at the Python client, it does seem to lack this feature: Issue #524 fake client for unit testing
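Until that lands, the standard library's unittest.mock can stand in for the client: stub list_namespaced_pod to return pods already in the phase you want to exercise. A sketch (the pod objects are plain mocks shaped like V1Pod, not real API objects, and the namespace/pod names are illustrative):

```python
from unittest.mock import MagicMock

# Build a mock CoreV1Api whose list_namespaced_pod returns one Pending pod
pending_pod = MagicMock()
pending_pod.metadata.name = "foo-manager-abc123"
pending_pod.status.phase = "Pending"

api = MagicMock()
api.list_namespaced_pod.return_value.items = [pending_pod]

# Controller-side code under test sees a Pending pod without touching a cluster
pods = api.list_namespaced_pod(namespace="foo-ns").items
stuck = [p.metadata.name for p in pods if p.status.phase == "Pending"]
print(stuck)  # ['foo-manager-abc123']
```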

-- Jonas
Source: StackOverflow