I have a testing scenario to check whether API requests are handled by another pod when one goes down. I know this is the default behaviour, but I want to simulate the following scenario.
Pod replicas - 2 (pods A and B)
During my API requests, I want to kill/stop only pod A. During downtime of A, requests should be handled by B.
I am aware that we can restart the deployment and also scale replicas to 0 and again to 2, but this won't work for me.
Is there any way to kill/stop/crash only pod A? Any help will be appreciated.
If you want to simulate what happens if one of the pods just gets lost, you can scale down the deployment
kubectl scale deployment the-deployment-name --replicas=1
and Kubernetes will terminate all but one of the pods; you should almost immediately see all of the traffic going to the surviving pod.
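If you want to confirm which pods are actually backing the Service while you do this, one option (assuming there is a Service in front of the deployment; the-service-name below is just a placeholder) is to watch its endpoints:

kubectl get endpoints the-service-name --watch

The terminated pod's IP should drop out of the endpoint list almost as soon as it starts shutting down, so whatever remains in the list is what is still receiving traffic.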
But if instead you want to simulate what happens if one of the pods crashes and restarts, you can delete the pod
# (if you scaled the deployment down in the previous step, scale it back up first)
# kubectl scale deployment the-deployment-name --replicas=2
# find the name of one of the pods, then delete it
kubectl get pods
kubectl delete pod the-deployment-name-12345-f7h9j
Once the pod starts getting deleted, the Kubernetes Service should route all of the traffic to the surviving pod(s) (those with Running status). However, the pod is managed by a ReplicaSet that wants there to be 2 replicas, so as soon as one pod is deleted, the ReplicaSet will immediately create a new one. This is similar to what happens when a pod crashes and restarts, except that a crashed container restarts in the same pod on the same node, whereas a deleted pod's replacement may come back in a different place.
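You can watch this sequence as it happens by leaving a watch running in another terminal while you delete the pod (the pod names you'll see are whatever your deployment generates):

kubectl get pods --watch

You should see the deleted pod move to Terminating and, almost immediately, a new pod with a different random suffix appear in ContainerCreating and then Running.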
As you mentioned, you can manually kill or restart the pod; that is really the only way to test this case. Alternatively you can try crashing the single pod, but in the end it creates the same scenario: the pod will restart automatically.
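One way people sometimes simulate a crash from inside the pod is to signal its main process. Treat this as a sketch only: it assumes the container image has a shell and that the main process actually exits on SIGTERM, so it may not work with every image (deleting the pod, as in the other answer, is the more reliable option). The pod name below is a placeholder:

kubectl exec the-deployment-name-12345-f7h9j -- /bin/sh -c 'kill 1'

If the process exits, the container's restart policy restarts it in place, and kubectl get pods will show the RESTARTS counter go up for that pod.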
Alternatively, you can increase the graceful shutdown period (terminationGracePeriodSeconds) for the deployment's pods. That way a terminating pod stays in the Terminating state for a good amount of time, and you can run your test against the remaining pod during that window.
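A minimal sketch of where that setting lives in the Deployment manifest (names and values here are placeholders; the default grace period is 30 seconds):

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 120
      containers:
      - name: app
        image: your-image:tag

Note that the pod only stays in Terminating for the full period if the container doesn't exit earlier when it receives SIGTERM.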
In Kubernetes, pods that are controlled by a ReplicaSet are recreated if you kill them. So the only way to do this is to scale down the number of replicas. Say your deployment has 4 replicas; you can scale down to 3 by running the command below:
kubectl scale deployment <deployment-name> --replicas=3
My example is shown below:
kubectl scale deployment hello-world --replicas=3
deployment.apps/hello-world scaled
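Once you are done testing, you can scale back to the original count (4 in this example):

kubectl scale deployment hello-world --replicas=4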