How to stop/crash/fail a pod manually in Kubernetes/Openshift

6/15/2019

I'm running rook-ceph-cluster on top of AWS with 3 masters - 3 worker node configuration. I have created my cluster using this.

Each worker node has 100 GiB of storage.

After setting everything up, I have my pods running (6 pods to be exact, 3 for the masters and 3 for the worker nodes).

How can I crash/fail/stop those pods manually (to test some functionality)?

Is there any way I can add more load manually to those pods so that they crash?

Or can I somehow make them run Out Of Memory?

Or can I simulate intermittent network failures and disconnection of nodes from the network?

Or is there any other way, such as writing a script, to prevent a pod from being created?

-- Rajat Singh
amazon-web-services
kubectl
kubernetes
openshift

1 Answer

6/16/2019

You can delete pods manually, as Graham mentioned, but the rest are trickier. To simulate an OOM, you could kubectl exec into the pod and run something that burns up RAM, or lower the pod's memory limit below what it actually uses. Simulating network issues would be up to your CNI plugin, and I'm not aware of any that support failure injection. To prevent a pod from being created, you can set an affinity that no node satisfies.
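The delete and OOM approaches above can be sketched with kubectl. This is a hedged sketch, not a verified recipe: the pod name, namespace, and deployment name are placeholders, and the in-pod commands assume the container image ships a shell (the `tail /dev/zero` trick works only if `tail` is present; `stress` must be installed in the image):

```shell
# Kill a pod; its controller (Deployment/StatefulSet) should recreate it.
kubectl delete pod <pod-name> -n rook-ceph

# Burn RAM from inside the pod until the kernel OOM-kills the container.
# Assumes the image has a shell and coreutils; 'tail /dev/zero' grows
# memory until the container hits its limit.
kubectl exec -it <pod-name> -n rook-ceph -- tail /dev/zero

# Alternatively, if the 'stress' tool is available in the image:
kubectl exec -it <pod-name> -n rook-ceph -- stress --vm 1 --vm-bytes 512M

# Or lower the memory limit below the pod's actual working set, so the
# kubelet OOM-kills it on its own (this triggers a rolling restart).
kubectl set resources deployment <deployment-name> -n rook-ceph --limits=memory=64Mi
```

Whichever route you take, watch the pod with `kubectl get pods -n rook-ceph -w` to confirm the restart/OOMKilled status you were trying to provoke.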

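For the "prevent a pod from being created" case, one sketch is a required node affinity keyed on a label that no node carries, so the scheduler leaves the pod Pending forever. The label key and values below are deliberate placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: unschedulable-test
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nonexistent-label    # no node has this label, so nothing matches
            operator: In
            values: ["never"]
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```

Applying a patch like this to an existing Deployment's pod template has the same effect: new replicas stay Pending, which is a cheap way to test how the rest of the cluster reacts to a pod that never comes up.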
-- coderanger
Source: StackOverflow