Force moving a Pod from one worker Node to another

12/9/2019


I need to force a Pod to move from one OpenShift Node to another in order to run some performance tests. From the documentation it seems that setting nodeSelector in the DeploymentConfig is the way to go, but according to my tests it doesn't work. Here is what I have tried:

Create nginx Pod

oc new-app -f https://raw.githubusercontent.com/sclorg/nginx-ex/master/openshift/templates/nginx.json

The Pod is running on the "ip-10-0-121-229.us-east-2.compute.internal" Node. Now I patch the DeploymentConfig, setting a node selector for the target Node:

oc patch dc nginx-example  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname": "ip-10-0-169-74.us-east-2.compute.internal"}}}}}'

However, the Pod is still running on the same Node. Even after killing the Pod, it gets rescheduled there. Any suggestion? Thanks
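
For reference, the Node a Pod is scheduled on can be checked with a wide pod listing; the NODE column shows the placement:

oc get pods -o wide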

-- Carla
kubernetes
openshift

2 Answers

12/9/2019

You can use the nodeName field in your pod spec to schedule the pod onto a specific node yourself, rather than letting the scheduler do it.

oc explain pod.spec.nodeName

FIELD:    nodeName <string>

DESCRIPTION:
     NodeName is a request to schedule this pod onto a specific node. If it is
     non-empty, the scheduler simply schedules this pod onto that node, assuming
     that it fits resource requirements.

You can patch it in a similar way. Don't forget to delete the nodeSelector field first:

oc patch dc nginx-example -p '{"spec":{"template":{"spec":{"nodeName": "ip-10-0-169-74.us-east-2.compute.internal"}}}}'
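
As a minimal sketch of verifying the result, assuming the patch triggers a new deployment (the rollout command below forces one explicitly if it does not; the node name is the example target from the question):

oc rollout latest dc/nginx-example   # optionally start a new rollout explicitly
oc get pods -o wide                  # the NODE column should now show ip-10-0-169-74.us-east-2.compute.internal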

-- Suresh Vishnoi
Source: StackOverflow

12/9/2019

You can try this:

kubectl get pod -o wide

This will show you the Node (VM) on which your pod is running.

Then execute:

kubectl cordon {name_of_that_node_in_which_POD_is_running}

Then delete the pods that you want moved to another node.

Then run:

kubectl uncordon {the_node_that_was_cordoned_above}
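
Putting the steps together as a minimal sketch (the node name is the one from the question, and the pod name placeholder is illustrative):

kubectl get pods -o wide                                      # find the node the pod is currently on
kubectl cordon ip-10-0-121-229.us-east-2.compute.internal     # mark that node unschedulable
kubectl delete pod <nginx-example-pod>                        # the controller recreates it on another schedulable node
kubectl uncordon ip-10-0-121-229.us-east-2.compute.internal   # make the node schedulable again
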
-- Tushar Mahajan
Source: StackOverflow