Kubectl not reflecting node after deleting and recreating it with kops

5/12/2018

One of my nodes went into NotReady state, so I tried deleting and recreating it. But now kubectl is not reflecting the recreated node, even though kops says the node is ready and I can see the new instance in the AWS console. It looks like kubectl is not getting updated. The delete/recreate was done roughly as shown below.
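For reference, the delete/recreate was roughly along these lines (a sketch; the node name and instance id are placeholders, and the exact commands I used may have differed slightly):

# remove the broken node from the cluster
kubectl drain ip-xxx-xx-xx-xxx.ap-south-1.compute.internal --ignore-daemonsets --force
kubectl delete node ip-xxx-xx-xx-xxx.ap-south-1.compute.internal

# terminate the backing EC2 instance so the instance group's Auto Scaling group launches a replacement
aws ec2 terminate-instances --instance-ids i-xxxxxxxxxxxxxxxxx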

kops get instancegroups --name kubernetes.xxxxxx.xxx --state s3://kops-state-xxxxxxxx

NAME                ROLE    MACHINETYPE MIN MAX ZONES
master-ap-south-1a  Master  t2.micro    1   1   ap-south-1a
nodes               Node    t2.micro    2   2   ap-south-1a
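For completeness, node readiness can also be checked from kops by validating the cluster; a sketch of that command, using the same (redacted) cluster name and state store as above:

kops validate cluster --name kubernetes.xxxxxx.xxx --state s3://kops-state-xxxxxxxx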

In kubectl:

kubectl get nodes
NAME                                           STATUS         AGE       VERSION
ip-xxx-xx-xx-xxx.ap-south-1.compute.internal   Ready,node     32d       v1.8.7
ip-xxx-xx-xx-xxx.ap-south-1.compute.internal   Ready,master   32d       v1.8.7
-- Gaudam Thiyagarajan
amazon-web-services
kops
kubectl
kubernetes

0 Answers