Kubernetes Autoscaler missing node

11/9/2017

I’m having an issue with what I believe to be the k8s autoscaler.

The autoscaler launched a new instance after a recent deploy (I can see that instance in EC2, where our k8s deployment is hosted), but it doesn’t show up when I run kubectl get nodes.

kubectl get nodes
NAME                             STATUS    ROLES     AGE       VERSION
ip-172-20-110-212.ec2.internal   Ready     master    322d      v1.5.1
ip-172-20-129-59.ec2.internal    Ready     master    322d      v1.5.1
ip-172-20-153-170.ec2.internal   Ready     <none>    322d      v1.5.1
ip-172-20-160-119.ec2.internal   Ready     master    322d      v1.5.1
ip-172-20-162-94.ec2.internal    Ready     <none>    316d      v1.5.1
ip-172-20-166-194.ec2.internal   Ready     <none>    322d      v1.5.1
ip-172-20-79-1.ec2.internal      Ready     <none>    112d      v1.5.1
ip-172-20-92-163.ec2.internal    Ready     <none>    322d      v1.5.1

Further, a kube-proxy pod that matches this “missing” node’s IP does show up, but is killed and relaunched every 30 seconds.

kubectl get pods
NAME                                                     READY     STATUS    RESTARTS   AGE
kube-proxy-ip-172-20-181-122.ec2.internal                1/1       Running   0          17s
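
For what it’s worth, these are the commands I’d reach for next (the pod name is the one shown above; the node name is only a guess based on the instance’s private IP, since the node never registered):

kubectl describe pod kube-proxy-ip-172-20-181-122.ec2.internal
kubectl describe node ip-172-20-181-122.ec2.internal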
-- Jimmy Xu
amazon-ec2
kubernetes

1 Answer

11/13/2017

I ended up manually deleting the EC2 instance. The autoscaler immediately launched a replacement instance, and everything worked fine afterwards.
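
Roughly what that looked like, in case it’s useful to anyone else (the instance ID in the second command is a placeholder for whatever the first command returns, and this assumes the AWS CLI is configured for the account the cluster runs in):

aws ec2 describe-instances --filters "Name=private-ip-address,Values=172.20.181.122" --query "Reservations[].Instances[].InstanceId"
aws ec2 terminate-instances --instance-ids <instance-id-from-above>
kubectl get nodes -w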

-- Jimmy Xu
Source: StackOverflow