We have a requirement that all our EC2 instances be joined to our Active Directory, which uses the domain name companyname.internal.
When we try to spin up a Kubernetes cluster with kubeadm, it picks up the AWS internal hostname for some reason and the node fails to register. I have verified that the instance's local hostname and the hostname reported by the EC2 metadata service both point to the correct name:
http://169.254.169.254/latest/meta-data/hostname
ip-10-202-1-205.company.internal
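For reference, this is a quick way to double-check both values on the instance (a sketch; hostnamectl assumes a systemd-based distribution, and the name shown matches our setup above):

curl -s http://169.254.169.254/latest/meta-data/hostname    # prints ip-10-202-1-205.company.internal
hostnamectl set-hostname ip-10-202-1-205.company.internal   # persist the AD-joined name locally
hostname --fqdn                                             # confirm what the OS now reports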
However, in the kubelet logs, I can see the AWS standard hostname. Is there a way to override this?
kubelet[2443]: ... 2443 status_manager.go:485] Failed to get status for pod "kube-controller-manager-ip-10-202-1-205.us-gov-west-1.compute.internal_kube-system(xxxx)": Get https://api.xxx:443/...
kubelet[2443]: E0917 02:50:07.887059    2443 kubelet_node_status.go:94] Unable to register node "ip-10-202-1-205.us-gov-west-1.compute.internal" with API server: Post https://api.xxx:443/api/v1/nodes: EOF
You can add the --node-name option to the kubeadm init command.
e.g.
kubeadm init --node-name ec2-18-189-6-14.us-east-2.compute.amazonaws.com --pod-network-cidr=****
Then you will see this output:
root@ip-172-31-27-139:/home/ubuntu# kubectl get nodes
NAME                                              STATUS   ROLES    AGE   VERSION
ec2-18-189-6-14.us-east-2.compute.amazonaws.com   Ready    master   10m   v1.15.3
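If you prefer to drive kubeadm with a config file instead of flags, the same override can be expressed as nodeRegistration.name. This is a minimal sketch assuming a v1.15-era cluster (config API version kubeadm.k8s.io/v1beta2); the file name kubeadm-config.yaml is just an example:

cat <<'EOF' > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  # Register the node under this name instead of the EC2
  # default (ip-*.compute.internal).
  name: ip-10-202-1-205.company.internal
EOF
kubeadm init --config kubeadm-config.yaml

Worker nodes accept the same --node-name flag on kubeadm join, and in both cases kubeadm passes the value down to the kubelet as --hostname-override.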