I am using kubeadm to create a single-master Kubernetes cluster at version 1.11.5. I have a kubeadm config like this:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: <internal-ip>
apiServerCertSANs:
- <public-ip>
- <internal-ip>
kubernetesVersion: v1.11.5
networking:
  podSubnet: 10.244.0.0/16
The machine is an EC2 instance, and adding the <public-ip> to apiServerCertSANs makes it possible to access the cluster from my laptop using kubectl. The problem is that the kubeadm join command run on the worker node will by default use the <public-ip> instead of the <internal-ip>.
I tried manually running kubeadm join <internal-ip>:6443 --token wby3bb.vomsgxxxxxxb --discovery-token-ca-cert-hash sha256:xxxxx, but the generated files /etc/kubernetes/bootstrap-kubelet.conf and /etc/kubernetes/kubelet.conf still contain <public-ip>:6443. The step 'Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace' also still reaches the master node via the <public-ip>.
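As a stopgap, the server address inside the generated kubeconfig files can be rewritten by hand. A minimal sketch, using placeholder addresses (203.0.113.10 standing in for <public-ip>, 10.0.0.10 for <internal-ip>) and a throwaway sample file instead of the real /etc/kubernetes/kubelet.conf:

```shell
# Sample kubeconfig fragment; on a real worker this content lives in
# /etc/kubernetes/kubelet.conf and /etc/kubernetes/bootstrap-kubelet.conf.
cat > /tmp/kubelet.conf <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://203.0.113.10:6443
  name: default-cluster
EOF

# Point the kubelet at the internal address instead of the public one.
sed -i 's|server: https://203.0.113.10:6443|server: https://10.0.0.10:6443|' /tmp/kubelet.conf

grep 'server:' /tmp/kubelet.conf
```

On a real node the same substitution would be applied to both kubeconfig files, followed by `systemctl restart kubelet` so the kubelet picks up the new server address.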
I need all Kubernetes traffic to go through the internal IP. How can I change the IP used for access?
If you only want to use the internal network for communication in the cluster, update --advertise-address on your master. On clusters installed without kubeadm it is usually set in /etc/systemd/system/kube-apiserver.service:

--advertise-address=<internal-ip>

Then restart the API server daemon:

systemctl restart kube-apiserver

You can also check the other Kubernetes daemons to make sure they listen only on internal IP addresses.
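On a kubeadm-built master, however, the API server runs as a static pod rather than a systemd unit, so the flag lives in /etc/kubernetes/manifests/kube-apiserver.yaml and the kubelet recreates the pod automatically when that manifest changes. A minimal sketch against a sample copy of the manifest, with 203.0.113.10 and 10.0.0.10 as placeholder public and internal IPs:

```shell
# Sample fragment of a kube-apiserver static pod manifest; the real file
# is /etc/kubernetes/manifests/kube-apiserver.yaml on the master.
cat > /tmp/kube-apiserver.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=203.0.113.10
EOF

# Switch the advertised address to the internal IP (10.0.0.10 is a placeholder).
sed -i 's|--advertise-address=.*|--advertise-address=10.0.0.10|' /tmp/kube-apiserver.yaml

grep 'advertise-address' /tmp/kube-apiserver.yaml
```

After editing the real manifest, no restart command is needed: the kubelet watches the manifests directory and restarts the kube-apiserver pod on its own.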