Kubernetes bind address

9/6/2019

I have previously set up Kubernetes clusters in dev environments on private servers without any issues. Now I have created a new cluster in a datacenter (Hetzner). I have been trying to get everything working for several days, reinstalling the servers many times and hitting the same issues every time. Most of my services seem to have network issues: for example, the dashboard, the dockerreg UI, ... cannot access the resources loaded by their web interfaces. Even pushing a container to the private dockerreg starts but then stops and times out after a few seconds. If I configure any of the affected services to use the node port, they work fine.

So this is probably an issue with kube-proxy. All of my servers (3 master nodes and 2 worker nodes) have both a public and a private IP address. When I list the pods, all those running on the host IP use the external IP instead of the internal IP.

How can I bind these to use the internal IP only?

kubectl get pods -o wide -n kube-system

NAME                                       READY   STATUS    RESTARTS   AGE   IP                NODE             NOMINATED NODE   READINESS GATES
calico-kube-controllers-65b8787765-zj728   1/1     Running   2          12h   192.168.57.14     k8s-master-001   <none>           <none>
calico-node-cxn2p                          1/1     Running   1          12h   <external ip>     k8s-master-003   <none>           <none>
calico-node-k9g7n                          1/1     Running   1          12h   <external ip>     k8s-master-002   <none>           <none>
calico-node-mt8r7                          1/1     Running   2          12h   <external ip>     k8s-master-001   <none>           <none>
calico-node-pww9q                          1/1     Running   1          12h   <external ip>     k8s-worker-002   <none>           <none>
calico-node-wlg8g                          1/1     Running   2          12h   <external ip>     k8s-worker-001   <none>           <none>
coredns-5c98db65d4-lrzj8                   1/1     Running   0          12h   192.168.20.1      k8s-worker-002   <none>           <none>
coredns-5c98db65d4-s6tzv                   1/1     Running   1          12h   192.168.102.17    k8s-worker-001   <none>           <none>
etcd-k8s-master-001                        1/1     Running   2          12h   <external ip>     k8s-master-001   <none>           <none>
etcd-k8s-master-002                        1/1     Running   1          12h   <external ip>     k8s-master-002   <none>           <none>
etcd-k8s-master-003                        1/1     Running   1          12h   <external ip>     k8s-master-003   <none>           <none>
kube-apiserver-k8s-master-001              1/1     Running   2          12h   <external ip>     k8s-master-001   <none>           <none>
kube-apiserver-k8s-master-002              1/1     Running   2          12h   <external ip>     k8s-master-002   <none>           <none>
kube-apiserver-k8s-master-003              1/1     Running   1          12h   <external ip>     k8s-master-003   <none>           <none>
kube-controller-manager-k8s-master-001     1/1     Running   3          12h   <external ip>     k8s-master-001   <none>           <none>
kube-controller-manager-k8s-master-002     1/1     Running   1          12h   <external ip>     k8s-master-002   <none>           <none>
kube-controller-manager-k8s-master-003     1/1     Running   1          12h   <external ip>     k8s-master-003   <none>           <none>
kube-proxy-mlsnp                           1/1     Running   1          12h   <external ip>     k8s-master-003   <none>           <none>
kube-proxy-mzck9                           1/1     Running   2          12h   <external ip>     k8s-worker-001   <none>           <none>
kube-proxy-p7vfz                           1/1     Running   1          12h   <external ip>     k8s-master-002   <none>           <none>
kube-proxy-s55fr                           1/1     Running   2          12h   <external ip>     k8s-master-001   <none>           <none>
kube-proxy-tz6zn                           1/1     Running   1          12h   <external ip>     k8s-worker-002   <none>           <none>
kube-scheduler-k8s-master-001              1/1     Running   3          12h   <external ip>     k8s-master-001   <none>           <none>
kube-scheduler-k8s-master-002              1/1     Running   1          12h   <external ip>     k8s-master-002   <none>           <none>
kube-scheduler-k8s-master-003              1/1     Running   1          12h   <external ip>     k8s-master-003   <none>           <none>
traefik-ingress-controller-gxthm           1/1     Running   1          35m   192.168.57.15     k8s-master-001   <none>           <none>
traefik-ingress-controller-rdv8j           1/1     Running   0          35m   192.168.160.133   k8s-master-003   <none>           <none>
traefik-ingress-controller-w4t4t           1/1     Running   0          35m   192.168.1.133     k8s-master-002   <none>           <none>

I'm running Kubernetes 1.15.3 with CRI-O and Calico. All servers are on the 10.0.0.0/24 subnet.

I expect the pods running on the node IP to use the internal IP instead of the external IP.
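From what I have read, host-network pods report the node's InternalIP, which the kubelet picks from the default-route interface unless told otherwise, and at Hetzner the default route goes out the public interface. A workaround I am considering but have not verified yet is pinning the private address per node via the kubelet's --node-ip flag (the 10.0.0.2 value below is a placeholder; each node would use its own private IP):

```
# /etc/default/kubelet (Debian/Ubuntu; /etc/sysconfig/kubelet on RHEL-based systems)
# Force the kubelet to register this node with its private address
KUBELET_EXTRA_ARGS=--node-ip=10.0.0.2
```

followed by `sudo systemctl daemon-reload && sudo systemctl restart kubelet` on each node.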

--- Edit 16/09/2019

The cluster is initialized using the following command: sudo kubeadm init --config=kubeadm-config.yaml --upload-certs. My kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.0.0.2"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "10.0.0.200:6443"
apiServer:
  certSANs:
  - "k8s.deb-ict.com"
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "192.168.0.0/16"
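One thing I am considering is passing the kubelet's --node-ip flag through kubeadm itself: the v1beta2 InitConfiguration (and JoinConfiguration for the other nodes) supports nodeRegistration.kubeletExtraArgs. A sketch of how my InitConfiguration section could look, assuming 10.0.0.2 is the first master's private address:

```
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.0.0.2"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    # register the node with its private address instead of the
    # address on the default-route (public) interface
    node-ip: "10.0.0.2"
```

Each joining node would need the same kubeletExtraArgs entry in its JoinConfiguration, with its own private IP.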
-- Randy Deborggraeve
kubernetes
project-calico

0 Answers