I'm using kubeadm to create a Kubernetes v1.9.3 cluster on CentOS 7.4 / Docker 1.12.6, following the instructions from Using kubeadm to Create a Cluster. After a successful completion of kubeadm init, the kube-proxy pod ends up with status CrashLoopBackOff:
# kubectl -n kube-system get pods
NAME                                  READY     STATUS             RESTARTS   AGE
etcd-ksa-m1.blue                      1/1       Running            0          1m
kube-apiserver-ksa-m1.blue            1/1       Running            0          1m
kube-controller-manager-ksa-m1.blue   1/1       Running            0          1m
kube-dns-6f4fd4bdf-24hcr              0/3       Pending            0          2m
kube-proxy-n5lxp                      0/1       CrashLoopBackOff   4          2m
kube-scheduler-ksa-m1.blue            1/1       Running            0          1m
There's an error in the kube-proxy logs:
# kubectl -n kube-system logs kube-proxy-n5lxp
I0312 16:39:01.667127 1 feature_gate.go:190] feature gates: map[]
error: unable to read certificate-authority /var/run/secrets/kubernetes.io/serviceaccount/ca.crt for default due to open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
I've found a similar issue reported on the Kubernetes GitHub: kubernetes/issues/59461, but it has been open without a solution for quite a while.
I've just found that it's related to the Docker systemd configuration. I have some Docker configs written by Puppet, and docker-mountflags.conf turned out to be causing the problem. I had this config:
# cat /etc/systemd/system/docker.service.d/docker-mountflags.conf
[Service]
MountFlags=private
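As an aside (my own explanation, not confirmed by the linked issue): with MountFlags=private, mounts created on the host after the Docker daemon starts — such as the kubelet's tmpfs mounts for service-account secret volumes — don't propagate into the daemon's mount namespace, so the container never sees /var/run/secrets/kubernetes.io/serviceaccount/ca.crt. You can check which value the running Docker unit actually picked up with:

```shell
# Show the effective MountFlags for the docker unit;
# "private" means later host mounts won't propagate to containers.
systemctl show docker --property=MountFlags
```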
I was able to fix the kube-proxy problem by changing this to the default value:
# cat /etc/systemd/system/docker.service.d/docker-mountflags.conf
[Service]
MountFlags=slave
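To apply the change (my own summary of the steps, not part of the original fix description), reload systemd, restart Docker, and let the kube-proxy DaemonSet recreate the crashing pod. Note that the k8s-app=kube-proxy label is the one kubeadm uses by default; verify it on your cluster first.

```shell
# Pick up the edited drop-in file
systemctl daemon-reload
# Restart Docker so the daemon runs with the new mount propagation
systemctl restart docker
# Delete the crashing pod; the DaemonSet recreates it with working mounts
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```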
After this change, kube-proxy comes up with status Running:
# kubectl -n kube-system get pods
NAME                                  READY     STATUS    RESTARTS   AGE
etcd-ksa-m1.blue                      1/1       Running   0          18m
kube-apiserver-ksa-m1.blue            1/1       Running   0          18m
kube-controller-manager-ksa-m1.blue   1/1       Running   0          18m
kube-dns-6f4fd4bdf-lsclt              0/3       Pending   0          19m
kube-proxy-g29bt                      1/1       Running   0          19m
kube-scheduler-ksa-m1.blue            1/1       Running   0          18m