systemctl start kubelet is deleting docker images from the master node

9/17/2018

I am trying to set up a Kubernetes HA cluster with 5 master nodes, following the Kubernetes documentation: https://kubernetes.io/docs/setup/independent/high-availability/.

I have installed Docker 1.13, and kubeadm, kubectl, and kubelet version 1.11.2 on the first master node.
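For reference, the install went roughly like this (a sketch assuming CentOS 7, whose stock repo ships docker 1.13; on Debian/Ubuntu the equivalent apt packages would be pinned the same way):

    # Assumes the Kubernetes yum repo is already configured on the node.
    yum install -y docker                # stock CentOS 7 repo ships docker 1.13
    yum install -y kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2
    systemctl enable docker kubelet
    systemctl start docker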

I downloaded all the required images onto all master nodes and ran kubeadm init on master node 1; kubelet is running with no errors, and the etcd cluster was created on master node 1.
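Concretely, pre-pulling the images and bringing up the first master looked roughly like this (the config file name is illustrative; its contents follow the HA guide linked above):

    # Pre-pull the control-plane images (subcommand available since kubeadm 1.11)
    kubeadm config images pull --kubernetes-version v1.11.2
    # Initialize master node 1 with the HA config from the guide
    kubeadm init --config kubeadm-config.yaml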

I copied all the required config and cert files to the rest of the master nodes, ran kubeadm on master node 2, and started the kubelet service. kubelet ran successfully on master node 2, and its etcd member joined the existing cluster.
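The files in question are the ones the HA guide lists; copying them went roughly like this (master3 is a placeholder hostname):

    # Shared CA, service-account, front-proxy, and etcd certs, plus admin.conf
    # (hostname is illustrative; the etcd directory must exist on the target).
    scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
        /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub \
        /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.key \
        root@master3:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key \
        root@master3:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf root@master3:/etc/kubernetes/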

But when I start kubelet on master node 3, it deletes all the docker images on that node except the pause image, cannot create the etcd or any kube-* pods, and the third node fails to join the cluster.

The same thing happens on the other two master nodes.
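For reference, I watched this happen with the commands below (output omitted):

    # Watch the local images disappear while kubelet runs
    watch docker images
    # Follow kubelet's logs as it tries to bring up the static pods
    journalctl -u kubelet -f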

Can anyone help me resolve this issue?

Thanks in advance.

-- Raghu.k
docker
kubernetes

1 Answer

9/20/2018

As @Raghu.k mentioned in his last comment, the problem on master node 3 was caused by a lack of free disk space on that node; recreating the affected node resolved the issue. Flagged as a community wiki for further community research.
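For anyone who hits the same symptom: kubelet's image garbage collector starts deleting unused images once disk usage crosses its high threshold (85% by default) and keeps deleting until usage falls below the low threshold (80% by default), which is why everything except the in-use pause image disappeared. A quick check on the affected node:

    # Check free space where docker stores its images (default path shown);
    # kubelet's image GC triggers at --image-gc-high-threshold (default 85%)
    # and frees space until usage drops below --image-gc-low-threshold (default 80%).
    df -h /var/lib/docker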

-- mk_sta
Source: StackOverflow