Unable to see joined worker node on the Kubernetes master

4/13/2018

This is my worker node:

root@ivu:~# kubeadm join 10.16.70.174:6443 --token hl36mu.0uptj0rp3x1lfw6n --discovery-token-ca-cert-hash sha256:daac28160d160f938b82b8c720cfc91dd9e6988d743306f3aecb42e4fb114f19 --ignore-preflight-errors=swap
[preflight] Running pre-flight checks.
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.16.70.174:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.16.70.174:6443"
[discovery] Requesting info from "https://10.16.70.174:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.16.70.174:6443"
[discovery] Successfully established connection with API Server "10.16.70.174:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

When I check on the master with 'kubectl get nodes', I can only see the master:

ivum01@ivum01-HP-Pro-3330-SFF:~$ kubectl get nodes
NAME                     STATUS    ROLES     AGE       VERSION
ivum01-hp-pro-3330-sff   Ready     master    36m       v1.10.0
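If the worker had registered at all, it would normally show up here as NotReady rather than be missing entirely. To double-check for a partial registration, the standard commands are (run on the master; the node name is a placeholder):

kubectl get nodes -o wide
kubectl describe node <worker-node-name>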

To answer the follow-up questions:

  1. docker, kubelet, kubeadm, and kubectl all installed fine.
  2. kubectl get nodes cannot see the newly added node; of course, kubectl get pods --all-namespaces shows nothing for this node either.
  3. docker on the worker node shows no activity from the kubeadm command (no k8s images pulled, no containers running for it).
  4. most importantly, kubelet is not running on the worker node (see the sketch after this list).
  5. running kubelet manually outputs:

    Failed to get system container stats for "/user.slice/user-1000.slice/session-1.scope": failed to get cgroup stats for "/user.slice/user-1000.slice/session-1.scope": failed to get container info for "/user.slice/user-1000.slice/session-1.scope": unknown container "/user.slice/user-1000.slice/session-1.scope"

    the same as described in this issue.

  6. tearing down and resetting the cluster (kubeadm reset) and redoing the join did not help in my case.
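Since the key symptom is kubelet not running on the worker (point 4), a standard first check on a systemd host is to inspect the service and its logs; these are stock systemd commands, nothing kubeadm-specific:

systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50

The cgroup error quoted in point 5 usually appears in these logs, alongside whatever failure is actually keeping kubelet from registering the node.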

-- Chintamani
kubeadm
kubectl
kubernetes

1 Answer

6/14/2018

I had this problem, and it was solved by ensuring that the cgroup driver on the worker nodes was also set properly.

check with:

docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
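For reference, on a stock Docker installation the first command prints a line like the one below; the mismatch to look for is Docker reporting one driver while the kubelet drop-in passes a different one via --cgroup-driver:

Cgroup Driver: cgroupfs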

set it with:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
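In kubeadm drop-ins of this era, the flag lives in an Environment line like the following (exact contents vary by version; shown here only as an illustration of what the sed rewrites):

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

after the edit it would read:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"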

then restart the kubelet service and rejoin the cluster:

systemctl daemon-reload
systemctl restart kubelet
kubeadm reset
kubeadm join ...
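After a successful rejoin, the worker should appear on the master within a minute or so. Illustrative output (node names taken from the question; ages are made up):

ivum01@ivum01-HP-Pro-3330-SFF:~$ kubectl get nodes
NAME                     STATUS    ROLES     AGE       VERSION
ivu                      Ready     <none>    1m        v1.10.0
ivum01-hp-pro-3330-sff   Ready     master    40m       v1.10.0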

Info from docs: https://kubernetes.io/docs/tasks/tools/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node

-- haaduken
Source: StackOverflow