Trying to add a node to the Kubernetes cluster using kubeadm join shows an error

10/16/2018

I am using kubeadm to create a Kubernetes cluster. kubeadm init was successful, but when I try to add nodes I see the error below. Any direction is highly appreciated.

kubeadm join 10.127.0.142:6443 --token ddd0 --discovery-token-ca-cert-hash sha256:ddddd
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
        you can solve this problem with following methods:
         1. Run 'modprobe -- ' to load missing kernel modules;
         2. Provide the missing builtin kernel ipvs support

[discovery] Trying to connect to API Server "10.127.0.142:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.127.0.142:6443"
[discovery] Requesting info from "https://10.127.0.142:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.127.0.142:6443"
[discovery] Successfully established connection with API Server "10.127.0.142:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:mq0t2n" cannot get configmaps in the namespace "kube-system"
-- sbolla
kubeadm
kubernetes

3 Answers

3/30/2019
  1. kubectl -n kube-system get role kubeadm:kubelet-config-1.13 -o yaml > kubeadm:kubelet-config-1.12-role.yaml

    #metadata/name: kubeadm:kubelet-config-1.12
    #rules/resourceNames: kubelet-config-1.12
    #delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
    #(steps 1-4 can also be done imperatively; see the sketch after this list)
  2. kubectl apply -f kubeadm:kubelet-config-1.12-role.yaml

  3. kubectl -n kube-system get rolebinding kubeadm:kubelet-config-1.13 -o yaml > kubeadm:kubelet-config-1.12-rolebinding.yaml

    #metadata/name: kubeadm:kubelet-config-1.12
    #roleRef/name: kubeadm:kubelet-config-1.12
    #delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
  4. kubectl apply -f kubeadm:kubelet-config-1.12-rolebinding.yaml

  5. kubectl get configmap kubelet-config-1.13 -n kube-system -oyaml > kubelet-config-1.12

    #metadata/name: kubelet-config-1.12
    #(a ConfigMap has no roleRef, so only the name needs changing)
    #delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
  6. kubectl apply -f kubelet-config-1.12

  7. Log in to the node you want to join and delete the following files:

    rm /etc/kubernetes/bootstrap-kubelet.conf
    rm /etc/kubernetes/pki/ca.crt
  8. Now run the kubeadm join command again.
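
If you prefer not to export and hand-edit YAML, steps 1-4 can usually be replaced by creating the role and rolebinding imperatively. Treat this as a sketch: the two --group values below are the groups kubeadm normally binds for this role, so compare them with the subjects in your existing kubeadm:kubelet-config-1.13 rolebinding before using them.

    kubectl -n kube-system create role kubeadm:kubelet-config-1.12 \
        --verb=get --resource=configmaps --resource-name=kubelet-config-1.12
    kubectl -n kube-system create rolebinding kubeadm:kubelet-config-1.12 \
        --role=kubeadm:kubelet-config-1.12 \
        --group=system:nodes \
        --group=system:bootstrappers:kubeadm:default-node-token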

-- Santosh Prasad
Source: StackOverflow

12/14/2018

I started seeing this type of message on 1.12 on Dec 5th, right after the release of 1.13. I was using a scripted install, so there was no version mismatch between my master and worker nodes. If 1.12 is still the version you want, I posted a fix for that permission issue: k8s 1.12 kubeadm join permission fix.

The fix is also provided below:

Perform STEPS 1, 2, 3 and 4 on the master node.

Perform STEP 5 on the worker node.

STEP 1: Create a new "kubelet-config-1.12" ConfigMap from the existing "kubelet-config-1.13" ConfigMap:

$ kubectl get cm --all-namespaces
$ kubectl -n kube-system get cm kubelet-config-1.13 -o yaml --export > kubelet-config-1.12-cm.yaml
$ vim kubelet-config-1.12-cm.yaml       #modify at the bottom:
                                        #name: kubelet-config-1.12
                                        #delete selfLink
$ kubectl -n kube-system create -f kubelet-config-1.12-cm.yaml
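
The same edit can be done non-interactively if you prefer; this is just a sketch of STEP 1 using sed (GNU sed assumed) instead of vim:

$ kubectl -n kube-system get cm kubelet-config-1.13 -o yaml --export \
    | sed -e 's/name: kubelet-config-1.13/name: kubelet-config-1.12/' -e '/selfLink/d' \
    | kubectl -n kube-system create -f -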

STEP 2: Get token prefix:

$ sudo kubeadm token list           #if no output, then create a token:
$ sudo kubeadm token create
TOKEN                       ...     ...
a0b1c2.svn4my9ifft4zxgg     ...     ...
# Token prefix is "a0b1c2"
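
If you want the prefix in a shell variable for the next steps, something like this works (a0b1c2 is just the example token from above):

$ TOKEN=$(sudo kubeadm token list | awk 'NR==2 {print $1}')   # first token, skipping the header row
$ echo ${TOKEN%%.*}                                           # everything before the first dot
a0b1c2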

STEP 3: Create a new "kubeadm:kubelet-config-1.12" role from the existing "kubeadm:kubelet-config-1.13" role:

$ kubectl get roles --all-namespaces
$ kubectl -n kube-system get role kubeadm:kubelet-config-1.13 -o yaml > kubeadm:kubelet-config-1.12-role.yaml
$ vim kubeadm\:kubelet-config-1.12-role.yaml    #modify the following:
                                                #name: kubeadm:kubelet-config-1.12
                                                #resourceNames: kubelet-config-1.12
                                                #delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)    
$ kubectl -n kube-system create -f kubeadm\:kubelet-config-1.12-role.yaml
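
For reference, the edited role should end up looking roughly like this (a sketch based on what kubeadm generates for the 1.13 role; compare it against your exported copy):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:kubelet-config-1.12
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resourceNames:
  - kubelet-config-1.12
  resources:
  - configmaps
  verbs:
  - get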

STEP 4: Create a new rolebinding "kubeadm:kubelet-config-1.12" from the existing "kubeadm:kubelet-config-1.13" rolebinding:

$ kubectl get rolebindings --all-namespaces
$ kubectl -n kube-system get rolebinding kubeadm:kubelet-config-1.13 -o yaml > kubeadm:kubelet-config-1.12-rolebinding.yaml
$ vim kubeadm\:kubelet-config-1.12-rolebinding.yaml     #modify the following:
                                                            #metadata/name: kubeadm:kubelet-config-1.12
                                                            #roleRef/name: kubeadm:kubelet-config-1.12
                                                            #delete creationTimestamp, resourceVersion, selfLink, uid (because --export option is not supported)
- apiGroup: rbac.authorization.k8s.io                       #add these 3 lines as another group in "subjects:" at the bottom, with the 6 character token prefix from STEP 2
  kind: Group
  name: system:bootstrap:a0b1c2 
$ kubectl -n kube-system create -f kubeadm\:kubelet-config-1.12-rolebinding.yaml
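
Again for reference, the finished rolebinding should look roughly like this. The first two subjects are the ones kubeadm normally creates; the last one is the bootstrap group added above (replace a0b1c2 with your own token prefix from STEP 2):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:kubelet-config-1.12
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:kubelet-config-1.12
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:kubeadm:default-node-token
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrap:a0b1c2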

STEP 5: Run kubeadm join from Worker node:

$ sudo kubeadm join --token <token> <master-IP>:6443 --discovery-token-ca-cert-hash sha256:<key-value> 
# If you receive 2 ERRORS, run kubeadm join again with the following options:
$ sudo kubeadm join --token <token> <master-IP>:6443 --discovery-token-ca-cert-hash sha256:<key-value> --ignore-preflight-errors=FileAvailable--etc-kubernetes-bootstrap-kubelet.conf,FileAvailable--etc-kubernetes-pki-ca.crt
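
If you no longer have the sha256 hash from kubeadm init, it can be recomputed on the master from the CA certificate (this is the standard command from the kubeadm join documentation):

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
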
-- chris-p
Source: StackOverflow

10/17/2018

I'm pretty sure you have a version mismatch between your master and worker nodes.
Follow the official instructions to upgrade the cluster so all nodes run the same version.

A second option is to downgrade the worker node to the master node's version.
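
Before deciding whether to upgrade or downgrade, it is worth confirming the actual versions on both sides, for example:

$ kubectl version --short    # on the master: client and server versions
$ kubectl get nodes          # on the master: kubelet VERSION of each joined node
$ kubeadm version            # on the worker you are trying to join
$ kubelet --version          # on the worker you are trying to join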

-- VKR
Source: StackOverflow