The cluster-info ConfigMap does not yet contain a JWS signature for token ID "cjxj26"

7/15/2021

I have a master node, and now I want to join a worker node to it. I generated a never-expiring token and executed the join command, but I got this error:

[root@worker-node1 ~]# kubeadm join 192.168.18.136:6443 --token cjxj26.ibwrtisae30ypis6 \
    --discovery-token-ca-cert-hash sha256:2659517cbbb2623b3d93408a4ab50f3592a3d021adf25d25c8050dd44345eadd
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "worker-node1" could not be reached
        [WARNING Hostname]: hostname "worker-node1": lookup worker-node1 on 192.168.18.2:53: no such host
^C
[root@worker-node1 ~]# kubeadm join 192.168.18.136:6443 --token cjxj26.ibwrtisae30ypis6 --discovery-token-ca-cert-hash sha256:2659517cbbb2623b3d93408a4ab50f3592a3d021adf25d25c8050dd44345eadd --v=5
I0714 22:05:12.684249 1567 join.go:395] [preflight] found NodeName empty; using OS hostname as NodeName
I0714 22:05:12.684489 1567 initconfiguration.go:104] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0714 22:05:12.684592 1567 preflight.go:90] [preflight] Running general checks
I0714 22:05:12.684742 1567 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0714 22:05:12.684758 1567 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0714 22:05:12.684768 1567 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0714 22:05:12.684776 1567 checks.go:102] validating the container runtime
I0714 22:05:12.844191 1567 checks.go:128] validating if the "docker" service is enabled and active
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0714 22:05:13.064741 1567 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0714 22:05:13.064849 1567 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0714 22:05:13.064905 1567 checks.go:649] validating whether swap is enabled or not
I0714 22:05:13.064948 1567 checks.go:376] validating the presence of executable conntrack
I0714 22:05:13.064986 1567 checks.go:376] validating the presence of executable ip
I0714 22:05:13.065010 1567 checks.go:376] validating the presence of executable iptables
I0714 22:05:13.065033 1567 checks.go:376] validating the presence of executable mount
I0714 22:05:13.065057 1567 checks.go:376] validating the presence of executable nsenter
I0714 22:05:13.065082 1567 checks.go:376] validating the presence of executable ebtables
I0714 22:05:13.065104 1567 checks.go:376] validating the presence of executable ethtool
I0714 22:05:13.065127 1567 checks.go:376] validating the presence of executable socat
I0714 22:05:13.065149 1567 checks.go:376] validating the presence of executable tc
I0714 22:05:13.065167 1567 checks.go:376] validating the presence of executable touch
I0714 22:05:13.065199 1567 checks.go:520] running all checks
I0714 22:05:13.262576 1567 checks.go:406] checking whether the given node name is reachable using net.LookupHost
        [WARNING Hostname]: hostname "worker-node1" could not be reached
        [WARNING Hostname]: hostname "worker-node1": lookup worker-node1 on 192.168.18.2:53: no such host
I0714 22:05:14.338418 1567 checks.go:618] validating kubelet version
I0714 22:05:14.465098 1567 checks.go:128] validating if the "kubelet" service is enabled and active
I0714 22:05:14.485740 1567 checks.go:201] validating availability of port 10250
I0714 22:05:14.486043 1567 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0714 22:05:14.486068 1567 checks.go:432] validating if the connectivity type is via proxy or direct
I0714 22:05:14.486125 1567 join.go:465] [preflight] Discovering cluster-info
I0714 22:05:14.486182 1567 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "192.168.18.136:6443"
I0714 22:05:14.624417 1567 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "cjxj26", will try again
I0714 22:05:20.278283 1567 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "cjxj26", will try again
I0714 22:05:26.320259 1567 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "cjxj26", will try again

Actually, the token does exist: when I run kubeadm token list on the master node, it shows:

[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS
cjxj26.ibwrtisae30ypis6   <forever>   <never>   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

And the token also exists in the cluster-info ConfigMap:

[root@k8s-master .kube]# kubectl -n kube-public get cm cluster-info -o yaml
apiVersion: v1
data:
  jws-kubeconfig-cjxj26: eyJhbGciOiJIUzI1NiIsImtpZCI6ImNqeGoyNiJ9..RgWG119Onf5oZLgCS0MPfIjcshdhm81bUz_mTq1Av54
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:

Did anyone get this kind of error before? I searched for solutions on Google; many people said to re-generate the token, but that doesn't work for me.

-- Even Chen
kubernetes

2 Answers

7/16/2021

I'm getting a similar issue using Kubespray. I haven't found anything conclusive to share yet, but did you check for errors in the apiserver, controller-manager, and etcd services?
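For reference, here is one way to inspect those components; this is a sketch that assumes a kubeadm-style cluster where the control-plane components run as static pods in the kube-system namespace, and the pod name suffixes (here "k8s-master", the questioner's hostname) will vary per cluster. The controller-manager logs are the most interesting ones, since the bootstrap signer that writes the JWS entries into the cluster-info ConfigMap runs inside kube-controller-manager:

```shell
# List the control-plane pods and confirm they are Running
kubectl -n kube-system get pods

# Inspect the logs of each component (adjust pod names to your cluster)
kubectl -n kube-system logs kube-apiserver-k8s-master
kubectl -n kube-system logs kube-controller-manager-k8s-master
kubectl -n kube-system logs etcd-k8s-master
```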

UPDATE: I just discovered that disabling the cloud-node-lifecycle controller was the root cause of my issue. Don't do this in a multi-node setup:

kube_kubeadm_controller_extra_args:
  controllers: "*,-cloud-node-lifecycle"
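A likely explanation: kubeadm normally starts the controller-manager with --controllers=*,bootstrapsigner,tokencleaner, because the bootstrap signer and token cleaner are not part of the * default set. Overriding the flag as above therefore silently drops the bootstrap signer, so no JWS signature ever gets written. If you really do need to disable a controller, a hypothetical override that keeps the signer alive might look like:

```yaml
kube_kubeadm_controller_extra_args:
  controllers: "*,bootstrapsigner,tokencleaner,-cloud-node-lifecycle"
```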
-- electrocucaracha
Source: StackOverflow

10/24/2021


Maybe your problem is that the token has timed out.

To check whether this is the case, run the command below:

kubeadm token list

If the command above shows nothing, your token has timed out.

To resolve the problem, create a new token:

kubeadm token create
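A convenient variant is to let kubeadm print the complete join command along with the fresh token, so you don't have to assemble the token and CA cert hash by hand:

```shell
# Create a new bootstrap token and print a ready-to-use join command,
# including the --discovery-token-ca-cert-hash value
kubeadm token create --print-join-command
```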

If you run kubeadm token list again, you will see a result like this:

TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
70jkdh.gx9oiqd7jno56nou   23h         2021-10-25T19:52:59Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

As you can see, this token has a TTL, so it will expire after 23 hours.

So before the TTL elapses, you can join another node using the token produced by the command above.
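If you have a valid token but still need the --discovery-token-ca-cert-hash value for the join command, it can be recomputed from the cluster CA certificate. This is a sketch to be run on the control-plane node; the certificate path assumes a default kubeadm installation:

```shell
# SHA-256 digest of the DER-encoded public key of the cluster CA,
# in the format expected by --discovery-token-ca-cert-hash (prefix with "sha256:")
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```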

-- sorosh_sabz
Source: StackOverflow