I have a kubeadm-deployed master (v1.10.12) and I'm trying to add a new node to the cluster.
On the master I run:
sudo kubeadm token create
sudo kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
2txs62.83q81hpici7a0u5q 23h 2018-12-20T23:37:46Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
and then on the new node, I run:
sudo yum install -y kubeadm-1.10.12-0
sudo yum install -y kubelet-1.10.12-0
sudo kubeadm reset
sudo kubeadm join --token 2txs62.83q81hpici7a0u5q W.X.Y.Z:6443 --discovery-token-unsafe-skip-ca-verification
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "W.X.Y.Z:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://W.X.Y.Z:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "W.X.Y.Z:6443"
[discovery] Successfully established connection with API Server "W.X.Y.Z:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: failed to get config map: configmaps "kubeadm-config" is forbidden: User "system:bootstrap:2txs62" cannot get configmaps in the namespace "kube-system"
on the master:
kubectl -n kube-system get cm kubeadm-config -oyaml
apiVersion: v1
data:
  MasterConfiguration: |
    api:
      advertiseAddress: W.X.Y.Z
      bindPort: 6443
      controlPlaneEndpoint: ""
    auditPolicy:
      logDir: /var/log/kubernetes/audit
      logMaxAge: 2
      path: ""
    authorizationModes:
    - Node
    - RBAC
    certificatesDir: /etc/kubernetes/pki
    cloudProvider: ""
    criSocket: /var/run/dockershim.sock
    etcd:
      caFile: ""
      certFile: ""
      dataDir: /var/lib/etcd
      endpoints: null
      image: ""
      keyFile: ""
    imageRepository: gcr.io/google_containers
    kubeProxy:
      config:
        bindAddress: 0.0.0.0
        clientConnection:
          acceptContentTypes: ""
          burst: 10
          contentType: application/vnd.kubernetes.protobuf
          kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
          qps: 5
        clusterCIDR: ""
        configSyncPeriod: 15m0s
        conntrack:
          max: null
          maxPerCore: 32768
          min: 131072
          tcpCloseWaitTimeout: 1h0m0s
          tcpEstablishedTimeout: 24h0m0s
        enableProfiling: false
        featureGates:
          "": false
        healthzBindAddress: 0.0.0.0:10256
        hostnameOverride: ""
        iptables:
          masqueradeAll: false
          masqueradeBit: 14
          minSyncPeriod: 0s
          syncPeriod: 30s
        ipvs:
          minSyncPeriod: 0s
          scheduler: ""
          syncPeriod: 30s
        metricsBindAddress: 127.0.0.1:10249
        mode: ""
        nodePortAddresses: null
        oomScoreAdj: -999
        portRange: ""
        resourceContainer: /kube-proxy
        udpIdleTimeout: 250ms
    kubeletConfiguration: {}
    kubernetesVersion: v1.10.12
    networking:
      dnsDomain: cluster.local
      podSubnet: ""
      serviceSubnet: 10.96.0.0/12
    nodeName: kube-master.novalocal
    privilegedPods: false
    token: ""
    tokenGroups:
    - system:bootstrappers:kubeadm:default-node-token
    tokenTTL: 24h0m0s
    tokenUsages:
    - signing
    - authentication
    unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
  creationTimestamp: 2018-03-28T06:37:58Z
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "105798137"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: 8dc493f2-3252-11e8-a270-fa163e21c438
Help!? cheers,
Sounds like you have a version mismatch and are running into something like this.
You can manually try to create a Role in the kube-system namespace with the name kubeadm:kubeadm-config. For example:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: kube-system
  name: kubeadm:kubeadm-config
rules:
- apiGroups:
  - ""
  resourceNames:
  - kubeadm-config
  resources:
  - configmaps
  verbs:
  - get
EOF
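Before applying it, you may want to confirm the Role is actually missing on your cluster (newer kubeadm releases create it during init, which is why a version mismatch can leave it absent). A quick check, assuming kubectl access on the master:

```shell
# If this returns "NotFound", the Role is missing and the
# kubectl apply above will create it; if it already exists,
# inspect its rules instead of re-creating it.
kubectl -n kube-system get role kubeadm:kubeadm-config -o yaml
```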
and then create a matching RoleBinding:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: kubeadm:kubeadm-config
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:kubeadm-config
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:kubeadm:default-node-token
EOF
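Once both objects exist, you can sanity-check the permission from the master by impersonating the bootstrap-token user from the error message (system:bootstrap:2txs62 here comes from your log; substitute your own token ID):

```shell
# Verify the bootstrap-token user can now read the ConfigMap.
# The user and group below are taken from the error message and
# token output in the question; replace 2txs62 with your token ID.
kubectl auth can-i get configmaps/kubeadm-config -n kube-system \
  --as=system:bootstrap:2txs62 \
  --as-group=system:bootstrappers:kubeadm:default-node-token
```

If this prints "yes", re-run the kubeadm join command on the new node.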