Forbidden returned when mounting the default tokens in HA Kubernetes cluster

12/19/2017

I have a problem with mounting the default tokens in Kubernetes; it no longer works for me, and I wanted to ask directly before creating an issue on GitHub. My setup is basically an HA bare-metal cluster with a manually deployed etcd (including the CA, certs, and keys). The deployment runs and the nodes register, but I cannot deploy pods; they always give this error:

MountVolume.SetUp failed for volume "default-token-ddj5s" : secrets "default-token-ddj5s" is forbidden: User "system:node:tweak-node-1" cannot get secrets in the namespace "default": no path found to object

where tweak-node-1 is one of my node names/hostnames. I have found some similar issues:

- https://github.com/kubernetes/kubernetes/issues/18239
- https://github.com/kubernetes/kubernetes/issues/25828

but none of them turned out to be the same problem, so their fixes did not help. I only use the default namespace when trying to run pods, and I tried both RBAC and ABAC authorization modes; both gave the same result. This is the kubeadm template I use for deployment, showing the version and the etcd config:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: IP1
  bindPort: 6443
authorizationMode: ABAC
kubernetesVersion: 1.8.5
etcd:
  endpoints:
  - https://IP1:2379
  - https://IP2:2379
  - https://IP3:2379
  caFile: /opt/cfg/etcd/pki/etcd-ca.crt
  certFile: /opt/cfg/etcd/pki/etcd.crt
  keyFile: /opt/cfg/etcd/pki/etcd.key
  dataDir: /var/lib/etcd
etcdVersion: v3.2.9
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- IP1
- IP2
- IP3
- DNS-NAME1
- DNS-NAME2
- DNS-NAME3
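
To confirm what authorization decision the kubelet identity is getting, I can impersonate it from an admin kubeconfig (a quick check; the node name below is just mine). It currently reports "no" for me:

# ask the API server whether the node identity may read secrets in "default"
kubectl auth can-i get secrets --namespace default \
  --as=system:node:tweak-node-1 --as-group=system:nodes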
-- abdulrahmantkhalifa
kubernetes
kubernetes-security

2 Answers

12/21/2017

Update

So, the specific solution: the problem was that I was running version 1.8.x and copying the certs and keys manually, so each kubelet did not have its own system:node credential or binding, as described in https://kubernetes.io/docs/admin/authorization/node/#overview:

RBAC Node Permissions: In 1.8, the binding will not be created at all.

When using RBAC, the system:node cluster role will continue to be created, for compatibility with deployment methods that bind other users or groups to that role.
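
A quick way to see this on an affected node is to look at the identity inside the kubelet's client certificate; with certs copied from master1, every kubelet presents the same subject instead of its own system:node:<hostname> one. This assumes a kubeadm-style /etc/kubernetes/kubelet.conf with an embedded client-certificate-data field (the path and layout may differ if the cert is referenced as a file instead):

# decode the kubelet client cert and print its subject;
# a healthy node shows O=system:nodes, CN=system:node:<node name>
grep client-certificate-data /etc/kubernetes/kubelet.conf \
  | awk '{print $2}' | base64 -d | openssl x509 -noout -subject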

I fixed it in either of two ways:

1 - Using kubeadm join instead of copying the /etc/kubernetes files from master1

2 - After deployment, patching the clusterrolebinding for system:node:

kubectl patch clusterrolebinding system:node \
  -p '{"apiVersion": "rbac.authorization.k8s.io/v1beta1", "kind": "ClusterRoleBinding", "metadata": {"name": "system:node"}, "subjects": [{"kind": "Group", "name": "system:nodes"}]}'
-- abdulrahmantkhalifa
Source: StackOverflow

12/19/2017

Your node must use credentials that match its Node API object name, as described in https://kubernetes.io/docs/admin/authorization/node/#overview

In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. This group and user name format match the identity created for each kubelet as part of kubelet TLS bootstrapping.

-- Jordan Liggitt
Source: StackOverflow