`kubectl top nodes` doesn't work on slave nodes

9/1/2017

On any Kube slave node I try to run:

$ kubectl top nodes

And get an error:

Error from server (Forbidden): User "system:node:ip-10-43-0-13" cannot get services/proxy in the namespace "kube-system". (get services http:heapster:)

On the master node it works:

$ kubectl top nodes
NAME            CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
ip-10-43-0-10   95m          4%        2144Mi          58%
ip-10-43-0-11   656m         32%       1736Mi          47%
ip-10-43-0-12   362m         18%       2030Mi          55%
ip-10-43-0-13   256m         12%       2412Mi          65%
ip-10-43-0-14   254m         12%       2512Mi          68%

OK, what should I do? Give permissions to the system:node group, I suppose:

kubectl create clusterrolebinding bu-node-admin-binding --clusterrole=cluster-admin --group=system:node

It doesn't help.

OK, inspecting the cluster role:

$ kubectl describe clusterrole system:node
Name:       system:node
Labels:     kubernetes.io/bootstrapping=rbac-defaults
Annotations:    rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources                                         Non-Resource URLs  Resource Names  Verbs
  ---------                                         -----------------  --------------  -----
  configmaps                                        []                 []              [get]
  endpoints                                         []                 []              [get]
  events                                            []                 []              [create patch update]
  localsubjectaccessreviews.authorization.k8s.io    []                 []              [create]
  nodes                                             []                 []              [create get list watch delete patch update]
  nodes/status                                      []                 []              [patch update]
  persistentvolumeclaims                            []                 []              [get]
  persistentvolumes                                 []                 []              [get]
  pods                                              []                 []              [get list watch create delete]
  pods/eviction                                     []                 []              [create]
  pods/status                                       []                 []              [update]
  secrets                                           []                 []              [get]
  services                                          []                 []              [get list watch]
  subjectaccessreviews.authorization.k8s.io         []                 []              [create]
  tokenreviews.authentication.k8s.io                []                 []              [create]

Trying to patch the rules:

kubectl patch clusterrole system:node --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{"apiGroups": [""], "resources": ["services/proxy"], "verbs": ["get", "list", "watch"]}}]'

Now:

$ kubectl describe clusterrole system:node
Name:       system:node
Labels:     kubernetes.io/bootstrapping=rbac-defaults
Annotations:    rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources                                         Non-Resource URLs  Resource Names  Verbs
  ---------                                         -----------------  --------------  -----
  ...
  services/proxy                                    []                 []              [get list watch]
  ...

kubectl top nodes still doesn't work.
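To check whether the patched rule actually applies to the node's credentials, something like this can be run on the slave node (assuming this kubectl version supports auth can-i and its --subresource flag):

$ kubectl auth can-i get services --subresource=proxy --namespace=kube-system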

The only way it works is:

kubectl create clusterrolebinding bu-node-admin-binding --clusterrole=cluster-admin --user=system:node:ip-10-43-0-13

This also works, but it's node-specific too:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: top-nodes-watcher
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  verbs: ["get", "watch", "list"]
---
# This cluster role binding grants the top-nodes-watcher role to a single node user.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: top-nodes-watcher-binding
subjects:
- kind: User
  name: system:node:ip-10-43-0-13
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: top-nodes-watcher
  apiGroup: rbac.authorization.k8s.io

And I would have to apply it for each slave node. Can I do it with just one group or role? What am I doing wrong?
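For reference, a group-scoped binding would look roughly like this; note that the built-in group the kubelets belong to in stock Kubernetes is system:nodes (with an "s"), not system:node, and whether such a binding is honored with the Node authorizer in play is exactly what I'm unsure about:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: top-nodes-watcher-group-binding
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: top-nodes-watcher
  apiGroup: rbac.authorization.k8s.io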

More details:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

What I really need is the physical nodes' memory and CPU usage in %.
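In case it matters, the same numbers should also be reachable by querying heapster's model API through the API server's service proxy, e.g. from the master (the exact model API paths depend on the heapster version, so treat this as a sketch):

$ kubectl proxy --port=8001 &
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/http:heapster:/proxy/api/v1/model/nodes/ip-10-43-0-13/metrics/memory/usage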

Thanks for your attention.

-- Alexander
kubernetes

2 Answers

10/11/2017

I ended up with the following:

  • removed NodeRestriction from the kube-apiserver --admission-control option
  • removed Node from the --authorization-mode option, leaving only RBAC there (resulting flags sketched below)
  • patched the system:node role with kubectl patch clusterrole system:node --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{"apiGroups": [""], "resources": ["services/proxy"], "verbs": ["get", "list", "watch"]}}]'
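The resulting kube-apiserver flags look roughly like this (the admission plugin list here is only illustrative; keep whatever other plugins your cluster already uses):

kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --authorization-mode=RBAC \
  ...

Note that dropping the Node authorizer and NodeRestriction loosens node isolation, so this trades some security for convenience.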
-- Alexander
Source: StackOverflow

10/10/2017

To simply solve this problem (using kubectl top nodes on all slave nodes), you can copy the kubeconfig your kubectl uses on the master to all the other slaves.
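A sketch, assuming a kubeadm-style layout where the admin kubeconfig lives at /etc/kubernetes/admin.conf (adjust paths and users for your setup):

# on the master: copy the admin kubeconfig to a slave
scp /etc/kubernetes/admin.conf user@ip-10-43-0-13:~/.kube/config

# on the slave: kubectl now uses the copied credentials
kubectl top nodes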

And to explain why you hit this problem: I think you are using the kubelet's kubeconfig for your kubectl on the slave nodes (correct me if not).

In k8s v1.7+, Kubernetes has deprecated the system:node role in favor of the Node authorizer and the NodeRestriction admission plugin by default (the RBAC documentation on default roles describes system:node). So when you try to patch system:node, it doesn't take effect. The kubelet uses the node-specific system:node:[node_name] user to constrain each node's behavior.
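You can check which context and credentials kubectl is using on a slave node with:

$ kubectl config view --minify

If it points at the kubelet's client certificate or kubeconfig, you are subject to the restrictions described above.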

-- Crazykev
Source: StackOverflow