User "worker-key" cannot list pods at the cluster scope

5/5/2017

How can I set up the kubelet config file when using --authorization-mode=RBAC on the apiserver?

The config file I am using now is as follows:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://172.23.9.102:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

And the kubelet log:

May 05 07:19:15 fc-02 kubelet[27466]: E0505 07:19:15.077237   27466 event.go:199] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fc-02.14bba4a48e5174d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"fc-02", UID:"fc-02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node fc-02 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"fc-02"}, FirstTimestamp:v1.Time{Time:time.Time{sec:63629565528, nsec:72746197, loc:(*time.Location)(0x4e5b080)}}, LastTimestamp:v1.Time{Time:time.Time{sec:63629565555, nsec:74581668, loc:(*time.Location)(0x4e5b080)}}, Count:19, Type:"Normal"}': 'User "worker-key" cannot patch events in the namespace "default". (patch events fc-02.14bba4a48e5174d5)' (will not retry!)
May 05 07:19:15 fc-02 kubelet[27466]: E0505 07:19:15.078703   27466 event.go:199] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fc-02.14bba4a48e517d94", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"fc-02", UID:"fc-02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node fc-02 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"fc-02"}, FirstTimestamp:v1.Time{Time:time.Time{sec:63629565528, nsec:72748436, loc:(*time.Location)(0x4e5b080)}}, LastTimestamp:v1.Time{Time:time.Time{sec:63629565555, nsec:74588802, loc:(*time.Location)(0x4e5b080)}}, Count:19, Type:"Normal"}': 'User "worker-key" cannot patch events in the namespace "default". (patch events fc-02.14bba4a48e517d94)' (will not retry!)
May 05 07:19:15 fc-02 kubelet[27466]: E0505 07:19:15.079602   27466 event.go:199] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fc-02.14bba4a48e51646d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"fc-02", UID:"fc-02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk", Message:"Node fc-02 status is now: NodeHasSufficientDisk", Source:v1.EventSource{Component:"kubelet", Host:"fc-02"}, FirstTimestamp:v1.Time{Time:time.Time{sec:63629565528, nsec:72741997, loc:(*time.Location)(0x4e5b080)}}, LastTimestamp:v1.Time{Time:time.Time{sec:63629565555, nsec:74571892, loc:(*time.Location)(0x4e5b080)}}, Count:19, Type:"Normal"}': 'User "worker-key" cannot patch events in the namespace "default". (patch events fc-02.14bba4a48e51646d)' (will not retry!)
May 05 07:19:15 fc-02 kubelet[27466]: E0505 07:19:15.087523   27466 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: User "worker-key" cannot list pods at the cluster scope. (get pods)
May 05 07:19:15 fc-02 kubelet[27466]: E0505 07:19:15.097716   27466 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: User "worker-key" cannot list nodes at the cluster scope. (get nodes)
May 05 07:19:15 fc-02 kubelet[27466]: E0505 07:19:15.318549   27466 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: User "worker-key" cannot list services at the cluster scope. (get services)
May 05 07:19:16 fc-02 kubelet[27466]: E0505 07:19:16.094525   27466 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: User "worker-key" cannot list pods at the cluster scope. (get pods)
May 05 07:19:16 fc-02 kubelet[27466]: E0505 07:19:16.099589   27466 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: User "worker-key" cannot list nodes at the cluster scope. (get nodes)
May 05 07:19:16 fc-02 kubelet[27466]: E0505 07:19:16.320025   27466 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: User "worker-key" cannot list services at the cluster scope. (get services)

I did not find anything about setting the user or group for the kubelet. Can anyone help me?

-- sope
kubernetes
rbac

2 Answers

5/23/2017

I think this is caused by your kubelet not having permission to access the cluster.

You should check whether your credentials are installed correctly.
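
For example, you can confirm which identity the certificate actually presents: with x509 client certificates, the subject CN becomes the RBAC username and the O fields become the groups. The path below is the one from the question's kubeconfig; this is only a sketch of the check:

# show the subject of the kubelet's client certificate;
# the CN is what RBAC sees as the username (here apparently "worker-key")
openssl x509 -in /etc/kubernetes/ssl/worker.pem -noout -subject

If the CN is not what you expect, the usual convention is to issue the worker certificate with CN system:node:<nodeName> and O system:nodes, so the cluster's default node bindings apply.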

-- Xianglin Gao
Source: StackOverflow

5/15/2017

Please use kubectl get to show the clusterrolebindings and clusterroles. Check whether the kubelet's user has permission to list nodes.
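
For example (a sketch; kubectl auth can-i and the --as impersonation flag require a reasonably recent kubectl and admin credentials):

# list the RBAC objects and inspect the default node binding
kubectl get clusterrolebindings
kubectl get clusterroles
kubectl describe clusterrolebinding system:node

# check whether the identity the kubelet presents can list nodes cluster-wide
kubectl auth can-i list nodes --as=worker-key

If that returns "no", one workaround is to bind that user to the built-in system:node ClusterRole, e.g. kubectl create clusterrolebinding kubelet-node-binding --clusterrole=system:node --user=worker-key (the binding name is just an example), though reissuing the certificate with the expected node identity is the cleaner fix.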

-- luke
Source: StackOverflow