CSINode error when running multiple schedulers in Kubernetes

11/29/2019

I tried to create an additional scheduler running on Kubernetes, following this instruction: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/. The new scheduler's status is Running, but its logs keep producing the error below, and pods that use the new scheduler stay in Pending status.

E1129 02:43:22.639372       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: the server could not find the requested resource
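
For reference, one way to check which API version serves the CSINode resource the scheduler is trying to list (a diagnostic sketch, assuming kubectl is pointed at the affected cluster):

kubectl api-resources --api-group=storage.k8s.io             # shows which API version serves csinodes
kubectl get --raw /apis/storage.k8s.io/v1 | grep -i csinode  # empty on clusters that only serve CSINode at v1beta1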

and this is my kube-scheduler ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2019-11-28T08:29:43Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-scheduler
  resourceVersion: "74398"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Akube-scheduler
  uid: 517e8769-911c-4833-a37c-254edf49cbaa
rules:
- apiGroups:
  - ""
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - create
- apiGroups:
  - ""
  resourceNames:
  - kube-scheduler
  - my-scheduler
  resources:
  - endpoints
  verbs:
  - delete
  - get
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - delete
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - pods/binding
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - pods/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - replicationcontrollers
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  - extensions
  resources:
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create
- apiGroups:
  - storage.k8s.io
  resources:
  - csinodes
  verbs:
  - watch
  - list
  - get
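
For completeness, a pod selects the second scheduler through spec.schedulerName; below is a minimal sketch of such a pod, assuming the scheduler was deployed under the name my-scheduler used in the ClusterRole above:

apiVersion: v1
kind: Pod
metadata:
  name: scheduler-test
spec:
  schedulerName: my-scheduler  # must match the name the second scheduler was started with
  containers:
  - name: nginx
    image: nginx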

Is there any suggestion for this problem?

Thank you

-- akhmad alimudin
kubernetes
scheduler

2 Answers

3/18/2020

@mario is right. I hit the same issue: my original Kubernetes version is v1.16.3, but I use the Kubernetes scheduler framework from v1.17.x.

I tested my new scheduler on another k8s cluster running v1.17.x, and it works.

-- vincent pli
Source: StackOverflow

12/3/2019

Finally, I tried using an older version of Kubernetes (before 1.16.3). I am using 1.15.6 and it works now. – akhmad alimudin

OK, now I understand the cause of the issue you experienced. You probably tried to run a newer kube-scheduler version against a slightly older k8s cluster (where the key component is the kube-apiserver), which cannot be done. As you can read in the official Kubernetes documentation:

kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the kube-apiserver instances they communicate with. They are expected to match the kube-apiserver minor version, but may be up to one minor version older (to allow live upgrades).

Example:

kube-apiserver is at 1.13
kube-controller-manager, kube-scheduler, and cloud-controller-manager are supported at 1.13 and 1.12

So you can use a kube-scheduler that is up to one minor version older than your currently deployed kube-apiserver, but not a newer one.
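
A quick way to verify this skew (a sketch, assuming kubectl access to the cluster):

kubectl version   # compare the reported Server Version against the scheduler's image tag
kubectl -n kube-system get pods -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image

The second command lists each control-plane pod together with its image, so the custom scheduler's image tag can be compared against the kube-apiserver version directly.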

-- mario
Source: StackOverflow