I'm creating a small application which will listen for changes on various resources through the Kubernetes API. For such a task it needs permissions, so I thought I'd create a ClusterRole:
{{- if not .Values.skipRole }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "kubewatcher.serviceAccountName" . }}
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - events
  - namespaces
  - services
  - deployments
  - replicationcontrollers
  - replicasets
  - daemonsets
  - persistentvolumes
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - get
  - list
  - watch
{{- end }}
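A quick way to sanity-check these rules after installing the chart is kubectl auth can-i while impersonating the service account. The system:serviceaccount:kubewatcher:kubewatcher part below is an assumption (the first release's namespace and the default account name); adjust it to match your release:

kubectl auth can-i watch pods --as=system:serviceaccount:kubewatcher:kubewatcher
kubectl auth can-i list cronjobs.batch --as=system:serviceaccount:kubewatcher:kubewatcher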
I also created a ServiceAccount:
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "kubewatcher.serviceAccountName" . }}
  labels:
    {{- include "kubewatcher.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
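For reference, a kubewatcher.serviceAccountName helper along the lines of the standard helm create boilerplate looks like this (a sketch, not necessarily my exact _helpers.tpl); it falls back to the chart fullname when serviceAccount.name is not set:

{{/* ServiceAccount name: defaults to the fullname when serviceAccount.name is empty */}}
{{- define "kubewatcher.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "kubewatcher.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}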
And finally a ClusterRoleBinding:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "kubewatcher.fullname" . }}
subjects:
- kind: ServiceAccount
  name: {{ include "kubewatcher.fullname" . }}
  namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ include "kubewatcher.serviceAccountName" . }}
  apiGroup: rbac.authorization.k8s.io
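For completeness, the Deployment's pod spec has to reference the account via serviceAccountName; trimmed to the relevant lines, it looks roughly like this:

spec:
  template:
    spec:
      # the watcher pod runs as the shared service account
      serviceAccountName: {{ include "kubewatcher.serviceAccountName" . }}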
My application can interact with the API and everything seems to work just fine. However, when I install another instance of my app, as I do while developing it further, I get the error message below.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRoleBinding "kubewatcher" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "kubewatcher-dev": current value is "kubewatcher"
The second instance is installed using the command below, where I thought --set skipRole=true would let me bind to the already present ClusterRole:
helm install kubewatcher --namespace kubewatcher-dev helm/ --set skipRole=true
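For comparison, the first instance was installed into its own namespace with roughly this command (reconstructed from the release name and namespace shown in the error; the exact flags may have differed):

helm install kubewatcher --namespace kubewatcher helm/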
Am I on the right path? Is there a better way? I tried to post the relevant parts of my code; please let me know if I should post additional parts.
I also got this error. Following the error message, I searched for the role definitions in operator_namespaced.yaml, then deleted these two resources with kubectl and the restart succeeded:
1. kubectl delete ClusterRole ray-operator-clusterrole
2. kubectl delete ClusterRoleBinding ray-operator-clusterrolebinding
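To confirm the leftover cluster-scoped objects are really gone before reinstalling, a quick check (using the same names as above) is:

kubectl get clusterrole ray-operator-clusterrole
kubectl get clusterrolebinding ray-operator-clusterrolebinding

Both should return a NotFound error once the deletion has gone through.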