Stakater Reloader - errors in pod logs on startup in Kubernetes

12/13/2019

I'm trying to get Stakater Reloader working on a Kubernetes cluster. I installed the stable release via Helm and annotated my deployments as per the instructions, but rolling updates didn't happen when I changed a ConfigMap. When I checked the reloader pod logs I found this:

time="2019-12-13T15:46:02Z" level=info msg="Environment:Kubernetes"
time="2019-12-13T15:46:02Z" level=info msg="Starting Reloader"
time="2019-12-13T15:46:02Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2019-12-13T15:46:02Z" level=info msg="Starting Controller to watch resource type: secrets"
time="2019-12-13T15:46:02Z" level=info msg="Starting Controller to watch resource type: configMaps"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list deployments the server could not find the requested resource"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list daemonSets the server could not find the requested resource"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list statefulSets the server could not find the requested resource"

Then the last 3 lines just repeat periodically.
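For reference, this is roughly how the deployments are annotated (names here are illustrative, not my real ones); with this annotation Reloader is supposed to roll the deployment whenever a ConfigMap or Secret it references changes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # illustrative name
  annotations:
    # Tell Reloader to watch every ConfigMap/Secret this deployment references
    reloader.stakater.com/auto: "true"
spec:
  ...
```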

I'm wondering if it's an RBAC issue, but the ClusterRole and ClusterRoleBinding seem to be in place.
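To rule RBAC out, I impersonated Reloader's service account with `kubectl auth can-i` (the service account name depends on the Helm release name; `reloader-reloader` in the `default` namespace is an assumption here):

```shell
# Returns "yes" if the Reloader service account may list deployments cluster-wide;
# an RBAC problem would show "no" or a "forbidden" error instead of the
# "could not find the requested resource" message in the logs
kubectl auth can-i list deployments --all-namespaces \
  --as=system:serviceaccount:default:reloader-reloader
```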

Any help would be greatly appreciated.

-- James B
kubernetes
kubernetes-helm

1 Answer

1/7/2020

Apparently the Reloader version in the stable Helm repository does not support Kubernetes 1.16.x: it lists Deployments, DaemonSets, and StatefulSets through API versions that 1.16 removed, which is why the server answers "could not find the requested resource".
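Kubernetes 1.16 removed Deployment, DaemonSet, and StatefulSet from the `extensions/v1beta1`, `apps/v1beta1`, and `apps/v1beta2` API groups, leaving only `apps/v1`. You can confirm which versions your cluster still serves:

```shell
# List the served API group/versions for workload resources; on 1.16+
# only apps/v1 remains for Deployments, DaemonSets, and StatefulSets
kubectl api-versions | grep -E 'apps|extensions'
```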

See here for a workaround.
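In short, the workaround is to install a newer Reloader from Stakater's own chart repository instead of stable (this assumes the repo at `https://stakater.github.io/stakater-charts` and a release named `reloader`):

```shell
# Add Stakater's chart repo, which carries Reloader builds
# that list workloads via the apps/v1 API
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm upgrade --install reloader stakater/reloader
```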

-- James B
Source: StackOverflow