Getting this error message after kubectl apply -f .
error: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{"include (print $.Template.BasePath \"/configmap.yaml\") . | sha256sum":interface {}(nil)}
I've tried putting checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
in different places, but I don't understand YAML or JSON well enough to figure out the issue.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: cloudnatived/demo:hello-config-env
          ports:
            - containerPort: 8888
          env:
            - name: GREETING
              valueFrom:
                configMapKeyRef:
                  name: demo-config
                  key: greeting
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
I just want to be able to update my pods when the config is changed. I gather I'm supposed to run helm upgrade somewhere here, but I'm not sure what arguments to give it.
You can't use the {{ ... }} syntax with kubectl apply. That syntax belongs to the Helm package manager. Without Helm rendering the template first, { ... } looks like YAML flow-map syntax, and the parser gets confused: it reads the inner braces as a map, and a map can't be used as a key when the YAML is converted to JSON, which is exactly what the error message is telling you.
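For comparison, here is a rough sketch of how a plain YAML parser reads these lines; the labels: line is only an ordinary example for contrast, not part of your manifest:

# Ordinary YAML flow mapping: braces around string keys and values
labels: {app: demo}
# The un-rendered Helm expression is read with the same rules: the inner
# braces become a map, that map becomes the key of the outer map, and a
# map can't become a JSON object key, which is what the error says.
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}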
annotations: generally belong under metadata:, next to labels:. Annotations in the Kubernetes documentation might be useful reading.
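For reference, this is roughly where an annotations: block normally sits, next to labels: under metadata:; the checksum value shown is a hypothetical pre-computed string, not a template expression:

  template:
    metadata:
      labels:
        app: demo
      annotations:
        checksum/config: "9f86d081884c7d65"   # any plain string value is fine here
    spec:
      ...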
I just want to be able to update my pods without restarting them.
Kubernetes doesn't work that way, with some very limited exceptions.
If you're only talking about configuration data and not code, you can Add ConfigMap data to a Volume; then if the ConfigMap changes, the files the pod sees will also change. The syntax you're stumbling over is actually a workaround to force a pod to restart when the ConfigMap data changes: it is the opposite of what you're trying for, and you should delete these two lines.
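A minimal sketch of that approach, reusing the demo-config ConfigMap from the question (the /config mount path is just an example):

    spec:
      containers:
        - name: demo
          image: cloudnatived/demo:hello-config-env
          volumeMounts:
            - name: config
              mountPath: /config   # one file per ConfigMap key appears here
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: demo-config

When the ConfigMap changes, the mounted files are updated after a short delay, but the application still has to notice and re-read them. Environment variables populated from a ConfigMap (like GREETING above) do not update; they are fixed for the life of the pod.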
For routine code changes, the standard path is to build and push a new Docker image, then update your Deployment object with the new image tag. (It must be a different tag string than you had before; just pushing a new image with the same tag isn't enough.) Kubernetes will then automatically start new pods with the new image and, once those are up, shut down the pods running the old image. Under some circumstances Kubernetes can even delete and recreate pods on its own.
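Concretely, the only change in the Deployment is the image line; the -v2 tag below is hypothetical, and you would kubectl apply the file again after editing it:

      containers:
        - name: demo
          # a new, previously unused tag; re-pushing the same tag is not enough
          image: cloudnatived/demo:hello-config-env-v2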
Are you using Helm? If so, try moving the annotations: block under the pod template's metadata: (spec.template.metadata), next to its labels:. That is where the Helm documentation puts this checksum annotation; an annotation on the Deployment's own top-level metadata: would be valid YAML, but changing it doesn't touch the pod template, so it won't roll your pods.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      ...
In any case, a (rolling) restart is often required to pick up changes, unless the application can detect changes to its external configuration and hot-reload them.
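If you are using Helm, you would apply the change by upgrading the release rather than with kubectl apply, along the lines of helm upgrade <release-name> <chart-directory>; both arguments here are placeholders for your own release name and chart path. Helm renders the {{ ... }} expressions before sending the manifests to the cluster, so the checksum annotation changes whenever the rendered configmap.yaml changes, and that in turn rolls the pods.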