I have a deployment which includes a ConfigMap, a PersistentVolumeClaim, and a Service. I have changed the ConfigMap and re-applied the deployment to my cluster. I understand that this change does not automatically restart the pod in the deployment:
configmap change doesn't reflect automatically on respective pods
Updated configMap.yaml but it's not being applied to Kubernetes pods
I know that I can kubectl delete -f wiki.yaml && kubectl apply -f wiki.yaml, but that destroys the persistent volume, which has data I want to survive the restart. How can I restart the pod in a way that keeps the existing volume?
Here's what wiki.yaml looks like:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dot-wiki
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": false,
      "allowed": "*"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiki-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiki
  template:
    metadata:
      labels:
        app: wiki
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      initContainers:
        - name: wiki-config
          image: dobbs/farm:restrict-new-wiki
          securityContext:
            runAsUser: 0
            runAsGroup: 0
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: dot-wiki
              mountPath: /home/node/.wiki
          command: ["chown", "-R", "1000:1000", "/home/node/.wiki"]
      containers:
        - name: farm
          image: dobbs/farm:restrict-new-wiki
          command: [
            "wiki", "--config", "/etc/config/config.json",
            "--admin", "bad password but memorable",
            "--cookieSecret", "any-random-string-will-do-the-trick"]
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: dot-wiki
              mountPath: /home/node/.wiki
            - name: config-templates
              mountPath: /etc/config
      volumes:
        - name: dot-wiki
          persistentVolumeClaim:
            claimName: dot-wiki
        - name: config-templates
          configMap:
            name: wiki-config
---
apiVersion: v1
kind: Service
metadata:
  name: wiki-service
spec:
  ports:
    - name: http
      targetPort: 3000
      port: 80
  selector:
    app: wiki
In addition to kubectl rollout restart deployment, there are some alternative approaches to do this:
1. Restart Pods
kubectl delete pods -l app=wiki
This causes the Pods of your Deployment to be restarted, in which case they read the updated ConfigMap.
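To confirm that the replacement Pods came up and picked up the new ConfigMap, a quick check along these lines should work (the app=wiki label and the /etc/config mount path are taken from the manifest above; POD_NAME is a placeholder for the new Pod's name):

# watch the Deployment's ReplicaSet recreate the Pods
kubectl get pods -l app=wiki --watch

# once the new Pod is Running, confirm it sees the updated config
kubectl exec POD_NAME -- cat /etc/config/config.json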
2. Version the ConfigMap
Instead of naming your ConfigMap just wiki-config, name it wiki-config-v1. Then when you update your configuration, just create a new ConfigMap named wiki-config-v2.
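For example, the new ConfigMap is just a copy of the old one under the versioned name with the updated data (the changed secure_cookie value below is only illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config-v2
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": true,
      "allowed": "*"
    }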
Now, edit your Deployment specification to reference the wiki-config-v2 ConfigMap instead of wiki-config-v1:
apiVersion: apps/v1
kind: Deployment
# ...
      volumes:
        - name: config-templates
          configMap:
            name: wiki-config-v2
Then, reapply the Deployment:
kubectl apply -f wiki.yaml
Since the Pod template in the Deployment manifest has changed, reapplying the Deployment recreates all the Pods, and the new Pods will use the new version of the ConfigMap.
As an additional advantage of this approach, if you keep the old ConfigMap (wiki-config-v1) around rather than deleting it, you can revert to a previous configuration at any time by just editing the Deployment manifest again.
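A sketch of such a rollback, assuming wiki-config-v1 was kept around: point the volume back at the old ConfigMap and re-apply.

      volumes:
        - name: config-templates
          configMap:
            name: wiki-config-v1   # revert from wiki-config-v2

Re-applying this changes the Pod template again, so the Deployment rolls the Pods once more, this time with the previous configuration.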
This approach is described in Chapter 1 of Kubernetes Best Practices (O'Reilly, 2019).
You should do nothing but change your ConfigMap and wait for the changes to be applied. The answer you posted the link to is wrong. After a ConfigMap change, the changes are not applied right away; it can take some time, around 5 minutes or so.
If that doesn't happen, you can report a bug about that specific version of k8s.
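If you want to check whether the mounted file has been refreshed, you can compare what the ConfigMap contains with what the container actually sees (the /etc/config mount path comes from the Deployment manifest above; POD_NAME is a placeholder for the wiki Pod's name from kubectl get pods -l app=wiki):

# what the ConfigMap object currently contains
kubectl get configmap wiki-config -o jsonpath='{.data.config\.json}'

# what the running container sees at the mount path
kubectl exec POD_NAME -- cat /etc/config/config.json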
For the specific question about restarting containers after the configuration is changed, as of kubectl v1.15 you can do this:
# apply the config changes
kubectl apply -f wiki.yaml
# restart the containers in the deployment
kubectl rollout restart deployment wiki-deployment
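If you want to wait until the restart has finished, kubectl rollout status works as a follow-up:

# block until the new Pods are ready and the old ones have terminated
kubectl rollout status deployment wiki-deployment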