Why Aren't My Environment Variables Set in Kubernetes Pods From ConfigMap?

7/12/2021

I have the following configmap spec:

apiVersion: v1
data:
  MY_NON_SECRET: foo
  MY_OTHER_NON_SECRET: bar
kind: ConfigMap
metadata:
  name: web-configmap
  namespace: default

$ kubectl describe configmap web-configmap
Name:         web-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
MY_NON_SECRET:
----
foo
MY_OTHER_NON_SECRET:
----
bar
Events:  <none>

And the following pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: kahunacohen/hello-kube:latest
      envFrom:
        - configMapRef:
            name: web-configmap
      ports:
      - containerPort: 3000

$ kubectl describe pod web-deployment-5bb9d846b6-8k2s9
Name:         web-deployment-5bb9d846b6-8k2s9
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Mon, 12 Jul 2021 12:22:24 +0300
Labels:       app=web-pod
              pod-template-hash=5bb9d846b6
              service=web-service
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/web-deployment-5bb9d846b6
Containers:
  web:
    Container ID:   docker://8de5472c9605e5764276c345865ec52f9ec032e01ed58bc9a02de525af788acf
    Image:          kahunacohen/hello-kube:latest
    Image ID:       docker-pullable://kahunacohen/hello-kube@sha256:930dc2ca802bff72ee39604533342ef55e24a34b4a42b9074e885f18789ea736
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 12 Jul 2021 12:22:27 +0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcqwz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-tcqwz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned default/web-deployment-5bb9d846b6-8k2s9 to minikube
  Normal  Pulling    19m   kubelet            Pulling image "kahunacohen/hello-kube:latest"
  Normal  Pulled     19m   kubelet            Successfully pulled image "kahunacohen/hello-kube:latest" in 2.3212119s
  Normal  Created    19m   kubelet            Created container web
  Normal  Started    19m   kubelet            Started container web

The pod has a container running Express.js with this code, which tries to print out the env vars set in the ConfigMap:

const process = require("process");
const express = require("express");
const app = express();


app.get("/", (req, res) => {
  res.send(`<h1>Kubernetes Expressjs Example 0.3</h1>
  <h2>Non-Secret Configuration Example</h2>
  <p>This uses ConfigMaps as env vars.</p>
  <ul>
    <li>MY_NON_SECRET: "${process.env.MY_NON_SECRET}"</li>
    <li>MY_OTHER_NON_SECRET: "${process.env.MY_OTHER_NON_SECRET}"</li>
  </ul>
  `);
});


app.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
})

When I deploy these pods, the env vars are undefined.

When I run $ kubectl exec {POD_NAME} -- env, I don't see my env vars.

What am I doing wrong? I've tried killing the pods, waiting until they restart, and then checking again, to no avail.

-- Aaron
configmap
express
kubectl
kubernetes

1 Answer

7/12/2021

It looks like your pods are managed by the web-deployment Deployment. You cannot patch such pods directly.

If you run kubectl get pod <pod-name> -n <namespace> -o yaml, you'll see a block called ownerReferences under the metadata section. This tells you who the owner/manager of the pod is.
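
For the pod above, that block would look roughly like this (a sketch based on the Controlled By line in the describe output; the uid is a placeholder):

metadata:
  name: web-deployment-5bb9d846b6-8k2s9
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: web-deployment-5bb9d846b6
    uid: <replicaset-uid>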

In the case of a Deployment, here is the ownership hierarchy:

Deployment -> ReplicaSet -> Pod

i.e., a Deployment creates a ReplicaSet, and the ReplicaSet in turn creates the Pods.
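
You can see the whole chain at once with, e.g.:

kubectl get deployment,replicaset,pod -n default

The ReplicaSet name is the Deployment name plus a pod-template hash (5bb9d846b6 in your output), and each Pod name adds a further suffix on top of that.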

So, if you want to change anything in the pod spec, you should make that change in the Deployment, not in the ReplicaSet or the Pod directly, as those changes will get overwritten.

Patch your Deployment either by running the following command and adding the envFrom block to the pod template there:

kubectl edit deployment.apps <deployment-name> -n <namespace>

or update the Deployment YAML with your changes (a sketch is shown below) and run

kubectl apply -f <deployment-yaml-file>
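
For reference, here is a minimal sketch of what the Deployment could look like with envFrom in the pod template. The replicas count and selector are assumptions based on the pod labels shown above; adjust them to match your actual web-deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  namespace: default
spec:
  replicas: 1          # assumption; keep whatever your Deployment already uses
  selector:
    matchLabels:
      app: web-pod     # assumption based on the pod labels above
  template:
    metadata:
      labels:
        app: web-pod
        service: web-service
    spec:
      containers:
        - name: web
          image: kahunacohen/hello-kube:latest
          envFrom:                 # the important part: inject all keys from the ConfigMap
            - configMapRef:
                name: web-configmap
          ports:
            - containerPort: 3000

Once the Deployment is applied and the pods roll out again, kubectl describe pod should show web-configmap under "Environment Variables from:", and kubectl exec <pod-name> -- env should list MY_NON_SECRET and MY_OTHER_NON_SECRET.
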
-- Raghwendra Singh
Source: StackOverflow