How to avoid repeating GUID in deployment definition

5/2/2018
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
spec:
  selector:
    matchLabels:
      client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
  template:
    metadata:
      labels:
        client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
    spec:
      containers:
      - name: xxx
        image: xxx
        env:
        - name: GUID
          valueFrom:
            fieldRef:
              fieldPath: spec.template.metadata.labels.client

I tried passing the existing value from the definition into the env variable using different expressions, and none of them worked:

error converting fieldPath: field label not supported: spec.template.metadata.labels.client

Update: I found what you can pass in, but it doesn't help...

I essentially have to repeat myself 4 times. Is there a way to repeat myself less in the pod definition to ease management? According to this, you can pass something in, but it doesn't say what.

P.S. Do I really need the same GUID in spec.template and spec.selector? It doesn't work without it.

-- 4c74356b41
configuration
containers
docker
kubernetes

1 Answer

5/2/2018

You don't necessarily need to use GUIDs here; those are just labels and names... Secondly, they refer to different things (although some of them have to be the same in some cases):

  • metadata name is the name of the Deployment in question. You will use it to reference and manipulate this specific Deployment during its lifecycle.
  • labels and matchLabels need to be the same if you want them matched together, which in this case you do. Kubernetes is quite flexible when it comes to labeling, and a resource can carry multiple labels at once (say a pod can have labels: app: postfix, tier: backend, layer: mysql, env: dev). It stands to reason that the label(s) you want matched and the label(s) to be matched have to be identical in order to match (see the short sketch after this list).
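
As an illustration of the second point, a Pod template can carry several labels while the selector matches on only one of them; a minimal sketch (all names are illustrative, not taken from the question):

spec:
  selector:
    matchLabels:
      app: backend        # only this label has to match the template
  template:
    metadata:
      labels:
        app: backend      # required, because matchLabels selects on it
        tier: mysql       # extra labels are allowed and are ignored by the selector
        env: dev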

As for automating the labeling in a Deployment to avoid repetition, maybe Helm charts or some other 'automating Kubernetes' approach, depending on your actual need, would be a better fit?
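
For example, with Helm the GUID could live in a single values entry and be templated into all four places; this is only a rough sketch (the chart layout and value name are assumptions, not part of the original answer):

# values.yaml
clientGuid: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0

# templates/deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-{{ .Values.clientGuid }}
spec:
  selector:
    matchLabels:
      client: {{ .Values.clientGuid }}
  template:
    metadata:
      labels:
        client: {{ .Values.clientGuid }}
    spec:
      containers:
      - name: some-name
        image: nginx
        env:
        - name: GUID
          value: {{ .Values.clientGuid | quote }}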

An additional note: to pass a label into an env variable, the following can be used starting with Kubernetes 1.9:

...
template:
  metadata:
    labels:
      label_name: label-value
...
env:
  - name: ENV_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['label_name']

Below is a full mock example to demonstrate this (client 1.9.3, server 1.9.0):

# cat d.yaml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-guidhere
spec:
  selector:
    matchLabels: 
      client: guidhere
  template:
    metadata:
      labels:
        client: guidhere
    spec:
      containers:
      - name: some-name
        image: nginx
        env:
          - name: GUIDENV
            valueFrom:
              fieldRef:
                fieldPath: metadata.labels['client']

 # after: kubectl create -f d.yaml and connecting to container
 # echo $GUIDENV responds with "guidhere"

I've just tried this and it works correctly (mind the k8s versions).
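
If you prefer to check without opening an interactive shell, something like the following should work (the pod name is a placeholder for whatever name the Deployment generates):

# find the pod created by the Deployment
kubectl get pods -l client=guidhere
# print the variable from the running container
kubectl exec <pod-name> -- printenv GUIDENV
# guidhere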

-- Const
Source: StackOverflow