Is it possible to know if the node where a Kubernetes Pod is being scheduled is a master or a worker?

5/27/2019

I'm currently using Kubernetes to schedule a DaemonSet on both master and worker nodes.

The DaemonSet definition is the same for both node types (same image, same volumes, etc.); the only difference is that when the entrypoint is executed, I need to write a different configuration file (generated in Python with some dynamic values) depending on whether the node is a master or a worker.

Currently, to work around this, I'm using two different DaemonSet definitions with an env value that tells whether the node is a master or not. Here's the YAML file (only the relevant parts):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: worker-ds
  namespace: kube-system
  labels:
    k8s-app: worker
spec:
  ...
    spec:
      hostNetwork: true
      containers:
        - name: my-image
          ...
          env:
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: IS_MASTER
              value: "false"
      ...
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: master-ds
  namespace: kube-system
  labels:
    k8s-app: master
spec:
  ...
    spec:
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
        - name: my-image
          ...
          env:
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: IS_MASTER
              value: "true"
      ...

However, since the only difference is the IS_MASTER value, I would like to collapse both definitions into a single one that programmatically determines whether the node the pod is scheduled on is a master or a worker.
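
For context, the entrypoint currently just branches on that injected variable, roughly like this (a simplified sketch; the write_*_config functions stand in for the real Python config generation):

import os

# IS_MASTER and NODE_IP are injected by the DaemonSet definitions above.
is_master = os.environ.get("IS_MASTER", "false").lower() == "true"
node_ip = os.environ["NODE_IP"]

if is_master:
    write_master_config(node_ip)   # placeholder for the real config generation
else:
    write_worker_config(node_ip)   # placeholder for the real config generation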

Is there any way to determine this information about the node programmatically (even by reading a configuration file on the node, for example something that only the master has, or vice versa)?

Thanks in advance.

-- Skazza
daemonset
kubernetes
pod

2 Answers

5/28/2019

Unfortunately, there is no convenient way to access node information from within a pod.

If you only want a single DaemonSet definition, you can add a sidecar container to your pod; the sidecar can query the Kubernetes API, and your main container can then read whatever it needs from the sidecar.
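
For example, assuming the node name is injected via the Downward API (fieldRef: spec.nodeName), the pod's service account is allowed to get nodes, and the official kubernetes Python client is installed, the sidecar (or the entrypoint itself) could look up the node's labels along these lines (a minimal sketch):

import os
from kubernetes import client, config

# Use the pod's service account credentials (requires RBAC permission to "get" nodes).
config.load_incluster_config()

# NODE_NAME is assumed to be injected via the Downward API (fieldRef: spec.nodeName).
node_name = os.environ["NODE_NAME"]

node = client.CoreV1Api().read_node(node_name)
is_master = "node-role.kubernetes.io/master" in (node.metadata.labels or {})

print("master" if is_master else "worker")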

By the way, I think your current solution is perfectly fine :)

-- menya
Source: StackOverflow

5/27/2019

You can tell a node is the master if it has the label node-role.kubernetes.io/master: "". What you need is to access that label from your containers, which can be done with the Downward API (Edit: wrong, only Pod information can be accessed through the Downward API). You can mount the labels inside your containers using:

volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels

You can then search the content of that file from within the container.
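
The mounted file contains one key="value" pair per line, so you could parse it roughly like this (a minimal sketch assuming the volume is mounted at /etc/podinfo; per the edit above, these are the Pod's own labels, not the node's):

def read_downward_labels(path="/etc/podinfo/labels"):
    # Each line of the Downward API labels file has the form: key="value"
    labels = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            labels[key] = value.strip('"')
    return labels

labels = read_downward_labels()
# Note: only works if the label you are looking for is set on the Pod itself.
is_master = "node-role.kubernetes.io/master" in labels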

-- Alassane Ndiaye
Source: StackOverflow