Can't see changes in K8s after I apply yaml with the new API version

2/9/2021

I am upgrading K8s from 1.15 to 1.16. Before I do that, I have to migrate my DaemonSets, Deployments, StatefulSets, etc. to the apps/v1 API version. But when I do it, I don't understand the K8s behaviour.
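For the migration itself I rewrite the manifests; one way is kubectl convert, which still ships with kubectl 1.15/1.16 (it was removed in 1.17). A minimal sketch, assuming the DaemonSet below is saved as spot-interrupt-handler.yaml:

# Rewrite the manifest from apps/v1beta2 to apps/v1
# (kubectl convert is deprecated and gone in kubectl >= 1.17)
kubectl convert -f spot-interrupt-handler.yaml --output-version apps/v1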

Let's say that we have a DaemonSet:

apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: spot-interrupt-handler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: spot-interrupt-handler
  template:
    metadata:
      labels:
        app: spot-interrupt-handler
    spec:
      serviceAccountName: spot-interrupt-handler
      containers:
      - name: spot-interrupt-handler
        image: madhuriperi/samplek8spotinterrupt:latest
        imagePullPolicy: Always
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: SPOT_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      nodeSelector:
        lifecycle: Ec2Spot

I change the first line to apps/v1 and successfully apply this YAML to K8s. Nothing changes afterwards; the pods are not restarted. I get this message:

daemonset.apps/spot-interrupt-handler configured

1. Is this normal behavior? Shouldn't the pods be restarted after I change the API version?
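(For completeness: if I actually wanted the pods recreated, I could force a rollout myself; a sketch assuming kubectl 1.15+ and the kube-system namespace from the manifest above:)

# Manually trigger pod recreation; changing only the apiVersion does not do this
kubectl rollout restart daemonset spot-interrupt-handler -n kube-system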

Then I want to verify that this API version change has really been persisted in etcd.

kubectl get ds spot-interrupt-handler -n default -o yaml

And this is what I see at the start of the YAML definition:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"name":"spot-interrupt-handler","namespace":"default"},"spec":{"selector":{"matchLabels":{"app":"spot-interrupt-handler"}},"template":{"metadata":{"labels":{"app":"spot-interrupt-handler"}},"spec":{"containers":[{"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"SPOT_POD_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}}],"image":"madhuriperi/samplek8spotinterrupt:latest","imagePullPolicy":"Always","name":"spot-interrupt-handler"}],"nodeSelector":{"lifecycle":"Ec2Spot"},"serviceAccountName":"spot-interrupt-handler"}}}}
  creationTimestamp: "2021-02-09T08:34:33Z"

1. Why is extensions/v1beta1 at the top? I expect it to be apps/v1.
2. I see that the new API version is in last-applied-configuration. Does that mean this DaemonSet will work after the upgrade to 1.16?
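To see which group/version the server can serve the object under, I can also ask for the resource fully qualified as resource.version.group instead of letting the server pick; a sketch using the same object as above:

# Read the object explicitly via the apps/v1 API
kubectl get daemonsets.v1.apps spot-interrupt-handler -n default -o yaml
# The short, unqualified form returns whatever version the server prefers
kubectl get ds spot-interrupt-handler -n default -o yaml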

Thanks in advance

-- dice2011
kubernetes
yaml

1 Answer

2/9/2021

I've reproduced your setup in a GKE environment, and after upgrading the Kubernetes version from 1.15 to 1.16 the DaemonSet's apiVersion changed to apps/v1.

I started with GKE version 1.15.12 and applied your configuration. Once it was successfully applied, I changed apiVersion to apps/v1; extensions/v1beta1 remained as the apiVersion that kubectl reported, since kubectl get shows the version the API server currently prefers for that resource, not the version you applied with.

After upgrading Kubernetes to version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-gke.6000", the DS is now apps/v1.

To check whether the same behavior would happen again, I created a DS and upgraded the Kubernetes version without changing the apiVersion, and it changed by itself to apps/v1.
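If you want to verify this on your own cluster, something like the following should show it (assuming the object name and namespace from your question):

# Before the upgrade: the server still reports its old preferred version
kubectl get ds spot-interrupt-handler -n default -o jsonpath='{.apiVersion}'
# After the control plane upgrade to 1.16, the same read reports apps/v1
kubectl version --short
kubectl get ds spot-interrupt-handler -n default -o jsonpath='{.apiVersion}'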

-- kool
Source: StackOverflow