I am upgrading Kubernetes from 1.15 to 1.16. Before I do it, I must migrate my DaemonSets, Deployments, StatefulSets, etc. to the apps/v1 API version. But when I do, I don't understand the behaviour Kubernetes shows.
Let's say we have this DaemonSet:
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: spot-interrupt-handler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: spot-interrupt-handler
  template:
    metadata:
      labels:
        app: spot-interrupt-handler
    spec:
      serviceAccountName: spot-interrupt-handler
      containers:
      - name: spot-interrupt-handler
        image: madhuriperi/samplek8spotinterrupt:latest
        imagePullPolicy: Always
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: SPOT_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      nodeSelector:
        lifecycle: Ec2Spot
I change the first line to apps/v1 and successfully apply this YAML to the cluster. Nothing changes afterwards, and the pods are not restarted. I get this message:
daemonset.apps/spot-interrupt-handler configured
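For what it's worth, one read-only way to see what that apply actually changed is to look at the last-applied-configuration annotation, which is the only thing kubectl updates here (a sketch, assuming the object name and namespace from the manifest above):

```shell
# Print the first part of the last-applied-configuration annotation;
# dots inside the annotation key must be backslash-escaped in jsonpath.
kubectl -n kube-system get ds spot-interrupt-handler \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' \
  | head -c 120
```

The recorded apiVersion there will read apps/v1, even while kubectl get still prints a different one.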
Then I want to verify that this API version change has really been persisted to etcd:
kubectl get ds spot-interrupt-handler -n default -o yaml
And this is what I see at the start of the returned YAML definition:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"name":"spot-interrupt-handler","namespace":"default"},"spec":{"selector":{"matchLabels":{"app":"spot-interrupt-handler"}},"template":{"metadata":{"labels":{"app":"spot-interrupt-handler"}},"spec":{"containers":[{"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"SPOT_POD_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}}],"image":"madhuriperi/samplek8spotinterrupt:latest","imagePullPolicy":"Always","name":"spot-interrupt-handler"}],"nodeSelector":{"lifecycle":"Ec2Spot"},"serviceAccountName":"spot-interrupt-handler"}}}}
  creationTimestamp: "2021-02-09T08:34:33Z"
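A quick way to see that the same stored object is reachable through both API groups is to ask for it with a fully-qualified resource.version.group name (a sketch, assuming a 1.15 cluster that still serves both groups, and the object name and namespace from the question):

```shell
# Request the DaemonSet explicitly through apps/v1, then through the
# legacy extensions/v1beta1; only the reported apiVersion line differs.
kubectl get daemonsets.v1.apps spot-interrupt-handler -n default -o yaml | head -n 2
kubectl get daemonsets.v1beta1.extensions spot-interrupt-handler -n default -o yaml | head -n 2
```

Plain `kubectl get ds` simply shows the server's preferred version, which is why the output above starts with extensions/v1beta1.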
Thanks in advance
I've reproduced your setup in a GKE environment, and after upgrading the Kubernetes version from 1.15 to 1.16, the DaemonSet's apiVersion changed to apps/v1.

I started with GKE version 1.15.12 and applied your configuration. Once it was applied successfully, I changed the apiVersion to apps/v1, yet extensions/v1beta1 remained the apiVersion shown by the cluster. This is expected: the API server stores a single representation of the object and converts it between all API versions it serves, so the apiVersion you see from kubectl get reflects the version the object was served through (the server's preferred version for that resource), not the version you applied.

After upgrading Kubernetes to version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-gke.6000"}, the DS is now apps/v1.

To check whether the same behavior repeats, I created a DS and upgraded the Kubernetes version without changing its apiVersion, and it changed to apps/v1 by itself.
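To double-check this after your own upgrade, a couple of read-only commands may help (a sketch; the object name and namespace match the question, adjust as needed):

```shell
# List the API versions the upgraded cluster serves; on 1.16+ the
# workload kinds are no longer served from extensions/v1beta1.
kubectl api-versions | grep -E '^(apps|extensions)/'

# Print just the apiVersion kubectl now reports for the DaemonSet:
kubectl get ds spot-interrupt-handler -n default -o jsonpath='{.apiVersion}'
```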