I updated the manifest, replacing apiVersion: extensions/v1beta1 with apiVersion: apps/v1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secretmanager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: secretmanager
  template:
    metadata:
      labels:
        app: secretmanager
    spec:
      ...
I then applied the change
k apply -f deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/secretmanager configured
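(As far as I understand, the warning is unrelated to the apiVersion issue: it only means the live object was not originally created with kubectl apply or kubectl create --save-config, so apply had to create the last-applied-configuration annotation itself. If you want to seed that annotation explicitly, something like the following should work, assuming deployment.yaml is the same manifest as above:

k apply set-last-applied -f deployment.yaml --create-annotation=true
)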
I also tried
k replace --force -f deployment.yaml
That recreated the pod (with downtime :( ), but when I output the deployment's YAML config I still see the old value:
k get deployments -n kube-system secretmanager -o yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"secretmanager","namespace":"kube-system"}....}
  creationTimestamp: "2020-08-21T21:43:21Z"
  generation: 2
  name: secretmanager
  namespace: kube-system
  resourceVersion: "99352965"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/secretmanager
  uid: 3d49aeb5-08a0-47c8-aac8-78da98d4c342
spec:
So the live object still shows apiVersion: extensions/v1beta1.
For context: I am preparing an EKS Kubernetes v1.15 cluster for migration to v1.16, which removes the extensions/v1beta1 Deployment API.
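To double-check which group/versions the API server actually serves, kubectl api-versions can be used; the grep filter below is only illustrative:

k api-versions | grep -E '^(apps|extensions)/'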
The Deployment exists in multiple apiGroups, so the request is ambiguous. Specify the group and version explicitly, e.g. apps/v1, with:
kubectl get deployments.v1.apps
and you should see your Deployment, but with the apps/v1 apiGroup.
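For example, against the Deployment from the question (namespace and name taken from above; resource.version.group is standard kubectl syntax):

k get deployments.v1.apps -n kube-system secretmanager -o yaml

The apiVersion shown by kubectl get simply reflects the group/version that was requested; the same stored object can be read through any group/version the server serves, so seeing extensions/v1beta1 in the default output does not by itself mean the manifest change failed.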