In the following scenario, I'm creating a new deployment:
kubectl apply -f deployment.yaml
/Mugen$ kubectl get deploy
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-server   1         1         1            1           1h
I then change my yaml and run apply again, and I get a message that the deployment was updated. But afterwards I see two pods for my deployment, and the output shows two current instances while only one is up to date:
/Mugen$ kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql-server   1         2         1            2           1h
From my understanding, if I had used kubectl replace --force, that would effectively delete the current deployment and create a new one. However, that would cause a service outage.
Is there a proper way to create a new deployment and delete the previous one only after a successful rollout?
Gracefully drain (and then delete) the old pods while adding new pods with the new features; this is exactly what the RollingUpdate deployment strategy does.
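The rollout behaviour can also be made explicit in the manifest. The following is only a sketch, assuming a Deployment named mysql-server; the image tag, labels, and probe endpoint are illustrative placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-server
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep the old pod serving until the new one is ready
      maxSurge: 1         # allow one extra pod during the rollout
  selector:
    matchLabels:
      app: mysql-server
  template:
    metadata:
      labels:
        app: mysql-server
    spec:
      containers:
        - name: mysql-server
          image: example/mysql-server:v2   # hypothetical image tag
          readinessProbe:                  # the rollout waits for readiness
            httpGet:
              path: /healthz               # illustrative probe endpoint
              port: 8080

With maxSurge: 1 and maxUnavailable: 0 on a single-replica Deployment, you will briefly see two pods during the rollout, which matches the output above; the old pod is deleted only after the new one passes its readiness probe.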
Note that in production setups, the Recreate deployment strategy is not advisable, as it deletes all existing pods before creating new ones and can therefore cause a service outage. You may also want to read about other deployment strategies for production use, such as Canary releases, Blue/Green deployments, and RollingUpdate.
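To only retire the previous version after a successful rollout, as asked above, one common pattern is to watch the rollout and roll back if it fails (a sketch; the deployment name is taken from the output above):

kubectl apply -f deployment.yaml
kubectl rollout status deployment/mysql-server --timeout=120s   # blocks until the rollout succeeds or times out
kubectl rollout undo deployment/mysql-server                     # roll back if the rollout did not succeed
kubectl rollout history deployment/mysql-server                  # inspect previous revisions

kubectl rollout status exits with a non-zero code if the rollout does not complete within the timeout, so it can gate a rollback step in a CI/CD pipeline.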