Using AWS CloudFormation, I can create a stack based on a template that includes all required resources. I can then create a new template, adding some resources, removing some, and changing description of others. I can then update the CloudFormation stack with the new template. CloudFormation will automatically remove any resources that are no longer in the template, add the new ones, and update modified resources. In addition, the update will roll back if any of the operations fails.
Is there an equivalent to this in Kubernetes, where I can just provide an updated configuration file, and have Kubernetes automatically compare that to the previous version and remove any resources that should no longer be there?
A Deployment should meet your need; a Deployment can be rolled back at any time. The rollout command, used with the appropriate subcommand (status/history/undo), lets you inspect a Deployment's rollout and roll it back.
kubectl rollout status deployment nginx
Check rollout history
kubectl rollout history deployment nginx
Rolling Back to a Previous Revision
kubectl rollout undo deployment nginx
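If you need to go back further than the immediately previous revision, rollout undo also accepts a --to-revision flag; the revision numbers are the ones shown by rollout history:

```shell
# Roll back to a specific revision listed in "kubectl rollout history"
kubectl rollout undo deployment nginx --to-revision=2
```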
In the example below I created a deployment from the deployment_v1.yaml file, which runs one pod with two containers (nginx and redis):
kubectl create -f deployment_v1.yaml --record=true
deployment_v1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-deploy
  name: multi-container-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multi-container
  template:
    metadata:
      labels:
        app: multi-container
    spec:
      containers:
      - image: nginx
        name: nginx-1
      - image: redis
        name: redis-2
Checking Status during rollout
$ kubectl rollout status deployment multi-container-deploy
Waiting for deployment "multi-container-deploy" rollout to finish: 0 of 1 updated replicas are available...
deployment "multi-container-deploy" successfully rolled out
Rollout history
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION   CHANGE-CAUSE
1          kubectl create --filename=deployment_v1.yaml --record=true
$ kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-5fc8944c58-r4dt4   2/2     Running   0          60s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           60s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   1         1         1       60s
Now say we remove the redis container from the deployment, for example with the kubectl edit command:
kubectl edit deployments multi-container-deploy
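As an alternative to editing in place, you could keep the change declarative: save an updated copy of the manifest with the redis container removed (say deployment_v2.yaml, a hypothetical file name) and apply it. kubectl apply compares the live object with the file and reconciles the difference, which is closer to the CloudFormation-style workflow the question asks about:

```shell
# Apply the updated manifest; the Deployment is reconciled to match it,
# rolling out new pods without the redis container
kubectl apply -f deployment_v2.yaml --record=true
```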
Check the new rollout status after the edit:
$ kubectl rollout status deployment multi-container-deploy
Waiting for deployment "multi-container-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "multi-container-deploy" rollout to finish: 1 old replicas are pending termination...
deployment "multi-container-deploy" successfully rolled out
Check the rollout history again and we will see the list updated as below (a disadvantage of editing directly is that the change-cause recorded for revision 2 does not tell us what actually changed):
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION   CHANGE-CAUSE
1          kubectl apply --filename=deployment_v1.yaml --record=true
2          kubectl apply --filename=deployment_v1.yaml --record=true
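If you want a meaningful CHANGE-CAUSE after a direct edit, one option is to set the kubernetes.io/change-cause annotation on the Deployment yourself, since rollout history reads the change-cause from that annotation:

```shell
# Record a human-readable reason for the latest revision
kubectl annotate deployment multi-container-deploy \
  kubernetes.io/change-cause="removed redis container"
```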
We can also check that the container was successfully removed and the pod is now running with only one container.
$ kubectl get all
NAME                                         READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-7cdb9cbf4-jr9nc   1/1     Running   0          4m36s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           13m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   0         0         0       13m
replicaset.apps/multi-container-deploy-7cdb9cbf4    1         1         1       4m36s
We can undo the edit above simply by running the command below:
$ kubectl rollout undo deployment multi-container-deploy
deployment.apps/multi-container-deploy rolled back
If we check again, the pod is running with two containers once more.
$ kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/multi-container-deploy-5fc8944c58-xn4mz   2/2     Running   0          40s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   32d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/multi-container-deploy   1/1     1            1           15m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/multi-container-deploy-5fc8944c58   1         1         1       15m
replicaset.apps/multi-container-deploy-7cdb9cbf4    0         0         0       6m59s
And rollout history will be updated as below
$ kubectl rollout history deployment multi-container-deploy
deployment.apps/multi-container-deploy
REVISION   CHANGE-CAUSE
2          kubectl apply --filename=deployment_v2.yaml --record=true
3          kubectl apply --filename=deployment_v2.yaml --record=true
For single resources (e.g. a single Pod or Deployment), Kubernetes automatically reconciles the actual state with the desired state you declare, so in that sense it works much like CloudFormation. If you update a Deployment and remove a container from its pod template, Kubernetes will automatically remove the corresponding resources.
If you want to treat multiple resources as a single unit, you can look at something like Helm, which simplifies packaging multiple Kubernetes resources together and upgrading or rolling them back as one release.
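As a rough sketch of that workflow (the chart and release names here are just examples), Helm tracks a whole set of resources as one release: an upgrade removes resources that were dropped from the chart, and helm rollback restores a previous release in one step:

```shell
# Install a chart as a release, upgrade it after changing its templates,
# and roll the whole release back to revision 1 (names are examples)
helm install my-release ./my-chart
helm upgrade my-release ./my-chart
helm rollback my-release 1
```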