Using kubectl rollouts to update my images, but need to also keep my deployment object in version control

12/11/2019

In my CI/CD pipeline, I am:

generating a new image with a unique tag, e.g. foo:dev-1339, and pushing it to my image repo (ECR). Then I am using a rolling update to update my deployment:

kubectl rolling-update frontend --image=foo:dev-1339

But I have a conflict here.

What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say harden a health check or add a parameter.

Then when I re-apply my deployment object as a whole, it will not be in sync with the current replica set: the tag will get reverted and I will lose the image update that exists in the cluster.

How do I avoid this race condition?

-- Josh Beauregard
kubernetes
kubernetes-deployment

4 Answers

12/11/2019

Unfortunately there is no solution for this, either from the command line or through the YAML files.

-- Iakovos Belonias
Source: StackOverflow

12/11/2019

As per the documentation here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger the Deployment's rollout.
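
For example, an imperative way to update the image of a Deployment and trigger a rollout would be kubectl set image (this assumes the container inside the Deployment is also named frontend):

kubectl set image deployment/frontend frontend=foo:dev-1339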

An approach could be to update the Deployment configuration YAML (or JSON) under version control in the source repo and apply the changed Deployment configuration from version control to the cluster.
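
As a minimal sketch of that approach (the file name, labels and tag are illustrative), the checked-in deployment.yaml would pin the new image tag:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: foo:dev-1339   # CI updates this tag and commits the change

and the pipeline would apply it with:

kubectl apply -f deployment.yaml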

-- gears
Source: StackOverflow

12/12/2019

A typical solution here is to use a templating layer like Helm or Kustomize.

In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like

image: myname/myapp:{{ .Values.tag | default "latest" }}

and then deploy the chart with

helm install myapp --name myapp --set tag=20191211.01

Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
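
For instance, a hypothetical values file kept in the repo (the file name and tag are placeholders):

# values-prod.yaml
tag: 20191211.01

helm install myapp --name myapp -f values-prod.yaml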

In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set

images:
  - name: myname/myapp
    newTag: 20191211.01

If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
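
A complete kustomization.yaml along those lines might look like the following sketch (the referenced deployment.yaml and the tag are assumptions), applied with kubectl apply -k:

# kustomization.yaml
resources:
  - deployment.yaml
images:
  - name: myname/myapp
    newTag: "20191211.01"   # quoted so YAML does not parse the tag as a number

kubectl apply -k .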

-- David Maze
Source: StackOverflow

12/11/2019

Imperative vs Declarative workflow

There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run commands directly, is a good fit for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
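
A few purely imperative examples (the names and tags here are placeholders):

kubectl create deployment frontend --image=foo:dev-1339
kubectl scale deployment frontend --replicas=3
kubectl set image deployment/frontend frontend=foo:dev-1340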

For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD job when you commit or merge. kubectl apply -f <file>, or more interestingly kubectl apply -k <directory>, is an example of this workflow. See Declarative Management using Config Files or, more interestingly, Declarative Management using Kustomize.
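
As a sketch, a declarative deploy step could then be as simple as the following (the repository URL and directory layout are assumptions; the overlay directory must contain a kustomization.yaml):

git clone https://example.com/app-config.git
kubectl apply -k app-config/overlays/production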

CI/CD for building images and deployment

Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. to do Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository, which in turn triggers the configuration pipeline.
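
As one possible sketch of that hand-off (the repository layout and names are assumptions, and kustomize is just one tool that could do this), the build pipeline could bump the tag in the config repo and let the configuration pipeline react to the commit:

cd app-config/overlays/production
kustomize edit set image myname/myapp=myname/myapp:dev-1340   # rewrites the images: entry in kustomization.yaml
git commit -am "Bump myname/myapp to dev-1340"
git push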

-- Jonas
Source: StackOverflow