Kubernetes: kubectl apply does not update pods when using "latest" tag

12/3/2018

I'm using kubectl apply to update my Kubernetes pods:

kubectl apply -f /my-app/service.yaml
kubectl apply -f /my-app/deployment.yaml

Below is my service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    run: my-app
spec:
  type: NodePort
  selector:
    run: my-app 
  ports:
  - protocol: TCP
    port: 9000
    nodePort: 30769

Below is my deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:  
  selector:
    matchLabels:
      run: my-app
  replicas: 2
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-app
        image: dockerhubaccount/my-app-img:latest
        ports:
        - containerPort: 9000
          protocol: TCP
      imagePullSecrets:
      - name: my-app-img-credentials
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%

This works fine the first time, but on subsequent runs, my pods are not getting updated.

I have read the suggested workaround at https://github.com/kubernetes/kubernetes/issues/33664 which is:

kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"

I was able to run the above command, but it did not resolve the issue for me.

I know that I can trigger pod updates by manually changing the image tag from "latest" to another tag, but I want to make sure I get the latest image without having to check Docker Hub.

Any help would be greatly appreciated.

-- Floating Sunfish
docker
kubernetes

4 Answers

12/3/2018

If nothing changes in the Deployment spec, the pods will not be updated for you. This is one of many reasons it is not recommended to use :latest, as the other answer explains in more detail. The Deployment controller is very simple and essentially just does DeepEquals(old.Spec.Template, new.Spec.Template), so you need some actual change, such as you have with the PATCH call that sets a label to the current datetime.

-- coderanger
Source: StackOverflow

12/3/2018

You're missing an imagePullPolicy in your deployment. Try this:

containers:
- name: my-app
  image: dockerhubaccount/my-app-img:latest
  imagePullPolicy: Always

The default policy is IfNotPresent, which is why yours is not updating.
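If the Deployment is already running, the same change can be applied in place with a strategic merge patch (a sketch: the deployment and container names are taken from the question, and the kubectl call is guarded so it only runs against a configured cluster):

```shell
# Strategic merge patch: containers are merged by name, so this only
# sets imagePullPolicy on the existing my-app container.
PATCH='{"spec":{"template":{"spec":{"containers":[{"name":"my-app","imagePullPolicy":"Always"}]}}}}'
echo "$PATCH"
# Guarded so the sketch is a no-op without a cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch deployment my-app -p "$PATCH"
fi
```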

I will incorporate two notes from the link:

Note: You should avoid using the :latest tag when deploying containers in production, as it is harder to track which version of the image is running and more difficult to roll back properly.

Note: The caching semantics of the underlying image provider make even imagePullPolicy: Always efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.

-- rath
Source: StackOverflow

12/3/2018

Turns out I misunderstood the workaround command from the link.

I thought it was a one-time command that configured my deployment to treat all future kubectl apply commands as a trigger to update my pods.

I actually just had to run the command every time I wanted to update my pods:

kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
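Since the patch has to run on every update, it can be combined with the apply commands in a small wrapper script (a sketch: the file paths and deployment name are taken from the question, and the kubectl calls are guarded so they only run against a configured cluster):

```shell
# Build the label patch with a fresh timestamp; any change to the pod
# template (here, the "date" label) forces a new rollout.
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"$(date +%s)\"}}}}}"
echo "$PATCH"
# Guarded so the sketch is a no-op without a cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f /my-app/service.yaml
  kubectl apply -f /my-app/deployment.yaml
  kubectl patch deployment my-app -p "$PATCH"
fi
```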

Many thanks to everyone who helped!

-- Floating Sunfish
Source: StackOverflow

12/3/2018

There are two things here that relate to the issue:

  1. It is suggested to use kubectl apply when creating a resource for the first time, and to use kubectl replace, kubectl edit, or kubectl patch for subsequent updates.

  2. Once you create a Service using either kubectl apply or kubectl create, you cannot replace it from a YAML file. In other words, the Service is assigned a cluster IP that cannot be patched or replaced. The only way to recreate a Service is to delete it and recreate it with the same name.

NOTE: When I tried replacing a Service with kubectl apply while building a backup-and-restore solution, I got the error below.

kubectl apply -f replace-service.yaml -n restore-proj
The Service "test-q12" is invalid: spec.clusterIP: Invalid value: "10.102.x.x": provided IP is already allocated.
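For reference, the delete-and-recreate step from point 2 might look like this (a sketch: the service name and namespace are taken from the error message above, and the kubectl calls are guarded so they only run against a configured cluster):

```shell
SVC=test-q12
NS=restore-proj
echo "Recreating service $SVC in namespace $NS"
if command -v kubectl >/dev/null 2>&1; then
  # The Service's clusterIP cannot be changed by apply or replace,
  # so delete the Service first, then recreate it under the same name.
  kubectl delete service "$SVC" -n "$NS"
  kubectl apply -f replace-service.yaml -n "$NS"
fi
```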
-- Venkata Surya Lolla
Source: StackOverflow