Docker image not pulling latest from dockerhub.com registry

10/24/2019

I am implementing a CI/CD pipeline using Docker, Kubernetes, and Jenkins, and I am pushing the resulting Docker image to a Docker Hub repository.

When the image is pulled, it is not the latest one from the Docker Hub registry, so my application does not show the updated response. My testdeployment.yaml file looks like the following, and the repository credentials are stored in the Jenkinsfile only.

spec:
  containers:
  - name: test-kube-deployment-container
    image: "spacestudymilletech010/spacestudykubernetes:latest"
    imagePullPolicy: Always
    ports:
    - name: http
      containerPort: 8085
      protocol: TCP

Jenkinsfile

 sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:latest /var/lib/jenkins/workspace/jpipeline/pipeline'
 sh 'docker login --username=<my-username> --password=<my-password>' 
 sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'

How can I identify why it is not pulling the latest image from Docker Hub?

-- Jacob
docker
kubernetes

1 Answer

10/24/2019

It looks like you are repeatedly pushing :latest to Docker Hub?

If so, then that's the reason for your issue. You push latest to the hub from your Jenkins job, but if the k8s node which runs the deployment's pod already has an image tagged latest stored locally, then that's what it will use.
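To confirm this, you can compare the image digest the pod is actually running against the digest of the image you last pushed. For a hypothetical pod name:

  kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].imageID}'

This prints something like spacestudymilletech010/spacestudykubernetes@sha256:..., which tells you exactly which build is running regardless of what the latest tag currently points at on the hub.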

To clarify: latest is just a string; it could equally well be foobar. It does not actually mean that Docker will pull the most recent version of the image.
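For example, nothing stops you from pointing latest at an older build by hand (the 1.0 tag here is hypothetical):

  docker tag spacestudymilletech010/spacestudykubernetes:1.0 spacestudymilletech010/spacestudykubernetes:latest
  docker push spacestudymilletech010/spacestudykubernetes:latest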

There are two takeaways from this:

  • It's almost always a very bad idea to use latest in k8s.
  • It is always a bad idea to push the same tag multiple times; in fact, many registries won't let you. Tag each build uniquely instead, as sketched below.
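A minimal sketch of unique tagging, adapted from your Jenkinsfile (it assumes a Jenkins pipeline, where BUILD_NUMBER is a standard environment variable exported to sh steps):

 // Tag and push with the build number instead of latest
 sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:$BUILD_NUMBER /var/lib/jenkins/workspace/jpipeline/pipeline'
 sh 'docker push spacestudymilletech010/spacestudykubernetes:$BUILD_NUMBER'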

With regards to using latest at all: this comes from personal experience. At my place of work, in the early days of our k8s adoption, we used it everywhere. That is, until one day we found that our Puppet server wasn't working any more. On investigation, we found that the node had died, the pod had been re-spun on a different node, and a different latest had been pulled, which was a new major release, breaking things.

It was not obvious, because kubectl describe pod showed the same tag name as before, so nothing, apparently, had changed.

To add an excellent point mentioned in the comments: you have imagePullPolicy: Always, but if you're doing kubectl apply -f mypod.yaml with the same tag name, k8s has no way of knowing you've actually changed the image.
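With a unique tag per build, the image field actually changes, so k8s sees a difference and rolls the deployment. As a sketch, run from the same pipeline so $BUILD_NUMBER is available (the deployment name test-kube-deployment is an assumption; it does not appear in your question, though the container name does):

  kubectl set image deployment/test-kube-deployment test-kube-deployment-container=spacestudymilletech010/spacestudykubernetes:$BUILD_NUMBER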

-- SiHa
Source: StackOverflow