Unable to run pods on a Kubernetes cluster (DigitalOcean): ErrImagePull

1/20/2021

I am new to Docker/Kubernetes. I was running a simple Node.js app on my local machine using Skaffold.

Now I am trying to run the same thing on a DigitalOcean Kubernetes cluster, and I am getting the following error:

Error: container auth is waiting to start: rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c can't be pulled.

My pod status was ErrImagePull. When I looked at the pod's events, I saw the following failures:

  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       5m10s                 default-scheduler  Successfully assigned default/auth-699894675-66pwq to xc-k8s-dev-30pnw
  Normal   Pulling         4m54s (x2 over 5m7s)  kubelet            Pulling image "rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c"
  Warning  Failed          4m53s (x2 over 5m5s)  kubelet            Failed to pull image "rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c": rpc error: code = Unknown desc = Error response from daemon: manifest for rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c not found: manifest unknown: manifest unknown
  Warning  Failed          4m53s (x2 over 5m5s)  kubelet            Error: ErrImagePull
  Normal   SandboxChanged  4m47s (x7 over 5m5s)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff         4m46s (x6 over 5m2s)  kubelet            Back-off pulling image "rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c"
  Warning  Failed          4m46s (x6 over 5m2s)  kubelet            Error: ImagePullBackOff

The error appears only on DigitalOcean. I searched for the issue but was unable to resolve it. The error is related to pulling the image from Docker Hub: my repository is public, but I am still unable to pull it.

Can anyone help me to solve this problem?

Edit 1: My auth-depl.yaml looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: rehanpunjwani/auth:latest
        env:
          - name: JWT_KEY
            valueFrom:
              secretKeyRef:
                name: jwt-secret
                key: JWT_KEY
---

Edit 2: Output of kubectl get pod -o yaml -l app=auth:

apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2021-01-20T11:22:48Z"
    generateName: auth-6c596959dc-
    labels:
      app: auth
      app.kubernetes.io/managed-by: skaffold
      pod-template-hash: 6c596959dc
      skaffold.dev/run-id: d99c01da-cb0b-49e8-bcb8-98ecd6d1c9f9
    managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:generateName: {}
          f:labels:
            .: {}
            f:app: {}
            f:app.kubernetes.io/managed-by: {}
            f:pod-template-hash: {}
            f:skaffold.dev/run-id: {}
          f:ownerReferences:
            .: {}
            k:{"uid":"a0c69a6b-fe95-4bed-8630-6abbae1d97f9"}:
              .: {}
              f:apiVersion: {}
              f:blockOwnerDeletion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:spec:
          f:containers:
            k:{"name":"auth"}:
              .: {}
              f:env:
                .: {}
                k:{"name":"JWT_KEY"}:
                  .: {}
                  f:name: {}
                  f:valueFrom:
                    .: {}
                    f:secretKeyRef:
                      .: {}
                      f:key: {}
                      f:name: {}
              f:image: {}
              f:imagePullPolicy: {}
              f:name: {}
              f:resources: {}
              f:terminationMessagePath: {}
              f:terminationMessagePolicy: {}
          f:dnsPolicy: {}
          f:enableServiceLinks: {}
          f:restartPolicy: {}
          f:schedulerName: {}
          f:securityContext: {}
          f:terminationGracePeriodSeconds: {}
      manager: kube-controller-manager
      operation: Update
      time: "2021-01-20T11:22:48Z"
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:conditions:
            k:{"type":"ContainersReady"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Initialized"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:status: {}
              f:type: {}
            k:{"type":"Ready"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
          f:containerStatuses: {}
          f:hostIP: {}
          f:podIP: {}
          f:podIPs:
            .: {}
            k:{"ip":"10.244.0.22"}:
              .: {}
              f:ip: {}
          f:startTime: {}
      manager: kubelet
      operation: Update
      time: "2021-01-20T11:26:07Z"
    name: auth-6c596959dc-9ghtg
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: auth-6c596959dc
      uid: a0c69a6b-fe95-4bed-8630-6abbae1d97f9
    resourceVersion: "1994444"
    selfLink: /api/v1/namespaces/default/pods/auth-6c596959dc-9ghtg
    uid: c64653af-d17c-4c96-bea1-338b50b04567
  spec:
    containers:
    - env:
      - name: JWT_KEY
        valueFrom:
          secretKeyRef:
            key: JWT_KEY
            name: jwt-secret
      image: rehanpunjwani/auth:b902346e89a8f523f5b9f281921bf2413a4686148045523670c26653e66d8526
      imagePullPolicy: IfNotPresent
      name: auth
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-drzwc
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: regcred
    nodeName: xc-k8s-dev-30png
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: default-token-drzwc
      secret:
        defaultMode: 420
        secretName: default-token-drzwc
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2021-01-20T11:22:48Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2021-01-20T11:22:48Z"
      message: 'containers with unready status: [auth]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2021-01-20T11:22:48Z"
      message: 'containers with unready status: [auth]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2021-01-20T11:22:48Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - image: rehanpunjwani/auth:b902346e89a8f523f5b9f281921bf2413a4686148045523670c26653e66d8526
      imageID: ""
      lastState: {}
      name: auth
      ready: false
      restartCount: 0
      started: false
      state:
        waiting:
          message: Back-off pulling image "rehanpunjwani/auth:b902346e89a8f523f5b9f281921bf2413a4686148045523670c26653e66d8526"
          reason: ImagePullBackOff
    hostIP: 10.110.0.3
    phase: Pending
    podIP: 10.244.0.22
    podIPs:
    - ip: 10.244.0.22
    qosClass: BestEffort
    startTime: "2021-01-20T11:22:48Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
-- dev_rrp
digital-ocean
docker
kubernetes

2 Answers

1/20/2021

The issue is that your image does not exist:

rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c 

Try running this command on your local machine:

docker pull rehanpunjwani/auth:655249efc5a2a82370b44d76bbfc09e50c11ba6316492f4753a03073f48ee83c

If this is the wrong image, change the image reference in your Pod/Deployment YAML.
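For example, a minimal sketch of pinning the container to a tag that is known to exist in the registry (this fragment mirrors the question's auth-depl.yaml; any tag actually visible on Docker Hub would do):

```yaml
# Fragment of auth-depl.yaml: point the container at a tag that
# actually exists in the rehanpunjwani/auth repository on Docker Hub.
spec:
  template:
    spec:
      containers:
        - name: auth
          image: rehanpunjwani/auth:latest # replace with a tag that exists
```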

-- Daniel Hornik
Source: StackOverflow

1/20/2021

The issue was with the Skaffold CLI. Setting build.local.push to true in skaffold.yaml solved the issue for me:

deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: true # this was previously false
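For completeness, push: true only helps if the build section also declares an artifact whose image name matches the Deployment's image: field; a fuller skaffold.yaml sketch (the API version and build context path here are assumptions, adjust them to your project):

```yaml
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
    - image: rehanpunjwani/auth # must match the image: field in auth-depl.yaml
      context: ./auth
  local:
    push: true # push the freshly tagged image so the remote cluster can pull it
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
```

With push: false, the built image exists only in the local Docker daemon, so a remote DigitalOcean node can never pull the tag Skaffold generated.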
-- dev_rrp
Source: StackOverflow