Why is there an error in the minikube dashboard?

5/16/2020

After I successfully ran my application (an ASP.NET Core web app built with Visual Studio) in Docker and deployed it to Kubernetes, I opened the minikube dashboard and found that the deployment and pod for my application show this error:

MountVolume.SetUp failed for volume "default-token-6sw9l" : couldn't propagate object cache: timed out waiting for the condition

How do I resolve this error?
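The dashboard often surfaces only the most recent warning; the mount/object-cache message can be transient while a different failure is the real blocker. A sketch of how one might dig for the underlying cause with `kubectl` (the pod name below is a placeholder, so list pods first):

```shell
# List pods in the namespace to get the exact pod name.
kubectl get pods -n default

# Describe the failing pod; the Events section at the bottom usually
# shows the real cause (e.g. an image pull failure) behind a
# transient volume-mount warning.
kubectl describe pod <pod-name> -n default

# Cluster-wide events sorted oldest-to-newest tell the same story
# across all objects.
kubectl get events -n default --sort-by=.metadata.creationTimestamp
```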

-- BZM708
kubernetes
ubuntu-18.04
visual-studio-code

1 Answer

5/25/2020

Output of `kubectl version`:

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2",
GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean",
BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc",
Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2",
GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean",
BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc",
Platform:"linux/amd64"}

Pod YAML for one of the applications:

kind: Pod
apiVersion: v1
metadata:
  name: websa-node-7f967b8b8b-k7gmf
  generateName: websa-node-7f967b8b8b-
  namespace: default
  selfLink: /api/v1/namespaces/default/pods/websa-node-7f967b8b8b-k7gmf
  uid: f2acc429-97aa-47b0-8722-ec3270603727
  resourceVersion: '478416'
  creationTimestamp: '2020-05-25T22:33:25Z'
  labels:
    app: websa-node
    pod-template-hash: 7f967b8b8b
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: websa-node-7f967b8b8b
    uid: 090f524d-417a-4bc4-97dd-4260f8c9b1cb
    controller: true
    blockOwnerDeletion: true
spec:
  volumes:
  - name: default-token-fqptv
    secret:
      secretName: default-token-fqptv
      defaultMode: 420
  containers:
  - name: websa
    image: websa
    resources: {}
    volumeMounts:
    - name: default-token-fqptv
      readOnly: true
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    imagePullPolicy: Always
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  serviceAccountName: default
  serviceAccount: default
  nodeName: minikube
  securityContext: {}
  schedulerName: default-scheduler
  tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300
  priority: 0
  enableServiceLinks: true
status:
  phase: Pending
  conditions:
  - type: Initialized
    status: 'True'
    lastProbeTime: null
    lastTransitionTime: '2020-05-25T22:33:25Z'
  - type: Ready
    status: 'False'
    lastProbeTime: null
    lastTransitionTime: '2020-05-25T22:33:25Z'
    reason: ContainersNotReady
    message: 'containers with unready status: [websa]'
  - type: ContainersReady
    status: 'False'
    lastProbeTime: null
    lastTransitionTime: '2020-05-25T22:33:25Z'
    reason: ContainersNotReady
    message: 'containers with unready status: [websa]'
  - type: PodScheduled
    status: 'True'
    lastProbeTime: null
    lastTransitionTime: '2020-05-25T22:33:25Z'
  hostIP: 192.168.99.100
  podIP: 172.17.0.11
  podIPs:
  - ip: 172.17.0.11
  startTime: '2020-05-25T22:33:25Z'
  containerStatuses:
  - name: websa
    state:
      waiting:
        reason: ImagePullBackOff
        message: Back-off pulling image "websa"
    lastState: {}
    ready: false
    restartCount: 0
    image: websa
    imageID: ''
    started: false
  qosClass: BestEffort
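Note that `containerStatuses` above shows the container stuck in `ImagePullBackOff` for the image `websa`. Since that image name has no registry prefix and the pod runs on the `minikube` node, a likely explanation (an assumption, based on this YAML) is that the image was built on the host's Docker daemon, while `imagePullPolicy: Always` forces the kubelet to pull it from a registry where it does not exist. A common fix in that situation is:

```shell
# Build the image into minikube's Docker daemon so the kubelet can
# find it locally (assumes the websa Dockerfile is in the current
# directory; both names here mirror the YAML above).
eval $(minikube docker-env)
docker build -t websa .

# Stop forcing a registry pull: switch the deployment's container
# from imagePullPolicy: Always to Never (IfNotPresent also works).
kubectl patch deployment websa-node --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Never"}]'
```

Once the image resolves, the pod should leave `Pending`, and the transient `couldn't propagate object cache` mount warning typically clears on its own.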
-- BZM708
Source: StackOverflow