FailedScheduling: 0/3 nodes are available: 3 Insufficient pods

10/6/2019

I'm trying to deploy my Node.js application to EKS and run 3 pods with exactly the same container.

Here's the error message:

$ kubectl get pods
NAME                                 READY   STATUS             RESTARTS   AGE
cm-deployment-7c86bb474c-5txqq       0/1     Pending            0          18s
cm-deployment-7c86bb474c-cd7qs       0/1     ImagePullBackOff   0          18s
cm-deployment-7c86bb474c-qxglx       0/1     ImagePullBackOff   0          18s
public-api-server-79b7f46bf9-wgpk6   0/1     ImagePullBackOff   0          2m30s

$ kubectl describe pod cm-deployment-7c86bb474c-5txqq
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  23s (x4 over 2m55s)  default-scheduler  0/3 nodes are available: 3 Insufficient pods.

So it says that 0/3 nodes are available. However, if I run kubectl get nodes --watch:

$ kubectl get nodes --watch
NAME                                                 STATUS   ROLES    AGE    VERSION
ip-192-168-163-73.ap-northeast-2.compute.internal    Ready    <none>   6d7h   v1.14.6-eks-5047ed
ip-192-168-172-235.ap-northeast-2.compute.internal   Ready    <none>   6d7h   v1.14.6-eks-5047ed
ip-192-168-184-236.ap-northeast-2.compute.internal   Ready    <none>   6d7h   v1.14.6-eks-5047ed

all 3 nodes are Ready.
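If it helps, each node's pod capacity (which I believe is what "Insufficient pods" refers to, rather than node readiness) can be checked with something like:

$ kubectl get nodes -o custom-columns='NAME:.metadata.name,PODS:.status.allocatable.pods'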

Here are my configurations:

aws-auth-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: [MY custom role ARN]
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cm-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cm-literal
  template:
    metadata:
      name: cm-literal-pod
      labels:
        app: cm-literal
    spec:
      containers:
        - name: cm
          image: docker.io/cjsjyh/public_test:1
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          #imagePullSecrets:
          #  - name: regcred
          env:
            [my environment variables]

I applied both .yaml files:
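$ kubectl apply -f aws-auth-cm.yaml
$ kubectl apply -f deployment.yaml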

How can I solve this? Thank you

-- J.S.C
eks
kubectl
kubernetes

1 Answer

10/6/2019

My guess, without running the manifests you've got, is that the 1 tag on your image doesn't exist, so you're getting ImagePullBackOff, which usually means that the container runtime can't find the image to pull.

Looking at the Docker Hub page, there's no 1 tag there, just latest.
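You can check this locally, too; assuming you have Docker installed, pulling the tagged image should fail while latest succeeds:

$ docker pull cjsjyh/public_test:1        # expected to fail if the tag doesn't exist
$ docker pull cjsjyh/public_test:latest   # expected to succeed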

So either removing the tag or replacing 1 with latest may resolve your issue.
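In the deployment.yaml above, that would look something like this (an untagged image reference defaults to latest):

      containers:
        - name: cm
          image: docker.io/cjsjyh/public_test:latest
          imagePullPolicy: Always

Then re-apply with kubectl apply -f deployment.yaml and the pods should be able to pull the image.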

-- Rory McCune
Source: StackOverflow