Cannot run Kubernetes deployments

12/1/2019

I am configuring Kubernetes to run 3 images (my API, Elasticsearch, and Kibana).

Here is my deployment.yml file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tooseeweb-deployment
spec:
  selector:
    matchLabels:
      app: tooseeweb-pod
  template:
    metadata:
      labels:
        app: tooseeweb-pod
    spec: 
      containers:
      - name: tooseewebcontainer
        image: tooseewebcontainer:v1
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 9200
      - name: kibana
        image: docker.elastic.co/kibana/kibana:6.2.4
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 5601

When I run kubectl get deployments I see this

(screenshot of the kubectl get deployments output, showing the deployment stuck at READY 0/1)

It's stuck at 0/1 ready. I tried restarting Docker, etc., but it doesn't help. How can I fix this?

UPDATE

I ran kubectl describe pod and got this error:

Warning  FailedScheduling  19s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.

How can I fix this?

-- Eugene Sukh
asp.net
asp.net-core
docker
kubernetes

3 Answers

12/2/2019

I see you are having trouble scheduling a pod. The Kubernetes error message 0/1 nodes are available: 1 Insufficient cpu means the cluster does not have enough free CPU to run your pod.

Each of your containers requests 0.5 of a CPU core (when only limits are set, the request defaults to the limit), so the whole pod needs 1.5 CPU cores to run. Kubernetes cannot find a node with 1.5 spare CPU cores, so scheduling fails. To run the pod you either need to lower your limits/requests or add more resources to your node.
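
For example, a minimal sketch that lowers the CPU limit and sets an explicit, smaller request so all three containers fit on a single-core node (the numbers here are assumptions; pick values that match your actual workload):

resources:
  requests:
    cpu: "100m"       # what the scheduler reserves for this container
    memory: "128Mi"
  limits:
    cpu: "250m"       # hard cap, down from 500m
    memory: "128Mi"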

I also noticed that you are putting all the containers in the same pod, which is not good practice. Whenever possible, put each container in its own pod; this also lets the scheduler spread the load across nodes. Put several containers in the same pod only if they truly have to run together and there is no other way.
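
A rough sketch of what Elasticsearch in its own Deployment could look like (the name and labels below are placeholders, not anything your cluster requires):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-deployment   # placeholder name
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
        ports:
        - containerPort: 9200

Your API pod would then reach Elasticsearch through a Service instead of localhost.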

Also note that Elasticsearch is a Java application, so limiting its memory to 128Mi will likely cause frequent restarts, which is very undesirable behavior for a database.
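
As a rough example for the 6.x Docker image, you can size the JVM heap with the ES_JAVA_OPTS environment variable and give the container some headroom above the heap (the numbers below are assumptions, not a recommendation for your workload):

- name: elasticsearch
  image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
  env:
  - name: ES_JAVA_OPTS
    value: "-Xms512m -Xmx512m"   # example heap size only
  resources:
    limits:
      memory: "1Gi"              # leaves room for off-heap memory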

-- HelloWorld
Source: StackOverflow

1/22/2020

Analysis:

  • Whether a pod can be scheduled depends on the pods already scheduled on the nodes.
  • The Kubernetes scheduler keeps track of the resources on each node and of the pods scheduled onto it.
  • When scheduling a new pod, the scheduler tries to find a node with enough free resources for it.
  • If it cannot find a node with the resources the pod requests, scheduling fails and the pod stays stuck in the Pending state.

Solutions:

  1. Reduce the resources allocated to existing pods / delete unnecessary pods
  2. Increase the resources available in the worker nodes (the commands below show how to check what a node currently has free)
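
For reference, you can check how much CPU and memory each node has free with standard kubectl commands:

kubectl describe nodes   # shows each node's Allocatable resources and what is already requested
kubectl top nodes        # current usage per node; requires the metrics-server add-on
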
-- pr-pal
Source: StackOverflow

12/1/2019

Remove these resource limits from every container.

resources:
  limits:
    memory: "128Mi"
    cpu: "500m"

If you want to limit the resources, add the limits back later, after the deployment has been applied successfully.
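
For example, the first container from the question with the resources block dropped would look like this:

containers:
- name: tooseewebcontainer
  image: tooseewebcontainer:v1
  ports:
  - containerPort: 80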

-- Bumuthu Dilshan
Source: StackOverflow