Kubernetes pods are not spread across different nodes

5/28/2016

I have a Kubernetes cluster on GKE. I know Kubernetes will spread pods with the same labels across different nodes, but this isn't happening for me. Here are my node descriptions.

Name:                   gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob
Conditions:
  Type          Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----          ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk     False   Fri, 27 May 2016 21:11:17 -0400         Thu, 26 May 2016 22:16:27 -0400         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  Ready         True    Fri, 27 May 2016 21:11:17 -0400         Thu, 26 May 2016 22:17:02 -0400         KubeletReady                    kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Capacity:
 cpu:           2
 memory:        1848660Ki
 pods:          110
System Info:
 Machine ID:
 Kernel Version:                3.16.0-4-amd64
 OS Image:                      Debian GNU/Linux 7 (wheezy)
 Container Runtime Version:     docker://1.9.1
 Kubelet Version:               v1.2.4
 Kube-Proxy Version:            v1.2.4
Non-terminated Pods:            (2 in total)
  Namespace                     Name                                                                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------                     ----                                                                                    ------------    ----------  --------------- -------------
  kube-system                   fluentd-cloud-logging-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob            80m (4%)        0 (0%)              200Mi (11%)     200Mi (11%)
  kube-system                   kube-proxy-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-dpob                       20m (1%)        0 (0%)              0 (0%)          0 (0%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  100m (5%)     0 (0%)          200Mi (11%)     200Mi (11%)
No events.

Name:                   gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2
Conditions:
  Type          Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----          ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk     False   Fri, 27 May 2016 21:11:17 -0400         Fri, 27 May 2016 18:16:38 -0400         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  Ready         True    Fri, 27 May 2016 21:11:17 -0400         Fri, 27 May 2016 18:17:12 -0400         KubeletReady                    kubelet is posting ready status. WARNING: CPU hardcapping unsupported
Capacity:
 pods:          110
 cpu:           2
 memory:        1848660Ki
System Info:
 Machine ID:
 Kernel Version:                3.16.0-4-amd64
 OS Image:                      Debian GNU/Linux 7 (wheezy)
 Container Runtime Version:     docker://1.9.1
 Kubelet Version:               v1.2.4
 Kube-Proxy Version:            v1.2.4
Non-terminated Pods:            (10 in total)
  Namespace                     Name                                                                                    CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------                     ----                                                                                    ------------    ----------  --------------- -------------
  default                       pn-minions-deployment-prod-3923308490-axucq                                             100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-prod-3923308490-mvn54                                             100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-staging-2522417973-8cq5p                                          100m (5%)       0 (0%)              0 (0%)          0 (0%)
  default                       pn-minions-deployment-staging-2522417973-9yatt                                          100m (5%)       0 (0%)              0 (0%)          0 (0%)
  kube-system                   fluentd-cloud-logging-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2            80m (4%)        0 (0%)              200Mi (11%)     200Mi (11%)
  kube-system                   heapster-v1.0.2-1246684275-a8eab                                                        150m (7%)       150m (7%)   308Mi (17%)     308Mi (17%)
  kube-system                   kube-dns-v11-uzl1h                                                                      310m (15%)      310m (15%)  170Mi (9%)      920Mi (50%)
  kube-system                   kube-proxy-gke-pubnation-cluster-prod-high-cpu-14a766ad-node-qhw2                       20m (1%)        0 (0%)              0 (0%)          0 (0%)
  kube-system                   kubernetes-dashboard-v1.0.1-3co2b                                                       100m (5%)       100m (5%)   50Mi (2%)       50Mi (2%)
  kube-system                   l7-lb-controller-v0.6.0-o5ojv                                                           110m (5%)       110m (5%)   70Mi (3%)       120Mi (6%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  1170m (58%)   670m (33%)      798Mi (44%)     1598Mi (88%)
No events.

Here are the descriptions of the two deployments:

Name:                   pn-minions-deployment-prod
Namespace:              default
Labels:                 app=pn-minions,environment=production
Selector:               app=pn-minions,environment=production
Replicas:               2 updated | 2 total | 2 available | 0 unavailable
OldReplicaSets:         <none>
NewReplicaSet:          pn-minions-deployment-prod-3923308490 (2/2 replicas created)

Name:                   pn-minions-deployment-staging
Namespace:              default
Labels:                 app=pn-minions,environment=staging
Selector:               app=pn-minions,environment=staging
Replicas:               2 updated | 2 total | 2 available | 0 unavailable
OldReplicaSets:         <none>
NewReplicaSet:          pn-minions-deployment-staging-2522417973 (2/2 replicas created)

As you can see, all four pods are on the same node. Should I do something additional to make this work?

-- Daiwei
google-kubernetes-engine
kubernetes

1 Answer

5/28/2016

By default, pods run with unbounded CPU and memory limits. This means that any pod in the system can consume as much CPU and memory as is available on the node that executes it. See http://kubernetes.io/docs/admin/limitrange/
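
For instance, a LimitRange can give every container in a namespace a default CPU request and limit when none is specified. The object below is only a minimal sketch; the name cpu-defaults and the values are illustrative, not taken from the answer:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults          # illustrative name
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m             # request applied when a container specifies none
      default:
        cpu: 400m             # limit applied when a container specifies none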

When you don't specify a CPU request or limit, Kubernetes has no way of knowing how much CPU the pods need, so the scheduler may end up placing them all on one node.

Here is an example of a Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: quay.io/naveensrinivasan/jenkins:0.4
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: "400m"
#          volumeMounts:
#            - mountPath: /var/jenkins_home
#              name: jenkins-volume
#      volumes:
#         - name: jenkins-volume
#           awsElasticBlockStore:
#            volumeID: vol-29c4b99f
#            fsType: ext4
      imagePullSecrets:
        - name: registrypullsecret

Here is the output of kubectl describe po | grep Node after creating the Deployment:

$ kubectl describe po | grep Node
Node:       ip-172-20-0-26.us-west-2.compute.internal/172.20.0.26
Node:       ip-172-20-0-29.us-west-2.compute.internal/172.20.0.29
Node:       ip-172-20-0-27.us-west-2.compute.internal/172.20.0.27
Node:       ip-172-20-0-29.us-west-2.compute.internal/172.20.0.29

The four replicas now land on three different nodes (ip-172-20-0-29 runs two of them). Placement is driven by the CPU limits relative to your cluster's capacity. You can increase or decrease the replica count to see pods scheduled onto different nodes, as shown below.
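
For example, you could scale the same Deployment and check placement again (a sketch; the replica count of 6 is arbitrary):

$ kubectl scale deployment/jenkins --replicas=6
$ kubectl describe po | grep Node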

This isn't GKE or AWS specific.
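
Applied to the deployments in the question, the same idea would look roughly like the fragment below. This is only a sketch: the container name, image, and CPU values are assumptions, not taken from the question.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pn-minions-deployment-prod
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: pn-minions
        environment: production
    spec:
      containers:
        - name: pn-minions                # assumed container name
          image: pn-minions:latest        # placeholder image, not from the question
          resources:
            requests:
              cpu: "250m"                 # illustrative request the scheduler can use when spreading pods
            limits:
              cpu: "500m"                 # illustrative limit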

-- Naveen
Source: StackOverflow