Kubernetes pods won't start even though the node is in a Ready state

4/10/2018

I'm new to Kubernetes and I'm struggling to start my first pods. I installed Kubernetes on my Ubuntu virtual machine and proceeded with

kubeadm init

followed by the other instructions:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
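
For completeness, kubeadm init also prints an alternative for the root user, which points kubectl at the admin config directly instead of copying it:

export KUBECONFIG=/etc/kubernetes/admin.conf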

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
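
As far as I understand, applying one of those add-ons looks something like this (Flannel shown as an example; the manifest URL is the one its documentation listed at the time, and Flannel also expects kubeadm init to have been run with --pod-network-cidr=10.244.0.0/16):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml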

I can see that the node is up and running:

kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
vm24740   Ready     master    12m       v1.10.0

Nevertheless my pods won't start:

kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
myappdeployment-588bc8ddf4-28jzj   0/1       Pending   0          11m
myappdeployment-588bc8ddf4-9bbb9   0/1       Pending   0          11m
myappdeployment-588bc8ddf4-fptft   0/1       Pending   0          11m
myappdeployment-588bc8ddf4-lxj8p   0/1       Pending   0          11m
myappdeployment-588bc8ddf4-xhg5f   0/1       Pending   0          11m

Here is a detailed view of one of the pods:

kubectl describe pod myappdeployment-588bc8ddf4-28jzj
Name:           myappdeployment-588bc8ddf4-28jzj
Namespace:      default
Node:           <none>
Labels:         app=myapp
                pod-template-hash=1446748890
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/myappdeployment-588bc8ddf4
Containers:
  myapp:
    Image:        jamesquigley/exampleapp:v1.0.0
    Port:         9000/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6rcjb (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-6rcjb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6rcjb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  1m (x37 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
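
The event mentions a taint. Checking the node directly should show which one; on a default kubeadm master I'd expect something like:

kubectl describe node vm24740 | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule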

Would somebody more experienced than me know why the pods won't start?

-- Martin Dvoracek
cluster-computing
devops
kubernetes

1 Answer

4/10/2018

It seems like you are running a single-node (master-only) k8s cluster.

From the documentation:

Master Isolation

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

With output looking something like:

node "test-01" untainted taint key="dedicated" and effect="" not found. taint key="dedicated" and effect="" not found.

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
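
Alternatively, if you would rather keep the master tainted, you can add a matching toleration to the deployment's pod template instead. A sketch, assuming a manifest shaped like the pod described in the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # Allow these pods onto nodes that carry the master taint
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: myapp
        image: jamesquigley/exampleapp:v1.0.0
        ports:
        - containerPort: 9000

Either way, the already-Pending pods do not need to be recreated; once the taint no longer blocks them, the scheduler will pick them up and they should move to Running.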

-- bits
Source: StackOverflow