I have successfully installed Kubernetes using kubeadm. I am running two VirtualBox VMs, one for the K8s master and another one for a node.
Kubernetes Master
sudo kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
kubernetes-master   Ready     master    1h        v1.10.2
kubernetes-node1    Ready     <none>    1h        v1.10.2
I can correctly ssh into both the master (ssh 192.168.56.3) and the node (ssh 192.168.56.4).
I want to deploy nginx in the cluster using this deployment file:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
I am ssh'd into the master node, where I execute: sudo kubectl apply -f nginx-deployment.yml.
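After applying, the rollout can also be followed with kubectl's standard rollout command, which blocks until the deployment finishes (or, in my case, never does):
sudo kubectl rollout status deployment/nginx-deployment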
I see that the pods are stuck on PENDING:
sudo kubectl get pods --all-namespaces
NAMESPACE   NAME                                READY     STATUS    RESTARTS   AGE
default     nginx-deployment-64ff85b579-5vkdz   0/1       Pending   0          4m
default     nginx-deployment-64ff85b579-w84lf   0/1       Pending   0          4m
This is the describe output for one of them:
sudo kubectl describe pod nginx-deployment-64ff85b579-5vkdz
Name:           nginx-deployment-64ff85b579-5vkdz
Namespace:      default
Node:           <none>
Labels:         app=nginx
                pod-template-hash=2099416135
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/nginx-deployment-64ff85b579
Containers:
  nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7glwn (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-7glwn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7glwn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  8s (x22 over 5m)  default-scheduler  0/2 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were not ready, 1 node(s) were out of disk space.
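Since the scheduler message blames node conditions (not ready, out of disk), the node itself can be inspected directly; its Conditions section shows whether it is Ready and whether it really reports OutOfDisk (using the node name from above):
sudo kubectl describe node kubernetes-node1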
What could be the problem?
Well... for some reason my node was down. I just had to restart the kubelet service on the node and now it works:
systemctl restart kubelet.service
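For anyone hitting the same thing: the kubelet logs on the node (journalctl -u kubelet.service) should show why it stopped, and recovery can be confirmed from the master with the same commands as before:
sudo kubectl get nodes
sudo kubectl get pods --all-namespaces
Once the node reported Ready again, both pods moved from Pending to Running.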