I just installed Kubernetes on a cluster of Ubuntu servers, following the DigitalOcean guide that uses Ansible. Everything seems to be fine, but when I verify the cluster, the master node is in a NotReady status:
# kubectl get nodes
NAME                STATUS     ROLES    AGE   VERSION
jwdkube-master-01   NotReady   master   44m   v1.12.2
jwdkube-worker-01   Ready      <none>   44m   v1.12.2
jwdkube-worker-02   Ready      <none>   44m   v1.12.2
This is the version:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
When I check the master node, kube-proxy appears to be hanging in the starting state:
# kubectl describe nodes jwdkube-master-01
Name: jwdkube-master-01
Roles: master
...
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
OutOfDisk False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 104.248.207.107
Hostname: jwdkube-master-01
Capacity:
cpu: 1
ephemeral-storage: 25226960Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1008972Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 23249166298
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 906572Ki
pods: 110
System Info:
Machine ID: 771c0f669c0a40a1ba7c28bf1f05a637
System UUID: 771c0f66-9c0a-40a1-ba7c-28bf1f05a637
Boot ID: 2532ae4d-c08c-45d8-b94c-6e88912ed627
Kernel Version: 4.18.0-10-generic
OS Image: Ubuntu 18.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.12.2
Kube-Proxy Version: v1.12.2
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-jwdkube-master-01 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-jwdkube-master-01 250m (25%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-jwdkube-master-01 200m (20%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-p8cbq 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-jwdkube-master-01 100m (10%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (55%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientDisk 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 48m (x5 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 48m kubelet, jwdkube-master-01 Updated Node Allocatable limit across pods
Normal Starting 48m kube-proxy, jwdkube-master-01 Starting kube-proxy.
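The Ready condition above blames an uninitialized CNI config rather than kube-proxy itself. A quick sanity check on the master, assuming the default kubelet CNI config path and that the kubelet runs as a systemd unit:

# ls /etc/cni/net.d/
# journalctl -u kubelet | grep -i cni

An empty /etc/cni/net.d/ directory would mean no pod network plugin has written its configuration yet.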
Update

Running kubectl get pods -n kube-system:
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-8p7k2                     1/1     Running   0          4h47m
coredns-576cbf47c7-s5tlv                     1/1     Running   0          4h47m
etcd-jwdkube-master-01                       1/1     Running   1          140m
kube-apiserver-jwdkube-master-01             1/1     Running   1          140m
kube-controller-manager-jwdkube-master-01    1/1     Running   1          140m
kube-flannel-ds-5bzrx                        1/1     Running   0          4h47m
kube-flannel-ds-bfs9k                        1/1     Running   0          4h47m
kube-proxy-4lrzw                             1/1     Running   1          4h47m
kube-proxy-57x28                             1/1     Running   0          4h47m
kube-proxy-j8bf5                             1/1     Running   0          4h47m
kube-scheduler-jwdkube-master-01             1/1     Running   1          140m
tiller-deploy-6f6fd74b68-5xt54               1/1     Running   0          112m
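Since the Flannel pods report Running, their logs may still hint at what is wrong. A sketch, assuming the container inside the DaemonSet pod is named kube-flannel as in the stock manifest:

kubectl logs kube-flannel-ds-5bzrx -n kube-system -c kube-flannel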
This seems to be a compatibility problem between Flannel v0.9.1 and Kubernetes v1.12.2. Once you replace the URL in the master configuration playbook, it should help:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
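Note that pulling from the master branch is a moving target. Pinning the manifest to a release tag may be more reproducible; a hedged alternative, assuming the v0.10.0 tag in the coreos/flannel repository ships the same Documentation/kube-flannel.yml file:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml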
To apply this solution to the current cluster:
On the master node, delete the objects belonging to Flannel v0.9.1:
kubectl delete clusterrole flannel -n kube-system
kubectl delete clusterrolebinding flannel -n kube-system
kubectl delete serviceaccount flannel -n kube-system
kubectl delete configmap kube-flannel-cfg -n kube-system
kubectl delete daemonset.extensions kube-flannel-ds -n kube-system
Also delete the Flannel pods:
kubectl delete pod kube-flannel-ds-5bzrx -n kube-system
kubectl delete pod kube-flannel-ds-bfs9k -n kube-system
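Because the pod names carry generated suffixes, deleting by label is an equivalent alternative, assuming the DaemonSet's pod template carries the app=flannel label as the stock manifest does:

kubectl delete pod -l app=flannel -n kube-system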
Then check that no Flannel-related objects remain:
kubectl get all --all-namespaces
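Piping the listing through grep narrows the check; an empty result means the cleanup succeeded:

kubectl get all --all-namespaces | grep -i flannel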
Finally, apply the new Flannel manifest:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
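Once the new DaemonSet pods come up, the master node should transition to Ready; you can watch for the change with:

kubectl get nodes -w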
This works for me; if you run into any further problems, leave a comment below this answer.