I have configured a master/worker Kubernetes cluster on Ubuntu 16.04 on AWS servers. After configuring the master, the DNS and dashboard pods are not running.
Please help me solve this issue.
Below is the article I followed, up to the dashboard creation step:
https://www.edureka.co/blog/install-kubernetes-on-ubuntu
ubuntu@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-56ccd8fbd4-s9gz7   0/1     Pending   0          3m51s   <none>          <none>    <none>           <none>
kube-system   coredns-5c98db65d4-7nr8v                   0/1     Pending   0          7m27s   <none>          <none>    <none>           <none>
kube-system   coredns-5c98db65d4-k69n9                   0/1     Pending   0          7m27s   <none>          <none>    <none>           <none>
kube-system   etcd-kmaster                               1/1     Running   0          6m40s   172.31.41.180   kmaster   <none>           <none>
kube-system   kube-apiserver-kmaster                     1/1     Running   0          6m38s   172.31.41.180   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster            1/1     Running   0          6m31s   172.31.41.180   kmaster   <none>           <none>
kube-system   kube-proxy-rtw76                           1/1     Running   0          7m27s   172.31.41.180   kmaster   <none>           <none>
kube-system   kube-scheduler-kmaster                     1/1     Running   0          6m46s   172.31.41.180   kmaster   <none>           <none>
kube-system   kubernetes-dashboard-7d75c474bb-x2b8x      0/1     Pending   0          66s     <none>          <none>    <none>           <none>
ubuntu@kmaster:~$
Oh no, dude. I just read the article... My advice is to throw it away and use the official kubeadm documentation instead.
Every time you follow that article you will get the same result, because it applies an outdated Calico YAML (kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml): the coredns pods and all deployment pods will stay in the Pending state:
kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-56ccd8fbd4-t9mhs   0/1     Pending   0          11m   <none>         <none>    <none>           <none>
kube-system   coredns-5c98db65d4-kbqnz                   0/1     Pending   0          13m   <none>         <none>    <none>           <none>
kube-system   coredns-5c98db65d4-rr8r9                   0/1     Pending   0          13m   <none>         <none>    <none>           <none>
kube-system   etcd-kmaster                               1/1     Running   0          12m   172.31.6.171   kmaster   <none>           <none>
kube-system   kube-apiserver-kmaster                     1/1     Running   0          12m   172.31.6.171   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster            1/1     Running   0          12m   172.31.6.171   kmaster   <none>           <none>
kube-system   kube-proxy-5nnhl                           1/1     Running   0          13m   172.31.6.171   kmaster   <none>           <none>
kube-system   kube-scheduler-kmaster                     1/1     Running   0          12m   172.31.6.171   kmaster   <none>           <none>
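If you want to see for yourself why a pod is stuck in Pending, describe it (the pod name here is taken from the listing above). With no working CNI the node stays NotReady, and the Events section typically shows a FailedScheduling event along the lines of "0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate":
ubuntu@kmaster:~$ kubectl describe pod coredns-5c98db65d4-kbqnz -n kube-system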
Resolution: use the up-to-date CNI YAML from the "Installing a pod network add-on" section of the kubeadm docs.
Use kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
ubuntu@kmaster:~$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
ubuntu@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-65b8787765-fgw78   1/1     Running   0          44s   192.168.189.3   kmaster   <none>           <none>
kube-system   calico-node-g27v9                          1/1     Running   0          44s   172.31.6.171    kmaster   <none>           <none>
kube-system   coredns-5c98db65d4-8l7ts                   1/1     Running   0          59s   192.168.189.2   kmaster   <none>           <none>
kube-system   coredns-5c98db65d4-v42wg                   1/1     Running   0          59s   192.168.189.1   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster            1/1     Running   0          8s    172.31.6.171    kmaster   <none>           <none>
kube-system   kube-proxy-xf4pt                           1/1     Running   0          59s   172.31.6.171    kmaster   <none>           <none>
kube-system   kube-scheduler-kmaster                     1/1     Running   0          18s   172.31.6.171    kmaster   <none>           <none>
And please, do it the proper way, no matter what is written in that article:
1) First, prepare the nodes.
2) Then run kubeadm init with the proper --pod-network-cidr= (see the next step to understand exactly which pool you need; it is written on the CNI's page).
3) After that, install the CNI.
3a) Pay attention that from version to version the manifest URL and the expected pod CIDR sometimes change, so always check the current docs.
4) The next step is to join the WORKER NODE.
5) And only then deploy whatever you want (a condensed sketch of these steps follows below).
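To make that concrete, here is a minimal sketch of those steps for Calico v3.8 (192.168.0.0/16 is Calico's default pool; the join token and hash come from your own kubeadm init output):
# master: init with the pool the CNI expects
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# master: set up kubectl for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# master: install the CNI
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# worker: join using the command printed by kubeadm init
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>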
So, back to our process.
After you have applied the CNI and verified that all pods are up and running, you join the WORKER NODE and wait until the node status becomes Ready:
ubuntu@kmaster:~$ kubectl get nodes -o wide
NAME      STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
kmaster   Ready    master   10m   v1.15.3   172.31.6.171    <none>        Ubuntu 16.04.6 LTS   4.4.0-1087-aws   docker://18.9.7
knode     Ready    <none>   27s   v1.15.3   172.31.10.229   <none>        Ubuntu 16.04.6 LTS   4.4.0-1087-aws   docker://18.9.7
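If you don't want to re-run that command manually, kubectl can watch for the status change:
ubuntu@kmaster:~$ kubectl get nodes -w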
And only after all that stuff, feel free to install the dashboard :). The dashboard YAML from the article wasn't found in my case:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
error: unable to read URL "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml", server reported 404 Not Found, status code=404
So I found it in its new place:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/alternative/kubernetes-dashboard.yaml
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
And here is your dashboard:
kubectl get pods --all-namespaces | grep dashboard
kube-system   kubernetes-dashboard-6f8d67df77-mc6xf   1/1     Running   0          3m16s
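To actually reach it: the alternative manifest serves the dashboard over plain HTTP on a ClusterIP service (port 80), so one simple option is kubectl proxy. This is just a sketch of one access method (a NodePort or ingress would also work); the URL follows kubectl proxy's standard namespace/service scheme:
ubuntu@kmaster:~$ kubectl proxy
# then open in a browser on the same host:
# http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/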
Last note: in my experience it is much easier, more comfortable and more stable to use Flannel rather than Calico. It has fewer bugs, and people run into issues with it less often.
A short example for Flannel from scratch:
Master:
root@kmaster:~# kubeadm init --pod-network-cidr=10.244.0.0/16
ubuntu@kmaster:~$ mkdir -p $HOME/.kube
ubuntu@kmaster:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
ubuntu@kmaster:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
ubuntu@kmaster:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
Slave:
kubeadm join *.*.*.*:6443 --token *****.********** \
--discovery-token-ca-cert-hash sha256:*********************************************************
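If you have lost the join command that kubeadm init printed (tokens also expire after 24 hours by default), you can generate a fresh one on the master:
ubuntu@kmaster:~$ sudo kubeadm token create --print-join-command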
Check:
ubuntu@kmaster:~$ kubectl get all --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-5c98db65d4-9wzwc          1/1     Running   0          2m55s
kube-system   pod/coredns-5c98db65d4-kblfd          1/1     Running   0          2m55s
kube-system   pod/etcd-kmaster                      1/1     Running   0          2m12s
kube-system   pod/kube-apiserver-kmaster            1/1     Running   0          2m10s
kube-system   pod/kube-controller-manager-kmaster   1/1     Running   0          2m
kube-system   pod/kube-flannel-ds-amd64-4gpwt       1/1     Running   0          37s
kube-system   pod/kube-flannel-ds-amd64-tchdm       1/1     Running   0          93s
kube-system   pod/kube-proxy-dp6kq                  1/1     Running   0          37s
kube-system   pod/kube-proxy-gbw8t                  1/1     Running   0          2m55s
kube-system   pod/kube-scheduler-kmaster            1/1     Running   0          2m12s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  3m13s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   3m12s

NAMESPACE     NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-system   daemonset.apps/kube-flannel-ds-amd64     2         2         2       2            2           beta.kubernetes.io/arch=amd64     93s
kube-system   daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       93s
kube-system   daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     93s
kube-system   daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   93s
kube-system   daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     93s
kube-system   daemonset.apps/kube-proxy                2         2         2       2            2           beta.kubernetes.io/os=linux       3m11s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           3m12s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-5c98db65d4   2         2         2       2m55s
Deploy Dashboard
ubuntu@kmaster:~$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/alternative/kubernetes-dashboard.yaml
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
ubuntu@kmaster:~$ kubectl get all --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-5c98db65d4-9wzwc                1/1     Running   0          4m13s
kube-system   pod/coredns-5c98db65d4-kblfd                1/1     Running   0          4m13s
kube-system   pod/etcd-kmaster                            1/1     Running   0          3m30s
kube-system   pod/kube-apiserver-kmaster                  1/1     Running   0          3m28s
kube-system   pod/kube-controller-manager-kmaster         1/1     Running   0          3m18s
kube-system   pod/kube-flannel-ds-amd64-4gpwt             1/1     Running   0          115s
kube-system   pod/kube-flannel-ds-amd64-tchdm             1/1     Running   0          2m51s
kube-system   pod/kube-proxy-dp6kq                        1/1     Running   0          115s
kube-system   pod/kube-proxy-gbw8t                        1/1     Running   0          4m13s
kube-system   pod/kube-scheduler-kmaster                  1/1     Running   0          3m30s
kube-system   pod/kubernetes-dashboard-6f8d67df77-m7w59   1/1     Running   0          20s

NAMESPACE     NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP                  4m31s
kube-system   service/kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   4m30s
kube-system   service/kubernetes-dashboard   ClusterIP   10.109.132.142   <none>        80/TCP                   20s

NAMESPACE     NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-system   daemonset.apps/kube-flannel-ds-amd64     2         2         2       2            2           beta.kubernetes.io/arch=amd64     2m51s
kube-system   daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       2m51s
kube-system   daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     2m51s
kube-system   daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   2m51s
kube-system   daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     2m51s
kube-system   daemonset.apps/kube-proxy                2         2         2       2            2           beta.kubernetes.io/os=linux       4m29s

NAMESPACE     NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                2/2     2            2           4m30s
kube-system   deployment.apps/kubernetes-dashboard   1/1     1            1           20s

NAMESPACE     NAME                                              DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-5c98db65d4                2         2         2       4m13s
kube-system   replicaset.apps/kubernetes-dashboard-6f8d67df77   1         1         1       20s
EDIT1: Forgot to add: here is how you reset the cluster:
sudo kubeadm reset
rm -rf .kube/
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/kubelet/
sudo rm -rf /var/lib/etcd/
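Note that kubeadm reset itself warns that it does not clean up iptables/IPVS rules or CNI config, so if you are switching CNI plugins (e.g. from Calico to Flannel) it is safer to wipe those too. The commands below are the usual manual cleanup; adjust them to your setup:
# remove leftover CNI config and flush iptables rules
sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X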
The end.