I have set up a cluster with kubeadm. It was working fine and port 6443 was up, but after a reboot the cluster does not come back up. What should I do?
Please find the logs below:
node@node1:~$ sudo kubeadm init
[init] using Kubernetes version: v1.11.1
......
node@node1:~$
node@node1:~$ mkdir -p $HOME/.kube
node@node1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
node@node1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
node@node1:~$
node@node1:~$
node@node1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady master 4m v1.11.1
node@node1:~$ ps -ef | grep 6443
root 5542 5503 8 13:17 ? 00:00:17 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.16.2.171 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
node 6792 4426 0 13:20 pts/1 00:00:00 grep --color=auto 6443
node@node1:~$
node@node1:~$
node@node1:~$
node@node1:~$ sudo reboot
Connection to node1 closed by remote host.
Connection to node1 closed.
abc@xyz:~$ ssh node@node1
node@node1's password:
node@node1:~$ kubectl get nodes
No resources found.
The connection to the server 172.16.2.171:6443 was refused - did you specify the right host or port?
node@node1:~$
node@node1:~$ ps -ef | grep 6443
node 7083 1920 0 13:36 pts/0 00:00:00 grep --color=auto 6443
Your kubelet service is not running. Try to view its logs:
$ journalctl -u kubelet
To start the service:
$ sudo systemctl start kubelet
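Once kubelet is up again it should bring the control-plane static pods (including kube-apiserver on port 6443) back after a minute or two; you can verify with, for example:
$ sudo ss -tlnp | grep 6443
$ kubectl get nodes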
If you want kubelet to run at boot, you'll need to enable it. First of all, check the kubelet service status:
$ systemctl status kubelet
There will be a line like:
...
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled|disabled; ...)
...
A "disabled" entry means you should enable it:
$ sudo systemctl enable kubelet
But it is highly likely that it is already enabled, because that is done by the "systemd vendor preset", so you will have to debug why kubelet fails. You can post the log output here and the Stack Overflow community will help you.
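For example, to capture what happened on the last start attempt, you can pull the tail of the kubelet unit's log for the current boot:
$ sudo journalctl -u kubelet -b --no-pager | tail -n 100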
I assume that you did not install Kubernetes from packages delivered for your Linux distribution - as far as I know, installation from the Ubuntu packages wires up the service dependencies precisely to avoid the situation you are describing.
The problem you are facing is that nothing starts kubelet via systemd (or another init system) at boot. systemd is a system and service manager, and it is what brings Kubernetes up on system boot.
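You can quickly check whether a kubelet unit is registered with systemd at all, for example:
systemctl list-unit-files | grep kubelet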
You may try to repair your installation by creating the required systemd unit file, kubelet.service, in your /etc/systemd/system directory. Below is a minimal sketch; it assumes a kubeadm-provisioned node, and the flags are adjusted for kubelet v1.11 (the old --api-servers flag no longer exists), so double-check the paths against your setup:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
# kubeconfig and static-pod manifest path as laid out by kubeadm
ExecStart=/usr/bin/kubelet \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
and enable the service with systemctl:
sudo systemctl enable kubelet
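If you created or edited the unit file by hand, reload systemd first so it picks up the new definition, then start the service right away:
sudo systemctl daemon-reload
sudo systemctl start kubelet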
If problems persist, the journalctl logs may provide more information about the Kubernetes services:
sudo journalctl -xeu kubelet