How to stop a single-node Kubernetes cluster gracefully

2/18/2020

I've set up a single-node Kubernetes cluster according to the [official tutorial][1].

In addition to the official documentation, I removed the master taint so that pods can be scheduled on the single node:

kubectl taint nodes --all node-role.kubernetes.io/master-
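
To verify the taint is gone, a check like this should work (the node name ubuntu is taken from the process listing below):

kubectl describe node ubuntu | grep Taints
# expected output once the taint is removed:
# Taints:             <none>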

Relaxed the hard eviction thresholds:

cat << EOF >> /var/lib/kubelet/config.yaml
evictionHard:
  imagefs.available: 1%
  memory.available: 100Mi
  nodefs.available: 1%
  nodefs.inodesFree: 1%
EOF

systemctl daemon-reload
systemctl restart kubelet
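
As a quick sanity check (a sketch; it assumes the appended keys did not duplicate an existing evictionHard section in the file):

grep -A4 evictionHard /var/lib/kubelet/config.yaml   # the four thresholds should be listed
systemctl status kubelet --no-pager                  # should report active (running)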

And set the systemd cgroup driver for Docker:

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

systemctl daemon-reload
systemctl restart docker
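
To confirm Docker picked up the new cgroup driver:

docker info | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd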

How can I temporarily stop the Kubernetes cluster (including all its services, pods, etc.)? I've issued systemctl stop kubelet but I still see some Kubernetes stuff among the processes:

$ ps -elf | grep kube
4 S root       6032   5914  1  80   0 - 2653148 -    Feb17 ?        00:35:10 etcd --advertise-client-urls=https://192.168.1.111:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.1.111:2380 --initial-cluster=ubuntu=https://192.168.1.111:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.1.111:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.1.111:2380 --name=ubuntu --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
4 S root       7536   7495  0  80   0 - 35026 -      Feb17 ?        00:01:04 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=ubuntu
4 S root       9868   9839  0  80   0 - 34463 -      Feb17 ?        00:00:59 /usr/bin/kube-controllers
4 S root      48394  48375  2  80   0 - 36076 -      13:41 ?        00:01:09 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
4 S root      48461  48436  3  80   0 - 52484 -      13:41 ?        00:01:53 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true
4 S root      52675  52586  7  80   0 - 123895 -     14:00 ?        00:02:01 kube-apiserver --advertise-address=192.168.1.111 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
-- Wakan Tanka
kubernetes

3 Answers

2/18/2020

There are two questions asked in one:

  • stop the Kubernetes cluster, and
  • stop it only temporarily

I will answer keeping both in mind. Also, it is not clear how you created your cluster, but it seems like you used kubeadm.

Steps:

  1. As mentioned by @arghya-sadhu & @henry, a graceful shutdown is recommended but not mandatory if this is a test cluster and you don't care about the workloads (pods, etc.):
    • kubectl cordon <node name>
    • kubectl drain <node name>
  2. kubeadm runs the control plane components (API server, scheduler, controller manager, etcd) as static pods, so the kubelet stops them as soon as their manifests are moved out of the manifests directory: mv /etc/kubernetes/manifests/ /tmp. A complete stop/start sequence is sketched below.
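
Putting it together, a minimal stop/start sketch for a kubeadm single-node cluster (the node name ubuntu and the use of Docker are assumptions taken from the question):

# stop: evict workloads, then take down the control plane and the runtime
kubectl drain ubuntu --ignore-daemonsets --delete-local-data
mv /etc/kubernetes/manifests /tmp/manifests   # kubelet stops the static control-plane pods
systemctl stop kubelet
systemctl stop docker

# start: bring everything back in reverse order
systemctl start docker
mv /tmp/manifests /etc/kubernetes/manifests
systemctl start kubelet            # static control-plane pods come back up
kubectl uncordon ubuntu            # allow scheduling again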
-- garlicFrancium
Source: StackOverflow

2/18/2020

You should use kubectl drain <node name>

When kubectl drain returns successfully, that indicates that all of the pods have been safely evicted respecting the desired graceful termination period.

Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). If you leave the node in the cluster during the maintenance operation, you need to run

kubectl uncordon <node name>

afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.
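
On a single-node cluster the drain will typically hit DaemonSet-managed pods (e.g. kube-proxy), so extra flags are usually needed (flag names as of kubectl around v1.17; --delete-local-data was later renamed --delete-emptydir-data):

kubectl drain ubuntu --ignore-daemonsets --delete-local-data
# ... perform the maintenance ...
kubectl uncordon ubuntu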

-- Arghya Sadhu
Source: StackOverflow

2/18/2020

If you really want to stop everything that is run by Kubernetes/Docker, for whatever reason, you can just stop both kubelet and Docker.

Run these commands on the node where you want to stop Kubernetes/Docker:

systemctl stop kubelet 
systemctl stop docker

I strongly recommend draining the node first, but if you just want to kill everything without any caution, that would be one way to stop Kubernetes and the running containers on the node :)

Once you want to start everything again, just start Docker and kubelet again, or simply reboot the machine.
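
For example (a minimal sketch; the node may take a minute or two to come back):

systemctl start docker
systemctl start kubelet
kubectl get nodes   # wait until the node shows STATUS Ready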

cheers

-- Henry
Source: StackOverflow