How to restart master node in kubernetes

1/3/2020

I have a Kubernetes cluster with 3 masters and 3 workers, and I want to restart one of the masters to update the operating system of that machine.

Can I just reboot the machine directly from the console with reboot, or are there steps that need to be done before the reboot to avoid the risk of an outage and data loss?

-- touchingsoil
kubernetes

3 Answers

1/3/2020

Whenever you reboot the OS on a particular Node (master or worker), the K8s cluster engine is not aware of that action by itself; it keeps all the cluster-related events in the ETCD key-value store, backing up the most recent data. If you want to prepare a cluster Node reboot carefully, you should plan a maintenance window for that Node, drain it so nothing new is scheduled there, and gracefully terminate all the existing Pods.

If you define your relevant K8s resources with a set of replicas, then the ReplicationController (or ReplicaSet) guarantees that the specified number of Pod replicas is running at any one time across the available Nodes. It simply re-spawns Pods that fail a health check or are deleted or terminated, so the desired replica count is maintained. In the case of master nodes, which host ETCD, you need to be extra careful in terms of rolling upgrades of ETCD and backing up the data.
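As a purely illustrative sketch of that replica behaviour (the Deployment name and image below are made up), a Deployment-managed ReplicaSet will replace Pods that are evicted during a drain:

    # Hypothetical example: a ReplicaSet keeps 3 nginx Pods running; if a drain
    # evicts one of them, a replacement is scheduled on another available node.
    $ kubectl create deployment nginx --image=nginx
    $ kubectl scale deployment nginx --replicas=3
    $ kubectl get pods -o wide   # watch where replacements land after a drain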

1. Back up a single master. As mentioned previously, we need to back up etcd. In addition to that, we need the certificates and, optionally, the kubeadm configuration file for easily restoring the master. If you set up your cluster using kubeadm (with no special configuration), you can do it similar to this:

Back up the certificates:

    $ sudo cp -r /etc/kubernetes/pki backup/

Make an etcd snapshot:

    $ sudo docker run --rm -v $(pwd)/backup:/backup \
        --network host \
        -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd \
        --env ETCDCTL_API=3 \
        k8s.gcr.io/etcd-amd64:3.2.18 \
        etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
        --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
        snapshot save /backup/etcd-snapshot-latest.db

Backup kubeadm-config:

    $ sudo cp /etc/kubeadm/kubeadm-config.yaml backup/

Note that the contents of the backup folder should then be stored somewhere safe, where they can survive even if the master is completely destroyed. You may want to use e.g. AWS S3 (or similar) for this.
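For example, a hypothetical upload to an S3 bucket (the bucket name and prefix are made up, and the AWS CLI is assumed to be installed and configured) might look like this:

    $ aws s3 sync backup/ s3://my-cluster-backups/master-01/$(date +%Y-%m-%d)/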

There are three commands in the example and all of them should be run on the master node. The first one copies the folder containing all the certificates that kubeadm creates. These certificates are used for secure communications between the various components in a Kubernetes cluster. The second one runs etcdctl inside the official etcd image to take a snapshot of the etcd data. The final command is optional and only relevant if you use a configuration file for kubeadm. Storing this file makes it easy to initialize the master with the exact same configuration as before when restoring it.

If the master update goes wrong, you can then simply restore the old version of the master node.
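A rough sketch of that restore, mirroring the backup example above (it assumes /var/lib/etcd is etcd's data directory and that etcd and the kubelet are stopped first; check the exact procedure for your etcd and kubeadm versions):

    $ sudo cp -r backup/pki /etc/kubernetes/
    $ sudo docker run --rm \
        -v $(pwd)/backup:/backup \
        -v /var/lib/etcd:/var/lib/etcd \
        --env ETCDCTL_API=3 \
        k8s.gcr.io/etcd-amd64:3.2.18 \
        /bin/sh -c "etcdctl snapshot restore /backup/etcd-snapshot-latest.db && mv /default.etcd/member/ /var/lib/etcd/"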

You can also automate etcd backups. Doing a single backup manually may be a good first step but you really need to make regular backups for them to be useful. The easiest way to do this is probably to take the commands from the example above, create a small script and a cron job that runs the script every now and then. But since we are running Kubernetes anyway, use a Kubernetes CronJob. This would allow you to keep track of the backup jobs inside Kubernetes just like you monitor your workloads.
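The linked post shows a full Kubernetes CronJob for this; as a minimal sketch of the plain-cron variant, the script below simply wraps the commands from the example above (the path, schedule and backup location are assumptions, and etcdctl is assumed to be installed on the master, otherwise reuse the docker invocation):

    #!/bin/sh
    # Hypothetical /usr/local/bin/backup-etcd.sh: copies the kubeadm certificates
    # and takes an etcd snapshot into a timestamped directory.
    set -e
    BACKUP_DIR="/opt/backup/$(date +%Y-%m-%d_%H-%M)"   # assumed destination
    mkdir -p "$BACKUP_DIR"
    cp -r /etc/kubernetes/pki "$BACKUP_DIR/"
    ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
        --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
        snapshot save "$BACKUP_DIR/etcd-snapshot.db"

A root crontab entry such as 0 */6 * * * /usr/local/bin/backup-etcd.sh would then run it every six hours.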

You can find more information here: backups-kubernetes.

2. The next step is to mark the node unschedulable; run this command:

    $ kubectl drain $NODENAME

The kubectl drain command should only be issued to a single node at a time. However, you can run multiple kubectl drain commands for different nodes in parallel, in different terminals or in the background. Multiple drain commands running concurrently will still respect the PodDisruptionBudget you specify.
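For example, a hypothetical PodDisruptionBudget that keeps at least two app=my-app Pods available during drains could be created like this (the name, label selector and number are made up):

    $ kubectl create poddisruptionbudget my-app-pdb \
        --selector=app=my-app \
        --min-available=2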

3. Execute the system update or patch and reboot.

4. Finally, uncordon the node to bring it back into the cluster; execute the command below:

    $ kubectl uncordon $NODENAME

On GCP there is an option for auto-upgrading nodes, which makes managing node updates easier. You can read about maintenance on Kubernetes nodes here: node-maintenance.
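As a sketch, enabling auto-upgrade on an existing GKE node pool looks roughly like this (the cluster, node pool and zone names are placeholders):

    $ gcloud container node-pools update default-pool \
        --cluster my-cluster \
        --zone us-central1-a \
        --enable-autoupgrade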

-- MaggieO
Source: StackOverflow

1/3/2020

If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer (the default time is 5 minutes, controlled by --pod-eviction-timeout on the controller-manager), then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding replica set (or replication controller), then a new copy of the pod will be started on a different node. So, in the case where all pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.
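If you want to tune that timeout on a kubeadm-built cluster, one way (a sketch only; the flag and its default depend on your Kubernetes version) is to add it to the controller-manager's static Pod manifest:

# On each master, edit the static Pod manifest (kubeadm's default path):
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# ...and under spec.containers[0].command add, for example:
#   - --pod-eviction-timeout=2m0s
# The kubelet restarts the controller-manager automatically when the manifest changes.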

If you want more control over the upgrading process, you may use the following workflow:

Use kubectl drain to gracefully terminate all pods on the node while marking the node as unschedulable:

kubectl drain $NODENAME

This keeps new pods from landing on the node while you are trying to get them off. For pods with a replica set, the pod will be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod. For pods with no replica set, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.

Perform maintenance work on the node.

Make the node schedulable again:

kubectl uncordon $NODENAME

Additionally, if the node is hosting ETCD, then you need to be extra careful in terms of rolling upgrades of ETCD and backing up the data.
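Before and after rebooting an etcd-hosting master, a quick health check along these lines can help (a sketch assuming a stacked etcd set up by kubeadm with its default certificate paths):

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint health

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    member list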

-- Arghya Sadhu
Source: StackOverflow

1/3/2020

Take a backup of ETCD if the node is hosting ETCD. You can use the built-in command to back up the data, like:

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
     snapshot save /tmp/snapshot-pre-boot.db
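It may also be worth verifying the snapshot before you reboot, for example:

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /tmp/snapshot-pre-boot.db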

Now drain the node using

kubectl drain <master01>

Do the system update/patches and reboot.

Now uncordon the node back into the cluster:

kubectl uncordon <master01>
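
Optionally confirm the master has rejoined and is Ready again:

kubectl get nodes
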
-- Vaisakh PS
Source: StackOverflow