When provisioning a Kubernetes cluster with kubeadm init, it creates a cluster that runs the kube-apiserver, etcd, kube-controller-manager and kube-scheduler processes within Docker containers.
Whenever some configuration (e.g. access tokens) for the kube-apiserver is changed, I have to restart the related server. While I could usually run systemctl restart kube-apiserver.service on other installations, on this installation I have to kill the Docker container or restart the whole system to restart it.
So is there a better way to restart the kube-apiserver?
You can delete the kube-apiserver Pod. It's a static Pod (in the case of a kubeadm installation) and will be recreated immediately.
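A minimal sketch of what that looks like, assuming a single control-plane node (the Pod name is suffixed with the node's hostname, so look it up first):
kubectl -n kube-system get pods
kubectl -n kube-system delete pod kube-apiserver-<node hostname>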
The manifest directory for a kubeadm installation is /etc/kubernetes/manifests (in recent versions the file is kube-apiserver.yaml; older releases used kube-apiserver.json). Just doing a touch on that manifest will also make the kubelet recreate the Pod.
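For example, assuming a recent kubeadm where the file is kube-apiserver.yaml (adjust the name if yours uses .json):
sudo touch /etc/kubernetes/manifests/kube-apiserver.yaml
If touch alone doesn't trigger it, moving the manifest out of the directory and back will reliably do so, because the kubelet stops the Pod when the file disappears and starts it again when it reappears:
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/ && sleep 5 && sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/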
There is actually a command for that:
docker restart <container name or ID>
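On a kubeadm node the kubelet generates the container names (they carry a k8s_ prefix), so it's easiest to look the container up first:
docker ps | grep kube-apiserver
docker restart <ID from the first column>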
Some configuration changes can be applied by reloading the service, like common directives in nginx.conf. If your service supports reloading, you can send it a SIGHUP:
docker kill -s HUP <nginx container>
This will reload the nginx service without stopping the container.
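For example, with an nginx container you can validate the new configuration before signalling the reload (nginx -t and the HUP reload handler are standard nginx behaviour; the container name is a placeholder):
docker exec <nginx container> nginx -t
docker kill -s HUP <nginx container>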
Restarting the container's main process will make the container stop.
Regarding "I have to kill the Docker container or restart the whole system to restart it": please don't restart the whole server for this.
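Killing just the kube-apiserver container is safe here, since the kubelet notices that the static Pod's container is gone and starts a replacement within seconds. A sketch of that workflow, assuming the kubelet-generated k8s_ container names described above:
docker kill <kube-apiserver container ID>
docker ps | grep kube-apiserver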