I have a Kubernetes setup with one master node and two worker nodes. The deployment is a DaemonSet, so it starts pods on both worker nodes. Each pod contains two containers, and each container runs a Python script. The script runs normally, but at a certain point it needs to send a shutdown command to the host. I can issue shutdown -h now directly, but that runs inside the container rather than on the host, and fails with the error below:

    Failed to connect to bus: No such file or directory
    Failed to talk to init daemon.
To work around this, I can get the IP address of the host, SSH into it, and run the command to safely shut down the host.
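For reference, this is roughly what that SSH workaround looks like from inside the Python script. This is a sketch assuming the paramiko library; the HOST_IP variable is assumed to be injected via the Downward API (status.hostIP), and the username and key path are placeholders for whatever credentials you mount into the pod:

    import os
    import paramiko

    # Node IP assumed to be injected via the Downward API (status.hostIP)
    host_ip = os.environ["HOST_IP"]

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # "root" and the key path are placeholders for credentials mounted into the pod
    ssh.connect(host_ip, username="root", key_filename="/etc/ssh-key/id_rsa")
    _, stdout, stderr = ssh.exec_command("shutdown -h now")
    print(stdout.read().decode(), stderr.read().decode())
    ssh.close()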
But is there any other way to issue a command to the host in Kubernetes/Docker?
You can access your cluster through the Kubernetes API.
https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
Accessing the API from a Pod
When accessing the API from a pod, locating and authenticating to the apiserver are somewhat different.
The recommended way to locate the apiserver within the pod is with the kubernetes.default.svc DNS name, which resolves to a Service IP that in turn is routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.
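A minimal sketch of that in-cluster access from Python, using the mounted token and the requests library. Listing nodes here is only an illustration; the pod's service account would need RBAC permission for whichever call you actually make:

    import requests

    # Service account credential and cluster CA mounted into every container
    SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
    with open(f"{SA_DIR}/token") as f:
        token = f.read()

    # kubernetes.default.svc resolves to the apiserver from inside the cluster
    resp = requests.get(
        "https://kubernetes.default.svc/api/v1/nodes",
        headers={"Authorization": f"Bearer {token}"},
        verify=f"{SA_DIR}/ca.crt",
    )
    resp.raise_for_status()
    print([item["metadata"]["name"] for item in resp.json()["items"]])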
To drain the node, you can use the Eviction API:
https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
But I'm not sure whether a pod can drain its own node. A workaround could be to control it from a pod running on a different node.
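A sketch of evicting a pod through that API with the official Python client. The pod name and namespace are placeholders; on older client versions the body class is V1beta1Eviction rather than V1Eviction, and cordoning the node and actually shutting down the host would still have to happen separately:

    from kubernetes import client, config

    # Uses the mounted service account token, as described above
    config.load_incluster_config()
    core = client.CoreV1Api()

    # Placeholder pod/namespace; a drain evicts every pod on the node like this
    eviction = client.V1Eviction(
        metadata=client.V1ObjectMeta(name="my-pod", namespace="default")
    )
    core.create_namespaced_pod_eviction(
        name="my-pod", namespace="default", body=eviction
    )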