I have a node.js container running on kubernetes that handles websocket connections that will normally be kept open until the user quits. Now when I do a rolling update or when the deployment scales down, is there a way to stop the pod from being killed until the last user has disconnected?
At the moment, Kubernetes does not support graceful connection closing. terminationGracePeriodSeconds only defines how long the kubelet waits between sending SIGTERM and force-killing the Pod; it doesn't take care of the Pod's open connections.
The only way to deal with rolling updates is to adapt your application to be able to switch clients to other pods transparently.
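Since the draining has to happen in the application itself, the most Kubernetes can give you is time. Below is a minimal sketch of the relevant PodSpec fields; the ws-server names, the image, and the 10-minute grace period are assumptions for illustration and should be adjusted to your workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ws-server              # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: ws-server
  template:
    metadata:
      labels:
        app: ws-server
    spec:
      # How long the kubelet waits between SIGTERM and SIGKILL.
      # Pick a value long enough for your clients to disconnect or reconnect.
      terminationGracePeriodSeconds: 600
      containers:
      - name: node
        image: ws-server:latest   # hypothetical image
        ports:
        - containerPort: 8080
        lifecycle:
          preStop:
            # Optional: a short delay before SIGTERM, so the endpoint has time
            # to be removed from the Service before the app stops accepting
            # new connections.
            exec:
              command: ["sleep", "10"]
```

The application still has to handle SIGTERM itself and close or hand off its websocket connections; the grace period only caps how long that may take.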
You may also want to take a look at a PodDisruptionBudget (PDB). In some cases it can make the setup more reliable.
A PDB specifies the number of replicas that an application can tolerate having, relative to how many it is intended to have. For example, a Deployment which has a .spec.replicas: 5 is supposed to have 5 pods at any given time. If its PDB allows for there to be 4 at a time, then the Eviction API will allow voluntary disruption of one, but not two pods, at a time.
PDBs cannot prevent involuntary disruptions from occurring, but they do count against the budget.
Pods which are deleted or unavailable due to a rolling upgrade to an application do count against the disruption budget, but controllers (like deployment and stateful-set) are not limited by PDBs when doing rolling upgrades – the handling of failures during application updates is configured in the controller spec. (Learn about updating a deployment.)
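For the hypothetical Deployment above, a PDB matching that example could look like this (a sketch; the name, label selector, and minAvailable value are assumptions to adjust to your own deployment):

```yaml
apiVersion: policy/v1            # policy/v1beta1 on clusters older than 1.21
kind: PodDisruptionBudget
metadata:
  name: ws-server-pdb            # hypothetical name
spec:
  # With .spec.replicas: 5 on the Deployment, this lets the Eviction API
  # take down at most one pod voluntarily at a time.
  minAvailable: 4
  selector:
    matchLabels:
      app: ws-server
```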
When a pod is evicted using the eviction API, it is gracefully terminated (see terminationGracePeriodSeconds in PodSpec).
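For reference, an eviction is just an Eviction object POSTed to the pod's eviction subresource (this is what kubectl drain does under the hood). A sketch, with a placeholder pod name and namespace:

```yaml
# POST to /api/v1/namespaces/default/pods/ws-server-abc123/eviction
apiVersion: policy/v1            # policy/v1beta1 on older clusters
kind: Eviction
metadata:
  name: ws-server-abc123         # hypothetical pod name
  namespace: default
deleteOptions:
  gracePeriodSeconds: 600        # defaults to the pod's terminationGracePeriodSeconds
```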