Does rolling-update forcibly delete old pods?

1/19/2016

If rolling-update forcibly deletes an old pod, some responses from that pod will be interrupted. Example: an RC (myapp) defines replicas: 2 and contains two pods (myapp-mtlz8, myapp-an78c). Running rolling-update on the RC creates a new pod (myapp-ed4988fb0b53ed961037a026068d1a3d-i8wvt) and starts deleting an old pod (myapp-mtlz8). If the old pod is still processing requests when it is deleted, those requests never complete.

Shouldn't the sequence instead be: 1. the service stops proxying to the old pod; 2. wait for the old pod to complete all in-flight requests; 3. delete the old pod?

-- ttyyll
kubernetes

1 Answer

2/2/2016

I filed an issue to document the recommended practice. I put a sketch of the approach in the issue:

https://github.com/kubernetes/kubernetes/issues/20473

  • ensure the pods have a non-zero terminationGracePeriodSeconds set
  • configure a readinessProbe on the main serving container of the pods
  • handle SIGTERM in the application: fail the readinessProbe but continue to handle normal requests and do not exit (see the sketch after this list)
  • set maxUnavailable and/or maxSurge large enough to ensure enough serving instances in the Deployment API spec (available in 1.2)
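
A minimal sketch of the SIGTERM-handling step in Go (the /healthz path, port 8080, and the 30-second drain window are illustrative assumptions, not from the answer): once SIGTERM arrives, the readiness endpoint starts returning 503 so the endpoints controller removes the pod from the Service, while the normal handler keeps serving in-flight and remaining traffic until the grace period runs out.

    package main

    import (
        "log"
        "net/http"
        "os"
        "os/signal"
        "sync/atomic"
        "syscall"
        "time"
    )

    func main() {
        var terminating atomic.Bool // flipped once SIGTERM arrives (Go 1.19+)

        // Readiness endpoint: report 503 after SIGTERM so the endpoints
        // controller removes this pod from the Service's load balancing.
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            if terminating.Load() {
                http.Error(w, "shutting down", http.StatusServiceUnavailable)
                return
            }
            w.WriteHeader(http.StatusOK)
        })

        // Normal application traffic: served both before and after SIGTERM,
        // so requests already routed to this pod still complete.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello\n"))
        })

        // Catch SIGTERM instead of letting it terminate the process.
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGTERM)
        go func() {
            <-sigs
            terminating.Store(true)
            log.Println("SIGTERM received: failing readiness, still serving")
            // Keep serving while traffic drains, then exit before SIGKILL.
            time.Sleep(30 * time.Second)
            os.Exit(0)
        }()

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

With a readinessProbe pointed at /healthz and terminationGracePeriodSeconds longer than the drain window, traffic moves off the old pod before the kubelet sends SIGKILL.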
-- briangrant
Source: StackOverflow