How to Configure Pod initialization in a specific order in Kubernetes?

7/8/2019

I want to know how I can start my deployments in a specific order. I am aware of initContainers, but that is not working for me. I have a huge platform with around 20 Deployments and 5 StatefulSets, each of which has its own Service, environment variables, volumes, horizontal autoscaler, etc. So it is not possible (or I don't know how) to define them in another deployment YAML as initContainers.

Is there another option to launch deployments in a specific order?

-- AVarf
deployment
kubernetes
kubernetes-pod

7 Answers

7/8/2019

As already answered in the other answers, you can't define the order of initialization between Pods outside of the deployment itself.

Each Deployment (Pod) is meant to be an independent unit with its own lifecycle. If one Pod depends on other Pods being up before it can initialize, you should probably review your design:

  • What happens if the Pod comes up successfully at startup, but the Pod it depends on fails later?
  • What happens if Pod B is being updated, and Pod A is only updated afterwards?

You should design your systems with the idea that they will always fail. If service B starts before service A, the Pods should behave the same way as if they had started in the correct order and service A (which B depends on) failed afterwards.

Your application should handle these cases itself instead of offloading them to the orchestrator.


In case you really need to implement the ordering and changing the applications is out of the question, you could use init containers to poll the health (readiness) endpoints of the other Pods, the same way Kubernetes does to check whether your container is ready. Once they answer with a successful response, the init container completes and the Pod's other containers start.
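A minimal sketch of this approach, assuming a hypothetical dependency exposed as a Service named `service-b` with a `/healthz` endpoint on port 8080 (all names and images here are illustrative, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-a
spec:
  initContainers:
    # Blocks Pod startup until service-b answers its health endpoint,
    # mirroring what the kubelet does with readiness probes.
    - name: wait-for-service-b
      image: busybox:1.36
      command:
        - sh
        - -c
        - |
          until wget -qO- http://service-b:8080/healthz; do
            echo "waiting for service-b..."
            sleep 2
          done
  containers:
    - name: app-a
      image: my-registry/app-a:latest
```

Because init containers run to completion before the main containers start, the Pod effectively waits for its dependency without the application itself needing any retry logic at startup.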

-- Diego Mendes
Source: StackOverflow

7/8/2019

It's possible to order the launch of initContainers in a Pod, or of Pods that belong to the same StatefulSet. However, those solutions do not apply to your case.

This is because ordering initialization is not the standard approach for solving your issue. In a microservices architecture, and more specifically Kubernetes, you would write your containers such that they try to call the services they depend on (whether they are up or not) and if they aren't available, you let your containers crash. This works because Kubernetes provides a self-healing mechanism that automatically restarts containers if they fail. This way, your containers will try to connect to the services they depend on, and if the latter aren't available, the containers will crash and try again later using exponential back-off.

By removing unnecessary dependencies between services, you simplify the deployment of your application and reduce coupling between different services.
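One way to sketch this crash-and-retry behaviour, under the assumption of a hypothetical `app-a` that needs `service-b` at startup (names and images are illustrative): the container simply exits non-zero when its dependency is unreachable, and Kubernetes restarts it with exponential back-off until the dependency is available.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      # restartPolicy defaults to Always for Pods in a Deployment:
      # if the startup check below fails, the kubelet restarts the
      # container with exponential back-off until service-b is up.
      containers:
        - name: app-a
          image: my-registry/app-a:latest
          command:
            - sh
            - -c
            # Fail fast if the dependency is unreachable; a real
            # application would do the equivalent in its own startup code.
            - wget -qO- http://service-b:8080/healthz && exec /app/server
```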

-- Alassane Ndiaye
Source: StackOverflow

7/8/2019

There is no "depends_on-like" option in k8s, and I think it's not implemented simply because in a cloud-native (= microservices) environment, applications should be stateless. Being stateless also implies that no app should know about the state of another one: every app should be able to be started, killed, and restored at any moment without affecting the others, except, of course, that platform services may suffer a temporary quality degradation!

If you have this kind of constraint (which is reasonable if, say, you deploy a message broker and every consumer has to wait until it is up and running before establishing connections), you have to manage it in a "stateless fashion": for instance, you can block your boot process until a broker connection is established, retrying periodically. With Kubernetes health checks, you can even declare your service "not ready" during that time window, or "not healthy" after a number of failed retries.
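A sketch of that last idea, assuming the app exposes a hypothetical `/ready` endpoint that returns non-200 until its broker connection is established, and a `/healthz` endpoint for liveness (endpoint paths, port, and image are assumptions, not from the answer):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: consumer
spec:
  containers:
    - name: consumer
      image: my-registry/consumer:latest
      # The app answers /ready with 200 only once its broker
      # connection is established; until then the Pod stays
      # "not ready" and receives no traffic from its Service.
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
      # If retries keep failing, the liveness probe eventually
      # marks the container unhealthy and the kubelet restarts it.
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 10
```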

You can translate this pattern to other contexts; try to give an example of what you are trying to achieve.

-- Carmine Ingaldi
Source: StackOverflow

7/8/2019

I hope your containers have liveness/readiness probes defined. Use them from the dependent deployment to create an initContainer that checks whether the other app is ready. Once the initContainer verifies that the other container is ready, the dependent container starts.

What exactly is the issue you faced with initContainer? A sample link where an initContainer is used to start the dependent container is here.

Another approach would be to write a shell wrapper: create the initial deployment first, then use an until loop to wait until its rollout status is ready, and only then trigger the deployment that depends on it.

-- Malathi
Source: StackOverflow

7/8/2019

As many other answers have outlined, an app in a micro-service architecture shouldn't break if a pod / service isn't available.

However, even if it does, Kubernetes should be clever enough to automatically try to recover from that failure and restart the pod. This should repeat until the app's dependencies have been fulfilled.

Kubernetes isn't inherently a release-manager, but rather a platform. If you need to deploy pods or services sequentially or in a particular order, you may need to have a look at an actual release-manager such as Helm, using particular deployment/design-patterns such as an umbrella-chart pattern (StackOverflow example). This may include some extra work, but may be what you are looking for.

I really hope that helped you out a little bit at least. :)

-- Todai
Source: StackOverflow

7/10/2019

To create a dependency among deployments, you need a certain sequence of conditions to become true.

For example, wait for the pod "busybox1" to have the status condition of type "Ready":

kubectl wait --for=condition=Ready pod/busybox1

After that, you can roll out the next deployment.

For further detail, see kubectl wait.

Here is another example from @Michael Hausenblas on job dependencies, i.e. having dependencies among Job objects.

If you'd like to kick off another job after worker has completed, here you go:

$ kubectl -n waitplayground \
    wait --for=condition=complete --timeout=32s \
    job/worker
job.batch/worker condition met
-- Suresh Vishnoi
Source: StackOverflow

7/8/2019

Just launch them all in parallel and let them crash. Kubernetes will restart the failing ones after some delay.

Say you have service A, which depends on B, which depends on C. A starts first and as part of its startup sequence tries to make a call to B. That fails (because B isn't up) and the pod changes to Error status. It may retry once or twice and then go to CrashLoopBackOff status. Kubernetes pauses for a couple of seconds before retrying again. The same thing will happen for B.

Eventually service C (at the bottom of the stack) will come up, and some time after that the automated restart will start B (immediately above it). This time B will start successfully. Some time after that an automated restart will start A, which this time will come up successfully.

The one thing you need to be aware of is that if a Pod does wind up in CrashLoopBackOff state, it could be because of a code bug, or a misconfiguration, or just because a service it depends on isn't up yet. You'll need to look at kubectl logs (and make sure your service code writes out usable diagnostics) to understand which case you're in.

-- David Maze
Source: StackOverflow