I need some help with implementing a better Kubernetes resource deployment.
Essentially, we define every resource in a single values.yaml file, and when the chart is installed all resources are created in parallel. Among these there are two components, let's say component1 and component2.
component1's main function is to install some DARs onto the server machine. This takes between 45 minutes and an hour.
component2 depends on some of the DARs that component1 installs onto the server.
The problem is that when the Helm chart is deployed, every pod is created at the same time. Even though the pod for component2 shows a status of Running, inspecting the container logs shows that the process startup failed due to missing classes (which would have been installed by component1).
I am looking for a way to either introduce a delay until component1 is done, or keep destroying and recreating the resources for component2 until component1 is done.
The delay would be based on whether all of the DARs have been installed on the server machine.
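One common way to express that kind of dependency, sketched here under the assumption that component1 runs as a Kubernetes Job named component1-dar-install (a hypothetical name, as are the images and the service account), is an init container on component2 that blocks until that Job completes, so the main container only starts once the DARs are in place:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: component2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: component2
  template:
    metadata:
      labels:
        app: component2
    spec:
      # Hypothetical ServiceAccount bound to a Role that allows get/watch on Jobs.
      serviceAccountName: component2-waiter
      initContainers:
        - name: wait-for-dars
          image: bitnami/kubectl:latest    # any image that ships kubectl will do
          command:
            - kubectl
            - wait
            - --for=condition=complete
            - job/component1-dar-install
            - --timeout=7200s              # block for up to 2 hours, covering the 45-60 minute install
      containers:
        - name: component2
          image: example/component2:latest # placeholder for the real component2 image

The init container does not succeed until the install Job is complete, so Kubernetes never starts the component2 container prematurely; the pod simply sits in the Init phase until then.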
For restarting the resources for component2, I was thinking of creating a third, maintenance pod that keeps watching both component1 and component2 and keeps recreating component2's resources until component1 is done.
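If you do go the maintenance-pod route instead, a minimal sketch (again with hypothetical names, and a ServiceAccount that is allowed to get/watch Jobs and patch Deployments) is a small Job that waits for component1 to finish and then restarts component2 once, rather than deleting and recreating it in a loop:

apiVersion: batch/v1
kind: Job
metadata:
  name: component2-restarter
spec:
  backoffLimit: 3
  template:
    spec:
      # Hypothetical ServiceAccount with get/watch on Jobs and patch on Deployments.
      serviceAccountName: maintenance
      restartPolicy: OnFailure
      containers:
        - name: restarter
          image: bitnami/kubectl:latest
          command:
            - sh
            - -c
            - |
              # Wait (up to 2 hours) for component1's install Job to finish,
              # then restart component2 so it comes up with the DARs available.
              kubectl wait --for=condition=complete job/component1-dar-install --timeout=7200s
              kubectl rollout restart deployment/component2

Restarting once after the wait completes avoids the churn of repeatedly destroying component2's pods while component1 is still running.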
Readiness and liveness probes will not work here because even though the service startup has failed, the pod status will still be Running.
Any tips or suggestions on how to implement this, or a better way to handle it altogether, would greatly help.
You can try adding the flag --wait or --wait-for-jobs according to your use case: Helm will then wait until all Pods, PVCs, and Services are ready, and until the minimum expected number of Pods of each Deployment are launched, before marking the release as successful. Helm waits for as long as the value set with --timeout, which in your case needs to be longer than component1's 45-60 minute install. Please refer to the detailed description of the --wait flag and https://helm.sh/docs/helm/helm_upgrade/#options
helm upgrade --install --wait --timeout 90m demo demo