Whenever I do a Kubernetes deployment with some sort of configuration error, the pod ends up in CrashLoopBackOff, and Kubernetes keeps restarting the (totally broken) pod. What I would like is for any error during a deployment to fail the deployment immediately, rather than blindly retrying until the deployment times out.
Deploy with `restartPolicy: Never`, then use `kubectl patch` to modify the restart policy of that deployment.
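As a rough sketch of the first step (the name and image below are placeholders, not from the question), note that a Deployment's pod template normally only accepts `restartPolicy: Always`, so the `Never` policy is typically set on a standalone Pod or a Job:

```yaml
# pod.yaml - hypothetical example; name and image are illustrative
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  restartPolicy: Never   # fail fast: do not restart the container on error
  containers:
  - name: myapp
    image: myapp:1.0
```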
There is an open issue about avoiding continuous restart attempts for a failing pod.
There is also an open pull request, close to being merged, that adds the ability to specify a maximum number of retries for the OnFailure restart policy.
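For comparison, Jobs already expose a similar knob today: `backoffLimit` caps how many times a failing pod is retried before the Job is marked failed. A minimal sketch, with placeholder names:

```yaml
# job.yaml - hypothetical example of capped retries
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-check
spec:
  backoffLimit: 2          # give up after 2 retries instead of looping forever
  template:
    spec:
      restartPolicy: Never # each failure creates a new pod, counted against backoffLimit
      containers:
      - name: myapp
        image: myapp:1.0
```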
Until that feature is merged and released, `kubectl patch` seems to be the only way.
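A sketch of what the patch step might look like against a Deployment's pod template (the deployment name is illustrative; also note that changing the pod template triggers a rollout of replacement pods rather than mutating running ones):

```sh
# Hypothetical example: JSON-patch the pod template's restart policy.
# Changing the template causes the Deployment to roll out new pods.
kubectl patch deployment myapp --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/restartPolicy", "value": "Always"}]'
```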