Using k8s internal DNS for k8s apps results in HTTP 502 errors on scaling

1/6/2022

I have a k8s application, "alpha", exposed through a k8s Service under the internal DNS name alpha-service.namespace, which is consumed by another application, "beta".

Application "beta" connects to application "alpha" via the service dns "alpha-service.namespace". Basis the scaling policies the pods within "alpha-service.namespace" service scale up and down.

However, when the "alpha" app scales down, the "beta" app receives HTTP 502 errors for requests that are routed to the pods being removed.

What is the ideal way to solve or avoid this, so that scaling "alpha" up or down does not affect the "beta" application?
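For context, a minimal sketch of the setup as described (the label selector and ports are assumptions for illustration, not the actual config):

    apiVersion: v1
    kind: Service
    metadata:
      name: alpha-service
      namespace: namespace
    spec:
      selector:
        app: alpha          # assumed pod label on the "alpha" deployment
      ports:
        - port: 80          # port "beta" connects to via alpha-service.namespace
          targetPort: 8080  # assumed container port of "alpha"

With this, "beta" reaches "alpha" at http://alpha-service.namespace (the short form of alpha-service.namespace.svc.cluster.local).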

-- Valerian Pereira
k8s-serviceaccount
kubernetes

1 Answer

1/6/2022

That is what readiness and liveness probes are for:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

Whenever a pod fails its readiness probe the number of times defined by failureThreshold, it is removed from the Service's Endpoints and stops receiving traffic, thus avoiding the HTTP 502s. (Liveness probe failures instead cause the container to be restarted.)
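A minimal sketch of a readiness probe on the "alpha" pod template (the /healthz path, port 8080, and timing values are assumptions; adjust them to your app):

    readinessProbe:
      httpGet:
        path: /healthz      # assumed health endpoint exposed by "alpha"
        port: 8080          # assumed container port
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3   # removed from Endpoints after 3 consecutive failures

Once the probe has failed failureThreshold times, the endpoints controller drops that pod from alpha-service's Endpoints, so "beta" is no longer routed to it.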

-- paltaa
Source: StackOverflow