I have an nginx deployment in a k8s cluster which proxies my /api calls like this:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    location /api {
        proxy_pass http://backend-dev/api;
    }
}
This works most of the time; however, sometimes when the api
pods aren't ready, nginx fails to start with this error:
nginx: [emerg] host not found in upstream "backend-dev" in /etc/nginx/conf.d/default.conf:12
After a couple of hours exploring the internet, I found an article describing pretty much the same issue. I've tried this:
location /api {
    set $upstreamName backend-dev;
    proxy_pass http://$upstreamName/api;
}
Now nginx returns 502. I also tried this:
location /api {
    resolver 10.0.0.10 valid=10s;
    set $upstreamName backend-dev;
    proxy_pass http://$upstreamName/api;
}
Nginx returns 503.
What's the correct way to fix it on k8s?
If your API pods are not ready, nginx won't be able to route traffic to them.
From Kubernetes documentation:
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
If you are not using liveness or readiness probes, your pod will be marked as "ready" even if the application running inside the container has not finished its startup process and is not yet able to accept traffic.
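As a sketch, a readiness probe on the backend deployment might look like this (the image name, `/healthz` path, and port 8080 are assumptions; adjust them to whatever health endpoint your API actually exposes):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-dev
  template:
    metadata:
      labels:
        app: backend-dev
    spec:
      containers:
      - name: api
        image: my-registry/backend-dev:latest   # placeholder image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz   # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

With this in place, the pod is only added to the Service endpoints (and thus to DNS for the Service) once the probe succeeds.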
The relevant section regarding Pods and DNS records can be found here
Because A records are not created for Pod names, hostname is required for the Pod’s A record to be created. A Pod with no hostname but with subdomain will only create the A record for the headless service (default-subdomain.my-namespace.svc.cluster-domain.example), pointing to the Pod’s IP address. Also, Pod needs to become ready in order to have a record unless publishNotReadyAddresses=True is set on the Service.
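Regarding the 503 with the resolver approach: nginx's own resolver does not use the pod's DNS search path, so a short name like backend-dev won't resolve. The usual workaround is to keep the variable-based proxy_pass but use the Service's fully qualified name. A sketch, assuming the service lives in the default namespace and 10.0.0.10 is your cluster DNS (kube-dns/CoreDNS) ClusterIP:

```nginx
location /api {
    # nginx's resolver ignores /etc/resolv.conf search domains,
    # so the upstream must be a fully qualified name.
    resolver 10.0.0.10 valid=10s;
    set $upstreamName backend-dev.default.svc.cluster.local;
    proxy_pass http://$upstreamName/api;
}
```

Because the upstream is a variable, nginx resolves it at request time instead of at startup, so nginx can start even while the Service name does not resolve yet.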
UPDATE: I would suggest using NGINX as an ingress controller.
When you use NGINX as an ingress controller, the NGINX service starts successfully and whenever an ingress rule is deployed, the NGINX configuration is reloaded on the fly.
This will help you avoid NGINX pod restarts.
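As a sketch, the routing above expressed as an Ingress resource might look like this (assumes the ingress-nginx controller is installed; the frontend service name and the ports are assumptions to adjust to your setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-dev
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend   # assumed name of the service serving the static files
            port:
              number: 80
```

The controller watches these rules and regenerates its nginx configuration whenever they change, so backend availability never prevents the proxy from starting.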