When a request takes over 60s to respond, it seems that the ingress controller bounces it with a 504
From what I can see, our NGINX ingress controller returns a 504 to the client whenever a request takes more than 60 seconds to process. I can see this in the NGINX logs:
2019/01/25 09:54:15 [error] 2878#2878: *4031130 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.244.0.1, server: myapplication.com, request: "POST /api/text HTTP/1.1", upstream: "http://10.244.0.39:45606/api/text", host: "myapplication.com"
10.244.0.1 - [10.244.0.1] - - [25/Jan/2019:09:54:15 +0000] "POST /api/text HTTP/1.1" 504 167 "-" "PostmanRuntime/7.1.6" 2940 60.002 [default-myapplication-service-80] 10.244.0.39:45606 0 60.000 504 bdc1e0571e34bf1223e6ed4f7c60e19d
The second log entry shows 60 seconds for both the request time and the upstream response time (see the NGINX log format here).
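For reference, I believe the fields at the end of that line map roughly like this (assuming the controller's default upstreaminfo log format; the exact format can differ between controller versions):

$request_time            -> 60.002   (total time spent on the request)
$proxy_upstream_name     -> default-myapplication-service-80
$upstream_addr           -> 10.244.0.39:45606
$upstream_response_time  -> 60.000   (time spent waiting on the upstream)
$upstream_status         -> 504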
But I have specified all the timeout values to be 3 minutes in the ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aks-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/send_timeout: "3m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3m"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3m"
spec:
  tls:
  - hosts:
    - myapplication.com
    secretName: tls-secret
  rules:
  - host: myapplication.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapplication-service
          servicePort: 80
What am I missing?
I am using nginx-ingress-1.1.0 and k8s 1.9.11 on Azure (AKS).
The issue was fixed by providing integer values (in seconds) for these annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "180"
nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
nginx.ingress.kubernetes.io/proxy-send-timeout: "180"
It seems that this variant of the NGINX ingress controller requires plain integer seconds rather than duration strings like "3m".
Because you appear to be using the actual ingress controller from nginx.com, you need to use nginx.org/proxy-connect-timeout: "3m"
style annotations, as one can see in their example.
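Something along these lines should be closer to what that controller expects (a sketch only; the annotation names come from the nginx.org annotation set, and the rest of your metadata stays as you already have it):

metadata:
  name: aks-ingress
  annotations:
    nginx.org/proxy-connect-timeout: "3m"
    nginx.org/proxy-read-timeout: "3m"
    nginx.org/proxy-send-timeout: "3m"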
I am still pretty sure that my debugging trick of kubectl cp-ing the nginx.conf off the controller Pod would have helped you debug this situation on your own, but reading the documentation for your ingress controller will certainly go a long way, too.
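For completeness, that trick is roughly the following (a sketch; the namespace and Pod name are placeholders for whatever your controller deployment actually uses, and kubectl cp needs tar available in the container):

# find the controller Pod
kubectl get pods --all-namespaces | grep -i nginx
# pull the rendered config off the Pod and see which timeouts it really applies
kubectl cp <controller-namespace>/<controller-pod>:/etc/nginx/nginx.conf ./nginx.conf
grep -E 'proxy_(connect|read|send)_timeout' nginx.conf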
While this might not matter to you, their latest release is 1.4.3, so I hope you are on an older version on purpose.