Consider this NodePort service:
kubectl describe service myservice
...
Type:                     NodePort
IP:                       10.15.248.153
Port:                     myservice  8080/TCP
TargetPort:               8080/TCP
NodePort:                 myservice  30593/TCP
Endpoints:                10.12.223.56:8080

Consider a request taking exactly 120s:
# time curl -vk 'http://myservice:8080/test?timeout=120'
*   Trying 10.15.248.153...
* TCP_NODELAY set
* Connected to myservice (10.15.248.153) port 8080 (#0)
...
< HTTP/1.1 200
real    2m0.023s
user    0m0.009s
sys 0m0.009s
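The backend's behavior is easy to reproduce locally. Here is a minimal sketch of an endpoint like /test?timeout=N that holds the connection open for N seconds before answering 200 - the handler is illustrative, not the actual myservice code:

```python
import threading, time, urllib.parse, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?timeout=N from the request path and sleep that long.
        params = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
        delay = float(params.get("timeout", ["0"])[0])
        time.sleep(delay)                      # hold the connection open
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()
        self.wfile.write(b"done\n")

    def log_message(self, *args):              # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SlowHandler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.time()
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/test?timeout=1").read()
elapsed = time.time() - start
print(body.decode().strip(), round(elapsed, 1))  # the request takes ~1s end to end
server.shutdown()
```

Hitting such a server directly, the response simply arrives after the requested delay - exactly what the curl timing above shows for the real service.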
This is good: hitting the service directly, a 120-second request completes. So I configure the nginx-ingress timeouts via annotations:
nginx.ingress.kubernetes.io/proxy-send-timeout: "900"
nginx.ingress.kubernetes.io/proxy-read-timeout: "900"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"

I can confirm they land in the generated nginx.conf:
proxy_connect_timeout                   60s;
proxy_send_timeout                      900s;
proxy_read_timeout                      900s;

So now I enter the nginx-ingress pod and try to access myservice through nginx-ingress:
time curl -vk 'http://127.0.0.1/myservice/test?timeout=120'

But this time I get an empty response - sometimes after 35s, sometimes after 90s, but it always ends this way:
* Curl_http_done: called premature == 0
* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server

I have no more ideas about what might be happening. It looks as if nginx was being randomly restarted and my connections were dropped.
For some reason the nginx process was being reloaded constantly - every 30 seconds - and with each reload all open connections were dropped.
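One way to spot this cadence is to pull reload events out of the controller's logs and compute the gaps between their timestamps. A sketch - the kubectl command in the comment and the sample log lines are illustrative assumptions, not output from this cluster:

```shell
# In a live cluster the input would come from the controller's logs, e.g.:
#   kubectl -n ingress-nginx logs <controller-pod> | grep -i reload
# SAMPLE below is an illustrative stand-in for that stream.
SAMPLE='10:00:00 Backend successfully reloaded.
10:00:30 Backend successfully reloaded.
10:01:00 Backend successfully reloaded.'

# Convert HH:MM:SS to seconds and print the delta between consecutive events.
echo "$SAMPLE" | awk -F'[: ]' '{
  t = $1*3600 + $2*60 + $3
  if (NR > 1) print t - prev, "seconds since previous reload"
  prev = t
}'
```

If the deltas cluster around a fixed interval, something is triggering reloads on a timer rather than on genuine configuration changes.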
The solution is to set:

worker-shutdown-timeout: "900s"

in the nginx-ingress ConfigMap. This setting controls how long old nginx worker processes may keep serving in-flight connections after a configuration reload before they are killed; raised to 900s, a reload no longer cuts off long-running requests.
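For completeness, a minimal ConfigMap fragment - the name and namespace depend on how ingress-nginx was installed, so the ones below are assumptions to adjust:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name; match your ingress-nginx install
  namespace: ingress-nginx    # assumed namespace
data:
  # Give old workers up to 900s to finish in-flight requests after a reload,
  # matching the proxy-read-timeout used above.
  worker-shutdown-timeout: "900s"
```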