We are intermittently getting socket timeout errors while transferring huge files to Kubernetes through a custom REST API. Oddly, the transfers work fine during off-work hours.
We are able to send a huge file with the following annotations on the Ingress (a minimal sketch of the full manifest follows the list):
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/proxy-body-size: "20G"
  nginx.ingress.kubernetes.io/proxy-read-timeout: "300000"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "300000"
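For context, here is a minimal sketch of the Ingress these annotations sit on. The name, host, backend Service, and port are placeholders, not our real values:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: file-upload-api                 # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "20G"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300000"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300000"
spec:
  rules:
    - host: upload.example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: file-upload-svc   # placeholder backend Service
                port:
                  number: 8080
```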
However, I found this blog post, https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/, and tried adding the following to the Deployment YAML:
- name: STARTUP_SCRIPT
  value: |
    #! /bin/bash
    echo 1 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal
    echo done
I don't see it getting triggered, and the issue isn't fixed either. Has anyone faced a similar issue, or can you recommend an alternative fix?
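For reference, my understanding is that the STARTUP_SCRIPT env var is only consumed by a dedicated startup-script DaemonSet image (e.g. on GKE nodes), not by an ordinary Deployment container, which may be why it never runs for us. The alternative I'm considering is a small privileged DaemonSet that writes the conntrack flag directly on every node. This is an untested sketch under that assumption; all names are mine:

```yaml
# Untested sketch: a privileged DaemonSet that sets the liberal conntrack
# flag on each node. Names are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: conntrack-liberal               # placeholder name
spec:
  selector:
    matchLabels:
      app: conntrack-liberal
  template:
    metadata:
      labels:
        app: conntrack-liberal
    spec:
      hostNetwork: true                 # /proc/sys/net then refers to the host's netns
      containers:
        - name: sysctl
          image: busybox:1.36
          securityContext:
            privileged: true            # required to write host sysctls
          command:
            - sh
            - -c
            - |
              # Try the current sysctl path first, fall back to the legacy ipv4 alias
              echo 1 > /proc/sys/net/netfilter/nf_conntrack_tcp_be_liberal \
                || echo 1 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal
              echo done
              # Keep the container alive so the DaemonSet doesn't restart-loop
              while :; do sleep 3600; done
```

Would this be a sane way to apply the fix from the blog post, or is there a better-supported option?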