epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream

3/20/2020

I have a web page that works fine, except for one path, /user/reg, which returns a 502 error because it takes a little longer to process. I have added liveness and readiness probes to the container, but I still have the same issue. I am using a Kubernetes Ingress, and the complete stack is deployed on GCP.

Below is my ingress config

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "nonprod"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/ingress.allow-http: "false"
    ingress.kubernetes.io/upstream-max-fails: "999"
    ingress.kubernetes.io/upstream-fail-timeout: "999"
  name: nonprod
  namespace: nonprod
spec:
  tls:
  - hosts:
    - hostname.example.com
    secretName: nonprod-tls
  rules:
    - host: hostname.example.com
      http:
        paths:
        - path: /*
          backend:
            serviceName: nonprod-nodeport
            servicePort: 80

Below is my deployment yaml

        image: gcr.io/image:v1
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: drupal
        ports:
        - containerPort: 80
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 2Gi

I have checked other similar questions, but they didn't help.

When I checked the logs, I saw:

2020/03/20 06:46:35 [info] 91#91: *60234 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 10.44.0.1, server: example.com, request: "POST /user/register HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: "hostname.example.com", referrer: "https://hostname.example.com/user/register"
10.44.0.1 - - [20/Mar/2020:06:46:35 +0000] "POST /user/register HTTP/1.1" 499 0 "https://hostname.example.com/user/register" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36 Edg/80.0.361.66" "27.59.32.87, 34.107.231.112"
10.44.0.1 - - [20/Mar/2020:06:46:35 +0000] "GET / HTTP/1.1" 200 17662 "-" "kube-probe/1.15+" "-"
-- sri05
google-kubernetes-engine
kubernetes
nginx

1 Answer

3/24/2020

I was able to fix the issue by increasing the load balancer's backend service timeout. The 499 in the nginx access log was the tell: the "client" that prematurely closed the connection was the GCP load balancer itself, giving up on the slow /user/register request and returning a 502 to the browser.
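On GKE, one supported way to raise that timeout is a BackendConfig attached to the Service behind the Ingress; the backend service timeout defaults to 30 seconds. Below is a minimal sketch assuming the nonprod-nodeport Service from the question; the BackendConfig name, the 120-second value, and the selector label are placeholders to adapt (older GKE versions use apiVersion cloud.google.com/v1beta1):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: nonprod-timeout          # hypothetical name
  namespace: nonprod
spec:
  timeoutSec: 120                # placeholder; pick a value longer than the slowest request
---
apiVersion: v1
kind: Service
metadata:
  name: nonprod-nodeport
  namespace: nonprod
  annotations:
    # attach the BackendConfig to the port the Ingress routes to
    cloud.google.com/backend-config: '{"ports": {"80": "nonprod-timeout"}}'
spec:
  type: NodePort
  selector:
    app: drupal                  # assumption; must match your pod labels
  ports:
  - port: 80
    targetPort: 80

The timeout can also be changed directly with gcloud compute backend-services update <backend-name> --timeout=120 --global, but edits made that way can be reverted when the ingress controller re-syncs, so the BackendConfig is the durable option.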

-- sri05
Source: StackOverflow