How to debug failed requests with client_disconnected_before_any_response

9/10/2018

We have an HTTP(S) load balancer created by a Kubernetes ingress, which points to a backend formed by a set of pods running nginx and Ruby on Rails.

Looking at the load balancer logs, we have detected an increasing number of requests with a response code of 0 and statusDetails = client_disconnected_before_any_response.

We're trying to understand why this is happening, but we haven't found anything relevant. There is nothing in the nginx access or error logs.

This is happening for multiple kinds of requests, from GET to POST.

We also suspect that sometimes, despite the request being logged with that error, it is actually passed to the backend. For instance, we're seeing PG::UniqueViolation errors on our sign-up endpoint, caused by identical sign-up requests being sent to the backend twice.

Any kind of help would be appreciated. Thanks!


UPDATE 1

As requested, here is the YAML file for the ingress resource:


UPDATE 2

I've created a log-based Stackdriver metric, to count the number of requests that present this behavior. Here is the chart:

chart
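For reference, the metric counts HTTP(S) load balancer log entries matching a filter along these lines (the exact filter is paraphrased; field names follow the load balancer log schema):

```
resource.type="http_load_balancer"
httpRequest.status=0
jsonPayload.statusDetails="client_disconnected_before_any_response"
```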

The big peaks approximately match the timestamp for these kubernetes events:

events

Full error: Readiness probe failed: Get http://10.48.1.28:80/health_check: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

So it seems the readiness probe for the pods behind the backend sometimes fails, but not always.

Here is the definition of the readinessProbe:

readinessProbe:
  failureThreshold: 3
  httpGet:
    httpHeaders:
    - name: X-Forwarded-Proto
      value: https
    - name: Host
      value: [redacted]
    path: /health_check
    port: 80
    scheme: HTTP
  initialDelaySeconds: 1
  periodSeconds: 30
  successThreshold: 1
  timeoutSeconds: 5
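One thing we may try (just a guess at this point) is giving the probe more headroom, since /health_check occasionally takes longer than the 5-second timeout:

```yaml
readinessProbe:
  # ...same httpGet block as above...
  timeoutSeconds: 10  # raised from 5 to tolerate slow /health_check responses
```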
-- jävi
google-cloud-networking
google-kubernetes-engine
kubernetes
kubernetes-ingress

1 Answer

9/12/2018

A response code of 0 and statusDetails = client_disconnected_before_any_response means the client closed the connection before the load balancer was able to provide a response, as per this GCP documentation.

As for why it did not respond in time, one possible reason is a mismatch between the keepalive timeouts of nginx and the GCP load balancer, although that would most likely produce a backend_connection_closed_before_data_sent_to_client error instead, caused by a 502 Bad Gateway race condition.
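If the keepalive mismatch does turn out to be the issue, the usual remedy (an nginx config sketch, not verified against your setup) is to make nginx's keepalive timeout longer than the load balancer's, since GCP keeps backend connections open for 600 seconds while nginx defaults to 75:

```nginx
http {
    # Keep idle connections open longer than the GCP load
    # balancer's 600 s keepalive, so nginx never closes a
    # connection the LB still considers usable.
    keepalive_timeout 650s;
}
```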

To make sure the backend responds to the request, and to see how long it takes, you can repeat this process a few times (since you still get some valid responses):

curl response time

$ curl -w "@curl.txt" -o /dev/null -s IP_HERE

curl.txt content (create and save this file first):

   time_namelookup:  %{time_namelookup}\n
      time_connect:  %{time_connect}\n
   time_appconnect:  %{time_appconnect}\n
  time_pretransfer:  %{time_pretransfer}\n
     time_redirect:  %{time_redirect}\n
time_starttransfer:  %{time_starttransfer}\n
                ----------\n
        time_total:  %{time_total}\n

If this is the case, please review the sign-up endpoint code for retry loops or duplicate submissions that could explain the PG::UniqueViolation errors you mentioned.
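One way to make the endpoint tolerate a retried request is to treat the unique violation as "already created" and return the existing record. A minimal sketch in plain Ruby (no Rails; `UserStore` and `DuplicateKeyError` are hypothetical stand-ins for your model and PG::UniqueViolation):

```ruby
# Stand-in for PG::UniqueViolation raised on a duplicate INSERT.
class DuplicateKeyError < StandardError; end

class UserStore
  def initialize
    @users = {}
  end

  # Simulates an INSERT guarded by a unique index on email.
  def insert!(email)
    raise DuplicateKeyError, email if @users.key?(email)
    @users[email] = { email: email }
  end

  # Idempotent create: if a retried request hits the unique
  # constraint, return the already-created record instead of
  # letting the error bubble up as a 500.
  def find_or_create(email)
    insert!(email)
  rescue DuplicateKeyError
    @users[email]
  end
end
```

In Rails itself the equivalent would be rescuing ActiveRecord::RecordNotUnique (or using find_or_create_by) around the sign-up create.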

-- Daniel
Source: StackOverflow