Kubernetes nginx ingress controller returns 502, but only for AJAX/XMLHttpRequest requests

10/27/2016

I have a web app running on Kubernetes behind an nginx ingress controller. It works fine for regular browser requests, but any AJAX/XMLHttpRequest from the browser gets a 502 error from nginx.

I captured the HTTP headers for both regular and AJAX requests and they look fine: correct Host header, protocol, etc. I am confused why only XMLHttpRequest requests get the 502 from nginx. There is no delay or hang; the 502 is immediate. The requests appear to never reach the app; they are rejected by nginx itself. If I switch nginx out for a direct load balancer, the problem goes away.

I am going to dig further, but I wondered: has anyone else using the nginx ingress controller seen this problem before and solved it?

I picked this error out of the nginx log, which suggests the container returned a response with a header too large for an nginx buffer. However, I checked nginx.conf and buffering is disabled: 'proxy_buffering off;'

2016/10/27 19:55:51 [error] 309#309: *43363 upstream sent too big header while reading response header from upstream, client: 10.20.51.1, server: foo.example.com, request: "GET /admin/pages/listview HTTP/2.0", upstream: "http://10.20.66.97:80/admin/pages/listview", host: "foo.example.com", referrer: "https://foo.example.com/admin/pages"

Strangely, you get the 502 error only if an XMLHttpRequest requests the URL. If I request the same URL with curl it works fine, and the response headers are as below. What about an AJAX/XMLHttpRequest of the same URL would make the response headers too large?

HTTP/1.1 200 OK
Server: nginx/1.11.3
Date: Thu, 27 Oct 2016 20:15:16 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 6596
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
X-Powered-By: PHP/5.5.9-1ubuntu4.19
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: max-age=0, must-revalidate, no-transform, no-cache, no-store
Pragma: no-cache
X-Controller: CMSPagesController
X-Title: Example+Site+-+Pages
X-Frame-Options: SAMEORIGIN
Vary: X-Requested-With,Accept-Encoding,User-Agent
Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
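(Note: the Vary: X-Requested-With header above suggests the app varies its response on that request header. A quick curl sketch to mimic the browser's XHR, assuming the app keys on the standard X-Requested-With header that XHR wrappers such as jQuery send; that assumption is not confirmed by the capture above:

# plain request, as above -- returns 200
curl -sS -o /dev/null -D - https://foo.example.com/admin/pages/listview

# simulated XMLHttpRequest -- should reproduce the 502 if the app
# really does branch on X-Requested-With
curl -sS -o /dev/null -D - \
  -H 'X-Requested-With: XMLHttpRequest' \
  https://foo.example.com/admin/pages/listview
)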
-- Aaron
kubernetes
nginx

1 Answer

10/28/2016

I resolved the issue. The reason only XMLHttpRequests failed was that the application has a special behavior when it sees an XMLHttpRequest: it dumps about 3000 bytes of extra headers into the response. That made the total header size larger than the default nginx header buffer.

nginx choking on large HTTP response headers is common, because its default header buffer is smaller than most other web servers': only 4k or 8k (one memory page). The fix was to increase the buffer used for headers to 16k by adding these settings:

proxy_buffers         8 16k;  # Buffer pool = 8 buffers of 16k per connection
proxy_buffer_size     16k;    # Buffer for the first part of the response (the headers)
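If you are running under the Kubernetes nginx ingress controller, don't edit nginx.conf by hand; the controller regenerates it. The usual route is the controller's ConfigMap. A sketch assuming an ingress-nginx-style controller (the key names and the ConfigMap name/namespace here are assumptions and vary by controller flavor and version, so check your controller's docs):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller   # assumption: whatever name your controller watches
  namespace: kube-system           # assumption: wherever your controller runs
data:
  proxy-buffer-size: "16k"         # rendered into nginx.conf as proxy_buffer_size 16k;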

The nginx documentation is pretty murky and these settings are ambiguously named. As I understand it, there is a pool of buffers for each connection, in this case 8 buffers of 16k each. These buffers are used for receiving data from the upstream web server and passing it back to the client.

So proxy_buffers defines the pool. Then proxy_buffer_size determines how much buffer space is available for receiving the HTTP headers from the upstream server (rounded up to a whole buffer, I think). A third setting, proxy_busy_buffers_size, determines how much of the buffer pool can be busy being sent to the client (again rounded up to a whole buffer, I think). By default, proxy_busy_buffers_size is two buffers' worth.

So the proxy_buffers pool must be big enough to fit proxy_busy_buffers_size and still have enough buffers left over to fit at least proxy_buffer_size worth of HTTP headers from the upstream web server.
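To make the constraint concrete, here is the arithmetic for one valid combination (my reading of the rule in the error message below; verify against the linked docs):

proxy_buffers           8 16k;  # pool: 8 x 16k = 128k per connection
proxy_buffer_size       16k;    # header buffer: one 16k buffer
proxy_busy_buffers_size 32k;    # must be < 128k - 16k = 112k, so 32k passes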

The net of all that is: if you increase proxy_busy_buffers_size at all, you'll probably immediately get the confusing error: "proxy_busy_buffers_size" must be less than the size of all "proxy_buffers" minus one buffer, and then you have to increase the size of the pool.

And what about the proxy_buffering off setting, you ask? Well, that does not disable proxy buffering! Rather, it controls whether nginx buffers up to the whole response (in the buffer pool or on disk) while sending it to the browser, or buffers only what fits in the buffer pool. So even if you turn proxy_buffering off, proxy buffering still happens.
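In other words, this combination is meaningful, not contradictory (sizes illustrative):

proxy_buffering   off;  # stream the body to the client as it arrives
proxy_buffer_size 16k;  # still used: the response headers must fit in this buffer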

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size

Is there a maximum for HTTP header sizes?

I saw a bunch of recommendations for setting very large buffer pools, e.g. 8 x 512k buffers (= 4MB). There is a buffer pool for each connection, so the smaller you keep the buffer pool, the more connections you can handle.
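A quick back-of-the-envelope comparison (worst case; nginx allocates buffers on demand, so these are upper bounds):

8 x 512k = 4MB  per connection -> 10,000 connections could pin up to ~40GB
8 x 16k  = 128k per connection -> 10,000 connections top out around ~1.25GB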

-- Aaron
Source: StackOverflow