Nginx returning status 400 when using Kubernetes ingress

4/9/2019

I'm setting up container deployments with a backend server and a socket server. When I connect to my web server endpoint it works fine, but connecting to the socket server endpoint returns a 400.

I read through several related topics, such as: WebSocket handshake: Unexpected response code: 400 in kubernetes-ingress.

But adding the websocket-services annotation and the proxy timeout annotations did not help.
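For reference, this is roughly what those annotations look like. Note that nginx.org/websocket-services is specific to the NGINXinc controller; kubernetes/ingress-nginx supports WebSocket upgrades out of the box and usually only needs the proxy timeouts raised (the service name web matches the manifests below):

metadata:
  annotations:
    # NGINXinc controller only -- ignored by kubernetes/ingress-nginx:
    nginx.org/websocket-services: "web"
    # kubernetes/ingress-nginx -- keep idle WebSocket connections open
    # longer than the default 60s:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"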

When I port-forward to the deployment's pod, it works fine, so the problem must be in the nginx ingress controller.
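For example, a quick check along these lines (the deployment name web and port 9092 are assumptions based on the service and the describe output below):

kubectl port-forward deployment/web 9092:9092 -n backend
# connecting to localhost:9092 completes the WebSocket handshake,
# so the backend itself handles the upgrade correctly

Here is my ingress.yml file: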

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: {{ include "namespace" . }}
  labels:
    helm.sh/chart: {{ include "chartname" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/stage: {{ .Values.stage }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  tls:
  - secretName: tls-certificate
    hosts:
    {{ range .Values.hosts }}
    - {{ . | quote }}
    {{ end }}
  rules:
  - host: mydomain.tk
    http:
      paths:
        - path: /
          backend:
            serviceName: web
            servicePort: http
        - path: /socket
          backend:
            serviceName: web
            servicePort: socket

And here is my service.yml file:

apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: {{ include "namespace" . }}
  labels:
    helm.sh/chart: {{ include "chartname" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/stage: {{ .Values.stage }}
spec:
  type: NodePort
  sessionAffinity: ClientIP
  ports:
  - port: {{ .Values.web.port }}
    targetPort: http
    protocol: TCP
    name: http
  - port: {{ .Values.web.socket }}
    targetPort: socket
    protocol: TCP
    name: socket
  selector:
    name: web

No matter what I do, it always returns a 400 on the /socket endpoint.

I'm currently using the latest version of the nginx ingress controller and GKE version 1.11.7.

The output of kubectl describe ingress:

Name:             ingress
Namespace:        backend
Address:
Default backend:  default-http-backend:80 (10.48.0.9:8080)
TLS:
  tls-certificate terminates mydomain.tk
Rules:
  Host     Path  Backends
  ----     ----  --------
  mydomain.tk
           /         web:http (10.48.0.28:8080)
           /socket   web:socket (10.48.0.28:9092)
Annotations:
  kubernetes.io/ingress.class:                     nginx
  nginx.ingress.kubernetes.io/proxy-read-timeout:  3600
  nginx.ingress.kubernetes.io/proxy-send-timeout:  3600
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  13s   nginx-ingress-controller  Ingress backend/ingress
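One way to confirm the 400 is generated by the controller rather than the backend is to tail the controller's access log while reproducing the handshake (assuming the controller deployment is named nginx-ingress-controller and lives in the ingress-nginx namespace):

kubectl logs -n ingress-nginx deployment/nginx-ingress-controller --tail=100 | grep ' 400 '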
-- Stefan Gies
kubernetes
nginx
nginx-ingress
socket.io

2 Answers

3/11/2020

In my case the issue was caused by a header line sent by the client that was too long. I resolved it by setting/increasing the following configuration parameters in the NGINX Ingress Controller ConfigMap:
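(The exact parameters are not quoted above; what follows is a minimal sketch of the kind of ConfigMap change involved, using the header-size keys ingress-nginx documents. The values are illustrative.)

apiVersion: v1
kind: ConfigMap
metadata:
  # must match the ConfigMap the controller was started with
  # (its --configmap flag); nginx-configuration is a common default
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # number and size of buffers for long request header lines:
  large-client-header-buffers: "4 16k"
  # per-field and total header limits for HTTP/2 requests:
  http2-max-field-size: "16k"
  http2-max-header-size: "32k"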

-- Karapet Kostandyan
Source: StackOverflow

4/11/2019

I got the problem resolved by adding cross-origin headers to my applications.
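If changing the applications is not an option, kubernetes/ingress-nginx can also inject CORS headers at the edge through annotations; a sketch (the allowed origin below is a placeholder):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    # placeholder -- restrict to the real front-end origin:
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://mydomain.tk"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"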

-- Stefan Gies
Source: StackOverflow