Ingress-nginx https and wss sample applications not working. 400 error. AWS EKS NLB

1/25/2022

I've been working on this for a while and I'm stuck, so I'm hoping someone with a bit more experience can help me out.

I'm on AWS EKS (created via eksctl, all latest versions). I'm trying to create a single NLB that points to an ingress-nginx service on the cluster, which can then route traffic to other services via ingress rules.

To create the NLB, I first install aws-load-balancer-controller:

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=$CLUSTER_NAME
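
To confirm the controller actually comes up, the deployment can be checked with the usual kubectl commands (the full describe output is below), e.g.:

kubectl -n kube-system rollout status deployment/aws-load-balancer-controller
kubectl -n kube-system get pods -l app.kubernetes.io/name=aws-load-balancer-controller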

Here is some info about the deployment it creates:

❯ kubectl describe deployment -n kube-system aws-load-balancer-controller
Name:                   aws-load-balancer-controller
Namespace:              kube-system
CreationTimestamp:      Thu, 16 Dec 2021 10:19:25 -0500
Labels:                 app.kubernetes.io/instance=aws-load-balancer-controller
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=aws-load-balancer-controller
                        app.kubernetes.io/version=v2.2.4
                        helm.sh/chart=aws-load-balancer-controller-1.2.7
                        objectset.rio.cattle.io/hash=38789cb4dbde08053f6bc0f04161a585653319af
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: aws-load-balancer-controller
                        meta.helm.sh/release-namespace: kube-system
                        objectset.rio.cattle.io/id: default-dev-eks-eks-aws-load-balancer-controller
Selector:               app.kubernetes.io/instance=aws-load-balancer-controller,app.kubernetes.io/name=aws-load-balancer-controller
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=aws-load-balancer-controller
                    app.kubernetes.io/name=aws-load-balancer-controller
  Annotations:      prometheus.io/port: 8080
                    prometheus.io/scrape: true
  Service Account:  aws-load-balancer-controller
  Containers:
   aws-load-balancer-controller:
    Image:       602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.2.4
    Ports:       9443/TCP, 8080/TCP
    Host Ports:  0/TCP, 0/TCP
    Command:
      /controller
    Args:
      --cluster-name=c-lp45l
      --ingress-class=alb
    Liveness:     http-get http://:61779/healthz delay=30s timeout=10s period=10s #success=1 #failure=2
    Environment:  <none>
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
  Volumes:
   cert:
    Type:               Secret (a volume populated by a Secret)
    SecretName:         aws-load-balancer-tls
    Optional:           false
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   aws-load-balancer-controller-7d4944f54 (2/2 replicas created)
Events:          <none>

Then I install ingress-nginx using helmfile like so:

repositories:
- name: ingress-nginx
  url: https://kubernetes.github.io/ingress-nginx

releases:
  # Published chart example
  - name: ingress-nginx
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    version: "4.0.16"
    values:
      - controller:
          config:
            force-ssl-redirect: "true"
          service:
            type: LoadBalancer
            annotations:
              service.beta.kubernetes.io/aws-load-balancer-type: "external"
              service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
              service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
              service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{requiredEnv "SSL_ARN"}}
              service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
              service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
              service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

And apply it with:

SSL_ARN=... helmfile -f helmfile.yaml apply

The description of the service it creates is as follows:

❯ kubectl describe service ingress-nginx-controller -n ingress-nginx
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.1.1
                          helm.sh/chart=ingress-nginx-4.0.16
Annotations:              
                          meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: ingress-nginx
                          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
                          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
                          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
                          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
                          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: $SSL_ARN
                          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443
                          service.beta.kubernetes.io/aws-load-balancer-type: external
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       ...
IPs:                      ...
LoadBalancer Ingress:     k8s-ingressn-ingressn-<RANDOM_NUMBER>.elb.us-east-1.amazonaws.com
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32361/TCP
Endpoints:                ...
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31339/TCP
Endpoints:                ...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Then I notice that an NLB shows up. Here is its description:

{
    "LoadBalancers": [
        {
            "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:...:loadbalancer/net/k8s-ingressn-ingressn-.../...",
            "DNSName": "k8s-ingressn-ingressn-<RANDOM_NUMBER>.elb.us-east-1.amazonaws.com",
            "CanonicalHostedZoneId": "...",
            "CreatedTime": "2022-01-25T17:37:44.916000+00:00",
            "LoadBalancerName": "k8s-ingressn-ingressn-...",
            "Scheme": "internet-facing",
            "VpcId": "vpc-...",
            "State": {
                "Code": "active"
            },
            "Type": "network",
            "AvailabilityZones": [
                {
                    "ZoneName": "us-east-1a",
                    "SubnetId": "subnet-...",
                    "LoadBalancerAddresses": []
                },
                {
                    "ZoneName": "us-east-1d",
                    "SubnetId": "subnet-...",
                    "LoadBalancerAddresses": []
                },
                {
                    "ZoneName": "us-east-1c",
                    "SubnetId": "subnet-...",
                    "LoadBalancerAddresses": []
                },
                {
                    "ZoneName": "us-east-1b",
                    "SubnetId": "subnet-...",
                    "LoadBalancerAddresses": []
                }
            ],
            "IpAddressType": "ipv4"
        }
    ]
}
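
(For reference, that description comes from the aws CLI, with something like the following; the load balancer name is elided above:)

aws elbv2 describe-load-balancers --names k8s-ingressn-ingressn-...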

If I curl this endpoint, I get the following (the exact command is shown after the response):

<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
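
The curl was just the NLB hostname over plain http, something like:

curl -v http://k8s-ingressn-ingressn-<RANDOM_NUMBER>.elb.us-east-1.amazonaws.com/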

Not good, but I'm going to continue.

So I create my Route53 A record, and the CNAME for my SSL certificate (which is managed by ACM).
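
For what it's worth, the A record is just an alias to the NLB. A sketch of the change batch (saved as change-batch.json), with app.example.com and the zone IDs standing in for my real values:

{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<CanonicalHostedZoneId of the NLB, from the describe output above>",
        "DNSName": "k8s-ingressn-ingressn-<RANDOM_NUMBER>.elb.us-east-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}

aws route53 change-resource-record-sets \
  --hosted-zone-id <MY_ROUTE53_ZONE_ID> \
  --change-batch file://change-batch.json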

Now I curl my new Route53 A record, over both http and https, roughly like this (app.example.com again stands in for my actual record):
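
curl -v http://app.example.com/
curl -v https://app.example.com/

Both return: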

<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>

Now I create two sample apps: one for testing http traffic, the other for testing websocket traffic.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-sample-app
  namespace: sample-apps
  labels:
    app: http
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http
  template:
    metadata:
      labels:
        app: http
    spec:
      containers:
      - name: httpbin
        image: kennethreitz/httpbin
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 1000m
            memory: 1000Mi
          limits:
            cpu: 1000m
            memory: 1000Mi
---
apiVersion: v1
kind: Service
metadata:
  name: http-sample-service
  namespace: sample-apps
spec:
  selector:
    app: http
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
# REF: https://gist.github.com/jsdevtom/7045c03c021ce46b08cb3f41db0d76da
# REF: https://github.com/kubernetes/ingress-nginx/issues/1822
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ws-sample-app
  namespace: sample-apps
  labels:
    app: ws
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ws
  template:
    metadata:
      labels:
        app: ws
    spec:
      containers:
      - name: ws-test
        image: ksdn117/web-socket-test
        ports:
        - containerPort: 8010
        resources:
          requests:
            cpu: 1000m
            memory: 1000Mi
          limits:
            cpu: 1000m
            memory: 1000Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ws-sample-service
  namespace: sample-apps
spec:
  selector:
    app: ws
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8010
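
I apply these with plain kubectl (sample-apps.yaml is just what I saved the manifests above as, and the sample-apps namespace already exists):

kubectl apply -f sample-apps.yaml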

And then the ingress resource:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-sample-services
  namespace: sample-apps
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
  - host: ...
    http:
      paths:
      - path: /sample/http
        backend:
          serviceName: http-sample-service
          servicePort: 80
      - path: /sample/ws
        backend:
          serviceName: ws-sample-service
          servicePort: 80
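
To double-check that the nginx controller picks the Ingress up, it can be inspected with the usual commands:

kubectl get ingress -n sample-apps
kubectl describe ingress ingress-sample-services -n sample-apps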

Now I try to curl the http sample path through the ingress, roughly like this (same placeholder host):
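
curl https://app.example.com/sample/http

And get: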

<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>

And I try the websocket path with wscat (same placeholder host):
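
wscat -c wss://app.example.com/sample/ws

which fails with: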

error: Unexpected server response: 400

So, just to verify, I try port-forwarding to each of these services.

First I port-forward to ingress-nginx:

kubectl port-forward svc/ingress-nginx-controller -n ingress-nginx 8081:80
curl localhost:8081

And receive:

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

So that's a 404, not a 400.

Then I try http-sample-service:

kubectl port-forward svc/http-sample-service -n sample-apps 8081:80
curl localhost:8081

I get a normal response (a whole web page, too big to copy here).

Then I try ws-sample-service:

kubectl port-forward svc/ws-sample-service -n sample-apps 8081:80
wscat -c localhost:8081

It connects.

Thank you so much for reading. Any ideas are appreciated. Ultimately this needs to be able to serve https and wss traffic.

-- Ryan