502 Bad Gateway on Angular app deployed on K8s cluster

4/14/2020

I have deployed 3 services on a k8s cluster which uses a Traefik ingress controller. I get a 502 Bad Gateway error when accessing my Angular front end, but my backend Node server and MongoDB work fine.

I have tried an Nginx ingress controller setup with the same result. I am aware a production build of the app would be better for the final deployment, but as far as I know dev access should still be possible. The Traefik ingress routes to the correct IP and port, but fails somewhere along the way. I have exec'd into the 'frontend' pod, and curl confirms that the page is being served on localhost:4200 as expected.

My docker-compose file is as follows:

version: '3.7'

services:
    frontend:
        image: [image location]
        ports:
            - "4200"
        volumes:
            - ./frontend:/app

    s3-server:
        image: [image location]
        ports:
            - "3000"
        links:
            - database

    database:
        image: mongo
        ports:
            - "27017"

My Traefik ingress yaml is as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: [domainname] 
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 4200
      - path: /api
        backend:
          serviceName: s3-server
          servicePort: 3000
      - path: /db
        backend:
          serviceName: database
          servicePort: 27017

frontend service yaml (generated with kompose):

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: frontend
  name: frontend
spec:
  ports:
  - name: "4200"
    port: 4200
    targetPort: 4200
  selector:
    io.kompose.service: frontend
status:
  loadBalancer: {}

Ingress shows:

  Host              Path  Backends
  ----              ----  --------
  api.cailean.tech  
                    /      frontend:4200 (192.168.1.27:4200)
                    /api   s3-server:3000 (192.168.2.20:3000)
                    /db    database:27017 (192.168.2.14:27017)

Pods show:

pod/database-798b8df4bd-zzxpx    1/1     Running   0          17h   192.168.2.14   kube-node-ea4d   <none>           <none>
pod/s3-server-76dd6b6b57-pq9lp   1/1     Running   0          15h   192.168.2.20   kube-node-ea4d   <none>           <none>
pod/nginx-86c57db685-qbcd4       1/1     Running   0          47m   192.168.1.26   kube-node-f94c   <none>           <none>
pod/frontend-5b8c7979d8-fggr6    1/1     Running   0          18m   192.168.1.27   kube-node-f94c   <none>           <none>

k describe svc frontend:

Name:              frontend
Namespace:         default
Labels:            io.kompose.service=frontend
Annotations:       kompose.cmd: kompose convert
                   kompose.version: 1.21.0 ()
                   kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"kompose.cmd":"kompose convert","kompose.version":"1.21.0 ()"},"creationTim...
Selector:          io.kompose.service=frontend
Type:              ClusterIP
IP:                192.168.141.36
Port:              4200  4200/TCP
TargetPort:        4200/TCP
Endpoints:         192.168.1.27:4200
Session Affinity:  None
Events:            <none>

When connected to an Nginx server pod (set up for testing), curling the IP address of the frontend pod gives connection refused:

* Expire in 0 ms for 6 (transfer 0x559ae5cb5f50)
*   Trying 192.168.1.27...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x559ae5cb5f50)
* connect to 192.168.1.27 port 4200 failed: Connection refused
* Failed to connect to 192.168.1.27 port 4200: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.1.27 port 4200: Connection refused

Curl from inside the frontend pod gives me:

* Rebuilt URL to: localhost:4200/
*   Trying ::1...
* TCP_NODELAY set
* connect to ::1 port 4200 failed: Connection refused
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 4200 (#0)
> GET / HTTP/1.1
> Host: localhost:4200
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Accept-Ranges: bytes
< Content-Type: text/html; charset=UTF-8
< Content-Length: 761
< ETag: W/"2f9-Ft4snhWFNqmPXU8vVB/M50CiWRU"
< Date: Tue, 14 Apr 2020 12:54:50 GMT
< Connection: keep-alive
< 
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>S3 Manoeuvre Selector</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <link href="https://fonts.googleapis.com/css?family=Roboto:300,400,500&display=swap" rel="stylesheet">
  <link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
</head>
<body class="mat-typography">
  <app-root></app-root>
<script src="runtime.js" type="module"></script><script src="polyfills.js" type="module"></script><script src="styles.js" type="module"></script><script src="vendor.js" type="module"></script><script src="main.js" type="module"></script></body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host localhost left intact

As you can see, the initial IPv6 attempt (::1) is refused, but the IPv4 loopback (127.0.0.1) connects and serves the page correctly. Any solution to this?

Any ideas why I get a Bad Gateway for the Angular frontend but not for the MongoDB or Express API?

Update: Making an arbitrary change to the service yaml and re-running 'kubectl apply -f' 4-6 times eventually clears the Bad Gateway error and everything works as expected, even when the yaml ends up identical to the one initially used to create the service. I cannot find any reason why this might be...

-- cwelsh4
angular
bad-gateway
kubernetes
traefik
traefik-ingress

2 Answers

4/14/2020

It seems that you have defined your service as a LoadBalancer type. LoadBalancer is the type you use at the "outermost" scope, exposed to the external network, while a ClusterIP service is a better fit for traffic within the cluster itself.

Your ingress controller will take care of the load balancing and routing for you (the controller itself is the one that should have a LoadBalancer service, if you are on a platform which supports that).
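
For example, a plain ClusterIP service for the frontend would look something like the below (ClusterIP is the default, so the explicit type line is optional; selector and ports copied from your yaml):

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP           # the default; only reachable from inside the cluster
  selector:
    io.kompose.service: frontend
  ports:
  - port: 4200              # port the ingress routes to
    targetPort: 4200        # port the container should be listening on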

-- Jite
Source: StackOverflow

4/14/2020

It seems you may be mixing up the front end with the s3-server pod. The service itself looks good, since its Endpoints are populated. Connection refused against a pod IP generally means that no container in the pod is listening on the port (4200) you are curling.
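
One thing worth checking: if the frontend container runs the Angular dev server, ng serve binds to 127.0.0.1 by default, which would explain exactly your curl results (localhost works inside the pod, the pod IP is refused). Assuming the image's entrypoint runs the Angular CLI, overriding the command in your compose file along these lines should make it listen on all interfaces:

frontend:
    image: [image location]
    command: ng serve --host 0.0.0.0 --port 4200    # bind to all interfaces so the pod IP (and thus the Service) is reachable
    ports:
        - "4200"
    volumes:
        - ./frontend:/app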

-- Arghya Sadhu
Source: StackOverflow