Kubernetes AWS EKS Failed to load resource: net::ERR_NAME_NOT_RESOLVED

7/29/2020

I have an issue with the following AWS EKS deployment, where the frontend always gets a Failed to load resource: net::ERR_NAME_NOT_RESOLVED error from the backend:

Failed to load resource: net::ERR_NAME_NOT_RESOLVED

The reason appears to be that the frontend app runs in the user's browser, which cannot resolve the cluster-internal Service name of the backend API, http://restapi-auth-nginx/api/

(see attached browser image)

Here are the details of the configuration

- file: restapi-auth-api.yaml
  Description: Backend API using GUNICORN
  Details: Correctly downloads the image and creates the pods. I can run kubectl exec -it <podId> /bin/bash. Listens on port 5000.
- file: restapi-auth-nginx.yaml
  Description: NGINX proxy for the API
  Details: Correctly downloads the image and creates the pods. I can run kubectl exec -it <podId> /bin/bash, and I can also reach the API pod from the NGINX pod, so this part is working fine.
- file: frontend.yaml
  Description: NGINX proxy plus Angular app in a multi-stage deployment
  Details: Correctly downloads the image and creates the pods. I can run kubectl exec -it <podId> /bin/ash, and I can also reach the API pod from the frontend pod, so this part is working fine.

However, from the browser I still get the above error, even though all the components appear to be working fine.

(see the image of the website working from the browser)

Let me show how I can access the API through its NGINX pod from the frontend pod. Here are the pods:

kubectl get  pods
NAME                                  READY   STATUS    RESTARTS   AGE
frontend-7674f4d9bf-jbd2q             1/1     Running   0          35m
restapi-auth-857f94b669-j8m7t         1/1     Running   0          39m
restapi-auth-nginx-5d885c7b69-xt6hf   1/1     Running   0          38m
udagram-frontend-5fbc78956c-nvl8d     1/1     Running   0          41m

Now let's log into the frontend pod and curl the NGINX proxy that serves the API.

Here is the curl request we will send directly from the frontend pod to the NGINX backend:

curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "email":"david@me.com",
> "password":"SuperPass"
> }'

Now let's log into the frontend pod and see if it works

kubectl exec -it frontend-7674f4d9bf-jbd2q /bin/ash
/usr/share/nginx/html # curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "email":"david@me.com",
> "password":"SuperPass"
> }'
{
  "auth": true,
  "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyIjoiZGF2aWRAcHlta455aS5vcmciLCJleHAiOjE1OTYwNTk7896.OIkuwLsyLhrlCMTVlccg8524OUMnkJ2qJ5fkj-7J5W0",
  "user": "david@me.com"
}

It works perfectly, meaning that the frontend pod is correctly communicating with the restapi-auth-nginx API reverse proxy.

Here in this image, you have the output of multiple commands


Here are the .yaml files

LOAD BALANCER and FRONT END

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: udagram
spec:
  replicas: 1
  selector:
    matchLabels:
      app: udagram
      tier: frontend
  template:
    metadata:
      labels:
        app: udagram
        tier: frontend 
    spec:
      containers:
        - name: udagram-frontend
          image: pythonss/frontend_udacity_app
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          imagePullPolicy: Always
          ports:
          - containerPort: 80
      imagePullSecrets:
        - name: regcred

---

apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
  labels:
    app: udagram
    tier: frontend
  
spec:
  type: LoadBalancer
  ports:
  -  port: 80
  selector:
     app: udagram
     tier: frontend

Nginx reverse proxy for API backend

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: restapi-auth-nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: restapi-auth-nginx
    spec:
      containers:
      - image:  pythonss/restapi_auth_microservice_nginx
        imagePullPolicy: Always
        name: restapi-auth-nginx-nginx
        ports:
        - containerPort: 80
        resources: {}
      imagePullSecrets: 
      - name: regcred
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}

---

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: restapi-auth-nginx
status:
  loadBalancer: {}

For brevity, I will not share the API app server .yaml file.

So my questions are:

How could I grant access from the internet to the backend API gateway without exposing the API to the world?

Or should I expose the API through an LB, like so:

apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
  -  port: 80
  selector:
     io.kompose.service: restapi-auth-nginx

This would solve the issue, as it exposes the API. However, I would then need to add the frontend LB's origin to the API's CORS configuration, and give the backend LB's address to the frontend so it can make the calls.
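An alternative that avoids CORS entirely: since the frontend image already runs NGINX, that NGINX could proxy /api to the in-cluster Service, so the browser only ever talks to the page's own origin. A minimal sketch of the relevant location block (a hypothetical addition to the frontend's server config, not the actual file):

```nginx
# Hypothetical addition to the frontend's NGINX server block:
# requests to /api/... are forwarded inside the cluster, so the browser
# never needs to resolve restapi-auth-nginx and no CORS headers are required.
location /api/ {
    proxy_pass http://restapi-auth-nginx/api/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

With this in place, the Angular app would call relative URLs like /api/users/auth/login, and no second LoadBalancer would be needed.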

Could someone explain how to do this otherwise, without exposing the API? What are the common patterns for this architecture?
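For reference, one common pattern is a single Ingress in front of both Services, so there is one public entry point and the API is reachable only as a path on it. A sketch, assuming an NGINX ingress controller is installed in the cluster (the Service names come from the manifests above; apiVersion networking.k8s.io/v1 requires Kubernetes 1.19+):

```yaml
# Hypothetical Ingress: one public entry point, routing / to the frontend
# and /api to the in-cluster NGINX Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: udagram-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: restapi-auth-nginx
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-lb
            port:
              number: 80
```

Because frontend and API then share one origin, no CORS configuration is needed.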

BR

-- MasterOfTheHouse
amazon-eks
kubernetes
kubernetes-ingress
kubernetes-pod
nginx-reverse-proxy

1 Answer

7/31/2020

The solution is to simply expose the NGINX pods (note that the docker-compose setup creates two images) through a LoadBalancer Service.

We need this additional yaml:

apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
  -  port: 80
  selector:
     io.kompose.service: restapi-auth-nginx

Now we will have 3 Services and 2 Deployments. This latest Service exposes the API to the world.

Below is an image for this deployment


As you can see, the world has access to this latest Service only. The NGINX and GUNICORN pods themselves are unreachable from the internet.

Now your frontend application can access the API through the exposed LB (represented in black) inside the Kubernetes cluster.
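The frontend build then needs to point at the backend LB's external hostname. A hypothetical Angular environment file (the hostname is a placeholder; the real one is the EXTERNAL-IP shown by kubectl get svc backend-lb):

```typescript
// environment.prod.ts — hypothetical; replace the host with the EXTERNAL-IP
// reported by `kubectl get svc backend-lb`.
export const environment = {
  production: true,
  apiUrl: 'http://a1b2c3d4e5f6.us-east-1.elb.amazonaws.com/api'
};
```

Because the browser now calls a different origin than the one serving the page, the API must also send CORS headers allowing the frontend LB's origin.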

-- MasterOfTheHouse
Source: StackOverflow