I'm creating a web application that comprises a React frontend and a Node.js (Express) server. The frontend makes an internal API call to the Express server, and the Express server then makes an external API call to gather some data. The frontend and the server are in different containers within the same Kubernetes pod.
The frontend service is an nginx:1.14.0-alpine image. The static files are built (npm run build) in a CI pipeline and the build directory is copied to the image during docker build. The package.json contains a proxy key, "proxy": "http://localhost:8080", that routes traffic from the app to localhost:8080 - which is the port the Express server is listening on for the internal API call. I think the proxy key will have no bearing once the files are packaged into static files and served from an nginx image?
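To be concrete, the frontend only ever makes relative requests like the one below (/api/data is just an example path, not the real endpoint):

// /api/data is a hypothetical example path
fetch('/api/data')
  .then((res) => res.json())
  .then((data) => console.log(data));

In development the proxy key forwards this to localhost:8080; in the static production build, whatever serves the files has to do that forwarding instead.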
When running locally, i.e. running npm start instead of npm run build, this all works. The Express server picks up the API requests sent out by the frontend on port 8080.
The Express server is a simple service that adds authentication to the API call that the frontend makes, that is all. The authentication relies on secrets held in environment variables, which can't live in the React code because anything in the client bundle is exposed to the browser. The server is started by running node server.js; locally the server successfully listens (app.listen(8080)) for the API calls from the React frontend, adds some authentication to the request, then makes the request to the external API and passes the response back to the frontend once it is received.
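A stripped-down sketch of what server.js does - EXTERNAL_API_URL and API_TOKEN are placeholder names for the real external endpoint and the env-var secret:

// server.js - minimal sketch; EXTERNAL_API_URL and API_TOKEN are placeholder names
const express = require('express');
const https = require('https');

const app = express();

app.get('/api/data', (req, res) => {
  // Attach the secret from the environment as an auth header
  const options = { headers: { Authorization: `Bearer ${process.env.API_TOKEN}` } };
  https.get(process.env.EXTERNAL_API_URL, options, (upstream) => {
    res.status(upstream.statusCode);
    upstream.pipe(res); // stream the external response back to the frontend
  }).on('error', (err) => res.status(502).send(err.message));
});

app.listen(8080);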
In production, in a Kubernetes pod, things aren't so simple. The traffic from the React frontend proxying through the Node server now needs to be handled by Kubernetes, and I haven't been able to figure it out.
It may be important to note that there are no circumstances in which the frontend will make any external API calls directly; they will all go through the server.
React frontend Dockerfile
FROM nginx:1.14.0-alpine
# Copy static files
COPY client/build/ /usr/share/nginx/html/
# The rest has been redacted for brevity but is just copying of favicons etc.
Express Node Server
FROM node:10.16.2-alpine
# Create app directory
WORKDIR /app
# Install app dependencies
COPY server/package*.json ./
RUN npm install
# Copy the server source - without it, node server.js has nothing to run
COPY server/ .
EXPOSE 8080
CMD [ "node", "server.js" ]
Kubernetes Manifest (redacted for brevity)
apiVersion: apps/v1beta1
kind: Deployment
containers:
  - name: frontend
    image: frontend-image:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
      - name: http
        containerPort: 80
    volumeMounts:
      - mountPath: /etc/nginx/conf.d/default.conf
        name: config-dir
        subPath: my-config.conf
  - name: server
    image: server-image:1.0.0
    imagePullPolicy: IfNotPresent
volumes:
  - name: config-tmpl
    configMap:
      name: app-config
      defaultMode: 0744
  - name: my-config-directory
    emptyDir: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: my-namespace
data:
  my-conf.conf: |-
    server {
        listen 80;
        server_name _;
        location api/ {
            proxy_pass http://127.0.0.1:8080/;
        }
    .....
In Kubernetes, all containers in a pod share the same network namespace, so for the frontend container localhost:8080 is the backend, and for the backend container localhost:80 is the frontend. As with any containerized application, you should make sure the processes listen on interfaces other than just 127.0.0.1 if you want traffic from outside the pod.
Migrating an application from a single server - where every process talks over 127.0.0.1 - to a pod is meant to be as simple as running it on a dedicated machine.
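You can see this sharing from inside the pod: once the example pod below (front-back) is running, each container can reach the other over loopback (both images are Alpine-based, so busybox wget is available):

kubectl exec front-back -c back -- wget -qO- http://127.0.0.1:80/
kubectl exec front-back -c front -- wget -qO- http://127.0.0.1:8080/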
Your nginx.conf looks a little strange: the location block should be location /api/ { with a leading slash.
Here is a functional example:
nginx.conf
server {
    server_name localhost;
    listen 0.0.0.0:80;
    error_page 500 502 503 504 /50x.html;
    location / {
        root html;
    }
    location /api/ {
        # the trailing slash strips the /api/ prefix before proxying
        proxy_pass http://127.0.0.1:8080/;
    }
}
Create a ConfigMap from it:
kubectl create configmap nginx --from-file=nginx.conf
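If you prefer the declarative style of your own manifest, that command produces a ConfigMap roughly like this sketch (the data key becomes the file name mounted under /etc/nginx/conf.d/):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  nginx.conf: |-
    # the contents of the nginx.conf shown above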
app.js
const express = require('express')
const app = express()
const port = 8080
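// nginx's proxy_pass ends with a slash, so a request to /api/ arrives here as /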
app.get('/', (req, res) => res.send('Hello from Express!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
Dockerfile
FROM alpine
RUN apk add nodejs npm && mkdir -p /app
COPY . /app
WORKDIR /app
RUN npm install express --save
EXPOSE 8080
CMD node app.js
You can build this image or use the one I've made: hectorvido/express.
Then, create the pod YAML definition:
pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: front-back
  labels:
    app: front-back
spec:
  containers:
  - name: front
    image: nginx:alpine
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d/
    ports:
    - containerPort: 80
  - name: back
    image: hectorvido/express
    ports:
    - containerPort: 8080
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx
Create it on the cluster:
kubectl create -f pod.yml
Get the IP:
kubectl get pods -o wide
I tested with Minikube, so if the pod IP is 172.17.0.7 I have to run:
minikube ssh
curl -L 172.17.0.7/api
If you have an ingress in front, it should still work. I enabled the nginx ingress controller on Minikube, so we need to create a service and an ingress:
service
kubectl expose pod front-back --port 80
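kubectl expose copies the pod's labels into the selector, so the command above is roughly equivalent to this Service sketch:

apiVersion: v1
kind: Service
metadata:
  name: front-back
spec:
  selector:
    app: front-back
  ports:
  - port: 80
    targetPort: 80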
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front-back
spec:
  rules:
  - host: fb.192-168-39-163.nip.io # minikube ip
    http:
      paths:
      - path: /
        backend:
          serviceName: front-back
          servicePort: 80
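One caveat: the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22, so on newer clusters the same ingress needs the v1 form:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-back
spec:
  rules:
  - host: fb.192-168-39-163.nip.io # minikube ip
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: front-back
            port:
              number: 80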
The test still works:
curl -vL http://fb.192-168-39-163.nip.io/api/