I'm running Kubernetes on Windows using Docker Desktop.
Cluster info:

Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The Angular deployment file used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: angular
    tier: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
        - name: angular
          image: nimnambi/sense-frontend
          ports:
            - containerPort: 4200
---
kind: Service
apiVersion: v1
metadata:
  name: angular-replicaset-svc
spec:
  selector:
    app: angular
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 4200
  type: LoadBalancer
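To verify that the Service is actually selecting the pod, these are the commands I run (assuming everything is in the default namespace):

```shell
# List the service and the endpoints behind it; an empty
# ENDPOINTS column means the selector matched no pods.
kubectl get svc angular-replicaset-svc
kubectl get endpoints angular-replicaset-svc

# Compare the labels the pods actually carry against the
# Service's selector (app=angular in the manifest above).
kubectl get pods --show-labels
kubectl describe svc angular-replicaset-svc
```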
The Dockerfile for the frontend:
FROM node:12-alpine AS builder
WORKDIR /app
COPY package*.json /app
RUN npm install
COPY . /app
RUN npm run build
FROM nginx:alpine
# ----------------------------------
# Clean nginx
RUN rm -rf /usr/share/nginx/html/*
# Copy dist
COPY --from=builder /app/dist /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/default.conf
WORKDIR /usr/share/nginx/html
# Permission
RUN chown root /usr/share/nginx/html/*
RUN chmod 755 /usr/share/nginx/html/*
# Expose port
EXPOSE 4200
# Start
CMD ["nginx", "-g", "daemon off;"]
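One detail worth noting about `COPY . /app`: without a .dockerignore, the host's node_modules and dist directories are copied into the image and can shadow the dependencies installed by `npm install`. A minimal .dockerignore for this setup (a sketch; adjust to the project):

```
node_modules
dist
.git
Dockerfile
```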
The nginx.conf file:
http {
    server {
        listen 4200;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html =404;
        }
    }
}
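(For completeness: if this file is meant to replace the top-level /etc/nginx/nginx.conf, nginx also requires an `events` block to parse the file at all. A minimal complete file would be roughly:)

```
events {}

http {
    server {
        listen 4200;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html =404;
        }
    }
}
```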
The service gets created. I thought it was the LoadBalancer service type causing the problems, but my Express backend is accessible as a LoadBalancer at localhost, so that doesn't seem to be it.
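To rule out the LoadBalancer itself, I can also bypass the Service entirely and port-forward straight to the deployment (again assuming the default namespace):

```shell
# Forward local port 4200 directly to the pod's port 4200,
# bypassing the Service and its selector.
kubectl port-forward deployment/frontend 4200:4200

# In another terminal: if this works while the LoadBalancer
# address does not, the container is fine and the Service
# wiring is where to look.
curl http://localhost:4200
```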