I am duplicating a Kubernetes cluster that runs Divolte from one GCP (Google Cloud Platform) project to another, using exactly the same configuration in the target project as in the already running project. In the new project the load balancer never comes up with a passing health check, and when I try to connect to the static IP from the load balancer I get a 502 server error.
I've followed the same steps as in the original project:
Deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: divolte
spec:
  selector:
    matchLabels:
      app: divolte
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: divolte
    spec:
      containers:
        - name: divolte
          imagePullPolicy: Always
          image: "eu.gcr.io/project-name/divolte-collector:latest"
          ports:
            - containerPort: 8290
          env:
            - name: JAVA_OPTS
              value: "-Xms512m -Xmx2048m -XX:+UseG1GC -Djava.awt.headless=true"
          resources:
            limits:
              cpu: 1
              memory: 3072Mi
            requests:
              cpu: 1
              memory: 2048Mi
          livenessProbe:
            httpGet:
              path: /divolte.js
              port: 8290
            initialDelaySeconds: 22
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /ping
              port: 8290
            initialDelaySeconds: 22
            periodSeconds: 1
      terminationGracePeriodSeconds: 30
Service file:
apiVersion: v1
kind: Service
metadata:
  name: divolte
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8290
      nodePort: 30964
  selector:
    app: divolte
  type: NodePort
Ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: divolte
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ip-name-here
spec:
  tls:
    - secretName: ssl-cert-name-here
  backend:
    serviceName: divolte
    servicePort: 80
I expected the load balancer to pick up this configuration just as it did in my previous GCP project and route traffic to the cluster correctly, but I cannot get the GCP load balancer's health check to pass. Any ideas what to try next?
I have found the solution to the problem.
Thanks for the suggestions. I was looking at a symptom in the load balancer, but that was the wrong direction; the real problem was inside the cluster.
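For anyone debugging a similar 502 from a GCE load balancer, a few checks quickly show whether the problem is in the cluster rather than in the load balancer itself (a minimal sketch; it assumes kubectl is pointed at the new cluster, and the pod name is a placeholder):

# Are the pods behind the Service actually running and ready?
kubectl get pods -l app=divolte

# If a pod shows CrashLoopBackOff, read its logs (replace with the real pod name)
kubectl logs divolte-xxxxxxxxxx-xxxxx

# Check what the ingress controller provisioned and any reported events
kubectl describe ingress divolte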
The Kubernetes pods were stuck in a CrashLoopBackOff because I forgot to create a Google Cloud Storage bucket; it wasn't in my documentation for setting up the environment, so I overlooked it. I found it with the kubectl logs command. Because no pod ever became ready, the load balancer had no healthy backends to route to, which is what produced the 502s. The app is up and running now.
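For completeness, recreating the missing bucket is a one-liner (a sketch; the bucket name and location are placeholders for whatever the Divolte configuration expects):

# Create the GCS bucket the collector writes to (hypothetical name and region)
gsutil mb -p project-name -l EU gs://divolte-bucket-name-here/

# Watch the pods recover once the bucket exists
kubectl get pods -l app=divolte -w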