I am having some problems connecting to my new Kubernetes cluster. I want to connect from an external DNS name on port 80 to port 3000 on my cluster, which should be running a Docker container hosting an Express app that listens on that port.
I have configured a Kubernetes Service, which is running. The Docker container spins up fine locally and I can reach it on localhost:3000.
I have tried configuring an inbound NAT rule, but my Kubernetes backend pool does not appear as a target VM.
I have configured a load balancer rule with port 80 on the outside edge routing to port 3000 on the backend pool, but I still cannot reach it.
I also cannot see under the covers to find out how the route is configured or how to troubleshoot it.
Without seeing the manifest files you used to create your resources, it's hard to pinpoint the problem. You may simply have forgotten to specify a target port in the load balancer Service definition.
In the following example, the load balancer listens on port 80 and redirects the traffic to port 8080 on the pod(s) matching the app: hpa-pod selector.
apiVersion: v1
kind: Service
metadata:
  name: hpa-lb
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hpa-pod
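Once the Service is applied, a quick way to check that it got an external IP and that its selector actually matches your pods (assuming kubectl is pointed at the cluster and the Service name hpa-lb from the example above):

# Show the Service and its EXTERNAL-IP (stays <pending> until the cloud load balancer is provisioned)
kubectl get service hpa-lb

# List the pod IPs behind the Service; an empty list means the selector matches no pods
kubectl get endpoints hpa-lb

# Show ports, selector, node ports and recent events for the Service
kubectl describe service hpa-lb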
The matching Deployment, with the pod listening on port 8080:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hpa-pod
  template:
    metadata:
      labels:
        app: hpa-pod
    spec:
      restartPolicy: Always
      containers:
        - name: hpa-pod
          image: k8sacademy/kuard:latest
          ports:
            - containerPort: 8080
              name: http
          resources:
            requests:
              memory: "64Mi"
              cpu: "200m"
            limits:
              memory: "128Mi"
              cpu: "500m"