I have a simple Express.js server, Dockerized, and when I run it like:
docker run -p 3000:3000 mytag:my-build-id
http://localhost:3000/ responds just fine, and so does the LAN IP of my workstation, http://10.44.103.60:3000/.
Now, if I deploy this to MicroK8s with a Service declaration like:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: my-service
spec:
  type: NodePort
  ports:
  - name: "3000"
    port: 3000
    targetPort: 3000
status:
  loadBalancer: {}
and a pod specification like so (update 2019-11-05):
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-service
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-service
    spec:
      containers:
      - image: mytag:my-build-id
        name: my-service
        ports:
        - containerPort: 3000
        resources: {}
      restartPolicy: Always
status: {}
and obtain the exposed NodePort via kubectl get services (it turns out to be 32750) and try to visit that port on the MicroK8s host machine itself, then the request just hangs, and if I try to visit the LAN IP of the MicroK8s host from my workstation at http://192.168.191.248:32750/ then the request is immediately refused.
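For reference, the NodePort value can also be read directly with a jsonpath query like the one below (just one way of doing it; my-service is the Service from the manifest above), which prints 32750 in this case:
# read the NodePort assigned to the first port of the Service
kubectl get service my-service -o jsonpath='{.spec.ports[0].nodePort}'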
But if I try to port-forward into the pod with
kubectl port-forward my-service-5db955f57f-q869q 3000:3000
then http://localhost:3000/ works just fine.
So the pod deployment itself seems to be fine, and example services like the microbot-service work just fine on that cluster.
I've made sure the Express.js server listens on all IPs with
app.listen(port, '0.0.0.0', () => ...
So what could be the issue?
You need to add a selector to your Service; that is what tells Kubernetes which Pods the Service should route traffic to. Additionally, you can use nodePort to pin the port the Service is exposed on at each node (otherwise a random port from the 30000-32767 range is assigned). After doing that you will be able to curl your MicroK8s node IP on that port.
Your Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: my-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 3000   # must match the containerPort your Express app listens on
    nodePort: 30001
  selector:
    name: my-service   # must match the labels on the Deployment's Pod template
status:
  loadBalancer: {}
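Assuming you save that manifest as my-service.yaml (the filename and the node IP placeholder below are illustrative, not taken from your setup), applying it and hitting the pinned NodePort would look roughly like:
# apply the corrected Service and probe the pinned NodePort from outside the cluster
kubectl apply -f my-service.yaml
curl http://<microk8s-node-ip>:30001/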
the LAN IP of the MicroK8s host from my workstation
That is the central source of your misunderstanding: localhost, 127.0.0.1, and your machine's LAN IP have nothing to do with what is apparently a virtual machine running MicroK8s (it would have been enormously valuable to state that explicitly in your question, rather than leaving readers to deduce it from one buried sentence).
I've made sure the Express.js server listens on all IPs with
Based on what you reported later:
at http://192.168.191.248:32750/ then the request is immediately refused.
then it appears that your Express server is not, in fact, listening on all interfaces. That explains why you can successfully port-forward into the Pod (which causes traffic to appear on the Pod's localhost) but cannot reach it from "outside" the Pod.
You can also test that theory by using another Pod inside the cluster to curl its Pod IP on port 3000 (in order to side-step the Service, and thus the NodePort, parts); a sketch follows below.
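A minimal sketch of that check, using a throwaway curl Pod; the curlimages/curl image and the <pod-ip> placeholder are illustrative, and the Pod name is the one from your port-forward command:
# look up the Pod's cluster IP
kubectl get pod my-service-5db955f57f-q869q -o jsonpath='{.status.podIP}'
# curl that IP on port 3000 from inside the cluster, bypassing the Service/NodePort entirely
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -v http://<pod-ip>:3000/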
There is a small chance that you have misconfigured your Pod and Service relationship, but since you didn't post your PodSpec, and the behavior you are describing sounds a lot more like an Express misconfiguration, we'll go with that until we have evidence to the contrary.
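If you do want to rule that out, a quick check (assuming the resource names from your manifests) is whether the Service has any endpoints at all; an empty ENDPOINTS column means the selector matches no Pods:
# an empty ENDPOINTS column means the Service selector matches nothing
kubectl get endpoints my-service
# compare the Service selector against the labels on the running Pods
kubectl get pods --show-labels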