I am trying to deploy a small Node.js server using Kubernetes. I have exposed this app both internally and externally, using a ClusterIP type service and a NodePort type service respectively.
I can connect to the app internally via the ClusterIP service without any problem. The problem is that I can't connect to the app through the NodePort service.
I am running the curl commands against the ClusterIP and the NodePort from my master node. As mentioned, only the ClusterIP is working.
Here is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
spec:
  replicas: 2
  selector:
    matchLabels:
      name: deployment-test
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: deployment-test
    spec:
      containers:
        - name: deployment-test
          # image must be the same as you built before (name:tag)
          image: banukajananathjayarathna/bitesizetroubleshooter:v1
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: Always
      terminationGracePeriodSeconds: 60
And here is my clusterip.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    # these labels can be anything
    name: deployment-test-clusterip
  name: deployment-test-clusterip
spec:
  selector:
    name: deployment-test
  ports:
    - protocol: TCP
      port: 80
      # target is the port exposed by your containers (in our example 8080)
      targetPort: 8080
And here is my nodeport.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    name: deployment-test-nodeport
  name: deployment-test-nodeport
spec:
  # this will make the service a NodePort service
  type: NodePort
  selector:
    name: deployment-test
  ports:
    - protocol: TCP
      # new -> this will be the port used to reach it from outside
      # if not specified, a random port will be used from a specific range (default: 30000-32767)
      # nodePort: 32556
      port: 80
      targetPort: 8080
And here are my services:
$ kubectl get svc -n test49
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
deployment-test-clusterip   ClusterIP   172.31.118.67   <none>        80/TCP         3d8h
deployment-test-nodeport    NodePort    172.31.11.65    <none>        80:30400/TCP   3d8h
When I try (from the master) $ curl 172.31.118.67, it gives Hello world as the output from the app.
But when I run $ curl 172.31.11.65, I get the following error:
$ curl 172.31.11.65
curl: (7) Failed to connect to 172.31.11.65 port 80: Connection refused
I even tried $ curl 172.31.11.65:80 and $ curl 172.31.11.65:30400, and it still gives the error.
Can someone please tell me what I have done wrong here?
When I try (from the master) $ curl 172.31.118.67, it gives Hello world as the output from the app.
It works because you exposed your Deployment within the cluster using a ClusterIP Service, which "listens" on port 80 on its ClusterIP (172.31.118.67). As the name says, this is an IP available only within your cluster. If you want to expose your Deployment to the so-called external world, you cannot do it with this Service type.
Generally you use a ClusterIP Service to make one application component (e.g. a set of backend Pods) available to other application components, in other words to expose it within your cluster. A good use case for a ClusterIP Service is exposing your database (which may itself be clustered and run as a set of Pods) as a single endpoint for the backend Pods that need to connect to it.
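For illustration, here is a minimal sketch of such a ClusterIP Service for a hypothetical PostgreSQL backend (the name db, the label and port 5432 are just assumptions, not something from your setup):
kind: Service
apiVersion: v1
metadata:
  name: db
spec:
  # no "type" specified -> defaults to ClusterIP
  selector:
    name: db            # must match the labels of your database Pods
  ports:
    - protocol: TCP
      port: 5432        # port the Service listens on inside the cluster
      targetPort: 5432  # port exposed by the database containers
Backend Pods can then reach the database at db:5432 (via cluster DNS) or at the Service's ClusterIP, regardless of which database Pod actually serves the connection.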
But when I run $ curl 172.31.11.65, I get the following error:
$ curl 172.31.11.65
curl: (7) Failed to connect to 172.31.11.65 port 80: Connection refused
Where are you trying to connect from? It should be accessible from other Pods in your cluster as well as from your nodes. As @suren already mentioned in his answer, a NodePort Service has all the features of a ClusterIP Service, but it additionally exposes your Deployment on a random port in the range 30000-32767 on each Node's IP address (unless another port within this range is specified explicitly in spec.ports.nodePort in your Service definition). So basically any Service like NodePort or LoadBalancer also has its own ClusterIP.
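To illustrate with the addresses from your kubectl get svc output (the node IP is a placeholder, since it is not shown in your question):
# from inside the cluster (a Pod or a node) - the ClusterIP of your NodePort Service
curl http://172.31.11.65:80
# from outside the cluster - any node's IP plus the allocated node port
curl http://<node-ip>:30400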
I even tried $ curl 172.31.11.65:80 and $ curl 172.31.11.65:30400, and it still gives the error.
curl 172.31.11.65:80 should have exactly the same effect as curl 172.31.11.65, as 80 is the default HTTP port. Running curl 172.31.11.65:30400 is pointless, as the Service is not "listening" on this port on its ClusterIP (a Service is actually nothing more than a set of iptables port-forwarding rules, so in fact nothing is really listening on this port). The node port is used only to expose your Pods on your worker nodes' IP addresses.
By the way, you can check those addresses simply by running ip -4 a and searching through the available network interfaces (this applies if you have an on-premise Kubernetes installation). If you are using a cloud environment instead, you will not see your nodes' external IPs in your system; e.g. in GCP you can see them on your Compute Engine VMs list in the GCP console. Additionally, you need to set appropriate firewall rules, as by default traffic to such ports is blocked. So after allowing TCP ingress traffic to port 30400, you should be able to access your application on http://<any-of-your-k8s-cluster-worker-nodes-external-IP-address>:30400
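For example, on GCP something like the following should open that port (the rule name is arbitrary, and you may want to restrict --source-ranges or add target tags depending on your setup):
gcloud compute firewall-rules create allow-nodeport-30400 \
    --allow=tcp:30400 \
    --source-ranges=0.0.0.0/0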
ip -4 a will still show you the internal IP address of your node, and you should be able to connect using curl <this-internal-ip>:30400 as well as curl 127.0.0.1:30400 (from the node), as by default kube-proxy considers all available network interfaces for a NodePort Service:
The default for --nodeport-addresses is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort.
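If you ever need the opposite behaviour, kube-proxy can be restricted to specific node addresses, e.g. via its configuration file (the CIDR below is only an example):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# only node IPs within these ranges will answer on NodePorts;
# an empty list (the default) means all interfaces
nodePortAddresses:
  - 10.0.0.0/8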
If you primarily want to expose your Deployment to the external world and you want it to be available on a standard port, I would recommend using a LoadBalancer rather than a NodePort Service. If you use a cloud environment, it is available out of the box and you can define it like any other Service type without the need for additional configuration. If you have an on-prem Kubernetes installation, you'll need to resort to an additional solution such as MetalLB.
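A minimal sketch of such a LoadBalancer Service for your Deployment could look like this (the name deployment-test-lb is just an example; the external IP is provisioned by your cloud provider or by MetalLB):
kind: Service
apiVersion: v1
metadata:
  name: deployment-test-lb
spec:
  type: LoadBalancer
  selector:
    name: deployment-test
  ports:
    - protocol: TCP
      port: 80          # port the external IP will listen on
      targetPort: 8080  # port exposed by your containers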
Let me know if this clarified the use of Services in Kubernetes a bit. It's still not completely clear to me what you want to achieve, so it would be nice if you could explain it in more detail.
Your service is not running on 172.31.11.65:30400, but on 172.31.11.65:80 from within the cluster; that's the ClusterIP of your NodePort type service.
Note that, with this said, you never need to create two services to test ClusterIP and NodePort. You can do it with a single NodePort service, as it includes a ClusterIP type service as well.
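You can check that yourself with something like the following jsonpath query (just an illustration of reading both values from the same Service):
kubectl get svc deployment-test-nodeport -n test49 \
    -o jsonpath='{.spec.clusterIP} {.spec.ports[0].nodePort}'
# prints something like: 172.31.11.65 30400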
Your service runs on ANY_NODE_EXTERNAL_IP:30400.
The ClusterIP 172.31.11.65 is not visible or directly accessible from outside the cluster.
If you are on the master node, try
curl http://<host-IP>:30400
...where <host-IP> is the IP address of the master's host/VM/server.
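For example, assuming the first address printed by hostname -I is the master's primary IP, something like this should work from the master itself:
curl http://$(hostname -I | awk '{print $1}'):30400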
If you create a Service of type NodePort, it exposes the Service on each Node's IP at a static port, namely .spec.ports[].nodePort.
So try $ curl http://<NODE_IP>:<NODE_PORT>.
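To find both values, something like this should do (the jsonpath expression just extracts the allocated node port, 30400 in your case):
# node IPs are in the INTERNAL-IP / EXTERNAL-IP columns
kubectl get nodes -o wide
# allocated node port of the Service
kubectl get svc deployment-test-nodeport -n test49 -o jsonpath='{.spec.ports[0].nodePort}'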
Edit:
$ kubectl get nodes -o yaml | grep ExternalIP -C 1
    - address: 104.197.41.11
      type: ExternalIP
    allocatable:
--
    - address: 23.251.152.56
      type: ExternalIP
    allocatable:
...
$ curl http://<EXTERNAL-IP>:<NODE-PORT>
If you are using an EKS cluster, you can also follow this tutorial, https://eksworkshop.com/beginner/130_exposing-service/exposing/.