In the past I've tried a NodePort service, and if I add a firewall rule for the corresponding Node, it works like a charm:
type: NodePort
ports:
- nodePort: 30000
  port: 80
  targetPort: 5000
I can access my service from outside as long as the node has an external IP (which it does by default in GKE). However, the service can only be assigned a nodePort in the 30000+ range, which is not very convenient. By the way, the Service looks as follows:
kubectl get service -o=wide
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
web-engine-service   NodePort   10.43.244.110   <none>        80:30000/TCP   11m   app=web-engine-pod
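For reference, the full manifest behind that fragment looks roughly like this (trimmed; the name and selector match the kubectl output above):
apiVersion: v1
kind: Service
metadata:
  name: web-engine-service
spec:
  type: NodePort
  selector:
    app: web-engine-pod
  ports:
  - nodePort: 30000   # port opened on every node (must be in the 30000-32767 range by default)
    port: 80          # port of the Service itself inside the cluster
    targetPort: 5000  # port the pod's container listens on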
Recently, I've come across a different configuration option that is documented here.
I've tried it, as it seems quite promising and should allow me to expose my service on any port I want.
The configuration is as follows:
ports:
- name: web-port
  port: 80
  targetPort: 5000
externalIPs:
- 35.198.163.215
After the service is updated, I can see that the external IP is indeed assigned to it:
$ kubectl get service -o=wide
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)   AGE   SELECTOR
web-engine-service   ClusterIP   10.43.244.110   35.198.163.215   80/TCP    19m   app=web-engine-pod
(where 35.198.163.215 is the Node's external IP in GKE)
And yet, my app is not available on the Node's IP, unlike in the first scenario (I did add firewall rules for all the ports I'm working with, including 80, 5000 and 30000).
What's the point of the externalIPs configuration then? What does it actually do?
Note: I'm creating a demo project, so please don't tell me about the LoadBalancer type; I'm well aware of it and will get to it a bit later.
In the API documentation, externalIPs is documented as (emphasis mine):
externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system.
So you can put any IP address you want there, and it will show up in kubectl get service output, but it doesn't mean the cluster will actually accept traffic there.
To accept inbound traffic from outside the cluster, you need at least a NodePort service; in a cloud environment, a LoadBalancer service or an Ingress is a more common setup. You can't really short-cut around these. Conversely, a LoadBalancer isn't especially advanced or difficult: just change type: LoadBalancer in the configuration you already show and GKE will create the endpoint for you. The GKE documentation has a more complete example.
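A minimal sketch of that change, reusing the names from your question (so the metadata and selector here are assumed from your output above):
apiVersion: v1
kind: Service
metadata:
  name: web-engine-service
spec:
  type: LoadBalancer    # GKE provisions an external load balancer and assigns its IP automatically
  selector:
    app: web-engine-pod
  ports:
  - port: 80            # port exposed on the load balancer's external IP
    targetPort: 5000    # port the pod listens on
Once the load balancer is provisioned, its address shows up in the EXTERNAL-IP column of kubectl get service.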
("Inside the cluster" and "outside the cluster" are different networks, and, as with other NAT setups, pods can generally make outbound calls but you need a specific setup to accept inbound calls. That's what a NodePort service does, and in the standard setup a LoadBalancer service builds on top of that.)
I wanted to give you more insight into how this behaves on GKE. You will need to enter the internal IP of your node/nodes in the externalIPs field of the service definition.
Example:
apiVersion: v1
kind: Service
metadata:
  name: hello-external
spec:
  selector:
    app: hello
    version: 2.0.0
  ports:
  - name: http
    protocol: TCP
    port: 80            # port to send the traffic to
    targetPort: 50001   # port that pod responds to
  externalIPs:
  - 10.156.0.47
  - 10.156.0.48
  - 10.156.0.49
I've prepared an example to show you why it doesn't work.
Assuming that you have:
- a VM with tcpdump installed
- an internal IP address of 10.156.0.51
- an external IP address of 35.246.207.189
- a firewall rule allowing traffic on port 1111 to this VM
You can run the command below (on the VM) to capture the traffic coming to port 1111:
$ tcpdump port 1111 -nnvvS
- -nnvvS: do not resolve DNS or port names, be more verbose when printing info, and print absolute sequence numbers
You will need to send a request to the external IP 35.246.207.189 of your VM on port 1111:
$ curl 35.246.207.189:1111
You will get a connection refused message, but the packet will be captured. You will see output similar to this:
tcpdump: listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes
12:04:25.704299 IP OMITTED
YOUR_IP > 10.156.0.51.1111: Flags [S], cksum 0xd9a8 (correct), seq 585328262, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 1282380791 ecr 0,sackOK,eol], length 0
12:04:25.704337 IP OMITTED
10.156.0.51.1111 > YOUR_IP: Flags [R.], cksum 0x32e3 (correct), seq 0, ack 585328263, win 0, length 0
In that example you can see the destination IP address of the packet arriving at the VM. As shown above, it's the internal IP of your VM, not the external one. That's why putting the external IP in your YAML definition is not working.
This example also applies to GKE. For simplicity, you can create a GKE cluster with Ubuntu as the base image and do the same as shown above.
You can read more about IP addresses by following the link below:
GKE
What's the point of the externalIPs configuration then? What does it actually do?
In simple terms, it will allow the traffic to enter your cluster. A request sent to your cluster will need to have a destination IP matching one of the entries in the externalIPs parameter of your service definition in order to be routed to the corresponding service.
This method requires you to track the IP addresses of your nodes and is prone to issues when the IP address of a node is no longer available (node autoscaling, for example).
I recommend exposing your services/applications by following the official GKE documentation:
As mentioned before, a LoadBalancer type of service will automatically take into account changes made to the cluster, such as autoscaling that increases or decreases the number of your nodes. With the service shown above (with externalIPs) this would require manual changes.
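For comparison, here is a minimal sketch of the hello-external service above rewritten with a LoadBalancer instead of externalIPs (selector and ports reused from that example):
apiVersion: v1
kind: Service
metadata:
  name: hello-external
spec:
  type: LoadBalancer    # the cloud provider assigns and tracks the external IP, so node IPs don't need to be listed
  selector:
    app: hello
    version: 2.0.0
  ports:
  - name: http
    protocol: TCP
    port: 80            # port to send the traffic to
    targetPort: 50001   # port that pod responds to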
Please let me know if you have any questions about that.