I have a Kubernetes cluster with a master node and two worker nodes:
sudo kubectl get nodes
NAME                STATUS    ROLES     AGE    VERSION
kubernetes-master   Ready     master    4h     v1.10.2
kubernetes-node1    Ready     <none>    4h     v1.10.2
kubernetes-node2    Ready     <none>    34m    v1.10.2
Each of them runs in a VirtualBox Ubuntu VM and is accessible from the host machine:
kubernetes-master (192.168.56.3)
kubernetes-node1 (192.168.56.4)
kubernetes-node2 (192.168.56.6)
I deployed an nginx server with two replicas, one pod on each kubernetes-node-x:
sudo kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE   IP               NODE
nginx-deployment-64ff85b579-5k5zh   1/1       Running   0          8s    192.168.129.71   kubernetes-node1
nginx-deployment-64ff85b579-b9zcz   1/1       Running   0          8s    192.168.22.66    kubernetes-node2
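(The deployment manifest itself is not shown here; a minimal sketch along these lines, with the image tag assumed, would produce the setup above.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx          # stock nginx image; exact tag is an assumption
        image: nginx
        ports:
        - containerPort: 80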
Next, I exposed the deployment as a NodePort service so it can be accessed from outside the cluster:
sudo kubectl expose deployment/nginx-deployment --type=NodePort
sudo kubectl get services
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP        4h
nginx-deployment   NodePort    10.96.194.15   <none>        80:32446/TCP   2m
sudo kubectl describe service nginx-deployment
Name:                     nginx-deployment
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP:                       10.96.194.15
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32446/TCP
Endpoints:                192.168.129.72:80,192.168.22.67:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
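(For reference, kubectl expose generated a service roughly equivalent to the manifest below, reconstructed from the describe output above; the nodePort value was auto-assigned.)

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80          # service port
    targetPort: 80    # container port
    nodePort: 32446   # auto-assigned by kubectl expose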
I can access each pod directly through its node's IP:
kubernetes-node1 http://192.168.56.4:32446/
kubernetes-node2 http://192.168.56.6:32446/
But I thought that K8s provided some kind of external cluster IP that balanced requests across the nodes from the outside. What is that IP?
"But I thought that K8s provided some kind of external cluster IP that balanced requests across the nodes from the outside. What is that IP?"
The cluster IP is internal to the cluster. It is not exposed to the outside; it is there for intercommunication across the cluster.
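One quick way to see this, using the cluster IP from your output above (the throwaway busybox pod name is arbitrary):

# From inside the cluster the cluster IP answers:
sudo kubectl run -it --rm test --image=busybox --restart=Never -- wget -qO- http://10.96.194.15
# From the host (outside the cluster) the same IP is unreachable:
curl http://10.96.194.15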
That said, there is the LoadBalancer type of service that can do the trick you need, only it depends on a cloud provider (or minikube/docker edge) to work properly.
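For completeness, such a service would be declared like the sketch below; the name is made up, and on plain VirtualBox VMs with no cloud provider the external IP would simply stay <pending>:

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment-lb   # hypothetical name
spec:
  type: LoadBalancer          # needs a cloud provider (or e.g. MetalLB) to get an IP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80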
"I can access each pod directly through its node's IP"
As a side note, in order to do this on bare metal with SSL and such, you need to provision an ingress of your own. Say, place one nginx on a specific node and then reference all the services you want exposed (mind the FQDN of each service) as upstream(s); those services can run on multiple nodes, with as many nginx instances of their own as desired - you don't need to handle the exact details of that, since k8s runs the show. That way you have one entry point (the ingress nginx) with a known IP address that handles incoming traffic and redirects it to services inside k8s that can run across any node(s). My ASCII art is not great, but here is an attempt:
(outside) -> ingress (nginx) +--> my-service FQDN (running across nodes):
             [node-0]        |      [node-1]: my-service-pod-01 with nginx-01
                             |      [node-2]: my-service-pod-02 with nginx-02
                             |      ...
                             +--> my-second-service FQDN
                             |      [node-1]: my-second-service-pod with apache?
                             ...
In the above sketch you have an nginx ingress on node-0 (known IP address) that takes external traffic and then routes it to my-service (running in two pods on two nodes) and my-second-service (a single pod) as upstreams. You only need to expose the FQDN of each service for this to work, without worrying about the IPs of specific nodes. You can find more information in the documentation: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
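As a rough sketch of the resource side of this (the hostnames and service names are invented, and the API version matches the v1.10-era cluster above):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress                        # hypothetical name
spec:
  rules:
  - host: my-service.example.com          # made-up host
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service         # resolved via the service FQDN
          servicePort: 80
  - host: my-second-service.example.com   # made-up host
    http:
      paths:
      - path: /
        backend:
          serviceName: my-second-service
          servicePort: 80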
The same documentation link also contains an illustration of this idea that is far better than my ASCII art.
"Why isn't the service load balancing the used pods from the service?"
With the --nodeport-addresses flag you can change kube-proxy's NodePort behavior (it restricts which node addresses answer on the node port). It also turns out that round-robin was the old kube-proxy behavior; apparently the new one should be random. Finally, to exclude connection and session issues in the browser, did you try this from an anonymous session as well? Do you have DNS cached locally, maybe?
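To rule the browser out entirely, one quick check from the host: hammer the node port and then see which pods actually served the requests (the stock nginx image logs each access to stdout):

# Hit the NodePort repeatedly, bypassing any browser caching,
# then see how the requests were spread across the two pods.
for i in $(seq 1 10); do curl -s http://192.168.56.4:32446/ > /dev/null; done
sudo kubectl logs nginx-deployment-64ff85b579-5k5zh | tail
sudo kubectl logs nginx-deployment-64ff85b579-b9zcz | tail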
"Something else, I killed the pod from node1, and when calling node1 it didn't use the pod from node 2."
That is somewhat strange behavior, since the documentation says of NodePort that "each Node will proxy that port (the same port number on every Node) into your Service". But if that is what you are seeing, then probably a LoadBalancer or an Ingress, or maybe even a ClusterIP with an external address (see below), can do the trick for you.
"if the service is internal (ClusterIP) ... does it load balance to any of the pods in the nodes"
Most definitely yes. One more thing: you can use this behavior to also expose the 'load balanced' behavior on the 'standard' port range, as opposed to the 30000+ range used by NodePort. Here is an excerpt of the service manifest we use for our ingress controller.
apiVersion: v1
kind: Service
metadata:
  namespace: ns-my-namespace
  name: svc-nginx-ingress-example
  labels:
    name: nginx-ingress-example
    role: frontend-example
    application: nginx-example
spec:
  selector:
    name: nginx-ingress-example
    role: frontend-example
    application: nginx-example
  ports:
  - protocol: TCP
    name: http-port
    port: 80
    targetPort: 80
  - protocol: TCP
    name: ssl-port
    port: 443
    targetPort: 443
  externalIPs:
  - 123.123.123.123
Note that in the above example the imaginary 123.123.123.123 exposed via externalIPs represents the IP address of one of our worker nodes. The pods backing the svc-nginx-ingress-example service don't need to run on that node at all, but they still get the traffic routed to them (and load balanced across the pods as well) when that IP is hit on one of the specified ports.
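With that in place, the node answers on the standard ports, for example (assuming the ingress controller behind the service actually terminates TLS on 443):

curl http://123.123.123.123/         # http-port, load balanced across the pods by the service
curl -k https://123.123.123.123/     # ssl-port; -k because the certificate here is untrusted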