I created a ReplicaSet with `replicas: 3`. The pods `internal-pod-a`, `internal-pod-b`, and `internal-pod-c` serve internal needs only. Then I created a Service of type `ClusterIP` to route requests to the pods.
For testing purposes I wanted to see how the traffic would be distributed. After port-forwarding, I sent several requests to the service:

```shell
kubectl port-forward svc/internal-service-cip 8081:80 -n prod
```

All the requests were served by the same pod, `internal-pod-a`.

So I'm confused: why does the `ClusterIP` Service send all requests to the same pod?
k8s manifests:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  namespace: prod
  name: internal-pod
  labels:
    app: internal-pod
    environment: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: internal-pod
      environment: prod
  template:
    metadata:
      labels:
        app: internal-pod
        environment: prod
    spec:
      containers:
        - name: internal-pod
          image: bla-bla-repo
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  namespace: prod
  name: internal-service-cip
spec:
  type: ClusterIP
  selector:
    app: internal-pod
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
```
In general, a Service does provide load balancing across the selected pods. You can see this from a debugging shell inside the cluster:

```shell
kubectl run --namespace=prod debug \
  --image=busybox --rm --stdin --tty -- \
  /bin/sh
```

Then, from that shell:

```shell
wget -O- http://internal-service-cip/
```

(The first command is the Kubernetes equivalent of `docker run --rm -it busybox sh`, but launches the pod in your Kubernetes namespace. Note that the `--generator=run-pod/v1` flag that older examples used has been removed from current kubectl; plain `kubectl run` now creates a single pod.)
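To actually observe the distribution, you can hit the Service repeatedly from the debug shell and note which pod answers each request. This is a sketch; it assumes the application's HTTP response contains something that identifies the serving pod (for example, it echoes its hostname):

```shell
# Run inside the busybox debug pod.
# Assumes the app's response identifies the pod (e.g. echoes $HOSTNAME).
for i in $(seq 1 10); do
  wget -qO- http://internal-service-cip/
done
```

With the default iptables-based kube-proxy, a backend is picked per connection, so over several requests you should see responses from all three pods.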
Under the hood, `kubectl port-forward` always connects to a single pod (emphasis mine):

> Forward one or more local ports to a pod. ... If there are multiple pods matching the criteria, **a pod will be selected automatically**.

So when you run `kubectl port-forward service/internal-service-cip`, kubectl looks at the Endpoints of the Service, picks one of the matching pods, and forwards to that one pod. Since every connection through the forwarded port goes to the same pod, it looks like there's no load balancing in this scenario.