Do you guys know if a ClusterIP service distributes the workload between the target deployment's replicas?
I have 5 replicas of a backend with a ClusterIP service selecting them. I also have another 5 replicas of an nginx pod pointing to this backend deployment. But when I run a heavy request, the backend stops responding to other requests until it finishes the heavy one.
Here is my configuration:
Note: I've replaced some info related to the company.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      containers:
      - name: python-gunicorn
        image: <my-user>/webapp:1.1.2
        command: ["/env/bin/gunicorn", "--bind", "0.0.0.0:8000", "main:app", "--chdir", "/deploy/app", "--error-logfile", "/var/log/gunicorn/error.log", "--timeout", "7200"]
        resources:
          requests:
            # memory: "64Mi"
            cpu: "0.25"
          limits:
            # memory: "128Mi"
            cpu: "0.4"
        ports:
        - containerPort: 8000
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /login
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 1200
      imagePullSecrets:
      # NOTE: the secret has to be created at the same namespace level on which this deployment was created
      - name: dockerhub
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: webapp
    tier: frontend
spec:
  # type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    app: webapp
    tier: frontend
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: ymqytw/nginxhttps:1.5
        command: ["/home/auto-reload-nginx.sh"]
        ports:
        - containerPort: 443
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1200
        resources:
          requests:
            # memory: "64Mi"
            cpu: "0.1"
          limits:
            # memory: "128Mi"
            cpu: "0.25"
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginxsvc
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: nginx
server {
    server_name local.mydomain.com;
    rewrite ^(.*) https://local.mydomain.com$1 permanent;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 ssl;
    root /usr/share/nginx/html;
    index index.html;
    keepalive_timeout 70;
    server_name www.local.mydomain.com local.mydomain.com;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    location / {
        proxy_pass http://localhost:8000;
        proxy_connect_timeout 7200;
        proxy_send_timeout 7200;
        proxy_read_timeout 7200;
        send_timeout 7200;
    }
}
Yes, a Service of type ClusterIP uses kube-proxy's iptables rules to distribute requests roughly evenly in a round robin manner.
The documentation says:
By default, the choice of backend is round robin.
The round robin distribution of requests may, however, be affected by things like:
- session affinity or connection reuse (the same client hitting the same ClusterIP multiple times); see the example after this list
- iptables rules outside kubernetes
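To make the session affinity point concrete: it is opt-in on the Service spec, and when it is set kube-proxy pins a given client IP to one pod instead of spreading its requests. A minimal sketch based on the frontend service from the question:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  # With ClientIP affinity, kube-proxy sends all traffic from a given
  # client IP to the same backend pod instead of round-robining it.
  sessionAffinity: ClientIP
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    app: webapp
    tier: frontend

The default is sessionAffinity: None, which gives the round robin behaviour described above.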
ClusterIP is implemented by kube-proxy by means of probability matching on iptables NAT rules, so yes, it distributes requests more or less evenly among the pods backing a given service.
Depending on your backend, this may still result in a less than ideal situation where a portion of your requests is blocked on one of the backends while it waits for a heavy request to finish processing.
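You can see that probability matching on any node by dumping the NAT table (for example with iptables -t nat -S). The snippet below is only an illustration of the pattern; the chain names and pod IPs are made up. For 5 endpoints kube-proxy emits one rule per endpoint with probabilities 1/5, 1/4, 1/3, 1/2 and then an unconditional jump, which works out to an even split overall:

# illustrative only: chain names and pod IPs are invented
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.20000 -j KUBE-SEP-POD1
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.25000 -j KUBE-SEP-POD2
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.33333 -j KUBE-SEP-POD3
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000 -j KUBE-SEP-POD4
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-POD5
# each KUBE-SEP-* chain then DNATs to a single pod IP, e.g.
-A KUBE-SEP-POD1 -p tcp -m tcp -j DNAT --to-destination 10.244.1.7:8000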
Also, keep in mind that this is done at the connection level, so if you've established a connection and then send multiple requests over the same TCP connection, they will not jump between backends.
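In this setup that mainly matters if nginx keeps its upstream connections alive. With a plain proxy_pass like the one in the question, nginx talks HTTP/1.0 to the upstream and opens a new connection per request, so each proxied request would get a fresh pick from the service's iptables rules. If you enabled upstream keepalive, consecutive requests would ride the same TCP connection and stick to one pod. A sketch of what that would look like, assuming nginx targets the frontend service (the upstream name and keepalive value are illustrative):

upstream backend {
    server frontend:8000;  # the ClusterIP service from the question
    keepalive 16;          # cache of idle upstream connections
}
server {
    listen 443 ssl;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # allow the upstream connection to be reused
        proxy_pass http://backend;
    }
}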