Can someone please skim over this guide and tell me what the use case of HAProxy is in it?
Install and configure a multi-master Kubernetes cluster with kubeadm
I've gone through the guide and set this up. Everything is working properly between my Kubernetes cluster and HAProxy, from what I can tell.
HAProxy has been set up on a VM separate from my Kubernetes cluster. The HAProxy IP is 10.1.160.170.
I was hoping to visit my HAProxy IP and be redirected to one of my load-balanced Kubernetes nodes, but that isn't happening.
I can set up an Nginx deployment with:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Then create the service:
kubectl expose deployment my-nginx --port=80 --type=NodePort
user@KUBENODE01:~$ kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        93d
my-nginx     NodePort    10.108.33.134   <none>        80:30438/TCP   46s
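To rule out the Service itself, the NodePort can also be checked directly against one of the node IPs (taken from the haproxy.cfg further down), bypassing HAProxy entirely; a quick sanity check, assuming the nodes are reachable from the client machine:
# kubenode01's IP from the haproxy.cfg below; NodePort 30438 comes from the
# `kubectl get service` output above
curl http://10.1.160.79:30438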
If I now try to visit my HAProxy IP 10.1.160.170, I'm not forwarded to my Kubernetes nodes on port 30438.
user@computer:~/nginx_testing$ curl http://10.1.160.170
curl: (7) Failed to connect to 10.1.160.170 port 80: Connection refused
user@computer:~/nginx_testing$ curl https://10.1.160.170
curl: (7) Failed to connect to 10.1.160.170 port 443: Connection refused
user@computer:~/nginx_testing$ curl 10.1.160.170:30438
curl: (7) Failed to connect to 10.1.160.170 port 30438: Connection refused
Is HAProxy not meant to forward connection requests to the actual Kubernetes nodes in this article?
I've also tried this with the service type LoadBalancer.
Here is my haproxy.cfg:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:EC>
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend kubernetes
    # bind 10.1.160.170:80
    bind 10.1.160.170:6443
    # http-request redirect scheme https unless { ssl_fc }
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server kubenode01 10.1.160.79:6443 check fall 3 rise 2
    server kubenode02 10.1.160.80:6443 check fall 3 rise 2
    server kubenode03 10.1.160.81:6443 check fall 3 rise 2
Port 6443 is the Kubernetes API server port; kubectl talks to this API server to do its work.
In a cluster with a single master, you can access the API using that master node's IP.
But in a cluster with three masters, which is considered an HA setup, you should go through the load balancer even though you can still reach any master directly, because that is the whole point of the setup.
For example, in an HA setup you should set the server address in your kubeconfig file to the HAProxy IP, so that your kubectl commands are forwarded by HAProxy to whichever master is healthy.
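For illustration, the cluster entry in kubeconfig (typically ~/.kube/config; the cluster name "kubernetes" is the kubeadm default and is assumed here) would point at HAProxy rather than a single master, roughly like this:
clusters:
- cluster:
    certificate-authority-data: ...         # unchanged
    # point kubectl at the HAProxy VM instead of a single master's IP
    server: https://10.1.160.170:6443
  name: kubernetes
Note that with the posted haproxy.cfg, HAProxy only fronts the API on 6443; it will not forward application traffic such as the NodePort 30438 unless a separate frontend/backend is added for that purpose.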