I am using nginx to proxy requests to multiple headless Services of StatefulSets in a Kubernetes cluster. The problem I am having is that whenever a service IP changes, nginx does not re-resolve the service endpoint to the updated IP address but keeps using the outdated, cached one. I have tried using a variable in proxy_pass in the nginx configuration, but to no avail, both on my local cluster and when deployed on AWS EKS. Here is a snippet of my nginx configuration:
upstream svc-foo {
    server svc-foo:8080;
    keepalive 1024;
}

server {
    resolver 127.0.0.1 [::1]:5353 valid=10s;
    set $foo http://svc-foo;

    location /foo/ {
        proxy_pass $foo;
        proxy_http_version 1.1;
    }
}
I expect no downtime when I update the service, which causes the service IP to change. Any insight and advice is appreciated.
The best way is to use a DNS sidecar in your nginx Pod, as below:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: issue-795
  name: nginx-config
data:
  nginx.conf: |-
    user nginx;
    worker_processes 1;

    events {
      worker_connections 4096;  ## Default: 1024
    }

    http {
      server {
        listen 80;
        resolver 127.0.0.1:53 ipv6=off valid=10s;
        set $upstream http://backend:8080;

        location / {
          proxy_pass $upstream;
          proxy_http_version 1.1;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: issue-795
  name: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      - name: dnsmasq
        image: "janeczku/go-dnsmasq:release-1.0.7"
        args:
        - --listen
        - "127.0.0.1:53"
        - --default-resolver
        - --append-search-domains
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  namespace: issue-795
  name: backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  clusterIP: None
  selector:
    app: backend
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: backend
  namespace: issue-795
spec:
  serviceName: "backend"
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.4
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
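To try this out, assuming the manifests above are saved together in a file named proxy.yaml (the file name is just an example), you can apply them and confirm that the proxy keeps working while backend pod IPs change, roughly like this:

# Create the namespace used by the manifests, then apply everything.
kubectl create namespace issue-795
kubectl apply -f proxy.yaml

# Forward a local port to the nginx proxy and send a test request.
kubectl -n issue-795 port-forward deploy/proxy 8080:80 &
curl -s http://localhost:8080/

# Delete one backend pod; the StatefulSet recreates it with a new IP.
# Because proxy_pass uses a variable and the dnsmasq sidecar answers
# nginx's resolver queries, the new address is picked up within the
# valid=10s window instead of nginx keeping a stale cached IP.
kubectl -n issue-795 delete pod backend-0
curl -s http://localhost:8080/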
I would recommend using an Ingress resource on Kubernetes together with the NGINX Ingress Controller.
Its whole purpose is to provide a proxy inside your Kubernetes cluster that routes traffic to ClusterIP Services.
That way you only need one external ELB that forwards all traffic into your Kubernetes cluster, and the Ingress Controller then routes it to the different Services.
If you need a more feature-rich ingress controller, you can look at the Kong Ingress Controller.
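For example, with the NGINX Ingress Controller installed, a minimal Ingress for the svc-foo Service from the question could look like the sketch below (the Ingress name and the Prefix path are illustrative; svc-foo and port 8080 are taken from the question, and the Service would need to be a regular ClusterIP Service):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: svc-foo
            port:
              number: 8080

The controller watches the Service's Endpoints directly, so pod IP changes are picked up without relying on DNS caching in your own nginx configuration.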