I have a Kubernetes cluster on AWS with one master and two worker nodes. I have two environments (qc and prod) in the cluster, so I created two namespaces, and the same service runs in both the qc and prod namespaces.
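Creating the namespaces themselves is just the usual (shown only for completeness):

kubectl create namespace qc
kubectl create namespace prod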
I have created an Ingress in each namespace:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: prod
spec:
  rules:
  - host: "*.qc-k8s.example.com"
    http:
      paths:
      - path: /app
        backend:
          serviceName: client-svc
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: qc
spec:
  rules:
  - host: "*.qc-k8s.example.com"
    http:
      paths:
      - path: /app-qc
        backend:
          serviceName: client-svc
          servicePort: 80
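To sanity-check that both objects were admitted, listing them across namespaces helps:

kubectl get ingress --all-namespaces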
I have client-svc in both the qc and prod namespaces, exposed on node port 80 (a sketch of that Service follows). After the sketch are the ELB-fronted Service and the ingress-controller DaemonSet I created.
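For context, a minimal sketch of what client-svc could look like in each namespace (the selector label is an assumption, not taken from my real manifests):

apiVersion: v1
kind: Service
metadata:
  name: client-svc
  namespace: qc          # an identical one exists in prod
spec:
  type: NodePort         # per the question's description of the exposure
  selector:
    app: client          # assumed label on the client pods
  ports:
  - name: http
    port: 80
    targetPort: 80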
kind: Service
apiVersion: v1
metadata:
  name: ingress-svc
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:ca-central-1:492276880714:certificate/xxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ingress-nginx
  namespace: default
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.6
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
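Note that the --default-backend-service argument points at a Service named nginx-default-backend, which has to exist in the controller's namespace. A minimal sketch of such a backend (image tag and labels are assumptions, modeled on the stock 404 backend shipped with the controller examples of that era):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-default-backend
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      containers:
      - name: default-backend
        image: gcr.io/google_containers/defaultbackend:1.0  # assumed tag
        ports:
        - containerPort: 8080   # the stock backend serves 404s and /healthz here
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-default-backend
  namespace: default
spec:
  selector:
    app: nginx-default-backend
  ports:
  - port: 80
    targetPort: 8080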
When I try curl -iv https://gayan.qc-k8s.example.com/app/, I get an error:
2017/06/27 15:43:31 [error] 158#158: *981 connect() failed (111: Connection refused) while connecting to upstream, client: 209.128.50.138, server: gayan.qc-k8s.example.com, request: "GET /app/ HTTP/1.1", upstream: "http://100.82.2.47:80/app/", host: "gayan.qc-k8s.example.com"
209.128.50.138 - [209.128.50.138, 209.128.50.138] - - [27/Jun/2017:15:43:31 +0000] "GET /app/ HTTP/1.1" 500 193 "-" "curl/7.51.0" 198 0.014 100.82.2.47:80, 100.96.2.48:80 0, 193 0.001, 0.013 502, 500
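The upstreams in that log (100.82.2.47:80, 100.96.2.48:80) are pod IPs, which suggests nginx resolved the backends but the pods refused the connection; whether the Service actually has ready endpoints can be checked with:

kubectl -n qc get endpoints client-svc
kubectl -n prod get endpoints client-svc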
If I run curl -iv https://gayan.qc-k8s.example.com/app-qc, I get the same issue. Has anyone experienced this error before? Any clue how to resolve it?
Thank you
I solved this with the help of https://github.com/kubernetes/kubernetes/issues/17088
An example, from a real document we use:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-1
spec:
  rules:
  - host: api-gateway-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
        path: /
  - host: api-shop-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-shop
          servicePort: 80
        path: /
  - host: api-search-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-search
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - api-gateway-dev-1.faceit.com
    - api-search-dev-1.faceit.com
    - api-shop-dev-1.faceit.com
    secretName: faceitssl
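The tls section references a secret named faceitssl in the same namespace; such a secret is typically created from a certificate/key pair, e.g.:

kubectl -n dev-1 create secret tls faceitssl --cert=tls.crt --key=tls.key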
We make one of these Ingresses for each of our namespaces, for each track.
Then we have a single namespace with an Ingress controller, which runs automatically configured NGINX pods. Another AWS load balancer points at these pods, which are exposed on a NodePort; a DaemonSet ensures exactly one controller pod runs on every node in the cluster.
As such, the traffic is then routed:
Internet -> AWS ELB -> NGINX (on node) -> Pod
We keep the isolation between namespaces while using Ingresses as they were intended. It is neither correct nor sensible to use one Ingress to route into multiple namespaces; that's not how they were designed. The solution is one Ingress per namespace, with a cluster-scoped Ingress controller that actually does the routing.
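By default, the NGINX Ingress controller watches Ingress resources in every namespace, which is what makes this cluster-scoped pattern work. As a hedged sketch: if you instead wanted one controller per namespace, the controller takes a flag along these lines (check your controller version's docs):

args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
# optional: restrict this controller instance to a single namespace
- --watch-namespace=dev-1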