We have a Kubernetes cluster in AWS (EKS). In our setup we need two ingress-nginx controllers so that we can enforce different security policies. To accomplish that, I am leveraging the kubernetes.io/ingress.class annotation and the --ingress-class flag.
As advised here, I created one standard ingress controller from the default 'mandatory.yaml' in the ingress-nginx repository.
To create the second ingress controller, I customized the ingress Deployment from 'mandatory.yaml' a little: I basically added the label
'env: internal'
to the Deployment definition.
I also created another Service accordingly, specifying the 'env: internal' label in its selector so that the new Service binds to the new ingress controller. Please take a look at my YAML definition:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller-internal
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
      env: internal
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        env: internal
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller-internal
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --ingress-class=nginx-internal
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-internal
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
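For reference, applying the manifest above looks roughly like this (the filename is just a placeholder for wherever the YAML is saved):

$ kubectl apply -f nginx-ingress-internal.yaml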
After applying this definition, my Ingress Controller is created along with a new LoadBalancer Service:
$ kubectl get deployments -n ingress-nginx
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller            1/1     1            1           10d
nginx-ingress-controller-internal   1/1     1            1           95m
$ kubectl get service -n ingress-nginx
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP               PORT(S)                      AGE
ingress-nginx            LoadBalancer   172.20.6.67      xxxx.elb.amazonaws.com    80:30857/TCP,443:31863/TCP   10d
ingress-nginx-internal   LoadBalancer   172.20.115.244   yyyyy.elb.amazonaws.com   80:30036/TCP,443:30495/TCP   97m
So far so good, everything is working fine.
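As a quick sanity check (just a sketch, using the names from the manifests above), the ingress class each controller watches can be read back from the Deployment args:

$ kubectl -n ingress-nginx get deploy nginx-ingress-controller-internal \
    -o jsonpath='{.spec.template.spec.containers[0].args}'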
However, I then create two Ingress resources, each bound to a different ingress controller (note the 'kubernetes.io/ingress.class' annotation):
External ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: accounting-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec: ...
Internal ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-internal
spec: ...
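(The specs are intentionally omitted above. Purely for illustration, a minimal spec for the internal Ingress might look like the sketch below; the backend Service name is a hypothetical placeholder.)

spec:
  rules:
    - host: ccc.aaaa.com
      http:
        paths:
          - path: /
            backend:
              serviceName: some-internal-service   # hypothetical placeholder
              servicePort: 80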
I see that both of them show the same ADDRESS, namely the address of the first ingress controller:
$ kubectl get ingress
NAME               HOSTS          ADDRESS                  PORTS     AGE
external-ingress   bbb.aaaa.com   xxxx.elb.amazonaws.com   80, 443   10d
internal-ingress   ccc.aaaa.com   xxxx.elb.amazonaws.com   80        88m
I would expect the Ingress bound to 'ingress-class=nginx-internal' to show the address 'yyyyy.elb.amazonaws.com'. Everything seems to be working fine, but this is bothering me; I have the impression something is wrong.
Where should I start troubleshooting to understand what is happening behind the scenes?
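A few checks that seem like reasonable starting points (only a sketch, using the resource names from the manifests above):

# which Ingress objects is the internal controller actually syncing?
$ kubectl -n ingress-nginx logs deploy/nginx-ingress-controller-internal | grep -i ingress

# what address does each Ingress report?
$ kubectl describe ingress internal-ingress

# compare the args of both controllers; --publish-service tells a controller which
# Service's address to publish into the Ingress status (the ADDRESS column), and the
# internal Deployment above still points it at ingress-nginx
$ kubectl -n ingress-nginx get deploy -o custom-columns=NAME:.metadata.name,ARGS:.spec.template.spec.containers[*].args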
####---UPDATE---####
Besides what is described above, I added the line '"ingress-controller-leader-nginx-internal"' to mandatory.yaml, as can be seen below. I did that based on a comment I found inside the mandatory.yaml file:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
      - "ingress-controller-leader-nginx-internal"
Unfortunately, the nginx documentation only mentions 'kubernetes.io/ingress.class' and '--ingress-class' for defining a new controller, so there is a chance I am getting some minor detail wrong.
Try changing this line:
- --configmap=$(POD_NAMESPACE)/nginx-configuration
In your code it should be something like this:
- --configmap=$(POD_NAMESPACE)/internal-nginx-configuration
This way you will have a different configuration for each nginx controller; otherwise both controllers will share the same configuration. It may seem to work, but you will run into bugs when updating... (been there already...)
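A minimal sketch of what that could look like for the internal controller (the names below are only examples, and the referenced ConfigMap has to exist as well):

# in the internal controller's Deployment, only the relevant args shown:
args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/internal-nginx-configuration   # per-controller config
  - --ingress-class=nginx-internal
  # ... other args as before; separately, --publish-service may also need to point at
  # ingress-nginx-internal if the Ingress ADDRESS should show the internal ELB
---
# the ConfigMap referenced above has to be created too
kind: ConfigMap
apiVersion: v1
metadata:
  name: internal-nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx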