I am setting up the NGINX ingress controller on AWS EKS.
I went through the k8s Ingress resource docs, and they were helpful for understanding how LB ports are mapped to k8s service ports via an example file definition. I installed the nginx controller up to the prerequisite step, and then the tutorial directs me to create an Ingress resource.
But below that it tells me to apply a service config as a provider-specific step, which confuses me: it is different in terms of kind, version, and spec definition (Service vs Ingress).
Am I missing something here?
The NGINX ingress controller is the actual process that routes your traffic to your services, basically like the nginx or load balancer installation on a traditional VM. The Ingress resource (kind: Ingress) is more like the nginx config on your old VM, where you would define host mappings, paths, and proxies.
This is a concept that is a little tricky to wrap your head around at first. The NGINX ingress controller is exposed through a Service of type LoadBalancer. That Service acts as the public-facing endpoint for your applications: the IP address assigned to it can route traffic to multiple services. So you can go ahead and define your own services as ClusterIP and have them exposed through the NGINX ingress controller.
On that note, if you have acquired a static IP for your setup, you need to assign it to your NGINX ingress controller's LoadBalancer Service (the controller Service sketch just below shows where such a field would go). So what is an Ingress? An Ingress is basically a way for you to tell your NGINX ingress controller how to direct the traffic arriving at your load balancer's public IP. So, to be clear, you have one LoadBalancer service and multiple Ingress resources. Each Ingress defines rules mapping a host and path to a backend service; how many you need depends on how you define your services, but you get the idea.
Let's get into some YAML. As mentioned, you will need the ingress controller Service regardless of how many Ingress resources you have, so go ahead and apply the controller manifest from the installation guide on your EKS cluster.
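That exact manifest comes from the ingress-nginx installation you followed, so use theirs; purely to illustrate its shape, the controller's public-facing Service looks roughly like the sketch below. The name, namespace, and labels here are illustrative, and the commented loadBalancerIP line only shows where a pre-allocated address would go if your provider honors that field (on EKS, static addresses are typically handled on the AWS load balancer side instead).
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller            # illustrative name
  namespace: ingress-nginx                   # illustrative namespace
spec:
  type: LoadBalancer                         # EKS provisions an AWS load balancer for this
  # loadBalancerIP: 203.0.113.10             # optional placeholder; provider support varies
  selector:
    app.kubernetes.io/name: ingress-nginx    # must match the controller pods' labels
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP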
Now let's see how you would expose your pod to the world through the NGINX ingress controller. Say you have a wordpress deployment. You can define a simple ClusterIP service for this app:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ${WORDPRESS_APP}
  namespace: ${NAMESPACE}
  name: ${WORDPRESS_APP}
spec:
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: 9000
      name: ${WORDPRESS_APP}
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    app: ${WORDPRESS_APP}
This creates a service for your wordpress app which is not accessible from outside the cluster. Now you can create an Ingress resource to expose this service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: ${NAMESPACE}
  name: ${INGRESS_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - ${URL}
      secretName: ${TLS_SECRET}
  rules:
    - host: ${URL}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ${WORDPRESS_APP}
                port:
                  number: 80
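One assumption hidden in that manifest: the kubernetes.io/tls-acme: "true" annotation only does something if a certificate manager such as cert-manager (or the older kube-lego) is installed to issue the certificate for ${URL}. If you are not running one, the secret referenced by ${TLS_SECRET} has to exist already; a minimal sketch of such a TLS secret, with placeholder values, would be:
apiVersion: v1
kind: Secret
metadata:
  name: ${TLS_SECRET}
  namespace: ${NAMESPACE}
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder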
Now if you run kubectl get svc you will see something like the following:
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                   AGE
wordpress                  ClusterIP      10.23.XXX.XX   <none>          9000/TCP,80/TCP,443/TCP   1m
nginx-ingress-controller   LoadBalancer   10.23.XXX.XX   XX.XX.XXX.XXX   80:X/TCP,443:X/TCP        1m
Now you can access your wordpress service through the URL you defined, which maps to the public IP of your ingress controller's LoadBalancer service.
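For completeness, since the example assumes an existing wordpress deployment, here is a minimal sketch of one whose pod labels match the Service selector from earlier; the image, replica count, and container port are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${WORDPRESS_APP}
  namespace: ${NAMESPACE}
  labels:
    app: ${WORDPRESS_APP}
spec:
  replicas: 1                          # illustrative
  selector:
    matchLabels:
      app: ${WORDPRESS_APP}
  template:
    metadata:
      labels:
        app: ${WORDPRESS_APP}          # must match the Service selector above
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest      # illustrative image tag
          ports:
            - containerPort: 80        # matches the Service's http targetPort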