I'm a bit new to Kubernetes and was going over "Ingress". After reading the k8s docs and googling, I summarised the following. Can somebody confirm/correct my understanding?
To understand Ingress, I divided it into 2 sections :
Cloud Infrastructure:
In this case, there is a built-in ingress controller which runs on the master node (but we can't see it when running kubectl get pods --all-namespaces). To configure it, first create your Deployment Pods and expose them through Services (the Service type must be NodePort; a sketch of such a Service appears after the ingress rules below). Also, make sure to create a default-backend service. Then create the ingress rules as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  backend:
    serviceName: default-svc
    servicePort: 80
  rules:
  - host: api.foo.com
    http:
      paths:
      - path: /v1/
        backend:
          serviceName: api-svc-v1
          servicePort: 80
      - path: /v2/
        backend:
          serviceName: api-svc-v2
          servicePort: 80
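For reference, a minimal sketch of how one of these backend Services might look (the api-v1 selector label and the target port are assumptions; default-svc and the second backend would be exposed the same way):

apiVersion: v1
kind: Service
metadata:
  name: api-svc-v1        # matches the serviceName in the ingress rule above
spec:
  type: NodePort          # NodePort backends, as noted above
  selector:
    app: api-v1           # assumed label on the v1 Deployment's pods
  ports:
  - port: 80
    targetPort: 8080      # assumed container port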
Once you apply the ingress rules to the API server, the ingress controller watches the API and updates /etc/nginx/nginx.conf. Also, after a few minutes, the controller provisions an external load balancer with an IP (let's say LB_IP).
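To watch this happen, assuming the ingress is named app-ingress as above, the ADDRESS column will eventually show LB_IP:

# watch the ingress until the controller fills in the ADDRESS column
kubectl get ingress app-ingress --watch

# inspect events and configured backends
kubectl describe ingress app-ingress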
Now to test: from your browser, enter http://api.foo.com/ (or http://LB_IP/), which will be routed to the default service, and http://api.foo.com/v1/ (or http://LB_IP/v1/), which will be routed to the service api-svc-v1.
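If DNS for api.foo.com is not set up yet, the same routing can be checked with curl by sending the Host header straight to the load balancer IP:

# no matching path -> default backend (default-svc)
curl -H "Host: api.foo.com" http://LB_IP/

# /v1/ rule -> api-svc-v1
curl -H "Host: api.foo.com" http://LB_IP/v1/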
Questions:
1. How can I see the /etc/nginx files since the ingress controller pod is not visible?
2. While the ingress rules are being applied and the external LB_IP is being created, are the DNS servers of all registrars updated with the DNS entry "api.foo.com"?
In-house Kubernetes deployment using kubeadm:
In this case, there is no built-in ingress controller and you need to install one manually. To configure it, first create your Deployment Pods and expose them through Services (make sure the Service type is NodePort). Also, make sure to create a default-backend service. Then create the ingress controller using a yaml like the snippet below (this is the pod-template part of the controller's Deployment):
spec:
  containers:
  - name: nginx-ingress-controller
    image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
    imagePullPolicy: Always
    args:
    - /nginx-ingress-controller
    - --default-backend-service=$(POD_NAMESPACE)/default-backend
    env:
    - name: POD_NAMESPACE      # needed so $(POD_NAMESPACE) in args expands
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    livenessProbe:
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      timeoutSeconds: 5
    readinessProbe:
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
We can see the ingress controller running on node3 using "kubectl get pods", and after logging in to this pod, we can see /etc/nginx/nginx.conf.
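For example, the rendered config can also be dumped without a full login (the pod name is a placeholder):

# find the controller pod and the node it runs on
kubectl get pods -o wide

# print the generated nginx config from the controller pod
kubectl exec <nginx-ingress-controller-pod> -- cat /etc/nginx/nginx.conf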
Now create the ingress rules as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  rules:
  - host: testabc.com
    http:
      paths:
      - backend:
          serviceName: appsvc1
          servicePort: 80
        path: /app1
      - backend:
          serviceName: appsvc2
          servicePort: 80
        path: /app2
Once you apply the ingress rules to the API server, the ingress controller watches the API and updates /etc/nginx/nginx.conf. But note that no load balancer is created. Instead, when you run "kubectl get ingress", you get Host=testabc.com and IP=127.0.0.1. Now, to expose this ingress controller outside the cluster, I need to create a Service with type=NodePort or type=LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 33200
    name: http
  selector:
    app: nginx-ingress-lb
After this, we will get an external IP (if type=LoadBalancer).
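A minimal sketch of the LoadBalancer variant, assuming the same nginx-ingress-lb label as above; only the type changes:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer      # an external IP is allocated only if a cloud provider (or equivalent) is available
  ports:
  - port: 80
    name: http
  selector:
    app: nginx-ingress-lb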
Now to test: from your browser, enter http://testabc.com/ (with testabc.com pointing to the external IP, or to a node IP on port 33200), which will be routed to the default backend, and http://testabc.com/app1, which will be routed to the service appsvc1.
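As with the cloud case, the routing can be checked without DNS by curling a node directly on the NodePort defined above and supplying the Host header (the node IP is a placeholder):

# no matching path -> default backend
curl -H "Host: testabc.com" http://<node-ip>:33200/

# /app1 rule -> appsvc1
curl -H "Host: testabc.com" http://<node-ip>:33200/app1

# /app2 rule -> appsvc2
curl -H "Host: testabc.com" http://<node-ip>:33200/app2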
Question:
3. If the ingress-controller pod is running on node3, how can it listen to the ingress API which is running on node1?
Q.1 How can I see /etc/nginx files since the ingress controller pod is not visible?
Answer: Whenever you install an Nginx ingress controller via Helm, it creates an entire Deployment for that ingress. This Deployment resides in the kube-system namespace, and all the pods bound to it reside in kube-system as well. So, if you want to attach to the container of one of these pods, you need to query that namespace and attach to the pod there. Listing the resources in kube-system will show the Nginx ingress Deployment and its pods.
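A few commands that show this (pod names depend on the Helm release, so they are placeholders):

# list the ingress controller deployment and pods in kube-system
kubectl get deployments -n kube-system
kubectl get pods -n kube-system

# attach to the controller pod and inspect the generated config
kubectl exec -n kube-system -it <nginx-ingress-controller-pod> -- cat /etc/nginx/nginx.conf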
Q.3 If the ingress-controller pod is running on node3, how can it listen to the ingress API which is running on node1?
Answer: Communication between pods and nodes takes place through Services in Kubernetes. A Service exposes a pod to every node using a NodePort, as well as internal and external endpoints. The Service is attached to the Deployment (the ingress deployment in this case) via labels (selectors) and is known throughout the cluster. So even if the controller pod is running on node3, the Service knows this through its endpoints and forwards incoming traffic to that pod, no matter which node the traffic arrives on.
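One way to see which pod (and therefore which node) a Service currently routes to, using the nginx-ingress Service from above:

# pod IPs the Service load-balances to
kubectl get endpoints nginx-ingress

# selector, NodePort and endpoints in one view
kubectl describe service nginx-ingress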