How does Traffic Flow inside a Kubernetes Cluster?

9/9/2018

(While learning Kubernetes I never really found any good resources explaining this)

Scenario:
I own mywebsite1.com and mywebsite2.com and I want to host them both inside a Kubernetes Cluster.

I deploy a generic cloud ingress controller according to the following guide, using two
kubectl apply -f <url> commands (one for mandatory.yaml and one for the generic cloud ingress.yaml):
https://kubernetes.github.io/ingress-nginx/deploy/
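Concretely, at the time of writing (2018) the two commands looked roughly like this — the exact manifest URLs on that page may have changed since, so check the guide itself:

```shell
# Installs the ingress controller deployment, RBAC rules, and configmaps
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

# Creates the cloud-generic Service that fronts the controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
```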

So the question is: what does that architecture look like, and how does data flow into the cluster?

-- neokyle
kubernetes
kubernetes-ingress
nginx-ingress

1 Answer

9/9/2018

I convert my 2 certificates into 2 .key and 2 .crt files.
I use those files to make 2 TLS secrets (one for each website, so both will have HTTPS enabled).
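Assuming the key/cert files are named as below (the file and secret names are mine, not from the original setup), the two secrets can be created like this:

```shell
# One TLS secret per website; the ingress objects will reference these by name
kubectl create secret tls website1-tls --key website1.key --cert website1.crt
kubectl create secret tls website2-tls --key website2.key --cert website2.crt
```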

I create 2 Ingress Objects:

  • one that says website1.com/, points to a service called website1fe, and references website1's HTTPS/TLS certificate secret.
    (The website1fe service only listens on port 80, and forwards traffic to pods spawned by a website1fe deployment)

  • the other says website2.com/, points to a service called website2fe, and references website2's HTTPS/TLS certificate secret.
    (The website2fe service only listens on port 80, and forwards traffic to pods spawned by a website2fe deployment)
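A sketch of what the first of those two Ingress objects might look like, using the extensions/v1beta1 API that was current in 2018 (the secret name website1-tls is an assumption from the earlier step; website2's object is identical apart from the names):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: website1
spec:
  tls:
  - hosts:
    - website1.com
    secretName: website1-tls    # the TLS secret created earlier
  rules:
  - host: website1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: website1fe   # ClusterIP service listening on port 80
          servicePort: 80
```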

I have a 3 Node Kubernetes Cluster that exists in a Private Subnet.
They have IPs

 10.1.1.10     10.1.1.11     10.1.1.12

When I ran the 2
kubectl apply -f <url> commands,
they generated:

  • An Ingress Controller deployment
  • An L7 Nginx LB Service of type ClusterIP that listens on ports 80 and 443
  • An L7 Nginx LB deployment that listens on ports 80 and 443
    (the pods in this deployment are managed/configured by the ingress controller pod, which configures them to the desired state specified by the ingress objects)
  • An L7 Nginx LB Service of type NodePort (the node ports are picked randomly from the range 30000-32767, but for clarity's sake I'll say the NodePort service is listening on ports 30080 and 30443)
  • An L4 LB VM with a public IP address.
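For reference, a NodePort service with the example ports above would look roughly like this (in practice the controller manifests create it for you, with random node ports; the name and selector here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx    # matches the L7 Nginx LB pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080       # exposed on every node's IP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
```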

kubectl get svc --all-namespaces
shows the public IPv4 address of the L4 LB (let's say it's 1.2.3.4).

Since I own both domains, I configure internet DNS so that website1.com and website2.com both point to 1.2.3.4.

Note: The ingress controller is cloud provider aware so it automatically did the following reverse proxy/load balancing configuration:

L4LB 1.2.3.4:80 --(LB between)--> 10.1.1.10:30080, 10.1.1.11:30080, 10.1.1.12:30080
L4LB 1.2.3.4:443 --(LB between)--> 10.1.1.10:30443, 10.1.1.11:30443, 10.1.1.12:30443

KubeProxy makes it so that a request arriving on any node's port 30080 or 30443 gets forwarded inside the cluster to the L7 Nginx LB Service of type ClusterIP, which then forwards the traffic to the L7 Nginx LB pods.
The L7 Nginx LB pods terminate the HTTPS connection and forward traffic to the website1fe and website2fe services, which listen on unencrypted port 80.
(It's OK that this leg is unencrypted because the traffic stays inside the cluster, where no one should be sniffing it.)
(The L7 LB knows which service to forward to based on the L7 address, i.e. the HTTP Host header / SNI name, that the traffic comes in on.)
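Under the hood, the ingress controller renders the ingress objects into an nginx.conf inside the L7 LB pods. The effective routing rule is roughly of this shape (heavily simplified; this is not the controller's actual generated config, and the cert paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name website1.com;           # matched against the SNI name / Host header
    ssl_certificate     website1.crt;   # material from the website1 TLS secret
    ssl_certificate_key website1.key;

    location / {
        # HTTPS is terminated here; the backend hop is plain HTTP inside the cluster
        proxy_pass http://website1fe.default.svc.cluster.local:80;
    }
}
```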


Note a mistake to avoid: let's say website1.com wants to access some resources that exist on website2.com.

Well, website2.com actually has 2 IP addresses and 2 DNS names:
website2fe.default.svc.cluster.local <-- inner-cluster resolvable DNS address
website2.com <-- externally resolvable DNS address

Instead of having website1 access resources via website2.com, you should have it access them via website2fe.default.svc.cluster.local. (It's more efficient routing: the traffic stays inside the cluster instead of leaving through the L4 LB and coming back in via a NodePort.)
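You can see the difference from inside any pod in the cluster (hypothetical commands, assuming the pod image ships curl):

```shell
# Stays entirely inside the cluster: resolves to the ClusterIP of website2fe
curl http://website2fe.default.svc.cluster.local/

# Leaves the cluster, hits the L4 LB at 1.2.3.4, and comes back in through a NodePort
curl https://website2.com/
```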

-- neokyle
Source: StackOverflow