I need help understanding kubernetes architecture best practices

1/23/2020

I have 2 nodes on GCP in a Kubernetes cluster, and I also have a load balancer in GCP. This is a self-managed cluster (not GKE). I am trying to expose my front-end service to the world, using nginx-ingress with a type: NodePort service as the solution. Where should my load balancer be pointing? Is this a good architecture approach?

world --> GCP-LB --> nginx-ingress-resource(GCP k8s cluster) --> services(pods)

To access my site, I would have to point the LB to the worker node IP where the nginx pod is running. Is this bad practice? I am new to this subject and trying to understand. Thank you.

deployservice:

apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    run: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: mycha-app

nginxservice:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  type: NodePort 
  ports: 
  - nodePort: 31000
    port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    run: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: nginx-ingress

nginx-resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
        - path: /
          backend:
            serviceName: mycha-service
            servicePort: 80

This configuration is not working.

-- Roberto Rios
architecture
kubernetes
nginx

2 Answers

2/7/2020

The best practice for exposing an application is:

World > LoadBalancer/NodePort (to get traffic into the cluster) > Ingress (mostly to route traffic) > Service

If you are using Google Cloud Platform, I would use GKE, as it is optimized for containers and configures many things automatically for you.
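As a sketch, standing up a GKE cluster takes a single command (the cluster name, node count and zone below are placeholders, not values from the question):

```shell
# Hypothetical values: adjust the name, size and zone to your project
gcloud container clusters create my-cluster \
  --num-nodes 2 \
  --zone europe-north1-a
```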

Regarding your issue, I also couldn't obtain an IP address for the LoadBalancer (it stayed in the <Pending> state); however, you can expose your application using a NodePort and the VMs' IPs. I will try a few other configurations to obtain an ExternalIP and will edit this answer.

Below is one example of how to expose your app using kubeadm on GCE.

On GCE, your VMs already have an ExternalIP. This way you can just use a Service of type NodePort and an Ingress to redirect traffic to the proper services.
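To find the nodes' external IPs, list the nodes in wide output; the EXTERNAL-IP column is the address used in the curl examples further down:

```shell
kubectl get nodes -o wide
```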

Deploy the Nginx Ingress controller using Helm 3, as Tiller is not required anymore ($ helm install nginx stable/nginx-ingress).
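For reference, the full Helm 3 steps might look like this (the stable repo URL is the one in use at the time of writing; the release name nginx matches the command above):

```shell
# Add the stable chart repo if it is not configured yet, then install
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install nginx stable/nginx-ingress
```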

By default it will deploy the controller's service with type LoadBalancer, but it won't get an external IP and will be stuck in the <Pending> state. You have to change it to NodePort and apply the changes.

$ kubectl edit svc nginx-nginx-ingress-controller

By default this opens the Vi editor. If you want another one, you need to specify it:

$ KUBE_EDITOR="nano" kubectl edit svc nginx-nginx-ingress-controller
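Alternatively, the service type can be changed non-interactively with a patch instead of an editor (same service name as above):

```shell
kubectl patch svc nginx-nginx-ingress-controller \
  -p '{"spec": {"type": "NodePort"}}'
```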

Now you can deploy the service, deployment and ingress.

apiVersion: v1
kind: Service
metadata:
  name: fs
spec:
  selector:
    key: app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fd
spec:
  replicas: 1
  selector:
    matchLabels:
      key: app
  template:
    metadata:
      labels:
        key: app
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind:  Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
        - name: mycha-container
          image: nginx
          ports:
          - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    app: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: mycha-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /mycha
        backend:
          serviceName: mycha-service
          servicePort: 80
      - path: /hello
        backend:
          serviceName: fs
          servicePort: 80

service/fs created
deployment.apps/fd created
deployment.apps/mycha-deploy created
service/mycha-service created
ingress.extensions/ingress created

$ kubectl get svc nginx-nginx-ingress-controller
NAME                             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
nginx-nginx-ingress-controller   NodePort   10.105.247.148   <none>        80:31143/TCP,443:32224/TCP   97m

Now you should use your VM's ExternalIP (a worker VM) with the port from the NodePort service. My VM's ExternalIP is 35.228.235.99, and the service ports are 80:31143/TCP,443:32224/TCP.

IMPORTANT

If you curl your VM on that port at this point, the connection will time out:

$ curl 35.228.235.99:31143
curl: (7) Failed to connect to 35.228.235.99 port 31143: Connection timed out

As you are doing this manually, you also need to add a firewall rule to allow traffic from outside on this specific port or range.

Information about creating firewall rules can be found in the GCP documentation.

If you set proper values (open the ports, set the source IP range to 0.0.0.0/0, etc.), you will be able to reach the service from your machine.
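As a sketch, such a rule can be created with gcloud (the rule name is a placeholder; the ports match the NodePort service output above):

```shell
# Hypothetical rule name; allow the two NodePorts from any source IP
gcloud compute firewall-rules create allow-ingress-nodeports \
  --allow tcp:31143,tcp:32224 \
  --source-ranges 0.0.0.0/0
```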

Curl from my local machine:

$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/mycha
<!DOCTYPE html>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-dq2d6
-- PjoterS
Source: StackOverflow

1/23/2020

When you use an ingress in front of your workload pods, the service for the workload pods will always be of type ClusterIP, because you are not exposing the pods directly outside the cluster. You do, however, need to expose the ingress controller outside the cluster, either with a NodePort type service or with a LoadBalancer type service; for production, a LoadBalancer type service is recommended.

This is the recommended pattern.

Client -> LoadBalancer -> Ingress Controller -> Kubernetes Pods
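As an illustration, the LoadBalancer service in front of the ingress controller could look like this (the names and labels here are placeholders, not values from the question):

```yaml
# Hypothetical manifest: expose the ingress controller via a cloud LB
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```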

An ingress controller avoids relying on kube-proxy and the plain load balancing that kube-proxy provides; you can configure layer 7 load balancing in the ingress itself.
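For example, a single ingress resource can route by host and path at layer 7 (the hosts, paths and service names below are made up for illustration):

```yaml
# Hypothetical layer 7 routing rules
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: l7-routing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 80
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80
```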

-- Arghya Sadhu
Source: StackOverflow