Ingress creating health check on HTTP instead of TCP

12/22/2021

I am trying to run three containers in my GKE cluster. They are exposed via a network load balancer, and on top of that I am using an Ingress so I can reach the services from different domains with SSL certificates on them.

Here is the complete manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: us-east4-docker.pkg.dev/web:e856485      # docker image
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: production
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cms
  template:
    metadata:
      labels:
        app: cms
    spec:
      containers:
      - name: cms
        image: us-east4-docker.pkg.dev/cms:4e1fe2f      # docker image
        ports:
        - containerPort: 8055
        env:
        - name: DB
          value: "postgres"
        - name: DB_HOST
          value: "10.142.0.3"
        - name: DB_PORT
          value: "5432"
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us-east4-docker.pkg.dev/api:4e1fe2f      # docker image
        ports:
        - containerPort: 8080
        env:
        - name: HOST
          value: "0.0.0.0"
        - name: PORT
          value: "8080"
        - name: NODE_ENV
          value: production
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: web
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: web
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: cms-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: cms
spec:
  ports:
  - port: 8055
    protocol: TCP
    targetPort: 8055
  selector:
    app: cms
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: api
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: api
  type: NodePort
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: web-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: cms-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: api-cert
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: api-cert
  - secretName: cms-cert
  - secretName: web-cert
  rules:
  - host: web-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: web-lb
            port:
              number: 3000
  - host: cms-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: cms-lb
            port:
              number: 8055
  - host: api-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: api-lb
            port:
              number: 8080

The containers are accessible through the network load balancer, but from the Ingress (L7 load balancer) the health checks are failing and the Ingress shows the services as unhealthy.

I tried editing the health checks manually from HTTP:80 to TCP:8080/8055/3000 for the three services, and it works.

But eventually the Ingress reverts them back to HTTP health checks and they fail again. I also tried using NodePort instead of LoadBalancer as the service type, but no luck. Any help?

-- Sohaib Mustafa
google-cloud-platform
google-kubernetes-engine
kubernetes
kubernetes-ingress

1 Answer

12/22/2021

The first thing I would like to mention is that you need to recheck your implementation. From what I see, you are creating an Ingress, which will create a load balancer, and this Ingress is using three services of type LoadBalancer, each of which will also create its own load balancer (I'm assuming the default behaviour, unless you applied the famous workaround of manually deleting the service's load balancer after it is created).

I don't think this is correct unless you need that design for some reason. So my suggestion is that you might want to change your service type to NodePort.


As for your question, what you are missing is the following:

You need to implement a BackendConfig with a custom health check configuration.

1- Create the BackendConfig:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTH_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    type: PROTOCOL
    requestPath: PATH
    port: PORT
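
For example, here is a minimal filled-in sketch for the api service. The interval and threshold numbers are illustrative, and it assumes the api container answers HTTP 200 on / at its serving port 8080; adjust requestPath and port to whatever your container actually serves:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15       # illustrative value
    timeoutSec: 5              # illustrative value
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP                 # protocol used by the probe
    requestPath: /             # path that must return 200 on the api container
    port: 8080                 # with the NEG annotation (container-native LB), this should match the pod's containerPort

Since your services carry the cloud.google.com/neg annotation (container-native load balancing), the health check port should match the serving pod's containerPort, i.e. 8080 for api, 8055 for cms, and 3000 for web.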

2- Use this config in your service(s):

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {
    "PORT_NAME_1":"api-lb-backendconfig"
    }}'
spec:
  ports:
  - name: PORT_NAME_1
    port: PORT_NUMBER_1
    protocol: TCP
    targetPort: TARGET_PORT
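
Applied to the api-lb service from the question, this would look roughly like the sketch below. The port name api-port is just an illustrative label; the only requirement is that the key inside the backend-config annotation matches the name of the Service port:

apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    # map the named port to the BackendConfig created in step 1
    cloud.google.com/backend-config: '{"ports": {"api-port":"api-lb-backendconfig"}}'
  labels:
    app: api
spec:
  ports:
  - name: api-port
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: api
  type: NodePort

The same pattern can be repeated for web-lb and cms-lb with their respective ports 3000 and 8055, each pointing at its own BackendConfig (or a shared one if the health check settings are identical).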

Once you apply these configurations, your Ingress's load balancer will be created using the BackendConfig "api-lb-backendconfig".


Consider this documentation page as your reference.

-- Atef Hares
Source: StackOverflow