Google Kubernetes Ingress health check always failing

11/5/2019

I have configured a web application pod exposed via Apache on port 80. I'm unable to configure a Service + Ingress to access it from the internet. The issue is that the backend services always report as UNHEALTHY.

Pod Config:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server

Service Config:

apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 80

Ingress Config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000

This results in backend services reporting as UNHEALTHY.

The health check settings:

Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE

Additional information: I've tried a different approach of exposing the deployment as a LoadBalancer service with an external IP, and that works perfectly. When using a NodePort + Ingress, the issue persists.

-- Praveen Selvam
google-cloud-platform
google-kubernetes-engine
kubernetes
kubernetes-ingress

1 Answer

11/5/2019

With GKE, the health check on the load balancer is created automatically when you create the Ingress. The firewall rules for that health check are created automatically as well.
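The firewall rule is usually there, but it is worth confirming. A quick way is to list the rules GKE created for the cluster; the name filter below is only an assumption about the naming convention, so adjust it if your rules are named differently:

# GKE-created firewall rules usually have names starting with "k8s-" or "gke-"
gcloud compute firewall-rules list --filter="name~k8s"

# The health check rule must allow TCP from Google's health check source ranges:
# 130.211.0.0/22 and 35.191.0.0/16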

Since you have no readinessProbe configured, the load balancer gets a default health check (the one you listed). To debug this properly, you need to isolate the point of failure.
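As a side note, if / on your app does not return a 200, you can give the container a readinessProbe before creating the ingress; GKE will generally derive the health check path from it. A minimal sketch of the webapp container with a probe added, assuming the path and timings below fit your app:

# Sketch: the webapp container from the Deployment above with a readinessProbe added.
# The path and timings are assumptions; point the probe at something that returns 200.
containers:
- image: asia.gcr.io/my-app/my-app:latest
  name: webapp
  ports:
  - containerPort: 80
    name: http-server
  readinessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 10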

First, make sure your pod is serving traffic properly:

kubectl exec [pod_name] -- wget localhost:80

If the application has curl built in, you can use that instead of wget. If the application has neither wget nor curl, skip to the next step.

  1. Get the following output and keep track of it:

    kubectl get po -l name=webapp -o wide
    kubectl get svc webapp-service

You need the pod IP and the service cluster IP from this output.

  2. SSH to a node in your cluster and run sudo toolbox bash (an example gcloud command is shown after this list)

  3. Install curl inside the toolbox:

    apt-get install curl

  4. Test the pod to make sure it is serving traffic within the cluster:

    curl -I [pod_IP]:80

This needs to return a 200 response.

  5. Test the service:

    curl -I [service_clusterIP]:80
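For step 2 above, one way to reach a node is with gcloud compute ssh; [node_name] and [zone] are placeholders you would take from kubectl get nodes and your cluster settings:

# Placeholders: take the node name from "kubectl get po -o wide" above and the
# zone from your cluster configuration
gcloud compute ssh [node_name] --zone [zone]
sudo toolbox bash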

If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.

If the pod is working but the service is not, there is an issue with the iptables routes, which are managed by kube-proxy, and that would point to a problem in the cluster.
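A rough way to check that, in case it helps: grep the NAT rules kube-proxy programs on the node for the service name, and confirm the Service actually has endpoints. The exact chain names will differ in your cluster:

# On the node: dump the NAT rules kube-proxy programs and look for the service
sudo iptables-save -t nat | grep webapp-service

# From your workstation: confirm the Service has endpoints (an empty list means
# the selector does not match any ready pods)
kubectl get endpoints webapp-service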

Finally, if both the pod and the service are working, the problem lies with the load balancer health checks, and that is something Google needs to investigate.
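Before opening a case, you can see what the load balancer itself reports by inspecting the backend service the ingress created; [BACKEND_SERVICE_NAME] is a placeholder you would take from the list command:

# Backend services created by a GKE ingress typically have names starting with "k8s-be-"
gcloud compute backend-services list

# Show the health status the load balancer records for each backend
gcloud compute backend-services get-health [BACKEND_SERVICE_NAME] --global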

-- Patrick W
Source: StackOverflow