Kubernetes ingress reports backend as unhealthy, even though all pods are running and ready

8/25/2017

I don't understand why one of the 'backends' is reported as unhealthy. How can I further diagnose this issue?

If I exec into the pod and make an HTTP request, I get an HTTP 200 response (after following an initial HTTP redirect).
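For reference, the in-pod check looked roughly like this (a sketch; the pod name is taken from the pod listing below, and the port is an assumption):

```
# Hypothetical check from inside the pod; -L follows the initial redirect,
# -I shows only headers. First response is a 301, the final one a 200.
kubectl exec -it wordpress-2676538145-61brl -- curl -sIL http://localhost:80/
```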

Output of kubectl describe ingress wordpress:

Name:           wordpress
Namespace:      default
Address:        ********
Default backend:    default-http-backend:80 (10.8.1.2:8080)
TLS:
  echoserver-tls terminates *****.ddns.net
Rules:
  Host              Path    Backends
  ----              ----    --------
  ****.ddns.net 
                    /.well-known/acme-challenge/*   kube-lego-gce:8080 (<none>)
                    /*              wordpress:80 (<none>)
Annotations:
  backends:         {"k8s-be-31077--b5d488621bcc5fc2":"UNHEALTHY","k8s-be-31508--b5d488621bcc5fc2":"HEALTHY","k8s-be-32128--b5d488621bcc5fc2":"HEALTHY"}
  url-map:          k8s-um-default-wordpress--b5d488621bcc5fc2
  forwarding-rule:      k8s-fw-default-wordpress--b5d488621bcc5fc2
  https-forwarding-rule:    k8s-fws-default-wordpress--b5d488621bcc5fc2
  https-target-proxy:       k8s-tps-default-wordpress--b5d488621bcc5fc2
  ssl-cert:         k8s-ssl-default-wordpress--b5d488621bcc5fc2
  static-ip:            k8s-fw-default-wordpress--b5d488621bcc5fc2
  target-proxy:         k8s-tp-default-wordpress--b5d488621bcc5fc2
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason  Message
  --------- --------    -----   ----            -------------   --------    ------  -------
  4h        5m      138 loadbalancer-controller         Normal      Service no user specified default backend, using system default

Output of kubectl get pods --all-namespaces:

    NAMESPACE     NAME                                                         READY     STATUS    RESTARTS   AGE
    default       nfs-server-9tfqh                                             1/1       Running   0          4h
    default       wordpress-2676538145-61brl                                   3/3       Running   0          45m
    default       wordpress-2676538145-fwz5w                                   3/3       Running   0          45m
    kube-lego     kube-lego-3839924375-2rr3l                                   1/1       Running   0          2h
    kube-system   fluentd-gcp-v2.0-dnt4m                                       2/2       Running   0          2d
    kube-system   fluentd-gcp-v2.0-zq6sl                                       2/2       Running   0          2d
    kube-system   heapster-v1.3.0-191291410-skhqp                              2/2       Running   0          2d
    kube-system   kube-dns-1829567597-82djj                                    3/3       Running   0          2d
    kube-system   kube-dns-1829567597-8s40h                                    3/3       Running   0          2d
    kube-system   kube-dns-autoscaler-2501648610-j7g91                         1/1       Running   0          2d
    kube-system   kube-proxy-gke-stagingwordpress-default-pool-a3dc998d-0m00   1/1       Running   0          2d
    kube-system   kube-proxy-gke-stagingwordpress-default-pool-a3dc998d-dg39   1/1       Running   0          2d
    kube-system   kubernetes-dashboard-490794276-s1tgl                         1/1       Running   0          2d
    kube-system   l7-default-backend-3574702981-tt9vt                          1/1       Running   0          2d

Within the GCP console's load balancing interface I see the following: (screenshot of the backend health status)

-- Chris Stryczynski
google-cloud-platform
kubernetes

1 Answer

8/25/2017

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites

According to that, the health-check endpoint must return an HTTP 200 response. I changed the 301 redirect response to a 200 response for the health check, and it is now working.
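On GKE, the GLBC ingress controller derives its health check from the container's HTTP readiness probe, so one way to meet the 200 requirement is to point the readiness probe at a path that does not redirect. A sketch (the container spec fragment and the probe path are assumptions, not from the original setup):

```yaml
# Readiness probe on the WordPress container; the health check created by
# the ingress controller will use this path, so it must return a plain 200,
# not a 301/302 redirect.
readinessProbe:
  httpGet:
    path: /wp-login.php   # assumed example path that returns 200
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 5
```

Alternatively, as done here, configure WordPress (or the web server in front of it) to answer the health-check request with a 200 instead of a redirect.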

-- Chris Stryczynski
Source: StackOverflow