Where are these Kubernetes health checks coming from?

11/6/2018

So I have deployments exposed behind a GCE ingress. On the deployment, I implemented a simple readinessProbe on a working path, as follows:

    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /claim/maif/login/?next=/claim/maif
        port: 8888
        scheme: HTTP
      initialDelaySeconds: 20
      periodSeconds: 60
      successThreshold: 1
      timeoutSeconds: 1

Everything works well: the first health check comes 20 seconds later and answers 200:

{address space usage: 521670656 bytes/497MB} {rss usage: 107593728 bytes/102MB} [pid: 92|app: 0|req: 1/1] 10.108.37.1 () {26 vars in 377 bytes} [Tue Nov  6 15:13:41 2018] GET /claim/maif/login/?next=/claim/maif => generated 4043 bytes in 619 msecs (HTTP/1.1 200) 7 headers in 381 bytes (1 switches on core 0)

But, just after that, I get tons of other requests from other health checks, on /:

{address space usage: 523993088 bytes/499MB} {rss usage: 109850624 bytes/104MB} [pid: 92|app: 0|req: 2/2] 10.132.0.14 () {24 vars in 277 bytes} [Tue Nov  6 15:13:56 2018] GET / => generated 6743 bytes in 53 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 515702784 bytes/491MB} {rss usage: 100917248 bytes/96MB} [pid: 93|app: 0|req: 1/3] 10.132.0.20 () {24 vars in 277 bytes} [Tue Nov  6 15:13:56 2018] GET / => generated 1339 bytes in 301 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 518287360 bytes/494MB} {rss usage: 103759872 bytes/98MB} [pid: 93|app: 0|req: 2/4] 10.132.0.14 () {24 vars in 277 bytes} [Tue Nov  6 15:13:58 2018] GET / => generated 6743 bytes in 52 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 518287360 bytes/494MB} {rss usage: 103837696 bytes/99MB} [pid: 93|app: 0|req: 3/5] 10.132.0.21 () {24 vars in 277 bytes} [Tue Nov  6 15:13:58 2018] GET / => generated 6743 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 523993088 bytes/499MB} {rss usage: 109875200 bytes/104MB} [pid: 92|app: 0|req: 3/6] 10.132.0.4 () {24 vars in 275 bytes} [Tue Nov  6 15:13:58 2018] GET / => generated 6743 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)

As I understand it, the documentation says:

The Ingress controller looks for a compatible readiness probe first, if it finds one, it adopts it as the GCE loadbalancer's HTTP(S) health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. This is an example of an Ingress that adopts the readiness probe from the endpoints as its health check.

But I don't understand this behaviour. How can I limit the health checks to just the one I defined on my deployment?
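For reference, the Service and Ingress in front of the deployment look roughly like this (names, labels and the port numbers other than 8888 are placeholders; the point is just the port wiring):

    apiVersion: v1
    kind: Service
    metadata:
      name: claim-svc            # placeholder name
    spec:
      type: NodePort             # GCE ingress backends are NodePort services
      selector:
        app: claim               # placeholder label
      ports:
      - port: 80
        targetPort: 8888         # same port as the readinessProbe
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: claim-ingress        # placeholder name
      annotations:
        kubernetes.io/ingress.class: gce
    spec:
      backend:
        serviceName: claim-svc
        servicePort: 80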

Thanks,

-- Simon Lacoste
google-kubernetes-engine
kubernetes
kubernetes-health-check
kubernetes-ingress

2 Answers

9/19/2019

You need to define ports in your deployment.yaml for the port numbers used in the readinessProbe, like:

    ports:
    - containerPort: 8888
      name: health-check-port
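
As a rough sketch (the image and names are placeholders), the container spec would then look something like this, with the probe's port also declared as a containerPort so the ingress controller can match the readiness probe to the serving port:

    spec:
      containers:
      - name: claim-app                      # placeholder name
        image: gcr.io/my-project/claim:1.0   # placeholder image
        ports:
        - containerPort: 8888
          name: health-check-port
        readinessProbe:
          httpGet:
            path: /claim/maif/login/?next=/claim/maif
            port: 8888
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 60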
-- suneetha
Source: StackOverflow

2/23/2019

Ok, so this very well may not work. I ran into a similar issue where my readiness probes were not being respected. I was able to edit this from the GCP console GUI. Search for 'healthcheck' and then find the health checks created by GKE for the service.

I was able to change mine to TCP, which made it work for some reason.

Worth a try. Personally, I ran into it when running a multi-region ingress, so my setup is likely different, but it still relies on GCE-Ingress.

-- Necevil
Source: StackOverflow