I have a web server listening on port 8080, and a GKE Ingress routing external traffic to a Service that points to the pod.
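Roughly, the setup looks like this (a minimal sketch; all names and labels here are placeholders, not my real objects):

```yaml
# Sketch of the Service/Ingress pair described above; names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web-server
spec:
  type: NodePort            # GKE Ingress backends are exposed as NodePort Services
  selector:
    app: web-server
  ports:
    - port: 80
      targetPort: 8080      # the container's listening port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:
    service:
      name: web-server
      port:
        number: 80
```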
My logs are littered with requests from 10.4.0.1:(some rotating high port) to "/", and they are eating CPU cycles because my web server generates HTML to answer each one. It looks like a health check probe.
My deployment has the following probe config for the pod:
```yaml
readinessProbe:
  httpGet:
    path: "/status"
    port: 8080
  initialDelaySeconds: 10
livenessProbe:
  httpGet:
    path: "/status"
    port: 8080
  initialDelaySeconds:
```
Though it looks like I may have missed something in the configuration (the livenessProbe's initialDelaySeconds is missing its value).
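To double-check what's actually applied, I've been inspecting the deployment (web-server is a placeholder for my deployment name); the kubelet probes show up as Liveness/Readiness lines:

```sh
# Show the probe settings currently applied to the pod template
kubectl describe deployment web-server | grep -iE 'liveness|readiness'

# Or inspect the live pod spec directly
kubectl get pods -l app=web-server -o yaml | grep -A5 -i 'probe:'
```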
I've used tcpdump -X port 8080 to examine the traffic. It looks as though the same source (10.4.0.1) is conducting both the status check at "/status" and a mysterious check at root ("/"), back to back. I suspect it's the kubelet, but I haven't found proof. The pod IP range is 10.4.0.0/14.
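One thing that helps tell the two callers apart in the capture is the HTTP User-Agent header. As far as I can tell, the kubelet identifies itself as kube-probe/<version>, while Google's health checker uses GoogleHC/1.0:

```sh
# Dump payloads as ASCII (line-buffered) so HTTP headers are readable,
# then pick out the User-Agent of each incoming request:
#   kube-probe/1.x -> kubelet readiness/liveness probe
#   GoogleHC/1.0   -> GCE load balancer health check
sudo tcpdump -l -A -nn port 8080 | grep -i 'user-agent'
```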
It also seems as though my new probe configuration took effect, but some default check at "/" was never removed.
After applying changes to the deployment, do I need to purge and restart the Service? The Ingress? The node? I'm new to Kubernetes and am lost.
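For reference, this is how I've been applying the changes (web-server is again a placeholder):

```sh
# Apply the edited manifest; changing the pod template spec
# triggers a rolling update of the pods on its own
kubectl apply -f deployment.yaml

# Watch the rollout complete
kubectl rollout status deployment/web-server
```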
Help of any kind is greatly appreciated!
To resolve the issue I had to change the configuration of a VM instance health check that is part of Google's Compute Engine API (it is created for the load balancer behind the Ingress). Setting its request path to "/status" did the trick. So, in short, there is a health check from both Kubernetes (the kubelet probes) and GCE (the load balancer health check), and the GCE one was the mystery caller at "/".
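For anyone hitting the same thing, this is roughly how I found and updated it from the CLI (the k8s-be-... name is a placeholder for the auto-generated one in your project):

```sh
# The Ingress controller creates a GCE health check with an
# auto-generated k8s-... name; find it first
gcloud compute http-health-checks list

# Point its request path at the real status endpoint instead of "/"
gcloud compute http-health-checks update k8s-be-30080--example \
    --request-path /status
```

Depending on the cluster version the check may live under gcloud compute health-checks instead, and from what I've read the Ingress controller derives the path from the readinessProbe when it first creates the check, so a manual edit may get reconciled away later.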