I'm trying to report custom metrics to Prometheus by exposing an HTTP "metrics" service (running in the same pod as my main service) as a k8s endpoint. But connection attempts from the Prometheus pod to my metrics endpoint are refused, even though I can reach my main service from the Prometheus pod using wget <mainservicename>:8010. It seems I've exposed the main service port, but something is blocking traffic to my metrics port on the same pod? HELP!
kubectl get svc mysvc
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
mysvc   LoadBalancer   10.106.36.79   localhost     8767:31285/TCP,8010:30953/TCP   3m23s
kubectl describe ep mysvc
Name:         mysvc
Namespace:    default
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2021-08-06T22:37:54Z
Subsets:
  Addresses:          10.1.18.170
  NotReadyAddresses:  <none>
  Ports:
    Name       Port  Protocol
    ----       ----  --------
    metrics    8767  TCP
    mysvcport  8010  TCP

Events:  <none>
Prometheus attempts to fetch metrics from the "metrics" endpoint, but reports: "Get "http://10.1.18.170:8767/metrics": dial tcp 10.1.18.170:8767: connect: connection refused"
I can confirm mysvc:8767 is not accessible from the Prometheus pod, but mysvc:8010 is!
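For reference, this is roughly how I tested it from the Prometheus pod (the pod name here is just a placeholder for my actual Prometheus pod):

kubectl exec -it prometheus-0 -- wget -qO- http://mysvc:8010            # responds
kubectl exec -it prometheus-0 -- wget -qO- http://mysvc:8767/metrics    # connection refused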
On mysvc's pod, I can reach my metrics service via localhost:8767 but not via mysvc:8767.
In that case, port 8767 is bound only to the Pod's loopback interface (127.0.0.1), not to the Pod's own network interface (its Pod IP), so it cannot be reached from other Pods.
You can verify this by doing an exec into the Pod and running something like:
netstat -tulpn
If it says 127.0.0.1:8767, the port is only exposed on the localhost interface and not accessible from outside the Pod.
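For example, output along these lines would confirm it (the values below are illustrative, not captured from your Pod):

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8010            0.0.0.0:*               LISTEN      1/mysvc
tcp        0      0 127.0.0.1:8767          0.0.0.0:*               LISTEN      8/metrics

Here port 8010 listens on all interfaces (0.0.0.0) and is reachable from other Pods, while port 8767 listens only on loopback, which matches exactly the symptoms you describe.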
To change this, you have to make sure the code of your metrics container binds the port as 0.0.0.0:8767 (or just :8767, or a similar notation that binds to all interfaces), so that the port is exposed on the Pod's own network interface.
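The question doesn't say what the metrics server is implemented in, so purely as an illustration: if it were Go using the official prometheus/client_golang library, the fix would look something like this (the port and /metrics route are taken from your output; everything else is assumed):

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    mux := http.NewServeMux()
    mux.Handle("/metrics", promhttp.Handler())

    // ":8767" is shorthand for "0.0.0.0:8767": listen on all interfaces,
    // so the port is reachable from other Pods (e.g. Prometheus).
    // "127.0.0.1:8767" would accept connections only from inside this Pod.
    log.Fatal(http.ListenAndServe(":8767", mux))
}

The same idea applies in any language or framework: the listen address must be 0.0.0.0 (or left unspecified), not 127.0.0.1 or localhost.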