Prometheus custom metric service discovery on k8s


I'm trying to report custom metrics to Prometheus by exposing an HTTP "metrics" service (running in the same pod as my main service) as a k8s endpoint. But connection attempts from the Prometheus pod to my metrics endpoint are refused, even though I can reach my main service from the Prometheus pod using wget <mainservicename>:8010. It seems I've exposed the main service port correctly, but something is blocking traffic to the metrics port on the same pod. HELP!

kubectl get svc mysvc
NAME    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                         AGE
mysvc   LoadBalancer   localhost                  8767:31285/TCP,8010:30953/TCP   3m23s
kubectl describe ep mysvc
Name:         mysvc
Namespace:    default
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2021-08-06T22:37:54Z
Subsets:
  NotReadyAddresses:  <none>
  Ports:
    Name       Port  Protocol
    ----       ----  --------
    metrics    8767  TCP
    mysvcport  8010  TCP

Events:  <none>

Prometheus attempts to fetch metrics from the "metrics" endpoint, but reports: "Get "": dial tcp connect: connection refused"

I can confirm mysvc:8767 is not reachable from the Prometheus pod, but mysvc:8010 is!

On mysvc's pod, I can reach my metrics service via localhost:8767 but not via mysvc:8767.

-- Rod LN

1 Answer


In that case, port 8767 is only bound to the Pod's localhost (loopback) interface, not to the Pod's public network interface.

You can verify this by doing an exec into the Pod and running something like:

netstat -tulpn

If it says 127.0.0.1:8767, the port is only exposed on the localhost interface and is not accessible from outside the Pod.
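For illustration, the netstat output might look something like this (addresses, PIDs, and process names are hypothetical, not taken from the question): a Local Address of 127.0.0.1:8767 means the listener is loopback-only, while 0.0.0.0:8010 means it listens on all interfaces.

```
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp        0      0 0.0.0.0:8010     0.0.0.0:*        LISTEN  1/mysvc
tcp        0      0 127.0.0.1:8767   0.0.0.0:*        LISTEN  12/metrics
```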

To change this, make sure the code in your metrics container binds the port as 0.0.0.0:8767, or :8767, or a similar notation that exposes the port on all of the Pod's network interfaces.

-- weibeld
Source: StackOverflow