I have a Kubernetes installation with nginx-proxy on the worker nodes. It looks like one nginx pod per node, deployed by kubespray from this manifest: https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes/node/templates/manifests/nginx-proxy.manifest.j2
root@d6422c83-bd13-11e9-aa68-fa163ee27044:~# kubectl -n kube-system get po -l "k8s-app=kube-nginx"
NAME                                               READY   STATUS    RESTARTS   AGE
nginx-proxy-d8ffb7c4-bd13-11e9-aa68-fa163ee27044   1/1     Running   1          40h
nginx-proxy-d901b40b-bd13-11e9-aa68-fa163ee27044   1/1     Running   3          40h
nginx-proxy-d9029362-bd13-11e9-aa68-fa163ee27044   1/1     Running   1          40h
The Nginx exporter requires a scrape-uri pointing at an exact Nginx endpoint: https://github.com/nginxinc/nginx-prometheus-exporter/blob/master/client/nginx.go#L13 And that exporter is deployed by a Helm chart:
The usual solution in such cases is to run a sidecar container with the exporter inside the application pod.
But here we can't do that, because nginx-proxy is managed by kubespray/Ansible while nginx-exporter is managed by a Helm chart.
I suppose it should look like this: one additional pod with nginx-exporter for each non-master node, with scrape-uri http://127.0.0.1:10800.
So somehow port 10800 should be reachable on the node's localhost for the nginx-exporter pod.
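To make the idea concrete, the exporter container in such a pod would be configured roughly like this (a sketch; the image tag and the /stub_status path are assumptions, only the 127.0.0.1:10800 address comes from the setup above):

containers:
- name: nginx-exporter
  # official exporter image; pin a concrete tag in real use
  image: nginx/nginx-prometheus-exporter
  args:
  # point the exporter at the node-local nginx-proxy; the /stub_status path is assumed
  - -nginx.scrape-uri=http://127.0.0.1:10800/stub_status
  ports:
  # default port on which the exporter serves /metrics
  - containerPort: 9113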
Any suggestions here?
If I have understood this case correctly, the sidecar was removed.
Please refer to the GitHub issue. If I am correct, please enable the controller metrics (see the values sketch after the table below):
metrics:
  enabled: true
| Parameter                  | Description                         | Default |
|----------------------------|-------------------------------------|---------|
| controller.metrics.enabled | if true, enable Prometheus metrics  | false   |
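If the chart follows the parameter layout from the table, the corresponding values override would be (a sketch; the exact nesting depends on the chart and version you install):

controller:
  metrics:
    enabled: true

That can be passed with -f values.yaml or --set controller.metrics.enabled=true when installing or upgrading the release.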
Please let me know if it helps.
Solved by using the hostNetwork: true option for both pods.
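For completeness, here is a minimal sketch of how the exporter side could be deployed as one pod per node with hostNetwork enabled; the DaemonSet name, labels, image tag and the /stub_status path are assumptions, while the namespace, the 127.0.0.1:10800 address and the hostNetwork setting follow the setup described above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  # hypothetical name; pick whatever fits your naming scheme
  name: nginx-proxy-exporter
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: nginx-proxy-exporter
  template:
    metadata:
      labels:
        k8s-app: nginx-proxy-exporter
    spec:
      # share the node's network namespace so the exporter can reach
      # nginx-proxy on 127.0.0.1:10800 without touching the static pod
      hostNetwork: true
      containers:
      - name: exporter
        image: nginx/nginx-prometheus-exporter
        args:
        # the /stub_status path is an assumption; adjust it to where the
        # status endpoint is actually exposed in the nginx-proxy config
        - -nginx.scrape-uri=http://127.0.0.1:10800/stub_status
        ports:
        # default exporter metrics port; exposed on the node because of hostNetwork
        - containerPort: 9113

A nodeSelector or node affinity would additionally be needed to keep the exporter off the master nodes, and whatever the Helm chart renders has to end up equivalent to this.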