Just installed the stable/prometheus chart with the values below. I'm able to access the server frontend from inside the pods, but not from the host's web browser.
My values.yaml:
alertmanager:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - localhost/alerts
server:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - localhost/prom
pushgateway:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - localhost/push
I use the nginx ingress controller. The Ingress resources get created, but for some unknown reason requests don't seem to be routed to the services.
Some data:
I'm able to access the server from the ingress pods (and all other pods) via both the short and the fully qualified DNS service names:
kubectl exec -it nginx-ingress-controller-5cb489cd48-t4dgv -- sh
/etc/nginx $ curl prometheus-server.default.svc.cluster.local
<a href="/graph">Found</a>
/etc/nginx $ curl prometheus-server
<a href="/graph">Found</a>
List of active ingresses created by the chart:
kubectl get ingress
NAME                      HOSTS       ADDRESS     PORTS   AGE
nginx-ingress             localhost   localhost   80      37h
prometheus-alertmanager   localhost   localhost   80      43m
prometheus-pushgateway    localhost   localhost   80      43m
prometheus-server         localhost   localhost   80      43m
List of active service resources:
kubectl get svc
NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kubernetes                      ClusterIP      10.96.0.1        <none>        443/TCP                      37h
nginx-deployment                ClusterIP      10.100.1.167     <none>        80/TCP                       37h
nginx-ingress-controller        LoadBalancer   10.109.57.131    localhost     80:32382/TCP,443:30669/TCP   36h
nginx-ingress-default-backend   ClusterIP      10.107.91.35     <none>        80/TCP                       36h
php-deployment                  ClusterIP      10.105.73.26     <none>        9000/TCP                     37h
prometheus-alertmanager         ClusterIP      10.97.89.149     <none>        80/TCP                       44m
prometheus-kube-state-metrics   ClusterIP      None             <none>        80/TCP,81/TCP                44m
prometheus-node-exporter        ClusterIP      None             <none>        9100/TCP                     44m
prometheus-pushgateway          ClusterIP      10.105.81.111    <none>        9091/TCP                     44m
prometheus-server               ClusterIP      10.108.225.187   <none>        80/TCP                       44m
On the other hand, if I declare a subdomain as the ingress host, Prometheus is accessible:
alertmanager:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - alerts.localhost
server:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - prom.localhost
pushgateway:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - push.localhost
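One caveat with the subdomain variant: not every resolver maps subdomains of localhost to the loopback address automatically (some browsers do; curl and most system resolvers generally don't). If name resolution fails on the host, explicit /etc/hosts entries should cover it — a minimal sketch, assuming the ingress controller is reachable on 127.0.0.1:

```
# /etc/hosts — map the ingress hostnames to the loopback address
127.0.0.1 alerts.localhost
127.0.0.1 prom.localhost
127.0.0.1 push.localhost
```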
Am I doing something wrong, or is there some sort of issue with this? Any suggestions?
Thanks in advance!
Version of Helm and Kubernetes: Helm 3.0.3 / Kubernetes 1.15.5 (Docker for Mac, macOS Catalina)
I reproduced your scenario, and after running some tests I understood that it's not going to work the way you want it to. This is not the right way to implement it.
Let's dive into it a bit.
You can add the nginx.ingress.kubernetes.io/rewrite-target annotation to your Ingresses, as in this example:
$ kubectl get ingress myprom-prometheus-pushgateway -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2020-02-18T09:51:32Z"
  generation: 1
  labels:
    app: prometheus
    chart: prometheus-10.4.0
    component: pushgateway
    heritage: Helm
    release: myprom
  name: myprom-prometheus-pushgateway
  namespace: default
  resourceVersion: "3239"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myprom-prometheus-pushgateway
  uid: 499372f4-52b1-4b37-982c-b52e70657d37
spec:
  rules:
  - host: localhost
    http:
      paths:
      - backend:
          serviceName: myprom-prometheus-pushgateway
          servicePort: 9091
        path: /push
status:
  loadBalancer:
    ingress:
    - ip: 192.168.39.251
After adding this, you will be able to access the services as you intended. Unfortunately, you are then going to face a new problem. If we inspect the HTML output of a curl command, we see this:
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="robots" content="noindex,nofollow">
<title>Prometheus Pushgateway</title>
<link rel="shortcut icon" href="/static/favicon.ico?v=793293bdadd51fdaca69de5bb25637b0f93b656b">
<script src="/static/jquery-3.4.1.min.js?v=793293bdadd51fdaca69de5bb25637b0f93b656b"></script>
<script src="/static/bootstrap-4.3.1-dist/js/bootstrap.min.js?v=793293bdadd51fdaca69de5bb25637b0f93b656b"></script>
<script src="/static/functions.js?v=793293bdadd51fdaca69de5bb25637b0f93b656b"></script>
<link type="text/css" rel="stylesheet" href="/static/bootstrap-4.3.1-dist/css/bootstrap.min.css?v=793293bdadd51fdaca69de5bb25637b0f93b656b">
<link type="text/css" rel="stylesheet" href="/static/prometheus.css?v=793293bdadd51fdaca69de5bb25637b0f93b656b">
<link type="text/css" rel="stylesheet" href="/static/bootstrap4-glyphicons/css/bootstrap-glyphicons.min.css?v=793293bdadd51fdaca69de5bb25637b0f93b656b">
</head>
As can be seen, the page references assets such as /static/jquery-3.4.1.min.js. Because these paths start with a slash, your browser resolves them against the host root rather than the /push prefix, so it requests them from a location that your ingress rules don't cover.
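The effect can be reproduced outside the cluster: relative-URL resolution treats a path starting with / as absolute against the host, not against the ingress path. A small sketch with the Python standard library (the URLs are illustrative):

```python
from urllib.parse import urljoin

# The page is served under the /push ingress path...
page = "http://localhost/push/"
# ...but its assets are referenced with absolute paths.
asset = "/static/jquery-3.4.1.min.js"

# The browser resolves the asset against the host root,
# dropping the /push prefix entirely:
print(urljoin(page, asset))
# http://localhost/static/jquery-3.4.1.min.js
```

That resolved URL never matches the /push rule, so the request falls through to the default backend and the assets 404.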
That's why I suggest you avoid path-based routing and stick to your second solution, which uses subdomains:
alertmanager:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - alerts.localhost
server:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - prom.localhost
pushgateway:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - push.localhost
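As a side note: if you really do need path-based routing for the Prometheus server, the stable/prometheus chart exposes values that make the server itself aware of the path prefix, so it renders its links and asset URLs under that slug instead of the root. A sketch (check the values.yaml of your chart version, as these keys may differ):

```
server:
  prefixURL: /prom                 # serve the UI and assets under this slug
  baseURL: http://localhost/prom   # external URL used in generated links
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - localhost/prom
```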