K8S - Check certificate validation with Prometheus

12/8/2019

I need to check certificate validity for a K8S cluster, e.g. to use Alertmanager to notify when a certificate is about to expire and send a suitable notification.

I found this repo, but I'm not sure how to configure it: what is the target, and how do I achieve this?

https://github.com/ribbybibby/ssl_exporter

which is based on the blackbox exporter:

https://github.com/prometheus/blackbox_exporter

- job_name: "ssl"
  metrics_path: /probe
  static_configs:
    - targets:
        - 127.0.0.1
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: 127.0.0.1:9219 # SSL exporter.

I want to check the current K8S cluster (where Prometheus is deployed) to see whether the certificate is valid or not. What should I put inside the target to make it work?

Do I need to expose something in the cluster?

Update: This is where our certificate is located in the system:

      tls:
        mode: SIMPLE
        privateKey: /etc/istio/bide-tls/tls.key
        serverCertificate: /etc/istio/bide-tls/tls.crt

My scenario is:

Prometheus and the ssl_exporter are in the same cluster, and the certificate they need to check is in the same cluster as well (see the config above).
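For that scenario, a scrape job might look like the sketch below. The target Service name (`istio-ingressgateway.istio-system.svc.cluster.local`) and the exporter address (`ssl-exporter:9219`) are assumptions, not taken from my cluster — substitute the in-cluster Service that actually serves the certificate above and the Service backing the ssl_exporter Pods:

```yaml
# Sketch only: probe the in-cluster HTTPS endpoint that serves tls.crt.
# "istio-ingressgateway.istio-system.svc.cluster.local" and "ssl-exporter:9219"
# are assumed names -- replace them with the Services in your cluster.
- job_name: "ssl"
  metrics_path: /probe
  static_configs:
    - targets:
        - istio-ingressgateway.istio-system.svc.cluster.local:443
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: ssl-exporter:9219 # Service backed by the ssl_exporter Pods
```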

-- Jhon D
kubernetes
prometheus
security
ssl-certificate

1 Answer

12/9/2019

What should I put inside the target to make it work?

I think the "Targets" section of the readme is clear: it contains the endpoints that you wish the exporter to report on:

static_configs:
  - targets:
      - kubernetes.default.svc.cluster.local:443
      - gitlab.com:443
relabel_configs:
  - source_labels: [__address__]
    target_label: __param_target
  - source_labels: [__param_target]
    target_label: instance
  - target_label: __address__
    # rewrite to contact the SSL exporter
    replacement: 127.0.0.1:9219

Do I need to expose something in the cluster?

That depends on whether you want to report on internal certificates, and on whether the ssl_exporter can reach the endpoints you want. For example, in the snippet above, I used the KubeDNS name kubernetes.default.svc.cluster.local with the assumption that ssl_exporter is running as a Pod within the cluster. If that doesn't apply to you, then you would want to change that endpoint to be k8s.my-cluster-dns.example.com:6443, or wherever your Kubernetes API is listening such that your kubectl can reach it.

Then, in the same vein, if both prometheus and your ssl_exporter are running inside the cluster, you would change replacement: to be the Service IP address that is backed by your ssl_exporter Pods. If prometheus is outside the cluster and ssl_exporter is inside the cluster, then you'll want to create a Service of type: NodePort so you can point your prometheus at one (or all?) of the Node IP addresses and the NodePort upon which ssl_exporter is listening.
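As a sketch of that last wiring — every name here (the monitoring namespace, the app: ssl-exporter selector, the nodePort number) is an assumption to be adapted to your setup, not something from the question:

```yaml
# Sketch: expose ssl_exporter to a prometheus running outside the cluster.
# All names (namespace, labels, port numbers) are assumed -- adjust to your setup.
apiVersion: v1
kind: Service
metadata:
  name: ssl-exporter
  namespace: monitoring
spec:
  type: NodePort            # use ClusterIP instead if prometheus is in-cluster
  selector:
    app: ssl-exporter       # must match the labels on the ssl_exporter Pods
  ports:
    - port: 9219            # exporter's default listening port
      targetPort: 9219
      nodePort: 30219       # point prometheus at <node-ip>:30219
```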

The only time one would use the literal 127.0.0.1:9219 is if prometheus and the ssl_exporter are running on the same machine or in the same Pod, since that's the only way 127.0.0.1 is meaningful from prometheus's point of view.
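Coming back to the original goal of Alertmanager notifications before expiry: ssl_exporter exposes an ssl_cert_not_after metric (expiry as epoch seconds), so a rule along these lines would fire 30 days ahead — verify the metric name against your exporter version before relying on it:

```yaml
# Sketch of an expiry alert; confirm that your ssl_exporter version exposes
# ssl_cert_not_after (epoch seconds) before relying on this rule.
groups:
  - name: ssl-expiry
    rules:
      - alert: SSLCertExpiringSoon
        expr: ssl_cert_not_after - time() < 86400 * 30
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Certificate for {{ $labels.instance }} expires in under 30 days"
```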

-- mdaniel
Source: StackOverflow