I'm trying to install Prometheus on my EKS cluster, using the default Prometheus Helm chart located at https://github.com/kubernetes/charts/tree/master/stable/prometheus. It deploys successfully, but in the Kubernetes Dashboard the Alertmanager and Server deployments report:
pod has unbound PersistentVolumeClaims (repeated 3 times)
I've tried tinkering with the values.yaml file to no avail.
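In case it matters, the persistence-related settings I've been poking at look roughly like this (a trimmed sketch of my values.yaml; the key names are what I believe the chart expects, so take them with a grain of salt):

server:
  persistentVolume:
    enabled: true
    size: 8Gi
    # storageClass: ""   # left unset, so whatever the cluster default is gets used
alertmanager:
  persistentVolume:
    enabled: true
    size: 2Gi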
I know this isn't much to go on, but I'm not really sure what else I can look up when it comes to logging.
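In case it helps, the only other thing I've thought to check is the claims themselves (prometheus-server and prometheus-alertmanager, going by the install output below):

kubectl get pvc --namespace prometheus
kubectl describe pvc prometheus-server --namespace prometheus
kubectl describe pvc prometheus-alertmanager --namespace prometheus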
Here's the output from running helm install stable/prometheus --name prometheus --namespace prometheus:
root@fd9c3cc3f356:~/charts# helm install stable/prometheus --name prometheus --namespace prometheus
NAME: prometheus
LAST DEPLOYED: Wed Jun 20 14:55:41 2018
NAMESPACE: prometheus
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/ClusterRole
NAME AGE
prometheus-kube-state-metrics 1s
prometheus-server 1s
==> v1/ServiceAccount
NAME SECRETS AGE
prometheus-alertmanager 1 1s
prometheus-kube-state-metrics 1 1s
prometheus-node-exporter 1 1s
prometheus-pushgateway 1 1s
prometheus-server 1 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
prometheus-alertmanager Pending 1s
prometheus-server Pending 1s
==> v1beta1/ClusterRoleBinding
NAME AGE
prometheus-kube-state-metrics 1s
prometheus-server 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-alertmanager ClusterIP 10.100.3.32 <none> 80/TCP 1s
prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 1s
prometheus-node-exporter ClusterIP None <none> 9100/TCP 1s
prometheus-pushgateway ClusterIP 10.100.243.103 <none> 9091/TCP 1s
prometheus-server ClusterIP 10.100.144.15 <none> 80/TCP 1s
==> v1beta1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
prometheus-node-exporter 3 3 2 3 2 <none> 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
prometheus-alertmanager 1 1 1 0 1s
prometheus-kube-state-metrics 1 1 1 0 1s
prometheus-pushgateway 1 1 1 0 1s
prometheus-server 1 1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
prometheus-node-exporter-dklx8 0/1 ContainerCreating 0 1s
prometheus-node-exporter-hphmn 1/1 Running 0 1s
prometheus-node-exporter-zxcnn 1/1 Running 0 1s
prometheus-alertmanager-6df98765f4-l9vq2 0/2 Pending 0 1s
prometheus-kube-state-metrics-6584885ccf-8md7c 0/1 ContainerCreating 0 1s
prometheus-pushgateway-5495f55cdf-brxvr 0/1 ContainerCreating 0 1s
prometheus-server-5959898967-fdztb 0/2 Pending 0 1s
==> v1/ConfigMap
NAME DATA AGE
prometheus-alertmanager 1 1s
prometheus-server 3 1s
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.prometheus.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9090
The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.prometheus.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9093
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.prometheus.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace prometheus port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
It turns out that EKS clusters are not created with any persistent storage enabled. As the Amazon EKS documentation puts it:
Amazon EKS clusters are not created with any storage classes. You must define storage classes for your cluster to use and you should define a default storage class for your persistent volume claims.
This guide explains how to add a Kubernetes StorageClass to an EKS cluster.
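For reference, the StorageClass I ended up applying looks roughly like the example in that guide (gp2-backed EBS volumes via the in-tree kubernetes.io/aws-ebs provisioner; the is-default-class annotation is what lets the chart's PVCs bind without naming a class explicitly, and storageclass.yaml is just whatever filename you save it under):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete

kubectl apply -f storageclass.yaml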
After adding the StorageClass as instructed, deleting my Prometheus release with helm delete prometheus --purge, and re-installing the chart, all of my pods are now fully functional.
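For anyone hitting the same thing, a quick sanity check after the re-install is that both claims now show as Bound:

kubectl get pvc --namespace prometheus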