I created a 30 GiB EBS volume and made two manifest files:
In pv-ebs.yml:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  awsElasticBlockStore:
    fsType: ext4
    # The EBS volume ID
    volumeID: vol-111222333aaabbbccc
```
In pvc-ebs.yml:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-prometheus-alertmanager
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      release: "stable"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-prometheus-server
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      release: "stable"
```
Then I installed the chart with Helm:

```shell
helm install --name prometheus stable/prometheus
```
But on the k8s dashboard I got these messages:

```
prometheus-prometheus-alertmanager-3740839786-np7kb
No nodes are available that match all of the following predicates:: NoVolumeZoneConflict (2).

prometheus-prometheus-server-3176041168-m3w2g
PersistentVolumeClaim is not bound: "prometheus-prometheus-server" (repeated 2 times)
```
Is there anything wrong with my method?
If you installed your cluster with kops, the PVs will be created for you automatically. Just wait a few minutes and refresh your screen; the errors will go away.
If you set up your cluster another way, you'll want to create your volumes with `aws ec2 create-volume`, then create the PVs; when Helm runs, it will claim those PVs.
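A minimal sketch of that flow, assuming the AWS CLI is configured; the availability zone, size, and tag values here are illustrative, not taken from the question:

```shell
# Create a 30 GiB gp2 volume in the same AZ as your worker nodes
# (us-east-1a is an example zone; substitute your own).
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --size 30 \
  --volume-type gp2 \
  --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=prometheus-data}]'

# Copy the VolumeId from the output into the PV manifest, then:
kubectl apply -f pv-ebs.yml
kubectl apply -f pvc-ebs.yml
```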
When an EBS volume is created, it is provisioned in a particular AZ, and it cannot be mounted across zones. If no nodes are available in the same zone to schedule the pod, the pod will not start.
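One way to check for a zone mismatch (a sketch; `failure-domain.beta.kubernetes.io/zone` is the legacy zone label used by clusters of this vintage, and the volume ID is the placeholder from the manifests above):

```shell
# Show which AZ each node is in
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone

# Show which AZ the EBS volume is in
aws ec2 describe-volumes \
  --volume-ids vol-111222333aaabbbccc \
  --query 'Volumes[0].AvailabilityZone'
```

If the two zones differ, the scheduler reports exactly the NoVolumeZoneConflict predicate failure shown in the question.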
Another thing: with a properly configured kube cluster, you should not need to create PVs on your own at all. Just create the PVC and let dynamic provisioning do its thing.
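For dynamic provisioning, that would look something like this (a sketch; the `gp2` class name and the `kubernetes.io/aws-ebs` provisioner are the usual defaults on AWS clusters of this era, and many clusters ship with such a class already):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# A PVC that references the class: no selector and no pre-created PV needed;
# the provisioner creates an EBS volume in a zone with schedulable nodes.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-prometheus-server
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```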