When I create a PersistentVolumeClaim, it dynamically creates an EBS volume together with a PersistentVolume on EKS.
I'm trying to create a new PersistentVolume manually and bind it to a new PersistentVolumeClaim, but once I create it, it does not create an EBS volume.
Where is the PersistentVolume created?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
  labels:
    type: storage1
    app: rabbitmq1
spec:
  claimRef:
    namespace: default
    name: pvc1
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: "/etc/rabbitmq"
  storageClassName: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    type: storage1
    app: rabbitmq1
  name: pvc1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
I'm trying to create a new PersistentVolume manually and bind it to a new PersistentVolumeClaim, but once I create it, it does not create an EBS volume.
As far as I understand you, you want to provision your storage manually, so you cannot expect that an EBS volume, which is an AWS-specific storage type, will be created at the same time.
Look, what you defined in your yaml manifests is manual provisioning using your node's local storage and has nothing to do with EBS. It seems to me that you are confusing two concepts: manual and dynamic storage provisioning. Let's clarify it a bit. You can choose one of two different paths and use either manual provisioning or dynamic provisioning, but not both at the same time.
The first path, as @Anton Kostenko suggested in his answer, is to give up on using your local node storage: delete the mentioned hostPath fragment from your manifest and let Kubernetes and AWS with its EBS do it for you dynamically. You only need to define a PersistentVolumeClaim with the proper storageClassName, and the PV will be provisioned automatically.
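A minimal sketch of such a claim, assuming the gp2 StorageClass that EKS provides by default (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ebs            # illustrative name
spec:
  storageClassName: gp2    # assumes the default gp2 StorageClass on EKS
  accessModes:
    - ReadWriteOnce        # an EBS volume can be attached to a single node only
  resources:
    requests:
      storage: 1Gi

Once this claim is created (and, depending on the StorageClass's volumeBindingMode, possibly once a Pod starts using it), the provisioner behind gp2 creates a matching EBS volume and PersistentVolume and binds them to the claim.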
The second path is to follow your original idea of creating the PersistentVolume manually using hostPath. In this case you need to set storageClassName to manual in both the PersistentVolume (which, unlike in the first path, is defined manually by you) and the PersistentVolumeClaim, as in this example from the official Kubernetes documentation. I've just checked it and it works perfectly. It's important to use the same storage class so the PV and PVC can be bound together.
If you decide to choose the second path, your particular yaml manifests will look like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
  labels:
    type: storage1
    app: rabbitmq1
spec:
  claimRef:
    namespace: default
    name: pvc1
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: "/etc/rabbitmq"
  storageClassName: manual
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    type: storage1
    app: rabbitmq1
  name: pvc1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Once you put it in a file, let's say storage.yaml, simply issue the following command:
kubectl apply -f storage.yaml
And in a while both your pv and pvc will be created and you should see their status as Bound:
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
pv1    1Gi        RWX            Retain           Bound    default/pvc1   manual                  33s

$ kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1   Bound    pv1      1Gi        RWX            manual         38s
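For completeness, a minimal sketch of a Pod that mounts this claim (the Pod name, image and mount path are illustrative, not part of the original manifests):

apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq-test                  # illustrative name
spec:
  containers:
    - name: rabbitmq
      image: rabbitmq:3                # illustrative image
      volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq # illustrative mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc1                # the claim defined above

Any Pod that references the claim this way will get the hostPath directory (or, in the dynamic case, the EBS volume) mounted at the given path.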
You created a hostPath volume; that is what this part of your spec defines:

hostPath:
  path: "/etc/rabbitmq"

Just remove that part and K8s will create a new PV with an EBS backend.
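If you go the dynamic route, a quick way to check which StorageClasses your cluster offers (and which one is the default that the provisioner will use) is:

$ kubectl get storageclass

On a typical EKS cluster this lists a gp2 class marked as (default), which is what backs the dynamically provisioned EBS volumes.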