I am working on an application on Kubernetes in GCP and I need a really large SSD volume for it.
So I created a StorageClass resource, a PersistentVolumeClaim that requests 500Gi of space, and then a Deployment resource.
StorageClass.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
PVC.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-volume
spec:
  storageClassName: faster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
Deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - mountPath: /data/db
              name: mongo-volume
      volumes:
        - name: mongo-volume
          persistentVolumeClaim:
            claimName: mongo-volume
When I applied the PVC, it got stuck in the Pending state for hours. I found out experimentally that it binds correctly with at most 200Gi of requested storage.
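For reference, the reason a claim stays Pending is usually visible in its events, so this is how I checked (using the claim name from the manifest above); on a provisioning failure the provisioner tends to record a ProvisioningFailed event there:

# Inspect the claim; the Events section at the bottom shows why binding failed
kubectl describe pvc mongo-volume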
However, I can create several 200Gi PVCs. Is there a way to bind them to one path so they work as one big volume in Deployment.yaml? Or maybe the 200Gi limit can be raised?
I have just tested this in my own environment and it works fine, so the problem is in your quotas.
To check, go to:
IAM & admin -> Quotas -> Compute Engine API -> Persistent Disk SSD (GB) in your region, and look at the amount you have already used.
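You can see the same numbers from the CLI. A minimal sketch, assuming the disks live in us-central1 (replace with your own region); SSD_TOTAL_GB is the quota metric that pd-ssd disks count against:

# Show the limit and current usage for the regional pd-ssd quota
gcloud compute regions describe us-central1 --format="yaml(quotas)" \
  | grep -B1 -A1 "SSD_TOTAL_GB"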
I reproduced the situation by running out of quota, and my PVC got stuck in the Pending state exactly like yours. It happens because each PVC you create counts against that quota, 500GB at a time.
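As for growing past 200Gi: request a higher limit from the Quotas page (or free up unused SSD disks) and a single large PVC will bind; there is no need to stitch several PVCs into one path. As a sketch, assuming a cluster version where GCE PD volume expansion is available, you can also let an existing claim be resized in place later by opting the StorageClass into expansion:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
# Permits editing spec.resources.requests.storage upward on a bound PVC later
allowVolumeExpansion: true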