I have the following PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi
  storageClassName: fask
and Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: "/var/www/html"
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nginx-pvc
If I run the Deployment with a single replica, my PV gets dynamically created by the vSphere StorageClass.
However, as soon as there is a second replica, its pod fails because the same volume cannot be attached to another node:
AttachVolume.Attach failed for volume "pvc-8facf319-6b1a-11e8-935b-00505680b1b8" : Failed to add disk 'scsi0:1'.
Unable to mount volumes for pod "nginx-deployment-7886f48dcd-lzms8_default(b0e38764-6b1a-11e8-935b-00505680b1b8)": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx-deployment-7886f48dcd-lzms8". list of unmounted volumes=[data]. list of unattached volumes=[data default-token-5q7kr]
You should probably use a StatefulSet with volumeClaimTemplates instead of a Deployment with a PersistentVolumeClaim.
In your case every replica of the Deployment uses the same PersistentVolumeClaim (which is ReadWriteOnce and therefore cannot be attached a second time), while with volumeClaimTemplates a separate claim is provisioned for each replica.
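As a rough sketch, reusing the names from your manifests (the headless Service name "nginx" and keeping your "fask" StorageClass and 256Mi request are assumptions; adjust them for your cluster), a StatefulSet could look like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: nginx        # assumes a headless Service named "nginx" exists
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: "/var/www/html"
  volumeClaimTemplates:
    - metadata:
        name: data           # each replica gets its own PVC named data-nginx-<ordinal>
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: fask
        resources:
          requests:
            storage: 256Mi

The controller then creates a claim per pod (data-nginx-0, data-nginx-1, and so on), each backed by its own dynamically provisioned vSphere volume, so no two pods compete for the same ReadWriteOnce disk.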