I'm not sure why the persistent volume isn't being claimed, or what steps I could take to diagnose this further.
Should the claim size match the volume size? Should the volume size match the size of the underlying GCE persistent disk?
This is so difficult to test and figure out...
My goal here is just to be able to create a WordPress instance, even with a single replica, as long as it supports rolling deployments...
Output of kubectl get pods:
NAME READY STATUS RESTARTS AGE
wordpress-1546832918-mz4rt 0/3 Pending 0 47m
wordpress-1546832918-p0s1s 0/3 Pending 0 47m
Output of kubectl describe pods:
...truncated...
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
47m 3s 168 default-scheduler Warning FailedScheduling [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected.]
Output of kubectl get pvc:
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
task-pv-claim Pending manual 4h
Output of kubectl get pv:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0001 10Gi RWX Retain Available manual 4h
production.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - image: eu.gcr.io/abcxyz/wordpress:deploy-1502807720
        name: wordpress
        imagePullPolicy: "Always"
        env:
        - name: WORDPRESS_HOST
          value: localhost
        - name: WORDPRESS_DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      - image: eu.gcr.io/abcxyz/nginx:deploy-1502807720
        name: nginx
        imagePullPolicy: "Always"
        ports:
        - containerPort: 80
          name: nginx
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
          readOnly: true
      - image: gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=abcxyz:europe-west1:wordpressdb2=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: "task-pv-claim"
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir: {}
pVolume.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
spec:
storageClassName: manual
capacity:
storage: "10Gi"
accessModes:
- "ReadWriteMany"
gcePersistentDisk:
fsType: "ext4"
pdName: "wordpress-disk"
pVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
The spec.accessModes of your persistent volume claim has to match the one in the persistent volume: here the claim requests ReadWriteOnce while the volume only offers ReadWriteMany. Try changing both of them to the same value.
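For example, a minimal sketch of the claim with its access mode aligned to the volume (assuming you keep ReadWriteMany on the volume side, with everything else unchanged) would be:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany # same value as in pVolume.yaml
  resources:
    requests:
      storage: 3Gi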
If that doesn't work, you can add a spec.selector definition to your persistent volume claim, making it match your persistent volume's metadata.labels, like this:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
labels:
name: "pv0001" # can be anything as long as it matches the selector in the pvc
spec:
...
----
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany # keep this in line with the volume's access mode
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      name: "pv0001"
The spec.selector serves as a filter, ensuring that only PVs with the specified labels can be matched to this claim.
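One way to check the result (a sketch of the commands, assuming the file names used above; most PVC spec fields are immutable, so the claim is deleted and recreated rather than edited in place):
kubectl delete pvc task-pv-claim
kubectl apply -f pVolumeClaim.yaml
kubectl get pvc task-pv-claim   # STATUS should now read Bound instead of Pending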