I created two PersistentVolumeClaims (one for Redis, one for persistent logs) and tried to mount both in a single deployment, but after creating the deployment I get the following error:

```
nodes are available: 3 node(s) didn't match node selector, 4 node(s) had no available volume zone.
```

However, as soon as I remove one PVC from the deployment YAML file, it works fine. I am running on Google Cloud Platform using Google Kubernetes Engine.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-log
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 20Gi
  storageClassName: standard
```
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-redis
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 20Gi
  storageClassName: standard
```
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: 'prod-deployment'
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: foo
        release: canary
        environment: production
    spec:
      containers:
        - name: api-server
          image: 'foo:latest'
          volumeMounts:
            - mountPath: /logs
              name: log-storage
        - name: redis
          image: 'redis'
          volumeMounts:
            - mountPath: /data
              name: redis-data
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: pvc-redis
        - name: log-storage
          persistentVolumeClaim:
            claimName: pvc-log
```
The pod is rejected by the scheduler because of the "NoVolumeZoneConflict" predicate; here is the declaration: https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/algorithm/predicates/predicates.go#L564 (I could not find better documentation, but the comment in the code explains it clearly). In short, all zonal volumes used by a pod must be in a zone where the pod can be scheduled; your two PVCs were provisioned in different zones, so no single node satisfies both.
And as Rico said, you have to restrict the zone of the volumes used by a pod, either via the StorageClass or directly on the PV (the latter is possible but not recommended).
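For completeness, here is a minimal sketch of the PV-level approach mentioned above: a pre-provisioned GCE persistent disk pinned to one zone via the zone label, which the NoVolumeZoneConflict predicate matches against node labels. The PV name, disk name, and zone are placeholders for illustration; the disk must already exist in that zone.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log   # placeholder name
  labels:
    # pin this volume to a single zone; the scheduler will only
    # place pods using it on nodes carrying the same zone label
    failure-domain.beta.kubernetes.io/zone: us-central1-a
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-existing-disk   # placeholder: an existing PD in us-central1-a
    fsType: ext4
```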
This is similar to this question. It's most likely due to a PVC being provisioned in an availability zone where you don't have any node. You can try restricting the standard StorageClass to just the availability zones where you have Kubernetes nodes. Something like this:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-central1-a
          - us-central1-b
```
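If your cluster runs Kubernetes 1.12 or later, another option (a suggestion, not part of the original answer) is topology-aware volume binding: with `volumeBindingMode: WaitForFirstConsumer`, provisioning is delayed until a pod using the PVC is scheduled, so each disk is created in the zone of the node the pod lands on and both volumes end up in the same zone. A sketch, with a placeholder class name:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard-topology-aware   # placeholder name; reference it from the PVCs
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
# delay provisioning until pod scheduling, so the disk is created
# in the zone of the chosen node
volumeBindingMode: WaitForFirstConsumer
```

Note that most StorageClass fields are immutable, so to change the default `standard` class you would have to delete and recreate it, or create a new class and reference it from your PVCs.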