Kubernetes pod failing with Invalid Volume Zone mismatch

10/12/2021

I have a Jenkins service deployed in EKS v1.16 using a Helm chart. The PV and PVC were accidentally deleted, so I recreated them as follows:

Pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-vol
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-2b/vol-xxxxxxxx
  capacity:
    storage: 120Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: jenkins-ci
    namespace: ci
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  volumeMode: Filesystem
status:
  phase: Bound

PVC.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-ci
  namespace: ci
spec:
  storageClassName: gp2
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 120Gi
  volumeMode: Filesystem
  volumeName: jenkins-vol
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 120Gi
  phase: Bound

kubectl describe sc gp2

Name:            gp2
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2","namespace":""},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,type=gp2
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

The issue I'm facing is that the pod does not run when it's scheduled on a node in a different availability zone than the EBS volume. How can I fix this?

-- DevopsinAfrica
amazon-eks
amazon-web-services
kubernetes
kubernetes-helm
persistent-volumes

2 Answers

10/12/2021

Add a nodeSelector to your deployment file so the pod is scheduled onto a node in the availability zone where the volume lives (in your case us-east-2b):

  nodeSelector:
    topology.kubernetes.io/zone: us-east-2b
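
Note: on Kubernetes v1.16 nodes the zone label may still be the legacy failure-domain.beta.kubernetes.io/zone; the stable topology.kubernetes.io/zone label only became standard on nodes in later releases, so check with kubectl get nodes --show-labels. As a minimal sketch of where the selector sits in the Deployment's pod template (the names and image below are placeholders, not taken from your chart):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins          # placeholder name
  namespace: ci
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      # Pin the pod to the AZ that holds the EBS volume
      nodeSelector:
        topology.kubernetes.io/zone: us-east-2b
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts   # placeholder image

If you deploy through the Jenkins Helm chart, the chart typically exposes this as a nodeSelector value in values.yaml, so you can set it there instead of editing the Deployment directly.
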
-- paltaa
Source: StackOverflow

3/16/2022

Add the following labels to the PersistentVolume (note that the region label takes the region, us-east-2, not the zone):

  labels:
    failure-domain.beta.kubernetes.io/region: us-east-2
    failure-domain.beta.kubernetes.io/zone: us-east-2b

Example:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "1000"
  labels:
    failure-domain.beta.kubernetes.io/region: us-east-2
    failure-domain.beta.kubernetes.io/zone: us-east-2b
  name: test-pv-1
spec:
  accessModes:
  - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com
    fsType: xfs
    volumeHandle: vol-0d075fdaa123cd0e
  capacity:
    storage: 100Gi
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem

With the above labels, the pod will automatically be scheduled in the same AZ as the volume.
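
As an alternative sketch (not from the original answer; it assumes the EBS CSI driver, which labels nodes with its own topology key), the same pinning can be expressed as node affinity on the PV itself, which the scheduler enforces for CSI volumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv-1
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  csi:
    driver: ebs.csi.aws.com
    fsType: xfs
    volumeHandle: vol-0d075fdaa123cd0e
  # Restrict the PV to nodes in the volume's AZ
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.ebs.csi.aws.com/zone   # label set by the EBS CSI node driver
          operator: In
          values:
          - us-east-2b
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem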

-- GZU5
Source: StackOverflow