Force PersistentVolumeClaim and Deployment to land in same availability zone

3/23/2019

I have a Kubernetes cluster in AWS with EC2 worker nodes in the following AZs, along with a corresponding PersistentVolume in each AZ.

us-west-2a
us-west-2b
us-west-2c
us-west-2d

My problem is that I want to create a Deployment with a volume mount that references a PersistentVolumeClaim and guarantee that they land in the same AZ. Right now it is down to luck whether the Deployment and the PersistentVolumeClaim end up in the same AZ, and if they don't, the Deployment's pod fails to mount the volume.

I create 4 PersistentVolumes by manually creating an EBS volume in each AZ and copying its volume ID into the spec.

{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "pv-2"
  },
  "spec": {
    "capacity": {
      "storage": "1Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Retain",
    "awsElasticBlockStore": {
      "volumeID": "vol-053f78f0c16e5f20e",
      "fsType": "ext4"
    }
  }
}
The PersistentVolumeClaim:

{
   "kind": "PersistentVolumeClaim",
   "apiVersion": "v1",
   "metadata": {
      "name": "mydata",
      "namespace": "staging"
   },
   "spec": {
      "accessModes": [
         "ReadWriteOnce"
      ],
      "resources": {
         "requests": {
            "storage": "10Mi"
         }
      }
   }
}
The Deployment:

{
   "apiVersion": "extensions/v1beta1",
   "kind": "Deployment",
   "metadata": {
      "name": "myapp",
      "namespace": "default",
      "labels": {
         "app": "myapp"
      }
   },
   "spec": {
      "replicas": 1,
      "selector": {
         "matchLabels": {
            "app": "myapp"
         }
      },
      "template": {
         "metadata": {
            "labels": {
               "app": "myapp"
            }
         },
         "spec": {
            "containers": [
               {
                  "name": "hello",
                  "image": "centos:7",
                  "volumeMounts": [ {  
                        "name":"mydata",
                        "mountPath":"/etc/data/"
                     } ]
               }
            ],
            "volumes": [ {  
                  "name":"mydata",
                  "persistentVolumeClaim":{  
                     "claimName":"mydata"
                  }
               }]
         }
      }
   }
}
-- Mike
amazon-web-services
aws-ebs
kubernetes
persistent-volume-claims
persistent-volumes

1 Answer

3/24/2019

You could try setting the annotations/labels for the region and AvailabilityZone on the PersistentVolume, as mentioned here and here.
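For example, the well-known failure-domain labels can be added to each manually created PersistentVolume so that the scheduler's volume zone predicate only places the pod on worker nodes in the matching AZ. Here is a minimal sketch based on the pv-2 spec from the question, assuming that EBS volume vol-053f78f0c16e5f20e was created in us-west-2b (adjust the zone and region to wherever each of your volumes actually lives):

{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "pv-2",
    "labels": {
      "failure-domain.beta.kubernetes.io/region": "us-west-2",
      "failure-domain.beta.kubernetes.io/zone": "us-west-2b"
    }
  },
  "spec": {
    "capacity": {
      "storage": "1Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Retain",
    "awsElasticBlockStore": {
      "volumeID": "vol-053f78f0c16e5f20e",
      "fsType": "ext4"
    }
  }
}

With those labels in place, a pod that mounts a claim bound to pv-2 should only be scheduled onto nodes carrying the same zone label (which the AWS cloud provider sets on nodes automatically), keeping the Deployment and the EBS volume in the same AZ.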

-- jijo
Source: StackOverflow