Kubernetes: How can I set the number of replicas more than 1 using awsElasticBlockStore?

5/11/2016

I'm trying to create a Cassandra cluster in Kubernetes. I want to use awsElasticBlockStore to make the data persistent. To that end, I've written a YAML file like the following for the corresponding Replication Controller:

apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra-rc
spec:
  # Question: How can I do this?
  replicas: 2
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
        - resources:
            limits:
              cpu: 1.0
          image: cassandra:2.2.6
          name: cassandra
          ports:
            - containerPort: 7000
              name: comm
            - containerPort: 9042
              name: cql
            - containerPort: 9160
              name: thrift
          volumeMounts:
            - name: cassandra-persistent-storage
              mountPath: /cassandra_data
      volumes:
        - name: cassandra-persistent-storage
          awsElasticBlockStore:
            volumeID: aws://ap-northeast-1c/vol-xxxxxxxx
            fsType: ext4

However, only one pod can be properly launched with this configuration.

$ kubectl get pods
NAME                 READY     STATUS              RESTARTS   AGE
cassandra-rc-xxxxx   0/1       ContainerCreating   0          5m
cassandra-rc-yyyyy   1/1       Running             0          5m

When I run $ kubectl describe pod cassandra-rc-xxxxx, I see an error like the following:

Error syncing pod, skipping: Could not attach EBS Disk "aws://ap-northeast-1c/vol-xxxxxxxx": Error attaching EBS volume: VolumeInUse: vol-xxxxxxxx is already attached to an instance

This is understandable, because an EBS volume can be attached to only one node at a time. So only one pod can successfully mount the volume and boot up, while the others fail.

Is there any good solution for this? Do I need to create multiple Replication Controllers for each pod?

-- aeas44
amazon-ebs
amazon-web-services
kubernetes

1 Answer

6/1/2016

You are correct: one EBS volume can only be attached to a single EC2 instance at a given time. To solve this, you have the following options:

1. Create a separate Replication Controller for each Cassandra node, each with replicas: 1 and its own dedicated EBS volume.
2. Use a PetSet (alpha in Kubernetes 1.3, later renamed StatefulSet), which gives each pod its own PersistentVolumeClaim and therefore its own volume.
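As a sketch of the multiple-Replication-Controller approach the question mentions: each controller pins replicas: 1 and mounts its own dedicated EBS volume, with a node-specific label so the selectors don't overlap. The names and volume IDs below are placeholders, not values from the original setup:

```yaml
# Sketch only: one Replication Controller per Cassandra node.
# Each RC has replicas: 1 and its own EBS volume (IDs are placeholders).
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra-rc-1
spec:
  replicas: 1
  selector:
    name: cassandra-1            # node-specific selector
  template:
    metadata:
      labels:
        name: cassandra-1        # matches the selector above
    spec:
      containers:
        - name: cassandra
          image: cassandra:2.2.6
          volumeMounts:
            - name: cassandra-persistent-storage
              mountPath: /cassandra_data
      volumes:
        - name: cassandra-persistent-storage
          awsElasticBlockStore:
            volumeID: aws://ap-northeast-1c/vol-aaaaaaaa   # placeholder
            fsType: ext4
---
# Second node: new name, new selector/label, and a different EBS volume.
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra-rc-2
spec:
  replicas: 1
  selector:
    name: cassandra-2
  template:
    metadata:
      labels:
        name: cassandra-2
    spec:
      containers:
        - name: cassandra
          image: cassandra:2.2.6
          volumeMounts:
            - name: cassandra-persistent-storage
              mountPath: /cassandra_data
      volumes:
        - name: cassandra-persistent-storage
          awsElasticBlockStore:
            volumeID: aws://ap-northeast-1c/vol-bbbbbbbb   # placeholder
            fsType: ext4
```

Since each pod now has a distinct label, clients can still reach the whole cluster through a single Service whose selector matches a shared label, if one is added alongside the per-node labels.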

-- Steve Sloka
Source: StackOverflow