I want to use volumes for deployments with more than one replica. How do I define a PersistentVolumeClaim so that one will be generated for each replica? At the moment (see example below) I am able to generate a volume and assign it to the pods. The problem is that only one volume gets generated, which causes these error messages:
38m 1m 18 {kubelet worker-1.loc} Warning FailedMount Unable to mount volumes for pod "solr-1254544937-zblou_default(610b157c-549e-11e6-a624-0238b97cfe8f)": timeout expired waiting for volumes to attach/mount for pod "solr-1254544937-zblou"/"default". list of unattached/unmounted volumes=[datadir]
38m 1m 18 {kubelet worker-1.loc} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "solr-1254544937-zblou"/"default". list of unattached/unmounted volumes=[datadir]
How can I tell Kubernetes to generate a volume for each replica?
I am using Kubernetes 1.3.
Example:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: solr-datadir
  annotations:
    volume.alpha.kubernetes.io/storage-class: anything
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: solr
  labels:
    team: platform
    tier: search
    app: solr
spec:
  revisionHistoryLimit: 3
  replicas: 3
  template:
    metadata:
      name: solr
      labels:
        team: platform
        tier: search
        app: solr
    spec:
      containers:
        - name: solr
          image: solr:6-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
          resources:
            requests:
              cpu: 512m
              memory: 512Mi
          command:
            - /bin/bash
          args:
            - -c
            - /opt/solr/bin/solr start -f -z zookeeper:2181
          volumeMounts:
            - mountPath: "/opt/solr/server/solr/mycores"
              name: datadir
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: solr-datadir
Generated pods:
$ kubectl get pods -lapp=solr
NAME                    READY     STATUS              RESTARTS   AGE
solr-1254544937-chenr   1/1       Running             0          55m
solr-1254544937-gjud0   0/1       ContainerCreating   0          55m
solr-1254544937-zblou   0/1       ContainerCreating   0          55m
Generated volumes:
$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   STATUS    CLAIM                  REASON    AGE
pvc-3955e8f1-549e-11e6-94be-060ea3314be5   50Gi       RWO           Bound     default/solr-datadir             57m
Generated claims:
$ kubectl get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
solr-datadir   Bound     pvc-3955e8f1-549e-11e6-94be-060ea3314be5   0          57m
ReplicaSets treat volumes as stateless. If your ReplicaSet's pod template specifies a volume, then that same volume is used by all pods in the ReplicaSet. If the volume can only be attached read-write to one node at a time (like GCE PDs), then after the first pod is successfully scheduled and started, subsequent instances of the pod will fail to start if they are scheduled to a different node, because the volume cannot attach to the second node.
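As an aside: if your storage backend supports shared read-write access (e.g. NFS or GlusterFS, which the original setup does not appear to use), one workaround is to request ReadWriteMany so every replica can mount the same volume. A sketch, assuming such a backend is available:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: solr-datadir
spec:
  accessModes:
    - ReadWriteMany   # only works if the underlying storage actually supports RWX
  resources:
    requests:
      storage: 50Gi

Note this shares one volume's data between all replicas rather than giving each replica its own volume, which may or may not fit your use case.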
What you are looking for is PetSets, which let you generate a volume for each replica. See http://kubernetes.io/docs/user-guide/petset/ — the feature is currently in alpha, but it should address your use case.
Update: In Kubernetes 1.5+, PetSets were renamed to StatefulSets; see https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/.
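For illustration, a minimal sketch of the Deployment above rewritten as a StatefulSet (apps/v1beta1, Kubernetes 1.5+). The volumeClaimTemplates section tells the controller to create one PersistentVolumeClaim per replica (datadir-solr-0, datadir-solr-1, ...); serviceName must point at a headless Service, assumed here to be named solr:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: solr
spec:
  serviceName: solr        # assumes a headless Service named "solr" exists
  replicas: 3
  template:
    metadata:
      labels:
        app: solr
    spec:
      containers:
        - name: solr
          image: solr:6-alpine
          command:
            - /bin/bash
          args:
            - -c
            - /opt/solr/bin/solr start -f -z zookeeper:2181
          volumeMounts:
            - mountPath: "/opt/solr/server/solr/mycores"
              name: datadir
  volumeClaimTemplates:
    - metadata:
        name: datadir      # must match the volumeMounts name above
        annotations:
          # 1.5 uses the beta annotation; adjust to your cluster's storage classes
          volume.beta.kubernetes.io/storage-class: anything
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi

Each pod then gets its own claim and volume, so ReadWriteOnce is no longer a problem. StatefulSet pods also get stable names (solr-0, solr-1, solr-2) instead of the random hashes shown above.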