I have run into a Kubernetes-related issue. I just moved from a Pod configuration to a ReplicationController for a Ruby on Rails app, and I'm using persistent disks for the Rails pod. When I try to apply the ReplicationController it gives the following error:
The ReplicationController "cartelhouse-ror" is invalid. spec.template.spec.volumes[0].gcePersistentDisk.readOnly: Invalid value: false: must be true for replicated pods > 1; GCE PD can only be mounted on multiple machines if it is read-only
Does this mean there is no way to use persistent disks (R/W) when using ReplicationControllers, or is there another way?
If not, how can I scale and/or apply rolling updates to the Pod configuration?
apiVersion: v1
kind: Pod
metadata:
  name: appname
  labels:
    name: appname
spec:
  containers:
    - image: gcr.io/proj/appname:tag
      name: appname
      env:
        - name: POSTGRES_PASSWORD
          # Change this - must match postgres.yaml password.
          value: pazzzzwd
        - name: POSTGRES_USER
          value: rails
      ports:
        - containerPort: 80
          name: appname
      volumeMounts:
        # Name must match the volume name below.
        - name: appname-disk-per-sto
          # Mount path within the container.
          mountPath: /var/www/html
  volumes:
    - name: appname-disk-per-sto
      gcePersistentDisk:
        # This GCE persistent disk must already exist.
        pdName: appname-disk-per-sto
        fsType: ext4
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: appname
  name: appname
spec:
  replicas: 2
  selector:
    name: appname
  template:
    metadata:
      labels:
        name: appname
    spec:
      containers:
        - image: gcr.io/proj/app:tag
          name: appname
          env:
            - name: POSTGRES_PASSWORD
              # Change this - must match postgres.yaml password.
              value: pazzzzwd
            - name: POSTGRES_USER
              value: rails
          ports:
            - containerPort: 80
              name: appname
          volumeMounts:
            # Name must match the volume name below.
            - name: appname-disk-per-sto
              # Mount path within the container.
              mountPath: /var/www/html
      volumes:
        - name: appname-disk-per-sto
          gcePersistentDisk:
            # This GCE persistent disk must already exist.
            pdName: appname-disk-per-sto
            fsType: ext4
You can't achieve this with the current version of Kubernetes - see Independent storage for replicated pods. This will be covered by the implementation of PetSets, due in v1.3.
The problem is not with Kubernetes itself but with the shared block device and filesystem, which cannot be mounted read-write on more than one host at the same time: https://unix.stackexchange.com/questions/68790/can-the-same-ext4-disk-be-mounted-from-two-hosts-one-readonly
You can try to use PersistentVolumeClaims: http://kubernetes.io/docs/user-guide/persistent-volumes/
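For illustration only, a minimal sketch of a claim plus a pod that mounts it; the claim name appname-pvc, the 10Gi size, and the test pod name are placeholders I made up, and note that a claim backed by a GCE PD is still ReadWriteOnce, i.e. writable from one node at a time:

# Hypothetical claim - name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: appname-pvc
spec:
  accessModes:
    # A GCE PD can still only be mounted read-write by one node.
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# Hypothetical pod mounting the claim instead of the raw GCE PD.
apiVersion: v1
kind: Pod
metadata:
  name: appname-pvc-test
spec:
  containers:
    - name: appname
      image: gcr.io/proj/appname:tag
      volumeMounts:
        - name: app-storage
          mountPath: /var/www/html
  volumes:
    - name: app-storage
      persistentVolumeClaim:
        claimName: appname-pvc

In your ReplicationController template you would reference the claim the same way, via persistentVolumeClaim.claimName, instead of gcePersistentDisk, which decouples the pod spec from the disk but does not by itself lift the single-writer restriction.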
Alternatively, use another volume type whose filesystem supports multiple writers, e.g. NFS: http://kubernetes.io/docs/user-guide/volumes/
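As a rough sketch, an NFS volume in a pod (or RC template) spec might look like this; the server address nfs-server.default.svc.cluster.local and the export path /exports/appname are placeholders for an NFS export that must already exist:

# Hypothetical pod using an NFS volume, which can be mounted
# read-write by multiple pods on multiple nodes.
apiVersion: v1
kind: Pod
metadata:
  name: appname-nfs-test
spec:
  containers:
    - name: appname
      image: gcr.io/proj/appname:tag
      volumeMounts:
        - name: app-storage
          mountPath: /var/www/html
  volumes:
    - name: app-storage
      nfs:
        # Placeholder server and path.
        server: nfs-server.default.svc.cluster.local
        path: /exports/appname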