I'm trying to define a shared persistent volume in k8s between two different deployments, and I've encountered some issues:
Each deployment has 2 pods, and across the deployments I'm trying to configure a shared volume. The first issue is that the volume isn't even shared within a single deployment: if I create a txt file in deployment1/pod1 and then look in deployment1/pod2, I can't see the file.
The second issue is that the files also aren't visible from the other deployment (deployment2). What's actually happening is that each pod creates its own separate volume instead of all of them sharing the same volume.
My goal, in the end, is to create a shared volume between the pods and the deployments. It's important to note that I'm running on GKE.
Below are my current configurations:
Deployment 1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: test
spec:
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: server
        image: app1
        ports:
        - name: grpc
          containerPort: 11111
        resources:
          requests:
            cpu: 300m
          limits:
            cpu: 500m
        volumeMounts:
        - name: test
          mountPath: /etc/test/configs
      volumes:
      - name: test
        persistentVolumeClaim:
          claimName: my-claim
Deployment 2:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: test
spec:
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: server
        image: app2
        ports:
        - name: http
          containerPort: 22222
        resources:
          requests:
            cpu: 300m
          limits:
            cpu: 500m
        volumeMounts:
        - name: test
          mountPath: /etc/test/configs
      volumes:
      - name: test
        persistentVolumeClaim:
          claimName: my-claim
Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
  namespace: test
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast
  local:
    path: /etc/test/configs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cloud.google.com/gke-nodepool
          operator: In
          values:
          - default-pool
Persistent Volume Claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
  namespace: test
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: fast
  resources:
    requests:
      storage: 5Gi
Storage Class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  fstype: ext4
  replication-type: regional-pd
And the kubectl describe output for the PVC and PV:
$ kubectl describe pvc -n test
Name: my-claim
Namespace: test
StorageClass: fast
Status: Bound
Volume: test-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-class: fast
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By: <none>
Events: <none>
$ kubectl describe pv -n test
Name: test-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: fast
Status: Bound
Claim: test/my-claim
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity:
Required Terms:
Term 0: cloud.google.com/gke-nodepool in [default-pool]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /etc/test/configs
Events: <none>
The GCE-PD CSI storage driver does not support ReadWriteMany. You need to use ReadOnlyMany; for ReadWriteMany you need an NFS-based file share (for example, Filestore on GKE).
From the docs on how to use persistent disks with multiple readers:
Creating a PersistentVolume and PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
  - ReadOnlyMany
  claimRef:
    namespace: default
    name: my-readonly-pvc
  gcePersistentDisk:
    pdName: my-test-disk
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  # Specify "" as the storageClassName so it matches the PersistentVolume's StorageClass.
  # A nil storageClassName value uses the default StorageClass. For details, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: ""
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi
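One caveat: a gcePersistentDisk PV does not provision anything; the disk named in pdName must already exist, formatted and populated with data, before the PV is usable. A provisioning sketch (the disk name and size match the manifest above; the zone here is an assumption and must match the cluster's nodes):

```
# Sketch: pre-create the disk referenced by pdName above.
gcloud compute disks create my-test-disk --size=10GB --zone=us-central1-a

# The disk must then be attached to one VM, formatted ext4, and populated
# with data; only after it is detached can many nodes mount it read-only.
```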
Using the PersistentVolumeClaim in a Pod
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: busybox
    command:
    - "sleep"
    - "3600"
    volumeMounts:
    - mountPath: /test-mnt
      name: my-volume
      readOnly: true
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-readonly-pvc
      readOnly: true
Now you can have multiple Pods on different nodes that all mount this PersistentVolumeClaim in read-only mode. However, you can't attach a persistent disk in write mode on multiple nodes at the same time.
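Applied to the setup in the question, each deployment's pod template would consume the read-only claim the same way. A sketch for app1 (the container name, image, volume name, and mount path are carried over from the question):

```yaml
# Sketch: the pod-template portion of app1's Deployment, switched to the
# shared read-only claim so every replica on every node sees the same files.
spec:
  containers:
  - name: server
    image: app1
    volumeMounts:
    - name: test
      mountPath: /etc/test/configs
      readOnly: true
  volumes:
  - name: test
    persistentVolumeClaim:
      claimName: my-readonly-pvc
      readOnly: true
```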