I'm trying to figure out how to share data between a CronJob and a Kubernetes Deployment.
I'm running Kubernetes on AWS EKS.
I've created a PersistentVolume with a claim and have tried to mount the claim in both the CronJob and the Deployment containers, but after the CronJob runs on its schedule, the data still isn't in the other container where it should be.
I've seen some threads about using AWS EBS, but I'm not sure that's the way to go.
Another thread talked about running different schedules to get at the PersistentVolume.
- name: +vars.cust_id+-sophoscentral-logs
  persistentVolumeClaim:
    claimName: +vars.cust_id+-sophoscentral-logs-pvc
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: +vars.cust_id+-sp-logs-pv
spec:
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: +vars.cust_id+-sp-logs-pvc
    namespace: +vars.namespace+
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/var/lib/+vars.cust_id+-sophosdata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: +vars.cust_id+-sp-logs-pvc
  namespace: +vars.namespace+
  labels:
    component: sp
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: +vars.cust_id+-sp-logs-pv
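
For context, the claim is referenced from both workloads roughly like this (a simplified sketch; the image names, schedule, and mount path are placeholders, not the real manifests):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: sophos-log-fetcher
spec:
  schedule: "*/15 * * * *"          # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: fetcher
              image: example/log-fetcher:latest   # placeholder image
              volumeMounts:
                - name: logs
                  mountPath: /data/logs           # placeholder mount path
          volumes:
            - name: logs
              persistentVolumeClaim:
                claimName: +vars.cust_id+-sp-logs-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sophos-log-reader
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sophos-log-reader
  template:
    metadata:
      labels:
        app: sophos-log-reader
    spec:
      containers:
        - name: reader
          image: example/log-reader:latest        # placeholder image
          volumeMounts:
            - name: logs
              mountPath: /data/logs               # same claim mounted here
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: +vars.cust_id+-sp-logs-pvc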
EBS volumes do not support ReadWriteMany as an access mode. If you want to stay within the AWS ecosystem, you would need to use EFS, which is a hosted NFS product. Other options include self-hosted Ceph or Gluster and their related CephFS and GlusterFS tools.
This should generally be avoided if possible. NFS brings a whole host of problems to the table, and while CephFS (and probably GlusterFS, but I'm less familiar with that one personally) is better, it's still a far cry from a "normal" network block device volume. Make sure you understand the limitations this brings with it before you include it in a system design.
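
If you do go the EFS route, a statically provisioned PV/PVC pair looks roughly like this (a sketch only, assuming the AWS EFS CSI driver is installed in the cluster; fs-12345678 is a placeholder filesystem ID):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sp-logs-pv
spec:
  capacity:
    storage: 10Gi                    # EFS is elastic; this value is required but not enforced
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678        # placeholder EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sp-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi

Both the CronJob and the Deployment can then mount sp-logs-pvc at the same time, since EFS supports ReadWriteMany across nodes.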