NOTE: After further troubleshooting I believe this is an issue with minikube mount. Much of the description below may not pertain to the exact issue. See my comments for additional info. Keeping the question as-asked.
I'm attempting to use a local SDCard as a mount point within minikube, create a persistent volume within the mount point, create a corresponding persistent volume claim, and then use this as a volume mount within a container hosting a postgres instance. I'll first describe the setup through creating the PVC, then dive into the pod definition.
1. I'm using the minikube command to start the minikube VM (Hyper-V) with the mount point:
minikube start --vm-driver="hyperv" --hyperv-virtual-switch="My Virtual Switch" --mount --mount-string="D:\data:/data"
After running this, I've used minikube ssh to verify that the directory is mounted properly and has owner/group of root.
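For reference, the verification amounted to the standard checks inside the VM (nothing minikube-specific beyond the ssh step):

minikube ssh
# inside the VM:
ls -ld /data          # owner/group shows root root
mount | grep /data    # confirm the mount entry is present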
2. Create the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data-pv
  labels:
    type: local
    application: postgres
spec:
  capacity:
    storage: 500Mi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/data/gather-client/postgres/data"
3. Create the PersistentVolumeClaim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-data-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
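Same pattern for the claim; since the PV has no storage class and the claim's annotation requests none, the claim binds to the PV above:

kubectl apply -f postgres-data-pvc.yaml
kubectl get pvc postgres-data-pvc    # STATUS should show Bound, VOLUME postgres-data-pv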
4. Create the Pod and Container

My first attempt at creating the pod was a straightforward, well-documented approach:
kind: Pod
apiVersion: v1
metadata:
  name: gather-client
spec:
  volumes:
  - name: postgres-data
    persistentVolumeClaim:
      claimName: postgres-data-pvc
  containers:
  - name: metadata-db
    image: postgres:9.6.5
    env:
    - name: PGDATA
      value: /var/lib/postgresql/data/pgdata
    volumeMounts:
    - name: postgres-data
      mountPath: /var/lib/postgresql/data
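For completeness, pod creation and the status/log checks (again, the file name is mine):

kubectl apply -f gather-client-pod.yaml
kubectl get pod gather-client
kubectl logs gather-client -c metadata-db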
At this point I get a CrashLoopBackOff and, upon checking the container log, find the error:
chown: changing ownership of ‘/var/lib/postgresql/data/pgdata’: Input/output error
I believe what's happening here is that the container's postgres user is running the initdb command while the directory is owned by root. Then again, a plain permissions problem should give "Permission denied" rather than an input/output error, so I'm not confident in that explanation.
Anyhow, I've tried two additional approaches:
1. Using securityContext with an fsGroup matching the UID/GID of the postgres user (999)
Here, I simply added this snippet at the top of the pod's spec, in hopes that the volume would be mounted with group ownership matching the postgres user (I'm not entirely sure that's how this works...):
securityContext:
  fsGroup: 999
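For context, here's roughly where that sits in the pod spec (abbreviated; the volumes and containers sections are unchanged from the definition above):

kind: Pod
apiVersion: v1
metadata:
  name: gather-client
spec:
  securityContext:
    fsGroup: 999
  volumes:
  - name: postgres-data
    persistentVolumeClaim:
      claimName: postgres-data-pvc
  containers:
  - name: metadata-db
    # ...same as above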
I receive the same error and quickly moved on to the next approach...
2. Using an initContainer to perform the chown command. This is where I've spent most of my time, and I added a few debugging commands to better understand what was happening:
initContainers:
- name: metadata-db-init
  image: postgres:9.6.5
  command: ["sh"]
  args: ["-c", "whoami; ls -l /var/lib/postgresql; ls -l /var/lib/postgresql/data; chown postgres /var/lib/postgresql/data; chown postgres /var/lib/postgresql/data/pgdata"]
  volumeMounts:
  - name: postgres-data
    mountPath: /var/lib/postgresql/data
With this I get a pod status of Init:CrashLoopBackOff. Here's what I get in the initContainer's log:
root
total 0
drwxrwxrwx 1 root root 0 Jan 1 1970 data
total 1
drwxrwxrwx 1 root root 0 Jan 1 1970 pgdata
-rw-rw-rw- 1 root root 5 Jan 1 1970 test
chown: changing ownership of ‘/var/lib/postgresql/data’: Input/output error
chown: changing ownership of ‘/var/lib/postgresql/data/pgdata’: Input/output error
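(The init container's log above comes from the per-container form of kubectl logs:)

kubectl logs gather-client -c metadata-db-init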
I've also written a simple test file from the shell script to the pgdata directory and found that it exists on my SDCard. From this I've surmised that the input/output error is not permissions-related; being able to write to the directory as root also means the filesystem is in working order. I just can't chown, even as root. How could this be?
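In case it helps with diagnosis: one way to take Kubernetes out of the picture entirely is to exercise the mount directly inside the minikube VM. This is a minimal sketch assuming the mount from step 1; if chown fails here too, the problem is in the minikube mount itself (which I understand is served over 9p) rather than anything in the pod spec:

minikube ssh
# inside the VM:
touch /data/chown-test                  # writes succeed, as seen above
sudo chown 999:999 /data/chown-test     # does this also fail with an input/output error?
touch /tmp/chown-test
sudo chown 999:999 /tmp/chown-test      # control: native VM path, should succeed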