I'm using two VMs with Atomic Host (1 master, 1 node; CentOS image). I want to use NFS shares from another VM (Ubuntu Server 16.04) as persistent volumes for my pods. I can mount them manually, and in Kubernetes (version 1.5.2) the persistent volumes are successfully created and bound to my PVCs. They are also mounted in my pods. But when I try to write to or even read from the corresponding folder inside the pod, I get the error Permission denied. From my research I think the problem lies in the permissions/owner/group of the folders on my NFS host.
My exports file on the Ubuntu VM (/etc/exports) has 10 shares with the following pattern (the two IPs are the IPs of my Atomic Host master and node):
/home/user/pv/pv01 192.168.99.101(rw,insecure,async,no_subtree_check,no_root_squash) 192.168.99.102(rw,insecure,async,no_subtree_check,no_root_squash)
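After editing /etc/exports I re-export and double-check the shares with the standard NFS tools (nothing specific to my setup, just for reference):

# on the NFS host (Ubuntu VM): apply the exports file and list what is exported
sudo exportfs -ra
sudo exportfs -v

# from the Atomic Host master/node: check that the shares are visible
showmount -e 192.168.99.104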
In the image for my pods I create a new user named guestbook, so that the container doesn't run as a privileged user, as this is insecure. I read many posts like this one, which state that you have to make the shared folders world-writable or use the same UID and GID on both sides. So in my Dockerfile I create the guestbook user with UID 1003 and a group with the same name and GID 1003:
RUN groupadd -r guestbook -g 1003 && useradd -u 1003 -r -g 1003 guestbook
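To make sure the IDs really match on both sides, I compare them with id; the entrypoint override below is only there for the check and not part of my normal setup:

# on the NFS host
id guestbook

# inside the container image
docker run --rm --entrypoint id docker.io/cebberg/get-started:custom5 guestbook

Both should show UID 1003 and a group with GID 1003.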
On my NFS host I also have a user named guestbook with UID 1003, which is a member of the group nfs with GID 1003. The permissions of the shared folders (shown with ls -l) are as follows:
drwxrwxrwx 2 guestbook nfs 4096 Feb 19 11:23 pv01
(world-writable, owner guestbook, group nfs). In my pod I can see the permissions of the mounted folder /data (again with ls -l) as:
drwxrwxrwx. 2 guestbook guestbook 4096 Feb 9 13:37 data
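For completeness, this is how I reproduce the error by hand from one of the running pods (the pod name is just a placeholder):

kubectl exec <get-started-pod> -- ls /data             # reading fails with 'Permission denied'
kubectl exec <get-started-pod> -- touch /data/test     # writing fails the same way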
The persistent volumes are created with a YAML file following this pattern:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  annotations:
    pv.beta.kubernetes.io/gid: "1003"
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /home/user/pv/pv01
    server: 192.168.99.104
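Creation and binding look fine from the Kubernetes side; I check it with the usual commands, e.g.:

kubectl get pv pv01
kubectl describe pv pv01   # shows the NFS server/path and the claim it is bound to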
The pods are created with this Deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: get-started
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: get-started
    spec:
      containers:
        - name: get-started
          image: docker.io/cebberg/get-started:custom5
          ports:
            - containerPort: 2525
          env:
            - name: GET_HOSTS_FROM
              value: dns
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: redis
                  key: database-password
          volumeMounts:
            - name: log-storage
              mountPath: "/data/"
          imagePullPolicy: Always
          securityContext:
            privileged: false
      volumes:
        - name: log-storage
          persistentVolumeClaim:
            claimName: get-started
      restartPolicy: Always
      dnsPolicy: ClusterFirst
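After applying this, the pods start without problems and the volume shows up as mounted; only accessing /data inside fails. A quick check (again nothing special, just how I look at it):

kubectl get pods -l app=get-started
kubectl describe pod <get-started-pod>   # the NFS volume appears under Volumes/Mounts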
And the PVC with this YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: get-started
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
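The claim binds to one of the ten volumes as expected:

kubectl get pvc get-started   # STATUS should be Bound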
I tried different configurations for the owner/group of the folders. If I use my normal user (which is the same on all systems) as owner and group, I can mount manually and read and write in the folder. But I don't want to use my normal user; I want to use another user (and especially not a privileged one).
What permissions do I have to set so that the user I create in my pod can write to the NFS volume?
I found the solution to my problem: by accident I came across log entries that appear every time I try to access the NFS volumes from my pods. They say that SELinux blocked access to the folder because of a differing security context.
To resolve the issue, I simply had to turn on the corresponding SELinux boolean virt_use_nfs with the command:
setsebool virt_use_nfs on
This has to be done on all nodes to make it work correctly.
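Note that setsebool without -P only changes the boolean until the next reboot; to make it persistent and to verify the value, I use:

# run on every Atomic Host node (master and node)
sudo setsebool -P virt_use_nfs on
getsebool virt_use_nfs   # should print: virt_use_nfs --> on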
EDIT: I remembered that I now use sec=sys as an option in /etc/exports. This provides access control based on the UID and GID of the user creating a file (which seems to be the default). If you use sec=none you also have to turn on the SELinux boolean nfsd_anon_write, so that the user nfsnobody has permission to create files.
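That command follows the same pattern as above; and if someone runs into the same thing, the SELinux denials that pointed me to the problem can be found in the audit log:

sudo setsebool -P nfsd_anon_write on   # only needed with sec=none
sudo ausearch -m avc -ts recent        # lists the 'denied' entries for the NFS access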