The rbd volumes config issue in Kubernetes

3/30/2016

I want to use the rbd volumes config to mount a folder from a Ceph image, but it seems the container mounts a host path instead.

I used the example from https://github.com/kubernetes/kubernetes/tree/master/examples/rbd. The pod and container start successfully.

  • I used "docker exec" to log in to the container and checked the /mnt folder:

    root@test-rbd-read-01:/usr/local/tomcat# findmnt /mnt
    TARGET SOURCE                                                              FSTYPE OPTIONS
    /mnt   /dev/vda1[/var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd] xfs    rw,relatime,attr2,inode64,noquota
    root@test-rbd-read-01:/usr/local/tomcat# ls /mnt/
    root@test-rbd-read-01:/usr/local/tomcat#
  • Then I checked the host path that is mounted from Ceph; the file 1.txt already existed on the Ceph image:

    [20:52 root@mongodb:/home] # mount |grep kubelet
    /dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/wujianlin-image-zlh_test type ext4 (ro,relatime,stripe=1024,data=ordered)
    /dev/rbd0 on /var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd type ext4 (ro,relatime,stripe=1024,data=ordered)
    [20:53 root@mongodb:/home] # ll /var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd
    total 20K
    drwx------ 2 root root 16K Mar 18 09:49 lost+found
    -rw-r--r-- 1 root root   4 Mar 18 09:53 1.txt
    [20:53 root@mongodb:/home] # rbd showmapped
    id pool      image    snap device
    0  wujianlin zlh_test -    /dev/rbd0

I expected the container folder /mnt to show the same content as the host path /var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd, but it does not.

I also tried writing a file to /mnt, and no change appears under /var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd (see the check sketched below).
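One way to compare the two sides is to ask Docker which host paths are bound into the container. This is only a sketch for reference; the container name comes from the pod above, and the actual container ID on your node will differ:

    # find the container that backs the pod (name/ID here are illustrative)
    docker ps | grep tomcat-read-only-01

    # list the bind mounts Docker set up for that container; the /mnt entry
    # should point at the kubelet rbd path shown above
    docker inspect --format '{{ json .Mounts }}' <container-id>

    # inside the container, the device behind /mnt should be /dev/rbd0,
    # not /dev/vda1, if the bind mount is in place
    docker exec <container-id> findmnt /mnt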

So is something wrong in my config, or am I misunderstanding something?

k8s version: Release v1.2.0

Here is my config:

apiVersion: v1
kind: Pod
metadata:
  name: test-rbd-read-01
spec:
  containers:
    - name: tomcat-read-only-01
      image: tomcat
      volumeMounts:
        - name: rbd
          mountPath: /mnt
  volumes:
    - name: rbd
      rbd:
        monitors:
          - 10.63.90.177:6789
        pool: wujianlin
        image: zlh_test
        user: wujianlin
        secretRef:
          name: ceph-client-admin-keyring
        keyring: /etc/ceph/ceph.client.wujianlin.keyring
        fsType: ext4
        readOnly: true
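For reference, this is how the pod can be created from the config above and then checked; the file name pod-rbd.yaml is only what I call it locally:

    # create the pod from the config above (file name is illustrative)
    kubectl create -f pod-rbd.yaml

    # confirm the pod is Running and look for volume mount events
    kubectl describe pod test-rbd-read-01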

-- zhulinhong
kubernetes

1 Answer

3/31/2016

What did you do when you restarted Docker? Are you able to reproduce this issue after Docker is restarted and the pod is recreated?
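For example, a minimal way to re-test is sketched below; the service name and the pod file name are assumptions, and the pod name matches the config posted in the question:

    # restart docker on the node (service name varies by distro)
    systemctl restart docker

    # recreate the pod, then re-check the mount from inside the container
    kubectl delete pod test-rbd-read-01
    kubectl create -f pod-rbd.yaml
    kubectl exec test-rbd-read-01 -- findmnt /mnt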

-- Huamin
Source: StackOverflow