Pod mounts wrong directory on Node when a flexvolume with cifs is configured

4/9/2019

The following problem occurs on a Kubernetes cluster with one master and three nodes, and also on a single-machine Kubernetes installation.

I set up Kubernetes with flexvolume SMB support (https://github.com/Azure/kubernetes-volume-drivers/tree/master/flexvolume/smb). When I apply a new Pod that uses the flexvolume, the Node mounts the SMB share as expected, but the Pod points its share to some Docker directory on the Node.

My installation:
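
Before the Pod can be created, the flexvolume driver has to be present on every node. A rough sketch of that node setup, assuming CentOS and the default kubelet plugin directory (it differs if kubelet runs with --volume-plugin-dir); the smb script comes from the repo linked above:

# On every node: install the cifs mount helper and jq (the driver script
# parses its JSON options with jq)
sudo yum install -y cifs-utils jq

# Copy the smb driver script (downloaded from the repo) into the kubelet
# flexvolume plugin directory, named <vendor>~<driver>/<driver>
sudo mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/microsoft.com~smb
sudo cp smb /usr/libexec/kubernetes/kubelet-plugins/volume/exec/microsoft.com~smb/smb
sudo chmod a+x /usr/libexec/kubernetes/kubelet-plugins/volume/exec/microsoft.com~smb/smb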

I create the Pod with:

smb-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: smb-secret
type: microsoft.com/smb
data:
  username: YVVzZXI=
  password: YVBhc3N3b3Jk
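
The username and password values are the base64-encoded credentials (here aUser/aPassword, which matches the username=aUser visible in the mount output below). They can be generated like this:

# base64-encode the share credentials for the Secret (-n: no trailing newline)
echo -n 'aUser' | base64      # YVVzZXI=
echo -n 'aPassword' | base64  # YVBhc3N3b3Jk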

nginx-flex-smb.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-flex-smb
spec:
  containers:
  - name: nginx-flex-smb
    image: nginx
    volumeMounts:
    - name: test
      mountPath: /data
  volumes:
  - name: test
    flexVolume:
      driver: "microsoft.com/smb"
      secretRef:
        name: smb-secret
      options:
        source: "//<host.with.smb.share>/kubetest"
        mountoptions: "vers=3.0,dir_mode=0777,file_mode=0777"

What happens

  • The mount point on the Node is created at /var/lib/kubelet/pods/bef26895-5ac7-11e9-a668-00155db9c92e/volumes/microsoft.com~smb.
  • mount returns //<host.with.smb.share>/kubetest on /var/lib/kubelet/pods/bef26895-5ac7-11e9-a668-00155db9c92e/volumes/microsoft.com~smb/test type cifs (rw,relatime,vers=3.0,cache=strict,username=aUser,domain=,uid=0,noforceuid,gid=0,noforcegid,addr=172.27.72.43,file_mode=0777,dir_mode=0777,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
  • Reads and writes work as expected on the host and on the Node itself.
  • In the Pod:
    • mount for /data returns tmpfs on /data type tmpfs (rw,nosuid,nodev,seclabel,size=898680k,nr_inodes=224670,mode=755)
    • but the content of the directory /data comes from /run/docker/libcontainerd/8039742ae2a573292cd9f4ef7709bf7583efd0a262b9dc434deaf5e1e20b4002/ on the Node (the commands behind these checks are sketched below).
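
The checks above can be reproduced with commands along these lines (a sketch; kubectl assumes the Pod runs in the default namespace):

# On the Node: the kubelet-side cifs mount looks correct
mount | grep microsoft.com~smb

# Inside the Pod: /data shows up as tmpfs instead of the cifs share
kubectl exec nginx-flex-smb -- mount | grep /data
kubectl exec nginx-flex-smb -- ls -la /data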

I tried to deploy the Pod with a PersistentVolumeClaim and got the same problem; searching for this problem turned up no solutions. A sketch of that attempt follows.
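
The PV/PVC variant looked roughly like this (reconstructed for illustration, since the exact manifest isn't shown above; names and sizes are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  flexVolume:
    driver: "microsoft.com/smb"
    secretRef:
      name: smb-secret
      namespace: default   # a PV-level secret reference can carry a namespace; default assumed here
    options:
      source: "//<host.with.smb.share>/kubetest"
      mountoptions: "vers=3.0,dir_mode=0777,file_mode=0777"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""   # bind statically to the pre-created PV
  resources:
    requests:
      storage: 10Gi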

Our other Pods use GlusterFS with Heketi, which works fine.

Is there maybe a configuration error? Is something missing?

EDIT: Solution
I upgraded Docker to the latest validated version, 18.06, and everything works now.

-- GedNX
cifs
kubernetes

1 Answer

4/15/2019

I upgraded Docker to the latest validated version, 18.06, and everything works now.

To install it, follow the instructions in Get Docker CE for CentOS.
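
On CentOS that boils down to something like this (a sketch; the exact 18.06 patch release available in the repo may differ):

# Add the docker-ce yum repository and install the validated 18.06 release
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce-18.06.3.ce
sudo systemctl restart docker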

-- GedNX
Source: StackOverflow