Unable to mount volume when pod is in a namespace other than default

8/12/2021

I'm trying to run the Jiva busybox example from the documentation. When I run the pod in the default namespace everything works, but when I run it in a different namespace (jiva-test in my case) I get the following error (tail of the kubectl describe pod output):

/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4kwm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  demo-vol1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  demo-vol1-claim
    ReadOnly:   false
  kube-api-access-p4kwm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                 From               Message
  ----     ------       ----                ----               -------
  Normal   Scheduled    19m                 default-scheduler  Successfully assigned jiva-test/busybox3-b794ff748-cnqpl to k8-worker3
  Warning  FailedMount  8m9s (x3 over 14m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[demo-vol1], unattached volumes=[kube-api-access-p4kwm demo-vol1]: timed out waiting for the condition
  Warning  FailedMount  79s (x5 over 17m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[demo-vol1], unattached volumes=[demo-vol1 kube-api-access-p4kwm]: timed out waiting for the condition

I deleted everything and started again, but hit the same issue. Here is the new setup.
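The manifests are essentially the documented Jiva busybox example with the namespace added; roughly like this (the pod spec matches the describe output below, the rest is from memory and may differ slightly from my exact files):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim
  namespace: jiva-test
spec:
  storageClassName: openebs-jiva-netgeardisk1-3repl
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: jiva-test
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
        - name: busybox
          image: busybox
          command: ["sh", "-c", "echo Container 1 is Running ; sleep 3600"]
          ports:
            - containerPort: 3306
          resources:
            limits:
              cpu: 500m
          volumeMounts:
            - name: demo-vol
              mountPath: /var/lib/mysql
      volumes:
        - name: demo-vol
          persistentVolumeClaim:
            claimName: demo-vol-claim
EOF

The resulting pod: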

kubectl describe -n jiva-test pod/busybox-796d4477f5-wdrmd                                                                                 
Name:           busybox-796d4477f5-wdrmd
Namespace:      jiva-test
Priority:       0
Node:           k8-worker3/192.168.1.23
Start Time:     Thu, 12 Aug 2021 04:11:07 -0400
Labels:         app=busybox
                pod-template-hash=796d4477f5
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/busybox-796d4477f5
Containers:
  busybox:
    Container ID:  
    Image:         busybox
    Image ID:      
    Port:          3306/TCP
    Host Port:     0/TCP
    Command:
      sh
      -c
      echo Container 1 is Running ; sleep 3600
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:  500m
    Requests:
      cpu:        500m
    Environment:  <none>
    Mounts:
      /var/lib/mysql from demo-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h2s8f (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  demo-vol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  demo-vol-claim
    ReadOnly:   false
  kube-api-access-h2s8f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age    From                     Message
  ----     ------                  ----   ----                     -------
  Warning  FailedScheduling        8m15s  default-scheduler        0/4 nodes are available: 4 persistentvolumeclaim "demo-vol-claim" not found.
  Warning  FailedScheduling        8m13s  default-scheduler        0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               8m11s  default-scheduler        Successfully assigned jiva-test/busybox-796d4477f5-wdrmd to k8-worker3
  Normal   SuccessfulAttachVolume  8m11s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67"
  Warning  FailedMount             4m37s  kubelet                  MountVolume.MountDevice failed for volume "pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67" : format of disk "/dev/disk/by-path/ip-10.106.182.232:3260-iscsi-iqn.2016-09.com.openebs.jiva:pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-lun-0" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/10.106.182.232:3260-iqn.2016-09.com.openebs.jiva:pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-lun-0") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done                            
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: 98a14904-d3ed-47fa-aac9-feea0bb8e12a
Superblock backups stored on blocks: 
  32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: mkfs.ext4: Input/output error while writing out and closing file system
)
  Warning  FailedMount  98s (x3 over 6m8s)  kubelet  Unable to attach or mount volumes: unmounted volumes=[demo-vol], unattached volumes=[demo-vol kube-api-access-h2s8f]: timed out waiting for the condition
  Warning  FailedMount  78s                 kubelet  MountVolume.MountDevice failed for volume "pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67" : format of disk "/dev/disk/by-path/ip-10.106.182.232:3260-iscsi-iqn.2016-09.com.openebs.jiva:pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-lun-0" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/10.106.182.232:3260-iqn.2016-09.com.openebs.jiva:pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-lun-0") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done                            
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: ca4664be-1f58-470b-9b3c-4818fe5be8a0
Superblock backups stored on blocks: 
  32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: mkfs.ext4: Input/output error while writing out and closing file system
)

The PVC:

kubectl get pvc -n jiva-test
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                      AGE
demo-vol-claim   Bound    pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67   5G         RWO            openebs-jiva-netgeardisk1-3repl   8m55s
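
For reference, the StorageClass is a plain (non-CSI) Jiva class with three replicas on a hostdir pool backed by the CIFS mount; roughly like this (the pool name and path are illustrative, not copied from my cluster):

kubectl apply -f - <<EOF
apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: netgeardisk1
type: hostdir
spec:
  path: "/mnt/netgeardisk1"   # CIFS mount point on each worker (illustrative)
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-netgeardisk1-3repl
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"
      - name: StoragePool
        value: netgeardisk1
provisioner: openebs.io/provisioner-iscsi
EOF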

The OpenEBS pods:

kubectl get -n openebs po                                                                                                                  
NAME                                                              READY   STATUS    RESTARTS   AGE
cspc-operator-6d6575f468-v7r2w                                    1/1     Running   5          22d
cstor-disk-f5ys-dc75f8649-62j9p                                   3/3     Running   3          32h
cstor-disk-jo1u-5cf8db674c-rnxl5                                  3/3     Running   15         32h
cstor-disk-ny6o-7f84f5c9c5-229wl                                  3/3     Running   490        32h
cvc-operator-7945f749b4-474nr                                     1/1     Running   5          22d
maya-apiserver-7d866d99c5-j8p5k                                   1/1     Running   10         22d
openebs-admission-server-6c8795749f-6n48x                         1/1     Running   10         22d
openebs-cstor-admission-server-8b6778d4b-klxks                    1/1     Running   8          22d
openebs-cstor-csi-controller-0                                    6/6     Running   69         22d
openebs-cstor-csi-node-4884r                                      2/2     Running   24         23d
openebs-cstor-csi-node-mf99j                                      2/2     Running   12         23d
openebs-cstor-csi-node-zbwdh                                      2/2     Running   29         22d
openebs-localpv-provisioner-7445754ddd-dfch2                      1/1     Running   14         22d
openebs-ndm-blnm6                                                 1/1     Running   10         22d
openebs-ndm-operator-8648dcd475-kbls5                             1/1     Running   10         22d
openebs-ndm-vtbrv                                                 1/1     Running   7          22d
openebs-ndm-z84j9                                                 1/1     Running   21         22d
openebs-provisioner-845b6dcc49-xv57g                              1/1     Running   14         22d
openebs-snapshot-operator-7f67b56cbb-qf8mm                        2/2     Running   20         22d
pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-ctrl-56ff59f8f8-zr5m7    2/2     Running   0          9m20s
pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-rep-1-94ddbf4cd-8h68v    1/1     Running   5          9m12s
pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-rep-2-7d54754544-dgt6t   1/1     Running   4          9m8s
pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-rep-3-776b757dd9-b9cd6   1/1     Running   4          9m10s
pvc-e6213635-b7fb-4ee4-805b-e36eff365f09-ctrl-6b48c4fc5-trvls     2/2     Running   0          18m
pvc-e6213635-b7fb-4ee4-805b-e36eff365f09-rep-1-6d85666f8d-msld9   1/1     Running   0          17m
pvc-e6213635-b7fb-4ee4-805b-e36eff365f09-rep-2-84f4cc79d6-mlctr   1/1     Running   0          17m
pvc-e6213635-b7fb-4ee4-805b-e36eff365f09-rep-3-7dcf57b4c5-jf4ts   1/1     Running   0          17m
sjr-pvc-061eebed-fc73-4592-a737-578809fc4bc9-go9x-bs9rf           1/1     Running   0          28m
sjr-pvc-3a10d3e3-1b92-4a9d-a947-798b7a453d95-l9lk-zhl8v           1/1     Running   0          26m
sjr-pvc-4910df77-26c6-46e4-91ba-ef9681222838-j9mt-c4zlz           1/1     Running   0          68m
sjr-pvc-7def1e54-65f9-4a27-81a4-05e97d49b0b3-tmt6-wpppq           1/1     Running   0          68m
sjr-pvc-8b22ccc8-115d-4e32-9d83-d312d325573f-b0zk-2nx5j           1/1     Running   0          21m
sjr-pvc-a2990021-bfdc-4fe2-9ba3-50d9cdc76fb4-3jjq-9w825           1/1     Running   0          68m
sjr-pvc-c0d3ae9d-2abe-4e4b-8d6f-fd80ebe565b2-slh2-4t9kl           1/1     Running   0          21m
sjr-pvc-d6d8d199-f069-44f6-9fb8-51b47146ca7d-7tca-2k5kx           1/1     Running   0          21m
sjr-pvc-e652078d-7c9c-49e3-81ab-33cbe216ce5a-j6ly-dj6bh           1/1     Running   0          28m
sjr-pvc-f63f8e24-9521-4551-a3c6-d158a2110a0e-4xhv-964fb           1/1     Running   0          31m

The logs of the Jiva controller, using the command:

kubectl logs -n openebs pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-ctrl-56ff59f8f8-zr5m7 -c pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-ctrl-con

The log is very long; the full log is here: https://jpst.it/2AMFm. An excerpt:

time="2021-08-12T08:21:19Z" level=info msg="Previously Volume RO: true, Currently: true, Total Replicas: 1, RW replicas: 0, Total backends: 1"
time="2021-08-12T08:21:19Z" level=info msg="prevCheckpoint: , currCheckpoint: "
time="2021-08-12T08:21:19Z" level=info msg="Monitoring stopped tcp://10.1.86.198:9502"
time="2021-08-12T08:21:19Z" level=info msg="RemoveReplica tcp://10.1.86.198:9502 ReplicasAdded:1 FrontendState:Up"
time="2021-08-12T08:21:19Z" level=info msg="check if replica tcp://10.1.86.198:9502 is already added"
time="2021-08-12T08:21:19Z" level=info msg="RemoveReplica tcp://10.1.86.198:9502 not found"
time="2021-08-12T08:21:24Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:24Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:24Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:24Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:24Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:25Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:25Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:25Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:25Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:25Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:26Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:26Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:26Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:26Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:26Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:27Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:27Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:27Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:27Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:27Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:28Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:28Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:28Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:28Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:28Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:29Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:29Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:29Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:29Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:29Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:30Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:30Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:30Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:30Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:30Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:31Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:31Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:31Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:31Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:31Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:32Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:32Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:32Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:32Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:32Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:33Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:33Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:33Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:33Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:33Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:34Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:34Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:34Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:34Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:34Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:35Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:35Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:35Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:35Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:35Z" level=warning msg="opcode: 2ah err: busy"
time="2021-08-12T08:21:36Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:36Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:36Z" level=error msg="Mode: ReadOnly"
time="2021-08-12T08:21:36Z" level=error msg="Error from backend: Mode: ReadOnly"
time="2021-08-12T08:21:36Z" level=warning msg="opcode: 2ah err: busy"

The logs for replica one:

kubectl logs -n openebs pvc-ac5395e1-6c5c-4aa3-87ee-491d3fe5fc67-rep-1-94ddbf4cd-8h68v
time="2021-08-12T08:25:28Z" level=info msg="MAX_CHAIN_LENGTH env not set, default value is 512"
time="2021-08-12T08:25:28Z" level=info msg="Read log info"
time="2021-08-12T08:25:28Z" level=info msg="Configured logging with retentionPeriod: 180, maxLogFileSize: 100, maxBackups: 5"
time="2021-08-12T08:25:28Z" level=info msg="Starting replica having replicaType: , frontendIP: 10.106.182.232, size: 5G, dir: /openebs"
time="2021-08-12T08:25:28Z" level=info msg="Setting replicaAddr: 10.1.70.248:9502, controlAddr: 10.1.70.248:9502, dataAddr: 10.1.70.248:9503, syncAddr: 10.1.70.248:9504"
time="2021-08-12T08:25:28Z" level=info msg="Waiting for s.Replica() to be non nil"
time="2021-08-12T08:25:28Z" level=info msg="Listening on data 10.1.70.248:9503"
time="2021-08-12T08:25:28Z" level=info msg="Listening on control 10.1.70.248:9502"
time="2021-08-12T08:25:28Z" level=info msg="Closing replica"
time="2021-08-12T08:25:28Z" level=info msg="Skip closing replica, s.r not set"
time="2021-08-12T08:25:28Z" level=info msg="CheckAndResetFailedRebuild tcp://10.1.70.248:9502"
time="2021-08-12T08:25:28Z" level=info msg="Opening volume /openebs, size 5368709120/512"
time="2021-08-12T08:25:28Z" level=info msg="Listening on sync 10.1.70.248:9504 start: 9700 end: 9800"
time="2021-08-12T08:25:28Z" level=info msg="Update revison count: 1 of snapshot: volume-snap-0e83b5aa-9a59-4dea-976e-78e79bb0d60b.img"
time="2021-08-12T08:25:28Z" level=info msg="Update revison count: 1 of snapshot: volume-snap-ca477ee2-cd3b-462b-91c4-126a66200336.img"
time="2021-08-12T08:25:28Z" level=info msg="Update revison count: 1 of snapshot: volume-snap-8adc59c7-130c-4e84-8fde-c5996ab14e39.img"
time="2021-08-12T08:25:28Z" level=info msg="Update revison count: 1 of snapshot: volume-snap-e9d88d64-057e-4355-b95f-2cd7d0bb0f7a.img"
time="2021-08-12T08:25:28Z" level=info msg="Update revison count: 1 of snapshot: volume-snap-1228d38c-949d-4556-be07-343ba9a595a6.img"
time="2021-08-12T08:25:28Z" level=info msg="Closing replica"
time="2021-08-12T08:25:30Z" level=info msg="Addreplica tcp://10.1.70.248:9502"
time="2021-08-12T08:25:30Z" level=info msg="Get Volume info from controller"
time="2021-08-12T08:25:30Z" level=info msg="Adding replica tcp://10.1.70.248:9502 in WO mode"
time="2021-08-12T08:25:30Z" level=info msg="GetReplica for id 1"
time="2021-08-12T08:25:30Z" level=error msg="Failed to create replica, error: Bad response: 500 500 Internal Server Error: {\"actions\":{},\"code\":\"Server Error\",\"detail\":\"\",\"links\":{\"self\":\"http://10.106.182.232:9501/v1/replicas\"},\"message\":\"can only have one WO replica at a time, found WO Replica: tcp://10.1.80.144:9502\",\"status\":500,\"type\":\"error\"}\n"
time="2021-08-12T08:25:30Z" level=info msg="Waiting for s.Replica() to be non nil"
time="2021-08-12T08:25:32Z" level=info msg="Waiting for s.Replica() to be non nil"
time="2021-08-12T08:25:34Z" level=info msg="Waiting for s.Replica() to be non nil"
2021-08-12T08:25:35.048Z	ERROR	app/add_replica.go:65		{"eventcode": "jiva.volume.replica.add.failure", "msg": "Failed to add Jiva volume replica", "rname": "tcp://10.1.70.248:9502"}
github.com/openebs/jiva/app.AutoAddReplica
	/go/src/github.com/openebs/jiva/app/add_replica.go:65
github.com/openebs/jiva/app.AutoConfigureReplica
	/go/src/github.com/openebs/jiva/app/replica.go:170
time="2021-08-12T08:25:35Z" level=info msg="Closing replica"
time="2021-08-12T08:25:35Z" level=info msg="Skip closing replica, s.r not set"
time="2021-08-12T08:25:35Z" level=fatal msg="Failed to add replica to controller, err: Bad response: 500 500 Internal Server Error: {\"actions\":{},\"code\":\"Server Error\",\"detail\":\"\",\"links\":{\"self\":\"http://10.106.182.232:9501/v1/replicas\"},\"message\":\"can only have one WO replica at a time, found WO Replica: tcp://10.1.80.144:9502\",\"status\":500,\"type\":\"error\"}\n, Shutting down..."

PS: I'm using multiple network drives (one physical drive split into multiple partitions). I mount exactly one network drive on each worker node using CIFS.
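
The mount on each worker looks roughly like this (server, share, and credential values are placeholders):

sudo mount -t cifs //<nas-ip>/<partition-share> /mnt/netgeardisk1 -o username=<user>,password=<pass>,uid=0,gid=0

So the Jiva replicas end up writing their volume files over CIFS.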

-- hubert
cifs
kubernetes
openebs

0 Answers