I would like to add an iSCSI volume to a pod as in this example. I have already prepared an iSCSI target on a Debian server and installed open-iscsi on all my worker nodes. I have also confirmed that I can mount the iSCSI target on a worker node with command-line tools (i.e. still outside Kubernetes); this works fine. For simplicity, there is no authentication (CHAP) in play yet, and there is already an ext4 file system present on the target.
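The manual check on a worker node was roughly as follows (a sketch; the exact `by-path` device name depends on portal, port, and LUN):

```shell
# Discover targets offered by the portal
iscsiadm -m discovery -t sendtargets -p 1.2.3.4

# Log in to the target
iscsiadm -m node -T iqn.2019-04.my-domain.com:lun1 -p 1.2.3.4 --login

# Mount the ext4 file system that is already on the LUN
mkdir -p /mnt/test
mount /dev/disk/by-path/ip-1.2.3.4:3260-iscsi-iqn.2019-04.my-domain.com:lun1-lun-0 /mnt/test

# Clean up again before handing the LUN over to Kubernetes
umount /mnt/test
iscsiadm -m node -T iqn.2019-04.my-domain.com:lun1 -p 1.2.3.4 --logout
```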
I would now like for Kubernetes 1.14 to mount the same iSCSI target into a pod with the following manifest:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-ro
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsivol
  volumes:
  - name: iscsivol
    iscsi:
      targetPortal: 1.2.3.4 # my target
      iqn: iqn.2019-04.my-domain.com:lun1
      lun: 0
      fsType: ext4
      readOnly: true
```
According to `kubectl describe pod` this works in the initial phase (`SuccessfulAttachVolume`), but then fails (`FailedMount`). The exact error messages read:

```
Warning  FailedMount  ...  Unable to mount volumes for pod "iscsipd_default(...)": timeout expired waiting for volumes to attach or mount for pod "default"/"iscsipd". list of unmounted volumes=[iscsivol]. list of unattached volumes=[iscsivol default-token-7bxnn]
Warning  FailedMount  ...  MountVolume.WaitForAttach failed for volume "iscsivol" : failed to get any path for iscsi disk, last err seen:
Could not attach disk: Timeout after 10s
```
How can I further diagnose and overcome this problem?
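A few things worth checking on the affected worker node while the pod is stuck (a diagnostic sketch; adjust the grep patterns to your environment):

```shell
# Is there an active iSCSI session for the target at all?
iscsiadm -m session -P 1

# Did the kernel create the expected device node?
ls -l /dev/disk/by-path/ | grep -i iscsi

# What does the kubelet itself log about the volume?
journalctl -u kubelet --no-pager | grep -i iscsi | tail -n 20

# Any low-level SCSI/iSCSI errors from the kernel?
dmesg | grep -iE 'iscsi|sd[a-z]' | tail -n 20
```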
UPDATE: In this related issue the solution consisted of using a numeric IP address for the target. However, this does not help in my case, since I am already using a `targetPortal` of the form `1.2.3.4` (I have also tried both with and without port number 3260).
UPDATE: Stopping `iscsid.service` and/or `open-iscsi.service` (as suggested here) did not make a difference either.
UPDATE: The error apparently gets triggered in `pkg/volume/iscsi/iscsi_util.go` if `waitForPathToExist(&devicePath, multipathDeviceTimeout, iscsiTransport)` fails. However, what is strange is that when it is triggered, the file at `devicePath` (`/dev/disk/by-path/ip-...-iscsi-...-lun-...`) does actually exist on the node.
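One situation that could explain "path exists on the node, but the kubelet times out waiting for it" is a mount-namespace mismatch, e.g. when the kubelet itself runs inside a container and does not see the host's `/dev`. A quick comparison (only relevant for containerized kubelets; the container name below is a made-up example):

```shell
# What the host sees:
ls /dev/disk/by-path/ | grep -i 'iqn.2019-04.my-domain.com'

# What a containerized kubelet would see (container name is hypothetical):
docker exec kubelet ls /dev/disk/by-path/ 2>/dev/null | grep -i 'iqn.2019-04.my-domain.com'
```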
UPDATE: I have used this procedure for defining a simple iSCSI target for these test purposes:

```shell
pvcreate /dev/sdb
vgcreate iscsi /dev/sdb
lvcreate -L 10G -n iscsi_1 iscsi

apt-get install tgt
cat >/etc/tgt/conf.d/iscsi_1.conf <<EOL
<target iqn.2019-04.my-domain.com:lun1>
    backing-store /dev/mapper/iscsi-iscsi_1
    initiator-address 5.6.7.8 # my cluster node #1
    ...                       # my cluster node #2, etc.
</target>
EOL
systemctl restart tgt
tgtadm --mode target --op show
```
This is probably due to an authentication issue with your iSCSI target.

Even if you don't use CHAP authentication yet, you may still have to explicitly disable authentication on the target. For example, if you use `targetcli`, you can run the commands below to disable it:

```
$ sudo targetcli
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute authentication=0      # disable auth
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute generate_node_acls=1  # force initiators to use the tpg1 auth settings by default
```

If this doesn't help, please share your iSCSI target configuration, or the guide that you followed.
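Since the question uses tgt rather than targetcli, a rough equivalent check on the tgt side would be the following (the `--tid 1` is an assumption; verify the actual target id in the `tgtadm` output first):

```shell
# tgt performs no CHAP authentication unless configured, but the
# initiator-address lines act as an IP-based ACL. Verify that every
# worker node's IP appears in the ACL section of the output:
tgtadm --mode target --op show

# For testing only: allow all initiators on the target
# (assumes the target got tid 1; check the output above)
tgtadm --mode target --op bind --tid 1 --initiator-address ALL
```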