As it stands, the host `iscsid` and the `iscsid` inside the `kubelet` container will conflict with each other (whichever is launched first wins). If iscsi is not used for PVCs but is used by the host generally, all will be well. If the host does not use iscsi but the cluster is using iscsi PVCs, all will be well. If you need both, things get messy.
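A quick way to see whether a node is affected by the conflict described above is to list every `iscsid` visible from the host; with a kubelet container running its own copy there will be two, and only whichever started first actually owns the iSCSI sessions. (This is just a diagnostic sketch, not something from the issue itself.)

```shell
# List all running iscsid processes as seen from the host.
# The [i] trick keeps grep from matching its own command line.
ps ax | grep '[i]scsid' || echo "no iscsid running"
```

On a healthy node you would expect exactly one `iscsid` line here.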
I think there are 2 use-cases where this is important:
- Where the host needs to use iscsi outside the cluster for other purposes
- Mixing `csi` drivers which use `iscsid` (i.e. NetApp Trident et al) with in-cluster legacy iscsi workloads (`csi` drivers tend to leverage the host daemon/binaries)
I'm currently working on a `csi` driver and bumped into this situation when deploying it to a cluster which has a 'legacy' iscsi provisioner already installed. Both work independently, but once deployed jointly in the same cluster things blow up.
I'm not entirely sure how to solve this. I think it can be solved with the following:
1. Mount `/var/lib/iscsi` and `/etc/iscsi` into the `kubelet` container
2. Create an `iscsiadm` wrapper script (as noted in the blog entry below) which simply invokes the host `iscsiadm` in a chroot from inside the container (apparently the client binary needs to match the version of the running daemon)
Given that step 2 generally requires a full host mount of the root (`/`), step 1 may not be required.
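The wrapper in step 2 could be sketched roughly as below. This is only a hedged sketch: it assumes the host root filesystem is bind-mounted at `/host` inside the container, and that installing to `/usr/local/sbin` shadows any container-local `iscsiadm` on `PATH`; both paths are assumptions, not this project's actual layout.

```shell
# Install a wrapper so "iscsiadm" inside the container delegates to the host.
mkdir -p /usr/local/sbin
cat > /usr/local/sbin/iscsiadm <<'EOF'
#!/bin/sh
# Always invoke the host's iscsiadm via chroot: the client binary needs to
# match the version of the iscsid daemon actually running on the host.
exec chroot /host /sbin/iscsiadm "$@"
EOF
chmod +x /usr/local/sbin/iscsiadm
```

With `/` bind-mounted at `/host` like this, the host's `/etc/iscsi` and `/var/lib/iscsi` are already reachable through the chroot, which is why step 1's separate mounts may be redundant.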
Interested in hearing other thoughts/feedback to hopefully find a solution.