If an iSCSI Initiator has already established a connection to an iSCSI Target, and HBlock then creates a new LUN associated with that iSCSI Target, how can the iSCSI Initiator discover the new LUN without disconnecting the existing connection?
Last updated: 2026-04-13 11:15:49
Depending on the client operating system, the new LUN can be discovered as follows:
Windows: In Server Manager -> File and Storage Services -> Volumes -> Disks, click Refresh to add the new LUN.
Linux: Before mounting the new LUN, scan for the new volume on the Linux client by running one of the following commands, depending on your environment:
rescan-scsi-bus.sh    # requires the sg3_utils package
or
for host in /sys/class/scsi_host/host*/scan; do echo "- - -" > $host; done
or
iscsiadm -m node -T iSCSI_TARGET_IQN -p SERVER_IP:port --rescan
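For example, the placeholders in the iscsiadm command can be filled in with the values of an existing session; a sketch using the target IQNs and portal addresses from the cluster example below (replace them with the values of your own environment):
iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target08.5 -p 192.168.0.67:3260 --rescan
iscsiadm -m node -T iqn.2012-08.cn.ctyunapi.oos:target08.6 -p 192.168.0.64:3260 --rescan
After the rescan, the new LUN should show up in the output of lsscsi or lsblk.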
Then mount the iSCSI disk partition to a local directory; once mounted, data can be written to it.
Mount a standalone (single-node) volume:
mkfs.ext4 /dev/sdX
mount /dev/sdX PATH    # PATH is the local mount directory
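A minimal sketch with concrete values, assuming the new LUN appears as /dev/sdb and using a hypothetical mount point /mnt/lun01 (check lsblk for the actual device name in your environment):
mkfs.ext4 /dev/sdb
mkdir /mnt/lun01
mount /dev/sdb /mnt/lun01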
Mount a cluster volume:
mkfs -t ext4 /dev/mapper/mpathX    # format as ext4
mkdir DIRECTORY_NAME_OR_PATH       # create a directory
mount /dev/mapper/mpathX DIRECTORY_NAME_OR_PATH    # mount mpathX to the directory
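To identify which mpathX device backs the new LUN, the WWID reported by ./stor lun ls can be matched against the multipath maps; a sketch using the WWID of lun08a from the example below:
multipath -ll | grep 11e926ff    # expected to print the line for mpathp (0x3000000011e926ff)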
Example
Cluster edition: volume lun08b is already connected to the client and mounted. On the HBlock side, a new volume lun08a is created under the same target as lun08b, and that target remains connected to the client.
On the HBlock side:
[root@hblockserver CTYUN_HBlock_Plus_4.0.0]# ./stor lun ls -n lun08a
LUN Name: lun08a (LUN 0)
Storage Mode: Local
Capacity: 800 GiB
Status: Normal
Auto Failback: Enabled
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target08.5 (192.168.0.67:3260,Active)
iqn.2012-08.cn.ctyunapi.oos:target08.6 (192.168.0.64:3260,Standby)
Create Time: 2026-01-04 16:44:40
Local Storage Class: EC 2+1+16 KiB
Minimum Replicas: 2
Redundancy Overlap: 1
Local Sector Size: 4096 Bytes
Storage Pool: pool1
Data Health: 100% normal, 0% low redundancy, 0% error
High Availability: ActiveStandby
Write Policy: WriteBack
WWID: 33000000011e926ff
UUID: lun-uuid-e4cadb0c-b76e-476e-b212-b1f637ae278f
Snapshot Count: 0
Snapshot Size: 0 B (Note: Snapshot size may vary due to LUN issues or parent snapshot deletion.)
[root@hblockserver CTYUN_HBlock_Plus_4.0.0_x64]# ./stor lun ls -n lun08b
LUN Name: lun08b (LUN 1)
Storage Mode: Local
Capacity: 810 GiB
Status: Normal
Auto Failback: Enabled
iSCSI Target: iqn.2012-08.cn.ctyunapi.oos:target08.5 (192.168.0.67:3260,Active)
iqn.2012-08.cn.ctyunapi.oos:target08.6 (192.168.0.64:3260,Standby)
Create Time: 2026-01-04 14:53:06
Local Storage Class: EC 2+1+16 KiB
Minimum Replicas: 2
Redundancy Overlap: 1
Local Sector Size: 4096 Bytes
Storage Pool: pool1
Data Health: 100% normal, 0% low redundancy, 0% error
High Availability: ActiveStandby
Write Policy: WriteBack
WWID: 33fffffff820da244
UUID: lun-uuid-739eb843-efb5-468b-93ad-295657aff7ce
Snapshot Count: 0
Snapshot Size: 0 B (Note: Snapshot size may vary due to LUN issues or parent snapshot deletion.)
Mount the new volume lun08a on the client:
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 810G 0 disk
└─mpathn 252:2 0 810G 0 mpath /mnt/mpathn
sdd 8:48 0 810G 0 disk
└─mpathn 252:2 0 810G 0 mpath /mnt/mpathn
sr0 11:0 1 378K 0 rom
vda 253:0 0 40G 0 disk
└─vda1 253:1 0 40G 0 part /
vdb 253:16 0 100G 0 disk
vdc 253:32 0 100G 0 disk /mnt/storage01
vdd 253:48 0 100G 0 disk
└─ceph--8f1a1320--6e78--44c6--aeb1--48d6f65939cc-osd--block--ab98336f--d28e--485b--8b22--d4a20c154a7c 252:0 0 100G 0 lvm
[root@client ~]# lsscsi
[0:0:0:0] cd/dvd QEMU QEMU DVD-ROM 2.5+ /dev/sr0
[4:0:0:1] disk CTYUN iSCSI LUN Device 1.00 /dev/sdb
[5:0:0:1] disk CTYUN iSCSI LUN Device 1.00 /dev/sdd
[root@client ~]# iscsiadm -m session
tcp: [3] 192.168.0.67:3260,1 iqn.2012-08.cn.ctyunapi.oos:target08.5 (non-flash)
tcp: [4] 192.168.0.64:3260,1 iqn.2012-08.cn.ctyunapi.oos:target08.6 (non-flash)
[root@client ~]# for host in /sys/class/scsi_host/host*/scan; do echo "- - -" > $host; done
[root@client ~]# lsscsi
[0:0:0:0] cd/dvd QEMU QEMU DVD-ROM 2.5+ /dev/sr0
[4:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sda
[4:0:0:1] disk CTYUN iSCSI LUN Device 1.00 /dev/sdb
[5:0:0:0] disk CTYUN iSCSI LUN Device 1.00 /dev/sdc
[5:0:0:1] disk CTYUN iSCSI LUN Device 1.00 /dev/sdd
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 800G 0 disk
└─mpathp 252:1 0 800G 0 mpath
sdb 8:16 0 810G 0 disk
└─mpathn 252:2 0 810G 0 mpath /mnt/mpathn
sdc 8:32 0 800G 0 disk
└─mpathp 252:1 0 800G 0 mpath
sdd 8:48 0 810G 0 disk
└─mpathn 252:2 0 810G 0 mpath /mnt/mpathn
sr0 11:0 1 378K 0 rom
vda 253:0 0 40G 0 disk
└─vda1 253:1 0 40G 0 part /
vdb 253:16 0 100G 0 disk
vdc 253:32 0 100G 0 disk /mnt/storage01
vdd 253:48 0 100G 0 disk
└─ceph--8f1a1320--6e78--44c6--aeb1--48d6f65939cc-osd--block--ab98336f--d28e--485b--8b22--d4a20c154a7c 252:0 0 100G 0 lvm
[root@client ~]# multipath -ll
mpathp (0x3000000011e926ff) dm-1 CTYUN ,iSCSI LUN Device
size=800G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 4:0:0:0 sda 8:0 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 5:0:0:0 sdc 8:32 active ghost running
mpathn (0x3fffffff820da244) dm-2 CTYUN ,iSCSI LUN Device
size=810G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 4:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 5:0:0:1 sdd 8:48 active ghost running
[root@client ~]# mkfs -t ext4 /dev/mapper/mpathp
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
52428800 inodes, 209715200 blocks
10485760 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2357198848
6400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@client ~]# mkdir /mnt/mpathp
[root@client ~]# mount /dev/mapper/mpathp /mnt/mpathp
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 800G 0 disk
└─mpathp 252:1 0 800G 0 mpath /mnt/mpathp
sdb 8:16 0 810G 0 disk
└─mpathn 252:2 0 810G 0 mpath /mnt/mpathn
sdc 8:32 0 800G 0 disk
└─mpathp 252:1 0 800G 0 mpath /mnt/mpathp
sdd 8:48 0 810G 0 disk
└─mpathn 252:2 0 810G 0 mpath /mnt/mpathn
sr0 11:0 1 378K 0 rom
vda 253:0 0 40G 0 disk
└─vda1 253:1 0 40G 0 part /
vdb 253:16 0 100G 0 disk
vdc 253:32 0 100G 0 disk /mnt/storage01
vdd 253:48 0 100G 0 disk
└─ceph--8f1a1320--6e78--44c6--aeb1--48d6f65939cc-osd--block--ab98336f--d28e--485b--8b22--d4a20c154a7c 252:0 0 100G 0 lvm
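As a quick, optional check that the newly mounted volume accepts writes (standard commands; the mount point is the one used in the example above):
df -h /mnt/mpathp
touch /mnt/mpathp/testfile && ls -l /mnt/mpathp/testfile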