Configure Proxmox VE 8.x with ONTAP storage for NVMe-oF
Proxmox VE 8.x hosts support the NVMe over Fibre Channel (NVMe/FC) and NVMe over TCP (NVMe/TCP) protocols with Asymmetric Namespace Access (ANA). ANA provides multipathing functionality equivalent to Asymmetric Logical Unit Access (ALUA) in iSCSI and FCP environments.

Learn how to configure NVMe over Fabrics (NVMe-oF) hosts for Proxmox VE 8.x. For additional support and feature information, see "ONTAP support and features".
NVMe-oF with Proxmox VE 8.x has the following known limitation:

- SAN booting configurations are not supported for NVMe-FC.
Step 1: Install Proxmox VE and the NVMe software, and verify your configuration
To configure a host for NVMe-oF, you install the host and NVMe packages, enable multipathing, and verify the host NQN configuration.
- Install Proxmox 8.x on the server. After the installation completes, verify that you are running the specified Proxmox 8.x kernel:

  `uname -r`

  The following example shows the Proxmox kernel version:

  6.8.12-10-pve
- Install the `nvme-cli` package, and verify the installed version:

  `apt list | grep nvme-cli`

  The following example shows the `nvme-cli` package version:

  nvme-cli/oldstable,now 2.4+really2.3-3 amd64
- Install the `libnvme` package, and verify the installed version:

  `apt list | grep libnvme`

  The following example shows the `libnvme` package version:

  libnvme1/oldstable,now 1.3-1+deb12u1 amd64
- On the host, check the `hostnqn` string at `/etc/nvme/hostnqn`:

  `cat /etc/nvme/hostnqn`

  The following example shows the `hostnqn` value:

  nqn.2014-08.org.nvmexpress:uuid:1536c9a6-f954-11ea-b24d-0a94efb46eaf
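  If `/etc/nvme/hostnqn` is missing or empty, `nvme-cli` can generate one. A minimal sketch (the guard avoids overwriting an NQN that ONTAP subsystems might already map):

  ```bash
  # Generate a host NQN only when one is not already present.
  [ -s /etc/nvme/hostnqn ] || nvme gen-hostnqn > /etc/nvme/hostnqn
  cat /etc/nvme/hostnqn
  ```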
- Verify that the `hostnqn` string matches the `hostnqn` string for the corresponding subsystem on the ONTAP array:

  `::> vserver nvme subsystem host show -vserver proxmox_120_122`

  Example:

  Vserver          Subsystem            Priority  Host NQN
  -------          ---------            --------  --------
  proxmox_120_122  proxmox_120_122      regular   nqn.2014-08.org.nvmexpress:uuid:1536c9a6-f954-11ea-b24d-0a94efb46eaf
                                        regular   nqn.2014-08.org.nvmexpress:uuid:991a7476-f9bf-11ea-8b73-0a94efb46c3b
                   proxmox_120_122_tcp  regular   nqn.2014-08.org.nvmexpress:uuid:1536c9a6-f954-11ea-b24d-0a94efb46eaf
                                        regular   nqn.2014-08.org.nvmexpress:uuid:991a7476-f9bf-11ea-8b73-0a94efb46c3b
  2 entries were displayed.

  If the `hostnqn` strings do not match, use the `vserver modify` command to update the `hostnqn` string on the corresponding ONTAP storage system subsystem to match the `hostnqn` string from `/etc/nvme/hostnqn` on the host.
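For illustration, a hedged sketch of what the update could look like in the ONTAP CLI; the vserver and subsystem names mirror the example above, `<old-hostnqn>` is a placeholder, and you should confirm the exact syntax for your ONTAP release:

```
::> vserver nvme subsystem host remove -vserver proxmox_120_122 -subsystem proxmox_120_122 -host-nqn <old-hostnqn>
::> vserver nvme subsystem host add -vserver proxmox_120_122 -subsystem proxmox_120_122 -host-nqn nqn.2014-08.org.nvmexpress:uuid:1536c9a6-f954-11ea-b24d-0a94efb46eaf
```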
Step 2: Configure NVMe/FC and NVMe/TCP
Configure NVMe/FC with Broadcom/Emulex or Marvell/QLogic adapters, or configure NVMe/TCP using manual discover and connect operations.
Configure NVMe/FC for Broadcom/Emulex adapters.
- Verify that you are using the supported adapter model:

  - Display the model name:

    `cat /sys/class/scsi_host/host*/modelname`

    You should see the following output:

    LPe35002-M2
    LPe35002-M2

  - Display the model description:

    `cat /sys/class/scsi_host/host*/modeldesc`

    You should see output similar to the following example:

    Emulex LPe35002-M2 2-Port 32Gb Fibre Channel Adapter
    Emulex LPe35002-M2 2-Port 32Gb Fibre Channel Adapter
- Verify that you are using the recommended Broadcom `lpfc` firmware and inbox driver:

  - Display the firmware version:

    `cat /sys/class/scsi_host/host*/fwrev`

    The command returns the firmware version:

    14.0.505.12, sli-4:6:d
    14.0.505.12, sli-4:6:d

  - Display the inbox driver version:

    `cat /sys/module/lpfc/version`

    The following example shows the driver version:

    0:14.2.0.17

  For the most current list of supported adapter driver and firmware versions, see the "Interoperability Matrix Tool".
- Verify that `lpfc_enable_fc4_type` is set to 3:

  `cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type`
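  If the parameter is not 3, the following is a minimal sketch of persisting it through the `lpfc` module options; it assumes the inbox `lpfc` driver and a Debian-style initramfs on the Proxmox host, and a reboot is required:

  ```bash
  # lpfc_enable_fc4_type=3 enables both FCP and NVMe on the lpfc initiator ports.
  echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf
  update-initramfs -u -k all   # rebuild the initramfs so the option applies at boot
  reboot
  ```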
- Verify that you can view your initiator ports:

  `cat /sys/class/fc_host/host*/port_name`

  You should see output similar to the following:

  0x100000109b95467e
  0x100000109b95467f
- Verify that your initiator ports are online:

  `cat /sys/class/fc_host/host*/port_state`

  You should see the following output:

  Online
  Online
- Verify that the NVMe/FC initiator ports are enabled and the target ports are visible:

  `cat /sys/class/scsi_host/host*/nvme_info`

  Example:

  NVME Initiator Enabled
  XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
  NVME LPORT lpfc0 WWPN x10005ced8c531948 WWNN x20005ced8c531948 DID x082400 ONLINE
  NVME RPORT       WWPN x200ed039eac79573 WWNN x200dd039eac79573 DID x060902 TARGET DISCSRVC ONLINE
  NVME RPORT       WWPN x2001d039eac79573 WWNN x2000d039eac79573 DID x060904 TARGET DISCSRVC ONLINE

  NVME Statistics
  LS: Xmt 0000000034 Cmpl 0000000034 Abort 00000000
  LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
  Total FCP Cmpl 0000000000142cfb Issue 0000000000142cfc OutIO 0000000000000001
          abort 00000005 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
  FCP CMPL: xb 00000005 Err 00000005

  NVME Initiator Enabled
  XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
  NVME LPORT lpfc1 WWPN x10005ced8c531949 WWNN x20005ced8c531949 DID x082500 ONLINE
  NVME RPORT       WWPN x2010d039eac79573 WWNN x200dd039eac79573 DID x062902 TARGET DISCSRVC ONLINE
  NVME RPORT       WWPN x2007d039eac79573 WWNN x2000d039eac79573 DID x062904 TARGET DISCSRVC ONLINE

  NVME Statistics
  LS: Xmt 0000000034 Cmpl 0000000034 Abort 00000000
  LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
  Total FCP Cmpl 00000000000d39f1 Issue 00000000000d39f2 OutIO 0000000000000001
          abort 00000005 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
  FCP CMPL: xb 00000005 Err 00000005
Configure NVMe/FC for Marvell/QLogic adapters.
- Verify that you are using the supported adapter driver and firmware versions:

  `cat /sys/class/fc_host/host*/symbolic_name`

  The following example shows the driver and firmware versions:

  QLE2872 FW:v9.15.00 DVR:v10.02.09.300-k
  QLE2872 FW:v9.15.00 DVR:v10.02.09.300-k
- Verify that `ql2xnvmeenable` is set. This enables the Marvell adapter to function as an NVMe/FC initiator:

  `cat /sys/module/qla2xxx/parameters/ql2xnvmeenable`

  The expected output is 1.
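  If the output is 0, the following is a minimal sketch of enabling the parameter persistently; it assumes the inbox `qla2xxx` driver and a Debian-style initramfs, and a reboot is required:

  ```bash
  # ql2xnvmeenable=1 lets the qla2xxx adapter act as an NVMe/FC initiator.
  echo "options qla2xxx ql2xnvmeenable=1" > /etc/modprobe.d/qla2xxx.conf
  update-initramfs -u -k all   # persist the option into the initramfs
  reboot
  ```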
The NVMe/TCP protocol does not support an auto-connect operation. Instead, you discover the NVMe/TCP subsystems and namespaces by performing the NVMe/TCP `connect` or `connect-all` operations manually.
- Check that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

  `nvme discover -t tcp -w host-traddr -a traddr`

  Example:

  nvme discover -t tcp -w 192.168.2.22 -a 192.168.2.30

  Discovery Log Number of Records 12, Generation counter 13
  =====Discovery Log Entry 0======
  trtype:  tcp
  adrfam:  ipv4
  subtype: current discovery subsystem
  treq:    not specified
  portid:  10
  trsvcid: 8009
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:discovery
  traddr:  192.168.2.30
  eflags:  explicit discovery connections, duplicate discovery information
  sectype: none
  =====Discovery Log Entry 1======
  trtype:  tcp
  adrfam:  ipv4
  subtype: current discovery subsystem
  treq:    not specified
  portid:  9
  trsvcid: 8009
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:discovery
  traddr:  192.168.1.30
  eflags:  explicit discovery connections, duplicate discovery information
  sectype: none
  =====Discovery Log Entry 2======
  trtype:  tcp
  adrfam:  ipv4
  subtype: current discovery subsystem
  treq:    not specified
  portid:  12
  trsvcid: 8009
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:discovery
  traddr:  192.168.2.25
  eflags:  explicit discovery connections, duplicate discovery information
  sectype: none
  =====Discovery Log Entry 3======
  trtype:  tcp
  adrfam:  ipv4
  subtype: current discovery subsystem
  treq:    not specified
  portid:  11
  trsvcid: 8009
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:discovery
  traddr:  192.168.1.25
  eflags:  explicit discovery connections, duplicate discovery information
  sectype: none
  =====Discovery Log Entry 4======
  trtype:  tcp
  adrfam:  ipv4
  subtype: nvme subsystem
  treq:    not specified
  portid:  10
  trsvcid: 4420
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122
  traddr:  192.168.2.30
  eflags:  none
  sectype: none
  =====Discovery Log Entry 5======
  trtype:  tcp
  adrfam:  ipv4
  subtype: nvme subsystem
  treq:    not specified
  portid:  9
  trsvcid: 4420
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122
  traddr:  192.168.1.30
  eflags:  none
  sectype: none
  =====Discovery Log Entry 6======
  trtype:  tcp
  adrfam:  ipv4
  subtype: nvme subsystem
  treq:    not specified
  portid:  12
  trsvcid: 4420
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122
  traddr:  192.168.2.25
  eflags:  none
  sectype: none
  =====Discovery Log Entry 7======
  trtype:  tcp
  adrfam:  ipv4
  subtype: nvme subsystem
  treq:    not specified
  portid:  11
  trsvcid: 4420
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122
  traddr:  192.168.1.25
  eflags:  none
  sectype: none
  =====Discovery Log Entry 8======
  trtype:  tcp
  adrfam:  ipv4
  subtype: nvme subsystem
  treq:    not specified
  portid:  10
  trsvcid: 4420
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122_tcp
  traddr:  192.168.2.30
  eflags:  none
  sectype: none
  =====Discovery Log Entry 9======
  trtype:  tcp
  adrfam:  ipv4
  subtype: nvme subsystem
  treq:    not specified
  portid:  9
  trsvcid: 4420
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122_tcp
  traddr:  192.168.1.30
  eflags:  none
  sectype: none
  =====Discovery Log Entry 10======
  trtype:  tcp
  adrfam:  ipv4
  subtype: nvme subsystem
  treq:    not specified
  portid:  12
  trsvcid: 4420
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122_tcp
  traddr:  192.168.2.25
  eflags:  none
  sectype: none
  =====Discovery Log Entry 11======
  trtype:  tcp
  adrfam:  ipv4
  subtype: nvme subsystem
  treq:    not specified
  portid:  11
  trsvcid: 4420
  subnqn:  nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122_tcp
  traddr:  192.168.1.25
  eflags:  none
  sectype: none
- Verify that all other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:

  `nvme discover -t tcp -w host-traddr -a traddr`

  Example:

  nvme discover -t tcp -w 192.168.1.22 -a 192.168.1.30
  nvme discover -t tcp -w 192.168.2.22 -a 192.168.2.30
  nvme discover -t tcp -w 192.168.1.22 -a 192.168.1.25
  nvme discover -t tcp -w 192.168.2.22 -a 192.168.2.25
- Run the `nvme connect-all` command across all the supported NVMe/TCP initiator-target LIFs in the node:

  `nvme connect-all -t tcp -w host-traddr -a traddr`

  Example:

  nvme connect-all -t tcp -w 192.168.1.22 -a 192.168.1.30
  nvme connect-all -t tcp -w 192.168.2.22 -a 192.168.2.30
  nvme connect-all -t tcp -w 192.168.1.22 -a 192.168.1.25
  nvme connect-all -t tcp -w 192.168.2.22 -a 192.168.2.25
The NVMe/TCP `ctrl_loss_tmo` timeout is automatically set to "off". As a result:

- There is no limit on the number of retries (indefinite retry).
- You don't need to manually configure a specific `ctrl_loss_tmo` timeout duration when using the `nvme connect` or `nvme connect-all` commands (the `-l` option).
- The NVMe/TCP controllers don't experience timeouts in the event of a path failure and remain connected indefinitely. You can confirm this on the host, as shown in the sketch after this list.
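A minimal sketch for confirming the timeout on connected controllers; it assumes the fabrics controllers expose the `ctrl_loss_tmo` attribute in sysfs, as recent kernels do:

```bash
# Print ctrl_loss_tmo for every NVMe controller; NVMe/TCP controllers
# connected with the defaults above should report "off".
for c in /sys/class/nvme/nvme*/ctrl_loss_tmo; do
  printf '%s: %s\n' "$c" "$(cat "$c")"
done
```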
Step 3: (Optional) Enable 1MB I/O for NVMe/FC
ONTAP reports a Max Data Transfer Size (MDTS) of 8 in the Identify Controller data. This means the maximum I/O request size can be up to 1MB. To issue I/O requests of size 1MB for a Broadcom NVMe/FC host, you should increase the value of the `lpfc` driver's `lpfc_sg_seg_cnt` parameter from the default of 64 to 256.
Note: These steps don't apply to Qlogic NVMe/FC hosts.
- Set the `lpfc_sg_seg_cnt` parameter to 256:

  `cat /etc/modprobe.d/lpfc.conf`

  You should see output similar to the following example:

  options lpfc lpfc_sg_seg_cnt=256
- Run the `update-initramfs -u` command, and then reboot the host.
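  A combined sketch of this step, assuming a Debian-style initramfs as used by Proxmox VE:

  ```bash
  # Persist the lpfc option, rebuild the initramfs for all kernels, and reboot.
  echo "options lpfc lpfc_sg_seg_cnt=256" >> /etc/modprobe.d/lpfc.conf
  update-initramfs -u -k all
  reboot
  ```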
- Verify that the value of `lpfc_sg_seg_cnt` is 256:

  `cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt`
Step 4: Verify the NVMe boot services
With Proxmox 8.x, the `nvmefc-boot-connections.service` and `nvmf-autoconnect.service` boot services included in the NVMe/FC `nvme-cli` package are automatically enabled when the system boots.

After the boot completes, verify that the `nvmefc-boot-connections.service` and `nvmf-autoconnect.service` boot services are enabled.
- Verify that `nvmf-autoconnect.service` is enabled:

  `systemctl status nvmf-autoconnect.service`

  Example output:

  ○ nvmf-autoconnect.service - Connect NVMe-oF subsystems automatically during boot
       Loaded: loaded (/lib/systemd/system/nvmf-autoconnect.service; enabled; preset: enabled)
       Active: inactive (dead) since Fri 2025-11-21 19:59:10 IST; 8s ago
      Process: 256613 ExecStartPre=/sbin/modprobe nvme-fabrics (code=exited, status=0/SUCCESS)
      Process: 256614 ExecStart=/usr/sbin/nvme connect-all (code=exited, status=0/SUCCESS)
     Main PID: 256614 (code=exited, status=0/SUCCESS)
          CPU: 18ms

  Nov 21 19:59:07 SR665-14-122.lab.eng.btc.netapp.in systemd[1]: Starting nvmf-autoconnect.service - Connect NVMe-oF subsystems automatically during boot...
  Nov 21 19:59:10 SR665-14-122.lab.eng.btc.netapp.in nvme[256614]: Failed to write to /dev/nvme-fabrics: Invalid argument
  Nov 21 19:59:10 SR665-14-122.lab.eng.btc.netapp.in nvme[256614]: Failed to write to /dev/nvme-fabrics: Invalid argument
  Nov 21 19:59:10 SR665-14-122.lab.eng.btc.netapp.in systemd[1]: nvmf-autoconnect.service: Deactivated successfully.
  Nov 21 19:59:10 SR665-14-122.lab.eng.btc.netapp.in systemd[1]: Finished nvmf-autoconnect.service - Connect NVMe-oF subsystems automatically during boot.
- Verify that `nvmefc-boot-connections.service` is enabled:

  `systemctl status nvmefc-boot-connections.service`

  Example output:

  ○ nvmefc-boot-connections.service - Auto-connect to subsystems on FC-NVME devices found during boot
       Loaded: loaded (/lib/systemd/system/nvmefc-boot-connections.service; enabled; preset: enabled)
       Active: inactive (dead) since Thu 2025-11-20 17:48:29 IST; 1 day 2h ago
      Process: 1381 ExecStart=/bin/sh -c echo add > /sys/class/fc/fc_udev_device/nvme_discovery (code=exited, status=0/SUCCESS)
     Main PID: 1381 (code=exited, status=0/SUCCESS)
          CPU: 3ms

  Nov 20 17:48:29 SR665-14-122.lab.eng.btc.netapp.in systemd[1]: Starting nvmefc-boot-connections.service - Auto-connect to subsystems on FC-NVME devices found during boot...
  Nov 20 17:48:29 SR665-14-122.lab.eng.btc.netapp.in systemd[1]: nvmefc-boot-connections.service: Deactivated successfully.
  Nov 20 17:48:29 SR665-14-122.lab.eng.btc.netapp.in systemd[1]: Finished nvmefc-boot-connections.service - Auto-connect to subsystems on FC-NVME devices found during boot.
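A compact alternative check, sketched with `systemctl is-enabled`; it exits nonzero if either service is not enabled:

```bash
# Prints the enablement state of each boot service on its own line.
systemctl is-enabled nvmf-autoconnect.service nvmefc-boot-connections.service
```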
Step 5: Verify the multipath configuration
Verify that the in-kernel NVMe multipath status, ANA status, and ONTAP namespaces are correct for the NVMe-oF configuration.
- Verify that in-kernel NVMe multipath is enabled:

  `cat /sys/module/nvme_core/parameters/multipath`

  You should see the following output:

  Y
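  If the output is `N`, a minimal sketch for enabling in-kernel NVMe multipath persistently; it assumes the kernel was built with NVMe multipath support and a Debian-style initramfs, and a reboot is required:

  ```bash
  # Enable native NVMe multipathing through the nvme_core module option.
  echo "options nvme_core multipath=Y" > /etc/modprobe.d/nvme-multipath.conf
  update-initramfs -u -k all
  reboot
  ```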
- Verify that the appropriate NVMe-oF settings (such as model set to NetApp ONTAP Controller and load-balancing iopolicy set to round-robin) for the respective ONTAP namespaces correctly display on the host. If `iopolicy` differs, you can change it as shown in the sketch after this list.

  - Display the subsystems:

    `cat /sys/class/nvme-subsystem/nvme-subsys*/model`

    You should see the following output:

    NetApp ONTAP Controller
    NetApp ONTAP Controller

  - Display the policy:

    `cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy`

    You should see the following output:

    round-robin
    round-robin
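  If `iopolicy` reports a different value, a sketch for switching it at runtime; note that this change is not persistent across reboots, and subsystem numbering is host-specific:

  ```bash
  # Set the round-robin I/O policy on every NVMe subsystem exposed by the host.
  for s in /sys/class/nvme-subsystem/nvme-subsys*/iopolicy; do
    echo round-robin > "$s"
  done
  ```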
- Verify that the namespaces are created and correctly discovered on the host:

  `nvme list`

  Example:

  Node          Generic      SN                   Model                    Namespace  Usage               Format       FW Rev
  ------------- ------------ -------------------- ------------------------ ---------- ------------------- ------------ --------
  /dev/nvme2n20 /dev/ng2n20  81K13BUDdygsAAAAAAAG NetApp ONTAP Controller  10         5.56 GB / 91.27 GB  4 KiB + 0 B  9.18.1
- Verify that the controller state of each path is live and has the correct ANA status:

  NVMe/FC

  `nvme list-subsys /dev/nvme2n20`

  Example:

  nvme-subsys2 - NQN=nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122_tcp
  \
   +- nvme1 fc traddr=nn-0x200dd039eac79573:pn-0x2010d039eac79573,host_traddr=nn-0x20005ced8c531949:pn-0x10005ced8c531949 live optimized
   +- nvme3 fc traddr=nn-0x200dd039eac79573:pn-0x200ed039eac79573,host_traddr=nn-0x20005ced8c531948:pn-0x10005ced8c531948 live optimized
   +- nvme5 fc traddr=nn-0x200dd039eac79573:pn-0x200fd039eac79573,host_traddr=nn-0x20005ced8c531949:pn-0x10005ced8c531949 live non-optimized
   +- nvme7 fc traddr=nn-0x200dd039eac79573:pn-0x2011d039eac79573,host_traddr=nn-0x20005ced8c531948:pn-0x10005ced8c531948 live non-optimized

  NVMe/TCP

  `nvme list-subsys /dev/nvme2n3`

  Example:

  nvme-subsys2 - NQN=nqn.1992-08.com.netapp:sn.ae9f2d55a7ec11ef8751d039ea9e891c:subsystem.proxmox_120_122_tcp
  \
   +- nvme2 tcp traddr=192.168.1.30,trsvcid=4420,host_traddr=192.168.1.22,src_addr=192.168.1.22 live optimized
   +- nvme4 tcp traddr=192.168.2.30,trsvcid=4420,host_traddr=192.168.2.22,src_addr=192.168.2.22 live optimized
   +- nvme6 tcp traddr=192.168.1.25,trsvcid=4420,host_traddr=192.168.1.22,src_addr=192.168.1.22 live non-optimized
   +- nvme8 tcp traddr=192.168.2.25,trsvcid=4420,host_traddr=192.168.2.22,src_addr=192.168.2.22 live non-optimized
- Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

  Column

  `nvme netapp ontapdevices -o column`

  Example:

  Device        Vserver          Namespace Path
  ------------- ---------------- ------------------------------
  /dev/nvme2n11 proxmox_120_122  /vol/vm120_tcp1/ns

  NSID UUID                                 Size
  ---- ------------------------------------ --------
  1    5aefea74-f0cf-4794-a7e9-e113c4659aca 37.58GB

  JSON

  `nvme netapp ontapdevices -o json`

  Example:

  {
    "Device":"/dev/nvme2n11",
    "Vserver":"proxmox_120_122",
    "Namespace_Path":"/vol/vm120_tcp1/ns",
    "NSID":1,
    "UUID":"5aefea74-f0cf-4794-a7e9-e113c4659aca",
    "Size":"37.58GB",
    "LBA_Data_Size":4096,
    "Namespace_Size":32212254720
  }
Step 6: Review known issues

There are no known issues.