NVMe-oF host configuration for RHEL 8.10 with ONTAP
NVMe over Fabrics (NVMe-oF), including NVMe over Fibre Channel (NVMe/FC) and other transports, is supported with Red Hat Enterprise Linux (RHEL) 8.10 with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is the equivalent of ALUA multipathing in iSCSI and FC environments and is implemented with in-kernel NVMe multipath.
The following support is available for NVMe-oF host configuration for RHEL 8.10 with ONTAP:
- Support for NVMe over TCP (NVMe/TCP) in addition to NVMe/FC. The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.
For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.
Known limitations
- In-kernel NVMe multipath is disabled by default for RHEL 8.10 NVMe-oF hosts. Therefore, you need to enable it manually.
- On RHEL 8.10 hosts, NVMe/TCP is a technology preview feature due to open issues.
- SAN booting using the NVMe-oF protocol is currently not supported.
Enable in-kernel multipath
You can use the following procedure to enable in-kernel multipath.
- Install RHEL 8.10 on the host server.
- After the installation is complete, verify that you are running the specified RHEL 8.10 kernel:
# uname -r
Example output:
4.18.0-553.el8_10.x86_64
- Install the nvme-cli package, and verify the installed version:
# rpm -qa | grep nvme-cli
Example output:
nvme-cli-1.16-9.el8.x86_64
- Enable in-kernel NVMe multipath:
Example:
# grubby --args=nvme_core.multipath=Y --update-kernel /boot/vmlinuz-4.18.0-553.el8_10.x86_64
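You can confirm that the boot entry now carries the option, and, after the later reboot step, that the running kernel picked it up. This is a minimal sketch using standard tooling, not part of the documented procedure:
# grubby --info=ALL | grep nvme_core.multipath
# cat /proc/cmdline | grep -o nvme_core.multipath=Y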
- On the host, check the host NQN string at /etc/nvme/hostnqn:
# cat /etc/nvme/hostnqn
Example output:
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0032-3410-8035-b8c04f4c5132
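If the /etc/nvme/hostnqn file does not exist on the host, the nvme-cli package can generate one. This is a sketch of one common approach, not a step from this procedure:
# nvme gen-hostnqn > /etc/nvme/hostnqn
# cat /etc/nvme/hostnqn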
-
Verify that the
hostnqn
string matches thehostnqn
string for the corresponding subsystem on the ONTAP array:::> vserver nvme subsystem host show -vserver vs_fcnvme_141
Example output
Vserver Subsystem Host NQN ----------- --------------- ---------------------------------------------------------- vs_25_2742 rhel_101_QLe2772 nqn.2014-08.org.nvmexpress:uuid:546399fc-160f-11e5-89aa-98be942440ca
If the host NQN strings do not match, you can use the vserver modify command to update the host NQN string on your corresponding ONTAP NVMe subsystem to match the host NQN string in /etc/nvme/hostnqn on the host.
- Reboot the host.
If you intend to run both NVMe and SCSI co-existent traffic on the same host, NetApp recommends using in-kernel NVMe multipath for ONTAP namespaces and dm-multipath for ONTAP LUNs, respectively. This excludes the ONTAP namespaces from dm-multipath and prevents dm-multipath from claiming these namespace devices. You can do this by adding the enable_foreign setting to the /etc/multipath.conf file:
# cat /etc/multipath.conf
defaults {
   enable_foreign NONE
}
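After changing /etc/multipath.conf, multipathd must re-read its configuration for the enable_foreign setting to take effect. A minimal sketch, assuming multipathd runs under systemd:
# systemctl restart multipathd.service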
Configure NVMe/FC
You can configure NVMe/FC for Broadcom/Emulex or Marvell/Qlogic adapters.
Broadcom/Emulex
- Verify that you are using the supported adapter model:
# cat /sys/class/scsi_host/host*/modelname
Example output:
LPe32002-M2
LPe32002-M2
# cat /sys/class/scsi_host/host*/modeldesc
Example output:
Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
- Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:
# cat /sys/class/scsi_host/host*/fwrev
14.2.539.21, sli-4:2:c
14.2.539.21, sli-4:2:c
# cat /sys/module/lpfc/version
0:14.0.0.21
For the most current list of supported adapter driver and firmware versions, see the NetApp Interoperability Matrix Tool.
- Verify that lpfc_enable_fc4_type is set to 3:
# cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
3
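If the parameter is not set to 3 on your host, one common way to persist it is through a modprobe configuration file and an initramfs rebuild. This is a sketch under that assumption, not part of the documented procedure:
# echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf
# dracut -f
# reboot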
- Verify that the initiator ports are up and running and that you can see the target LIFs:
# cat /sys/class/fc_host/host*/port_name
0x10000090fae0ec88
0x10000090fae0ec89
# cat /sys/class/fc_host/host*/port_state
Online
Online
# cat /sys/class/scsi_host/host*/nvme_info
NVME Initiator Enabled
XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
NVME LPORT lpfc0 WWPN x100000109bf044b1 WWNN x200000109bf044b1 DID x022a00 ONLINE
NVME RPORT WWPN x211ad039eaa7dfc8 WWNN x2119d039eaa7dfc8 DID x021302 TARGET DISCSRVC ONLINE
NVME RPORT WWPN x211cd039eaa7dfc8 WWNN x2119d039eaa7dfc8 DID x020b02 TARGET DISCSRVC ONLINE
NVME Statistics
LS: Xmt 00000001ff Cmpl 00000001ff Abort 00000000
LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
Total FCP Cmpl 0000000001330ec7 Issue 0000000001330ec9 OutIO 0000000000000002
abort 00000330 noxri 00000000 nondlp 0000000b qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 00000354 Err 00000361

NVME Initiator Enabled
XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
NVME LPORT lpfc1 WWPN x100000109bf044b2 WWNN x200000109bf044b2 DID x021b00 ONLINE
NVME RPORT WWPN x211bd039eaa7dfc8 WWNN x2119d039eaa7dfc8 DID x022902 TARGET DISCSRVC ONLINE
NVME RPORT WWPN x211dd039eaa7dfc8 WWNN x2119d039eaa7dfc8 DID x020102 TARGET DISCSRVC ONLINE
NVME Statistics
LS: Xmt 00000001ff Cmpl 00000001ff Abort 00000000
LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
Total FCP Cmpl 00000000012ec220 Issue 00000000012ec222 OutIO 0000000000000002
abort 0000033b noxri 00000000 nondlp 00000085 qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 00000368 Err 00000382
Marvell/Qlogic
The native inbox qla2xxx driver included in the RHEL 8.10 GA kernel has the latest upstream fixes. These fixes are essential for ONTAP support.
- Verify that you are running the supported adapter driver and firmware versions:
# cat /sys/class/fc_host/host*/symbolic_name
Example output:
QLE2742 FW: v9.10.11 DVR: v10.02.08.200-k
QLE2742 FW: v9.10.11 DVR: v10.02.08.200-k
- Verify that ql2xnvmeenable is set. This enables the Marvell adapter to function as an NVMe/FC initiator:
# cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
1
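If ql2xnvmeenable is not set to 1 on your host, the following is a sketch of one way to persist it; the modprobe file name is an assumption, not part of the documented procedure:
# echo "options qla2xxx ql2xnvmeenable=1" >> /etc/modprobe.d/qla2xxx.conf
# dracut -f
# reboot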
Enable 1MB I/O (Optional)
ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data. This means the maximum I/O request size can be up to 1 MB. To issue I/O requests of size 1 MB for a Broadcom NVMe/FC host, you must increase the value of the lpfc_sg_seg_cnt parameter to 256 from the default value of 64.
The following steps don't apply to Qlogic NVMe/FC hosts.
- Set the lpfc_sg_seg_cnt parameter to 256:
# cat /etc/modprobe.d/lpfc.conf
Example output:
options lpfc lpfc_sg_seg_cnt=256
- Run the dracut -f command, and reboot the host.
- Verify that lpfc_sg_seg_cnt is 256:
# cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
The expected value is 256.
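Taken together, a possible end-to-end sequence for this section could look as follows. This is a sketch that simply strings the steps above together, not an additional requirement:
# echo "options lpfc lpfc_sg_seg_cnt=256" > /etc/modprobe.d/lpfc.conf
# dracut -f
# reboot
# cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
256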
Configure NVMe/TCP
NVMe/TCP does not have auto-connect functionality. Therefore, if a path goes down and is not reinstated within the default timeout period of 10 minutes, NVMe/TCP cannot automatically reconnect. To prevent a timeout, you should set the retry period for failover events to at least 30 minutes.
- Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:
nvme discover -t tcp -w host-traddr -a traddr
Example output:
# nvme discover -t tcp -w 192.168.2.31 -a 192.168.2.25
Discovery Log Number of Records 8, Generation counter 18
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified.
portid:  0
trsvcid: 8009
subnqn:  nqn.1992-08.com.netapp:sn.a1b2b785b9de11ee8e7fd039ea9e8ae9:discovery
traddr:  192.168.1.25
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified.
portid:  1
trsvcid: 8009
subnqn:  nqn.1992-08.com.netapp:sn.a1b2b785b9de11ee8e7fd039ea9e8ae9:discovery
traddr:  192.168.2.26
sectype: none
..........
- Verify that the other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:
nvme discover -t tcp -w host-traddr -a traddr
Example output:
# nvme discover -t tcp -w 192.168.2.31 -a 192.168.2.25
# nvme discover -t tcp -w 192.168.1.31 -a 192.168.1.24
# nvme discover -t tcp -w 192.168.2.31 -a 192.168.2.26
# nvme discover -t tcp -w 192.168.1.31 -a 192.168.1.25
- Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes, and set the controller loss timeout period to at least 30 minutes or 1800 seconds (a scripted version is sketched after this step):
nvme connect-all -t tcp -w host-traddr -a traddr -l 1800
Example output:
# nvme connect-all -t tcp -w 192.168.2.31 -a 192.168.2.25 -l 1800
# nvme connect-all -t tcp -w 192.168.1.31 -a 192.168.1.24 -l 1800
# nvme connect-all -t tcp -w 192.168.2.31 -a 192.168.2.26 -l 1800
# nvme connect-all -t tcp -w 192.168.1.31 -a 192.168.1.25 -l 1800
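If you have many initiator-target LIF combinations, you can script the connect-all invocations. The following bash sketch uses the example addresses from this procedure; adjust the pairs for your environment:
for pair in "192.168.2.31 192.168.2.25" \
            "192.168.1.31 192.168.1.24" \
            "192.168.2.31 192.168.2.26" \
            "192.168.1.31 192.168.1.25"; do
  set -- $pair   # $1 = host-traddr, $2 = traddr
  nvme connect-all -t tcp -w "$1" -a "$2" -l 1800   # -l 1800 = 30-minute controller loss timeout
done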
Validate NVMe-oF
You can use the following procedure to validate NVMe-oF.
- Verify that in-kernel NVMe multipath is enabled:
# cat /sys/module/nvme_core/parameters/multipath
Y
- Verify that the appropriate NVMe-oF settings (such as model set to NetApp ONTAP Controller and load balancing iopolicy set to round-robin) for the respective ONTAP namespaces correctly reflect on the host (if a subsystem reports a different policy, see the sketch after this step):
# cat /sys/class/nvme-subsystem/nvme-subsys*/model
NetApp ONTAP Controller
NetApp ONTAP Controller
# cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
round-robin
round-robin
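If a subsystem reports an iopolicy other than round-robin, the policy can be changed at runtime through sysfs. A minimal sketch, assuming the subsystem is nvme-subsys0:
# echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
# cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
round-robin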
- Verify that the namespaces are created and correctly discovered on the host:
# nvme list
Example output:
Node          SN                    Model
------------- --------------------- ---------------------------
/dev/nvme0n1  81K1ABVnkwbNAAAAAAAB  NetApp ONTAP Controller

Namespace  Usage                Format       FW Rev
---------- -------------------- ------------ --------
1          21.47 GB / 21.47 GB  4 KiB + 0 B  FFFFFFFF
- Verify that the controller state of each path is live and has the correct ANA status:
NVMe/FC
# nvme list-subsys /dev/nvme0n1
Example output:
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.0cd9ee0dc0ec11ee8e7fd039ea9e8ae9:subsystem.nvme
\
 +- nvme1 fc traddr=nn-0x2005d039eaa7dfc8:pn-0x2086d039eaa7dfc8 host_traddr=nn-0x20000024ff752e6d:pn-0x21000024ff752e6d live non-optimized
 +- nvme2 fc traddr=nn-0x2005d039eaa7dfc8:pn-0x2016d039eaa7dfc8 host_traddr=nn-0x20000024ff752e6c:pn-0x21000024ff752e6c live optimized
 +- nvme3 fc traddr=nn-0x2005d039eaa7dfc8:pn-0x2081d039eaa7dfc8 host_traddr=nn-0x20000024ff752e6c:pn-0x21000024ff752e6c live non-optimized
 +- nvme4 fc traddr=nn-0x2005d039eaa7dfc8:pn-0x2087d039eaa7dfc8 host_traddr=nn-0x20000024ff752e6d:pn-0x21000024ff752e6d live optimized
NVMe/TCP
# nvme list-subsys /dev/nvme0n1
Example output:
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.a1b2b785b9de11ee8e7fd039ea9e8ae9:subsystem.nvme_tcp_1
\
 +- nvme0 tcp traddr=192.168.2.26 trsvcid=4420 host_traddr=192.168.2.31 live non-optimized
 +- nvme1 tcp traddr=192.168.2.25 trsvcid=4420 host_traddr=192.168.2.31 live optimized
 +- nvme2 tcp traddr=192.168.1.25 trsvcid=4420 host_traddr=192.168.1.31 live non-optimized
 +- nvme3 tcp traddr=192.168.1.24 trsvcid=4420 host_traddr=192.168.1.31 live optimized
- Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:
Column
# nvme netapp ontapdevices -o column
Example output:
Device        Vserver       Namespace Path
------------- ------------- -------------------------------
/dev/nvme0n1  tcpiscsi_129  /vol/tcpnvme_1_0_0/tcpnvme_ns

NSID  UUID                            Size
----- ------------------------------- --------
1     05c2c351-5d7f-41d7-9bd8-1a56c   21.47GB
JSON
# nvme netapp ontapdevices -o json
Example output:
{
  "ONTAPdevices": [
    {
      "Device": "/dev/nvme0n1",
      "Vserver": "tcpiscsi_129",
      "Namespace Path": "/vol/tcpnvme_1_0_0/tcpnvme_ns",
      "NSID": 1,
      "UUID": "05c2c351-5d7f-41d7-9bd8-1a56c160c80b",
      "Size": "21.47GB",
      "LBA_Data_Size": 4096,
      "Namespace Size": 5242880
    }
  ]
}
Known issues
The NVMe-oF host configuration for RHEL 8.10 with ONTAP has the following known issue:
NetApp Bug ID | Title | Description
---|---|---
| RHEL 8.10 NVMe-oF hosts create duplicate persistent discovery controllers | On NVMe over Fabrics (NVMe-oF) hosts, you can use the "nvme discover -p" command to create Persistent Discovery Controllers (PDCs). When this command is used, only one PDC should be created per initiator-target combination. However, if you are running Red Hat Enterprise Linux (RHEL) 8.10 on an NVMe-oF host, a duplicate PDC is created each time "nvme discover -p" is executed. This leads to unnecessary usage of resources on both the host and the target.