Configure Rocky Linux 8.9 with NVMe-oF for ONTAP storage
Rocky Linux 8.9 hosts support the NVMe/FC and NVMe/TCP protocols with Asymmetric Namespace Access (ANA). ANA is equivalent to asymmetric logical unit access (ALUA) multipathing in iSCSI and FCP environments and is implemented using the in-kernel NVMe multipath feature.
For additional details on supported configurations, see the Interoperability Matrix Tool.
The NVMe-oF host configuration for Rocky Linux 8.9 supports the following features. You should also review the known limitations before starting the configuration process.
Support available:

- Support for NVMe over TCP (NVMe/TCP) in addition to NVMe over Fibre Channel (NVMe/FC). The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.
- Running NVMe and SCSI traffic on the same host. For example, you can configure dm-multipath for SCSI mpath devices on SCSI LUNs and use NVMe multipath to configure NVMe-oF namespace devices on the host (see the sketch after this list).
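For the SCSI/NVMe coexistence case, one commonly used precaution is to keep dm-multipath from claiming NVMe namespaces so that the in-kernel NVMe multipath owns them exclusively. A minimal, assumption-laden sketch of a multipath.conf fragment (verify against your existing multipath configuration before applying anything like this):

# /etc/multipath.conf (fragment)
# Blacklist NVMe device nodes so dm-multipath manages only SCSI LUNs.
blacklist {
    devnode "^nvme.*"
}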
Known limitations:

- In-kernel NVMe multipath is disabled by default for Rocky Linux 8.9 NVMe-oF hosts. Therefore, you need to enable it manually (a sketch follows this list).
- On Rocky Linux 8.9 hosts, NVMe/TCP is a technology preview feature due to open issues. Refer to the Rocky Linux 8.9 Release Notes for details.
- SAN booting using the NVMe-oF protocol is currently not supported.
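Because in-kernel NVMe multipath is disabled by default, here is a minimal sketch of one way to enable it through a modprobe option, assuming root privileges and no conflicting nvme_core settings elsewhere (the file name is illustrative):

# Turn on native NVMe multipath for the nvme_core module.
echo "options nvme_core multipath=Y" > /etc/modprobe.d/50-nvme-multipath.conf

# Rebuild the initramfs so the option applies at boot, then reboot.
dracut -f
reboot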
Step 1: Optionally, enable SAN booting
You can configure your host to use SAN booting to simplify deployment and improve scalability.
Use the Interoperability Matrix Tool to verify that your Linux OS, host bus adapter (HBA), HBA firmware, HBA boot BIOS, and ONTAP version support SAN booting.
- Enable SAN booting in the server BIOS for the ports to which the SAN boot namespace is mapped. For information on how to enable the HBA BIOS, see your vendor-specific documentation.
- Verify that the configuration was successful by rebooting the host and verifying that the OS is up and running.
Step 2: Validate software versions
Use the following procedure to validate the minimum supported Rocky Linux 8.9 software versions.
- Install Rocky Linux 8.9 on the server. After the installation is complete, verify that you are running the supported Rocky Linux 8.9 kernel:
  uname -r
  The following example shows a Rocky Linux kernel version:
  5.14.0-570.12.1.el9_6.x86_64
- Install the nvme-cli package, then verify the installed version:
  rpm -qa | grep nvme-cli
  The following example shows an nvme-cli package version:
  nvme-cli-2.11-5.el9.x86_64
- Install the libnvme package, then verify the installed version:
  rpm -qa | grep libnvme
  The following example shows a libnvme package version:
  libnvme-1.11.1-1.el9.x86_64
- On the Rocky Linux host, check the hostnqn string at /etc/nvme/hostnqn (if the file is missing, see the sketch after this procedure):
  cat /etc/nvme/hostnqn
  The following example shows a hostnqn string:
  nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
- Verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:
  ::> vserver nvme subsystem host show -vserver vs_coexistence_LPE36002

  Vserver                  Subsystem  Priority  Host NQN
  -------                  ---------  --------  ------------------------------------------------
  vs_coexistence_LPE36002  nvme       regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
                           nvme_1     regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
                           nvme_2     regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
                           nvme_3     regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
  4 entries were displayed.

  If the hostnqn strings do not match, use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.
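If /etc/nvme/hostnqn is missing (for example, on a minimal install), nvme-cli can generate one. A brief sketch; note that regenerating the NQN on a host that is already mapped on the array would require updating the subsystem mapping to match:

# Generate a host NQN and persist it where the NVMe stack reads it.
nvme gen-hostnqn > /etc/nvme/hostnqn
cat /etc/nvme/hostnqn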
Step 3: Configure NVMe/FC
You can configure NVMe/FC with Broadcom/Emulex FC or Marvell/QLogic FC adapters. (For NVMe/TCP, you discover the subsystems and namespaces manually, as described in Step 5.)
Configure NVMe/FC for a Broadcom/Emulex adapter.
- Verify that you are using the supported adapter model:
  - Display the model names:
    cat /sys/class/scsi_host/host*/modelname
    You should see the following output:
    LPe36002-M64
    LPe36002-M64
  - Display the model descriptions:
    cat /sys/class/scsi_host/host*/modeldesc
    You should see an output similar to the following example:
    Emulex LightPulse LPe36002-M64 2-Port 64Gb Fibre Channel Adapter
    Emulex LightPulse LPe36002-M64 2-Port 64Gb Fibre Channel Adapter
- Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:
  - Display the firmware version:
    cat /sys/class/scsi_host/host*/fwrev
    The following example shows firmware versions:
    14.4.317.10, sli-4:6:d
    14.4.317.10, sli-4:6:d
  - Display the inbox driver version:
    cat /sys/module/lpfc/version
    The following example shows a driver version:
    0:14.4.0.2
  For the current list of supported adapter driver and firmware versions, see the Interoperability Matrix Tool.
- Verify that lpfc_enable_fc4_type is set to 3:
  cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
  If the output isn't 3, see the sketch after this procedure.
- Verify that you can view your initiator ports:
  cat /sys/class/fc_host/host*/port_name
  The following example shows port identities:
  0x100000109bf044b1
  0x100000109bf044b2
- Verify that your initiator ports are online:
  cat /sys/class/fc_host/host*/port_state
  You should see the following output:
  Online
  Online
- Verify that the NVMe/FC initiator ports are enabled and that the target ports are visible:
  cat /sys/class/scsi_host/host*/nvme_info

  NVME Initiator Enabled
  XRI Dist lpfc2 Total 6144 IO 5894 ELS 250
  NVME LPORT lpfc2 WWPN x100000109bf044b1 WWNN x200000109bf044b1 DID x022a00 ONLINE
  NVME RPORT WWPN x202fd039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x021310 TARGET DISCSRVC ONLINE
  NVME RPORT WWPN x202dd039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x020b10 TARGET DISCSRVC ONLINE

  NVME Statistics
  LS: Xmt 0000000810 Cmpl 0000000810 Abort 00000000
  LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
  Total FCP Cmpl 000000007b098f07 Issue 000000007aee27c4 OutIO ffffffffffe498bd
  abort 000013b4 noxri 00000000 nondlp 00000058 qdepth 00000000 wqerr 00000000 err 00000000
  FCP CMPL: xb 000013b4 Err 00021443

  NVME Initiator Enabled
  XRI Dist lpfc3 Total 6144 IO 5894 ELS 250
  NVME LPORT lpfc3 WWPN x100000109bf044b2 WWNN x200000109bf044b2 DID x021b00 ONLINE
  NVME RPORT WWPN x2033d039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x020110 TARGET DISCSRVC ONLINE
  NVME RPORT WWPN x2032d039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x022910 TARGET DISCSRVC ONLINE

  NVME Statistics
  LS: Xmt 0000000840 Cmpl 0000000840 Abort 00000000
  LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
  Total FCP Cmpl 000000007afd4434 Issue 000000007ae31b83 OutIO ffffffffffe5d74f
  abort 000014a5 noxri 00000000 nondlp 0000006a qdepth 00000000 wqerr 00000000 err 00000000
  FCP CMPL: xb 000014a5 Err 0002149a
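If lpfc_enable_fc4_type doesn't report 3 (FCP and NVMe both enabled), a minimal sketch of one way to set it persistently, assuming root privileges; note this appends to the same lpfc.conf that Step 4 also modifies:

# Enable both SCSI (FCP) and NVMe on the lpfc driver; 3 means FCP+NVMe.
echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf

# Rebuild the initramfs and reboot for the option to take effect.
dracut -f
reboot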
Configure NVMe/FC for a Marvell/QLogic adapter.

Note: The native inbox qla2xxx driver included in the Rocky Linux kernel has the latest fixes. These fixes are essential for ONTAP support.
- Verify that you are running the supported adapter driver and firmware versions:
  cat /sys/class/fc_host/host*/symbolic_name
  The following example shows driver and firmware versions:
  QLE2742 FW:v9.14.00 DVR:v10.02.09.200-k
  QLE2742 FW:v9.14.00 DVR:v10.02.09.200-k
- Verify that ql2xnvmeenable is set, which enables the Marvell adapter to function as an NVMe/FC initiator (a sketch for setting it follows this procedure):
  cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
  The expected output is 1.
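If ql2xnvmeenable reports 0, a minimal sketch of one way to set it persistently, assuming root privileges (the file name is illustrative):

# Enable NVMe/FC initiator mode on the qla2xxx driver.
echo "options qla2xxx ql2xnvmeenable=1" >> /etc/modprobe.d/nvme-qla2xxx.conf

# Rebuild the initramfs and reboot for the option to take effect.
dracut -f
reboot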
Step 4: Optionally, enable 1MB I/O
You can enable I/O requests of size 1MB for NVMe/FC configured with a Broadcom adapter. ONTAP reports a Max Data Transfer Size (MDTS) of 8 in the Identify Controller data, which means that the maximum I/O request size can be up to 1MB. To issue I/O requests of size 1MB, you need to increase the value of the lpfc_sg_seg_cnt parameter from the default of 64 to 256.
Note: These steps don't apply to QLogic NVMe/FC hosts.
- Set the lpfc_sg_seg_cnt parameter to 256 in /etc/modprobe.d/lpfc.conf (a consolidated sketch follows this procedure):
  cat /etc/modprobe.d/lpfc.conf
  options lpfc lpfc_sg_seg_cnt=256
- Run the dracut -f command, and reboot the host.
- Verify that the value for lpfc_sg_seg_cnt is 256:
  cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
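As shown above, the change amounts to one options line plus an initramfs rebuild. A consolidated sketch, assuming root privileges and appending in case lpfc.conf already carries other options:

# Raise the lpfc scatter-gather segment count to allow 1MB I/O requests.
echo "options lpfc lpfc_sg_seg_cnt=256" >> /etc/modprobe.d/lpfc.conf

# Rebuild the initramfs, reboot, then confirm the running value.
dracut -f
reboot
cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt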
Step 5: Configure NVMe/TCP
The NVMe/TCP protocol doesn't support the auto-connect operation. Instead, you can discover the NVMe/TCP subsystems and namespaces by performing the NVMe/TCP connect or connect-all operations manually.
- Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:
  nvme discover -t tcp -w host-traddr -a traddr

  nvme discover -t tcp -w 192.168.1.31 -a 192.168.1.24
  Discovery Log Number of Records 20, Generation counter 25
  =====Discovery Log Entry 0======
  trtype: tcp
  adrfam: ipv4
  subtype: current discovery subsystem
  treq: not specified
  portid: 4
  trsvcid: 8009
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery
  traddr: 192.168.2.25
  eflags: explicit discovery connections, duplicate discovery information
  sectype: none
  =====Discovery Log Entry 1======
  trtype: tcp
  adrfam: ipv4
  subtype: current discovery subsystem
  treq: not specified
  portid: 2
  trsvcid: 8009
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery
  traddr: 192.168.1.25
  eflags: explicit discovery connections, duplicate discovery information
  sectype: none
  =====Discovery Log Entry 2======
  trtype: tcp
  adrfam: ipv4
  subtype: current discovery subsystem
  treq: not specified
  portid: 5
  trsvcid: 8009
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery
  traddr: 192.168.2.24
  eflags: explicit discovery connections, duplicate discovery information
  sectype: none
  =====Discovery Log Entry 3======
  trtype: tcp
  adrfam: ipv4
  subtype: current discovery subsystem
  treq: not specified
  portid: 1
  trsvcid: 8009
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery
  traddr: 192.168.1.24
  eflags: explicit discovery connections, duplicate discovery information
  sectype: none
  =====Discovery Log Entry 4======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 4
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_1
  traddr: 192.168.2.25
  eflags: none
  sectype: none
  =====Discovery Log Entry 5======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 2
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_1
  traddr: 192.168.1.25
  eflags: none
  sectype: none
  =====Discovery Log Entry 6======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 5
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_1
  traddr: 192.168.2.24
  eflags: none
  sectype: none
  =====Discovery Log Entry 7======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 1
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_1
  traddr: 192.168.1.24
  eflags: none
  sectype: none
  =====Discovery Log Entry 8======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 4
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_4
  traddr: 192.168.2.25
  eflags: none
  sectype: none
  =====Discovery Log Entry 9======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 2
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_4
  traddr: 192.168.1.25
  eflags: none
  sectype: none
  =====Discovery Log Entry 10======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 5
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_4
  traddr: 192.168.2.24
  eflags: none
  sectype: none
  =====Discovery Log Entry 11======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 1
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_4
  traddr: 192.168.1.24
  eflags: none
  sectype: none
  =====Discovery Log Entry 12======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 4
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
  traddr: 192.168.2.25
  eflags: none
  sectype: none
  =====Discovery Log Entry 13======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 2
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
  traddr: 192.168.1.25
  eflags: none
  sectype: none
  =====Discovery Log Entry 14======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 5
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
  traddr: 192.168.2.24
  eflags: none
  sectype: none
  =====Discovery Log Entry 15======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 1
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
  traddr: 192.168.1.24
  eflags: none
  sectype: none
  =====Discovery Log Entry 16======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 4
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_2
  traddr: 192.168.2.25
  eflags: none
  sectype: none
  =====Discovery Log Entry 17======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 2
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_2
  traddr: 192.168.1.25
  eflags: none
  sectype: none
  =====Discovery Log Entry 18======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 5
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_2
  traddr: 192.168.2.24
  eflags: none
  sectype: none
  =====Discovery Log Entry 19======
  trtype: tcp
  adrfam: ipv4
  subtype: nvme subsystem
  treq: not specified
  portid: 1
  trsvcid: 4420
  subnqn: nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_2
  traddr: 192.168.1.24
  eflags: none
  sectype: none
- Verify that the other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:
  nvme discover -t tcp -w host-traddr -a traddr

  nvme discover -t tcp -w 192.168.1.31 -a 192.168.1.24
  nvme discover -t tcp -w 192.168.2.31 -a 192.168.2.24
  nvme discover -t tcp -w 192.168.1.31 -a 192.168.1.25
  nvme discover -t tcp -w 192.168.2.31 -a 192.168.2.25
- Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes (a looping sketch follows this list):
  nvme connect-all -t tcp -w host-traddr -a traddr

  nvme connect-all -t tcp -w 192.168.1.31 -a 192.168.1.24
  nvme connect-all -t tcp -w 192.168.2.31 -a 192.168.2.24
  nvme connect-all -t tcp -w 192.168.1.31 -a 192.168.1.25
  nvme connect-all -t tcp -w 192.168.2.31 -a 192.168.2.25
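With several initiator-target LIF pairs, a small loop keeps the connect-all invocations uniform. A minimal bash sketch using the example addresses above (hypothetical pairs; substitute your own LIFs):

#!/bin/bash
# Hypothetical host-traddr/traddr pairs; replace with your LIF addresses.
pairs=(
  "192.168.1.31 192.168.1.24"
  "192.168.2.31 192.168.2.24"
  "192.168.1.31 192.168.1.25"
  "192.168.2.31 192.168.2.25"
)

for pair in "${pairs[@]}"; do
  read -r host_traddr traddr <<< "$pair"
  # Connect to every subsystem advertised by this discovery LIF.
  nvme connect-all -t tcp -w "$host_traddr" -a "$traddr"
done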
Step 6: Validate NVMe-oF
Verify that the in-kernel NVMe multipath status, ANA status, and ONTAP namespaces are correct for the NVMe-oF configuration.
- Verify that the in-kernel NVMe multipath is enabled:
  cat /sys/module/nvme_core/parameters/multipath
  You should see the following output:
  Y
- Verify that the appropriate NVMe-oF settings (such as model set to NetApp ONTAP Controller and load-balancing iopolicy set to round-robin) for the respective ONTAP namespaces correctly reflect on the host:
  - Display the subsystems:
    cat /sys/class/nvme-subsystem/nvme-subsys*/model
    You should see the following output:
    NetApp ONTAP Controller
    NetApp ONTAP Controller
  - Display the policy:
    cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
    You should see the following output:
    round-robin
    round-robin
- Verify that the namespaces are created and correctly discovered on the host:
  nvme list

  Node          SN                    Model
  ------------- --------------------- ------------------------
  /dev/nvme4n1  81Ix2BVuekWcAAAAAAAB  NetApp ONTAP Controller

  Namespace  Usage                Format       FW Rev
  ---------- -------------------- ------------ --------
  1          21.47 GB / 21.47 GB  4 KiB + 0 B  FFFFFFFF
- Verify that the controller state of each path is live and has the correct ANA status:
  NVMe/FC:
  nvme list-subsys /dev/nvme4n5

  nvme-subsys4 - NQN=nqn.1992-08.com.netapp:sn.3a5d31f5502c11ef9f50d039eab6cb6d:subsystem.nvme_1
                 hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6dade64-216d-11ec-b7bb-7ed30a5482c3
                 iopolicy=round-robin
  \
   +- nvme1 fc traddr=nn-0x2082d039eaa7dfc8:pn-0x2088d039eaa7dfc8,host_traddr=nn-0x20000024ff752e6d:pn-0x21000024ff752e6d live optimized
   +- nvme12 fc traddr=nn-0x2082d039eaa7dfc8:pn-0x208ad039eaa7dfc8,host_traddr=nn-0x20000024ff752e6d:pn-0x21000024ff752e6d live non-optimized
   +- nvme10 fc traddr=nn-0x2082d039eaa7dfc8:pn-0x2087d039eaa7dfc8,host_traddr=nn-0x20000024ff752e6c:pn-0x21000024ff752e6c live non-optimized
   +- nvme3 fc traddr=nn-0x2082d039eaa7dfc8:pn-0x2083d039eaa7dfc8,host_traddr=nn-0x20000024ff752e6c:pn-0x21000024ff752e6c live optimized

  NVMe/TCP:
  nvme list-subsys /dev/nvme1n1

  nvme-subsys5 - NQN=nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
                 hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b5c04f444d33
                 iopolicy=round-robin
  \
   +- nvme13 tcp traddr=192.168.2.25,trsvcid=4420,host_traddr=192.168.2.31,src_addr=192.168.2.31 live optimized
   +- nvme14 tcp traddr=192.168.2.24,trsvcid=4420,host_traddr=192.168.2.31,src_addr=192.168.2.31 live non-optimized
   +- nvme5 tcp traddr=192.168.1.25,trsvcid=4420,host_traddr=192.168.1.31,src_addr=192.168.1.31 live optimized
   +- nvme6 tcp traddr=192.168.1.24,trsvcid=4420,host_traddr=192.168.1.31,src_addr=192.168.1.31 live non-optimized
- Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device (a scripting example follows this procedure):
  Column:
  nvme netapp ontapdevices -o column

  Device        Vserver             Namespace Path
  ------------- ------------------- ------------------------------
  /dev/nvme1n1  linux_tcnvme_iscsi  /vol/tcpnvme_1_0_0/tcpnvme_ns

  NSID  UUID                                  Size
  ----- ------------------------------------- --------
  1     5f7f630d-8ea5-407f-a490-484b95b15dd6  21.47GB

  JSON:
  nvme netapp ontapdevices -o json

  {
    "ONTAPdevices":[
      {
        "Device":"/dev/nvme1n1",
        "Vserver":"linux_tcnvme_iscsi",
        "Namespace_Path":"/vol/tcpnvme_1_0_0/tcpnvme_ns",
        "NSID":1,
        "UUID":"5f7f630d-8ea5-407f-a490-484b95b15dd6",
        "Size":"21.47GB",
        "LBA_Data_Size":4096,
        "Namespace_Size":5242880
      }
    ]
  }
Step 7: Review the known issues
The NVMe-oF host configuration for Rocky Linux 8.9 with ONTAP storage has the following known issue:

| NetApp Bug ID | Title | Description |
| --- | --- | --- |
|  | Rocky Linux 8.9 NVMe-oF hosts create duplicate persistent discovery controllers | On NVMe over Fabrics (NVMe-oF) hosts, you can use the "nvme discover -p" command to create persistent discovery controllers (PDCs). When this command is used, only one PDC should be created per initiator-target combination. However, if you are running Rocky Linux 8.9 on an NVMe-oF host, a duplicate PDC is created each time "nvme discover -p" is executed. This leads to unnecessary usage of resources on both the host and the target. |
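Until the issue is resolved, you can spot and clean up duplicate PDCs manually. A minimal sketch, assuming the discovery subsystem NQN shown in the Step 5 output (substitute the NQN your array reports):

# List all subsystems and controllers, including persistent discovery
# controllers, so duplicates are visible.
nvme list -v

# Disconnect controllers attached to the discovery subsystem NQN
# (hypothetical NQN taken from the earlier discovery output).
nvme disconnect -n nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery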