NVMe-oF Host Configuration for Oracle Linux 9.3 with ONTAP
NVMe over Fabrics (NVMe-oF), including NVMe over Fibre Channel (NVMe/FC) and other transports, is supported with Oracle Linux (OL) 9.3 with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is the equivalent of ALUA multipathing in iSCSI and FC environments and is implemented with in-kernel NVMe multipath.
The following support is available for the NVMe-oF host configuration for OL 9.3 with ONTAP:
- Support for NVMe over TCP (NVMe/TCP) in addition to NVMe/FC. The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.
- NVMe and SCSI traffic coexisting on the same host on a given host bus adapter (HBA), without explicit dm-multipath settings to prevent dm-multipath from claiming NVMe namespaces.
For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.
Features
Oracle Linux 9.3 has in-kernel NVMe multipath enabled for NVMe namespaces by default; therefore, there is no need for explicit settings.
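To confirm this on a given host, you can read the nvme_core module's multipath parameter; a minimal check (this sysfs path is the standard switch for in-kernel NVMe multipath, and Y indicates it is enabled):
# cat /sys/module/nvme_core/parameters/multipath
Y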
Known limitations
SAN booting using the NVMe-oF protocol is currently not supported.
Validate software versions
You can use the following procedure to validate the minimum supported OL 9.3 software versions.
- Install OL 9.3 GA on the server. After the installation is complete, verify that you are running the specified OL 9.3 GA kernel:
# uname -r
Example output:
5.15.0-200.131.27.el9uek.x86_64
- Verify the version of the installed nvme-cli package:
# rpm -qa | grep nvme-cli
Example output:
nvme-cli-2.4-10.el9.x86_64
- Verify the version of the installed libnvme package:
# rpm -qa | grep libnvme
Example output:
libnvme-1.4-7.el9.x86_64
- On the Oracle Linux 9.3 host, check the hostnqn string at /etc/nvme/hostnqn:
# cat /etc/nvme/hostnqn
Example output:
nqn.2014-08.org.nvmexpress:uuid:2831093d-fa7f-4714-a6bf-548796e82053
- Verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:
::> vserver nvme subsystem host show -vserver vs_ol_nvme
Example output:
Vserver     Subsystem       Host NQN
----------- --------------- ----------------------------------------------------------
vs_ol_nvme  nvme            nqn.2014-08.org.nvmexpress:uuid:2831093d-fa7f-4714-a6bf-548796e82053
If the hostnqn strings do not match, you can use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.
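If you need to correct a mismatch from the ONTAP side, an alternative to the vserver modify approach is to replace the host entry on the subsystem with the string from /etc/nvme/hostnqn. A hedged sketch, assuming the vs_ol_nvme vserver and nvme subsystem names from the example above, with <old_hostnqn> as a placeholder for the stale string:
::> vserver nvme subsystem host remove -vserver vs_ol_nvme -subsystem nvme -host-nqn <old_hostnqn>
::> vserver nvme subsystem host add -vserver vs_ol_nvme -subsystem nvme -host-nqn nqn.2014-08.org.nvmexpress:uuid:2831093d-fa7f-4714-a6bf-548796e82053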
Configure NVMe/FC
You can configure NVMe/FC for Broadcom/Emulex adapters or Marvell/QLogic adapters.
Broadcom/Emulex
- Verify that you are using the supported adapter model:
# cat /sys/class/scsi_host/host*/modelname
Example output:
LPe36002-M2
LPe36002-M2
# cat /sys/class/scsi_host/host*/modeldesc
Example output:
Emulex LightPulse LPe36002-M2 2-Port 64Gb Fibre Channel Adapter
Emulex LightPulse LPe36002-M2 2-Port 64Gb Fibre Channel Adapter
- Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:
# cat /sys/class/scsi_host/host*/fwrev
14.2.673.40, sli-4:2:c
14.2.673.40, sli-4:2:c
# cat /sys/module/lpfc/version
0:14.2.0.13
For the most current list of supported adapter driver and firmware versions, see the NetApp Interoperability Matrix Tool.
- Verify that lpfc_enable_fc4_type is set to 3:
# cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
3
- Verify that the initiator ports are up and running, and that you can see the target LIFs:
# cat /sys/class/fc_host/host*/port_name
0x100000620b3c089c
0x100000620b3c089d
# cat /sys/class/fc_host/host*/port_state
Online
Online
Example output:
# cat /sys/class/scsi_host/host*/nvme_info
NVME Initiator Enabled
XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
NVME LPORT lpfc0 WWPN x100000620b3c089c WWNN x200000620b3c089c DID x062f00 ONLINE
NVME RPORT       WWPN x2019d039ea9ea480 WWNN x2018d039ea9ea480 DID x061b06 TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x201cd039ea9ea480 WWNN x2018d039ea9ea480 DID x062706 TARGET DISCSRVC ONLINE

NVME Statistics
LS: Xmt 0000000f03 Cmpl 0000000efa Abort 0000004a
LS XMIT: Err 00000009  CMPL: xb 0000004a Err 0000004a
Total FCP Cmpl 00000000b9b3486a Issue 00000000b97ba0d2 OutIO ffffffffffc85868
    abort 00000afc noxri 00000000 nondlp 00002e34 qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 0000138c Err 00014750

NVME Initiator Enabled
XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
NVME LPORT lpfc1 WWPN x100000620b3c089d WWNN x200000620b3c089d DID x062400 ONLINE
NVME RPORT       WWPN x201ad039ea9ea480 WWNN x2018d039ea9ea480 DID x060206 TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x201dd039ea9ea480 WWNN x2018d039ea9ea480 DID x061305 TARGET DISCSRVC ONLINE

NVME Statistics
LS: Xmt 0000000b40 Cmpl 0000000b40 Abort 00000000
LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
Total FCP Cmpl 00000000b9a9f03f Issue 00000000b96e622e OutIO ffffffffffc471ef
    abort 0000090d noxri 00000000 nondlp 00003b3f qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 000010a5 Err 000147e4
Marvell/QLogic
The native inbox qla2xxx driver included in the OL 9.3 GA kernel has the latest upstream fixes. These fixes are essential for ONTAP support.
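To check which inbox qla2xxx driver version the running kernel has loaded, one quick hedged check, assuming the kernel exposes the module version through sysfs (it does for qla2xxx builds that set MODULE_VERSION):
# cat /sys/module/qla2xxx/version
Example output:
10.02.09.100-k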
- Verify that you are running the supported adapter driver and firmware versions:
# cat /sys/class/fc_host/host*/symbolic_name
QLE2872 FW:v9.14.02 DVR:v10.02.09.100-k
QLE2872 FW:v9.14.02 DVR:v10.02.09.100-k
- Verify that ql2xnvmeenable is set. This enables the Marvell adapter to function as an NVMe/FC initiator:
# cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
1
Enable 1MB I/O size (Optional)
ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data, which means that the maximum I/O request size can be up to 1MB. To issue I/O requests of 1MB on a Broadcom NVMe/FC host, you should increase the value of the lpfc driver's lpfc_sg_seg_cnt parameter to 256 from the default of 64.
These steps don't apply to QLogic NVMe/FC hosts.
- Set the lpfc_sg_seg_cnt parameter to 256 in /etc/modprobe.d/lpfc.conf:
# cat /etc/modprobe.d/lpfc.conf
options lpfc lpfc_sg_seg_cnt=256
- Run the dracut -f command and reboot the host.
- Verify that lpfc_sg_seg_cnt shows the expected value of 256:
# cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
Example output:
256
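Taken together, a minimal sketch of the whole sequence, assuming a bash shell and that /etc/modprobe.d/lpfc.conf contains no other lpfc options (the redirect below would overwrite them):
# echo "options lpfc lpfc_sg_seg_cnt=256" > /etc/modprobe.d/lpfc.conf
# dracut -f
# reboot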
Configure NVMe/TCP
NVMe/TCP does not have an auto-connect functionality. Therefore, you need to run the nvme connect or nvme connect-all commands manually to discover the NVMe/TCP subsystems and namespaces. You can use the following procedure to configure NVMe/TCP.
- Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:
nvme discover -t tcp -w host-traddr -a traddr
Example output:
# nvme discover -t tcp -w 192.168.166.4 -a 192.168.166.56
Discovery Log Number of Records 4, Generation counter 10
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: current discovery subsystem
treq:    not specified
portid:  2
trsvcid: 8009
subnqn:  nqn.1992-08.com.netapp:sn.337a0392d58011ee9764d039eab0dadd:discovery
traddr:  192.168.165.56
eflags:  explicit discovery connections, duplicate discovery information
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: current discovery subsystem
treq:    not specified
portid:  1
trsvcid: 8009
subnqn:  nqn.1992-08.com.netapp:sn.337a0392d58011ee9764d039eab0dadd:discovery
traddr:  192.168.166.56
eflags:  explicit discovery connections, duplicate discovery information
sectype: none
=====Discovery Log Entry 2======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  2
trsvcid: 4420
subnqn:  nqn.1992-08.com.netapp:sn.337a0392d58011ee9764d039eab0dadd:subsystem.rhel_95
traddr:  192.168.165.56
eflags:  none
sectype: none
..........
- Verify that the other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:
nvme discover -t tcp -w host-traddr -a traddr
Example output:
# nvme discover -t tcp -w 192.168.166.4 -a 192.168.166.56
# nvme discover -t tcp -w 192.168.165.3 -a 192.168.165.56
- Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes:
nvme connect-all -t tcp -w host-traddr -a traddr -l <ctrl_loss_timeout_in_seconds>
Example output:
# nvme connect-all -t tcp -w 192.168.166.4 -a 192.168.166.56 -l -1
# nvme connect-all -t tcp -w 192.168.165.3 -a 192.168.165.56 -l -1
NetApp recommends setting the ctrl-loss-tmo option to -1 so that the NVMe/TCP initiator attempts to reconnect indefinitely in the event of a path loss.
Validate NVMe-oF
You can use the following procedure to validate NVMe-oF.
- Verify the following NVMe/FC settings on the OL 9.3 host:
# cat /sys/module/nvme_core/parameters/multipath
Y
# cat /sys/class/nvme-subsystem/nvme-subsys*/model
NetApp ONTAP Controller
NetApp ONTAP Controller
# cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
round-robin
round-robin
- Verify that the namespaces are created and correctly discovered on the host:
# nvme list
Example output:
Node          SN                    Model                    Namespace  Usage                Format       FW Rev
------------- --------------------- ------------------------ ---------- -------------------- ------------ --------
/dev/nvme0n1  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  1          21.47 GB / 21.47 GB  4 KiB + 0 B  FFFFFFFF
/dev/nvme0n2  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  2          21.47 GB / 21.47 GB  4 KiB + 0 B  FFFFFFFF
/dev/nvme0n3  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  3          21.47 GB / 21.47 GB  4 KiB + 0 B  FFFFFFFF
- Verify that the controller state of each path is live and has the correct ANA status:
NVMe/FC
# nvme list-subsys /dev/nvme0n1
Example output:
nvme-subsys5 - NQN=nqn.1992-08.com.netapp:sn.4aa0fa76c92c11eeb301d039eab0dadd:subsystem.rhel_213
\
 +- nvme3 fc traddr=nn-0x2018d039ea9ea480:pn-0x201dd039ea9ea480,host_traddr=nn-0x200000620b3c089d:pn-0x100000620b3c089d live non-optimized
 +- nvme4 fc traddr=nn-0x2018d039ea9ea480:pn-0x201cd039ea9ea480,host_traddr=nn-0x200000620b3c089c:pn-0x100000620b3c089c live non-optimized
 +- nvme6 fc traddr=nn-0x2018d039ea9ea480:pn-0x2019d039ea9ea480,host_traddr=nn-0x200000620b3c089c:pn-0x100000620b3c089c live optimized
 +- nvme7 fc traddr=nn-0x2018d039ea9ea480:pn-0x201ad039ea9ea480,host_traddr=nn-0x200000620b3c089d:pn-0x100000620b3c089d live optimized
NVMe/TCP
# nvme list-subsys /dev/nvme1n22
Example output:
nvme-subsys1 - NQN=nqn.1992-08.com.netapp:sn.337a0392d58011ee9764d039eab0dadd:subsystem.rhel_95
\
 +- nvme2 tcp traddr=192.168.166.56,trsvcid=4420,host_traddr=192.168.166.4,src_addr=192.168.166.4 live optimized
 +- nvme3 tcp traddr=192.168.165.56,trsvcid=4420,host_traddr=192.168.165.3,src_addr=192.168.165.3 live non-optimized
- Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:
Column
# nvme netapp ontapdevices -o column
Example output:
Device        Vserver     Namespace Path  NSID  UUID                                  Size
------------- ----------- --------------- ----- ------------------------------------- --------
/dev/nvme5n6  vs_nvme175  /vol/vol6/ns    6     72b887b1-5fb6-47b8-be0b-33326e2542e2  21.47GB
/dev/nvme5n7  vs_nvme175  /vol/vol7/ns    7     04bf9f6e-9031-40ea-99c7-a1a61b2d7d08  21.47GB
/dev/nvme5n8  vs_nvme175  /vol/vol8/ns    8     264823b1-8e03-4155-80dd-e904237014a4  21.47GB
JSON
# nvme netapp ontapdevices -o json
Example output:
{ "ONTAPdevices":[ { "Device":"/dev/nvme5n1", "Vserver":"vs_nvme175", "Namespace_Path":"/vol/vol1/ns", "NSID":1, "UUID":"d4791955-07c9-44fc-b41c-d1c39d3d9b5b", "Size":"21.47GB", "LBA_Data_Size":4096, "Namespace_Size":5242880 }, { "Device":"/dev/nvme5n10", "Vserver":"vs_nvme175", "Namespace_Path":"/vol/vol10/ns", "NSID":10, "UUID":"f3a4ce94-bcc5-4ff0-9e52-e59030bbc97f", "Size":"21.47GB", "LBA_Data_Size":4096, "Namespace_Size":5242880 }, { "Device":"/dev/nvme5n11", "Vserver":"vs_nvme175", "Namespace_Path":"/vol/vol11/ns", "NSID":11, "UUID":"0bf171d2-51f7-4a00-8f6a-0ea2190885a2", "Size":"21.47GB", "LBA_Data_Size":4096, "Namespace_Size":5242880 }, ] }
Known issues
There are no known issues for the Oracle Linux 9.3 with ONTAP release.