NVMe-oF Host Configuration for Oracle Linux 8.7 with ONTAP
NVMe over Fabrics (NVMe-oF), including NVMe over Fibre Channel (NVMe/FC) and other transports, is supported with Oracle Linux (OL) 8.7 with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is the equivalent of ALUA multipathing in iSCSI and FC environments and is implemented with in-kernel NVMe multipath.
The following support is available for the NVMe/FC host configuration for OL 8.7 with ONTAP:
- Support for NVMe over TCP (NVMe/TCP) in addition to NVMe/FC. The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.
- Running co-existent NVMe and SCSI traffic on the same host through a given host bus adapter (HBA), without explicit dm-multipath settings to prevent dm-multipath from claiming NVMe namespaces.
For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.
Features
- OL 8.7 has in-kernel NVMe multipath enabled for NVMe namespaces by default; therefore, there is no need for explicit settings.
Known limitations
SAN booting using the NVMe-oF protocol is currently not supported.
Validate software versions
You can use the following procedure to validate the minimum supported OL 8.7 software versions.
- Install OL 8.7 GA on the server. After the installation is complete, verify that you are running the specified OL 8.7 GA kernel:
# uname -r
Example output:
5.15.0-3.60.5.1.el8uek.x86_64
- Install the nvme-cli package:
# rpm -qa|grep nvme-cli
Example output:
nvme-cli-1.16-5.el8.x86_64
- On the Oracle Linux 8.7 host, check the hostnqn string at /etc/nvme/hostnqn:
# cat /etc/nvme/hostnqn
Example output:
nqn.2014-08.org.nvmexpress:uuid:791c54eb-545d-4ed3-8d41-91a0a53d4b24
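If the /etc/nvme/hostnqn file does not exist on your host, you can typically generate one with the nvme gen-hostnqn command shipped with nvme-cli (a minimal sketch, not part of the original procedure; the generated NQN must then be added to the corresponding ONTAP subsystem):
# nvme gen-hostnqn > /etc/nvme/hostnqn
# cat /etc/nvme/hostnqn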
- Verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:
::> vserver nvme subsystem host show -vserver vs_ol_nvme
Example output:
Vserver     Subsystem     Host NQN
----------- ------------- ----------------------------------------------------------
vs_ol_nvme  nvme_ss_ol_1  nqn.2014-08.org.nvmexpress:uuid:791c54eb-545d-4ed3-8d41-91a0a53d4b24
If the hostnqn strings do not match, you can use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.
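For example, assuming the vserver and subsystem names shown above, one hedged sketch of aligning the subsystem's host NQN with the host uses the vserver nvme subsystem host remove and add commands (verify the exact workflow and syntax for your ONTAP release; <stale-hostnqn> is a placeholder for the mismatched entry):
::> vserver nvme subsystem host remove -vserver vs_ol_nvme -subsystem nvme_ss_ol_1 -host-nqn <stale-hostnqn>
::> vserver nvme subsystem host add -vserver vs_ol_nvme -subsystem nvme_ss_ol_1 -host-nqn nqn.2014-08.org.nvmexpress:uuid:791c54eb-545d-4ed3-8d41-91a0a53d4b24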
- Reboot the host.
If you intend to run both NVMe and SCSI traffic on the same Oracle Linux 8.7 co-existent host, NetApp recommends using in-kernel NVMe multipath for ONTAP namespaces and dm-multipath for ONTAP LUNs, respectively. This also means the ONTAP namespaces should be blacklisted in dm-multipath to prevent dm-multipath from claiming these namespace devices. You can do this by adding the enable_foreign setting to the /etc/multipath.conf file:
# cat /etc/multipath.conf
defaults {
    enable_foreign NONE
}
Restart the multipathd daemon by running the systemctl restart multipathd command to apply the new settings.
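As a quick sanity check (not part of the original procedure), you can confirm that dm-multipath no longer claims the ONTAP namespaces; with enable_foreign set to NONE, no NVMe devices should appear in the multipath topology:
# multipath -ll | grep -i nvme
No output is expected from this command.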
Configure NVMe/FC
You can configure NVMe/FC for Broadcom/Emulex or Marvell/QLogic adapters.
Broadcom/Emulex

- Verify that you are using the supported adapter model:
# cat /sys/class/scsi_host/host*/modelname
Example output:
LPe32002-M2
LPe32002-M2
# cat /sys/class/scsi_host/host*/modeldesc
Example output:
Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
- Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:
# cat /sys/class/scsi_host/host*/fwrev
12.8.614.23, sli-4:2:c
12.8.614.23, sli-4:2:c
# cat /sys/module/lpfc/version
0:14.0.0.1
For the most current list of supported adapter driver and firmware versions, see the NetApp Interoperability Matrix Tool.
- Verify that lpfc_enable_fc4_type is set to 3:
# cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
3
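If the parameter is not set to 3, a common way to set it persistently (a sketch under the assumption that the inbox lpfc driver is loaded from the initial RAM disk) is to add a modprobe option and rebuild the initramfs:
# cat /etc/modprobe.d/lpfc.conf
options lpfc lpfc_enable_fc4_type=3
# dracut -f
# reboot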
- Verify that the initiator ports are up and running, and that you can see the target LIFs:
# cat /sys/class/fc_host/host*/port_name
0x100000109b3c081f
0x100000109b3c0820
# cat /sys/class/fc_host/host*/port_state
Online
Online
# cat /sys/class/scsi_host/host*/nvme_info
NVME Initiator Enabled
XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
NVME LPORT lpfc0 WWPN x100000109b3c081f WWNN x200000109b3c081f DID x060300 ONLINE
NVME RPORT       WWPN x2010d039ea2c3e2d WWNN x200fd039ea2c3e2d DID x061f0e TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x2011d039ea2c3e2d WWNN x200fd039ea2c3e2d DID x06270f TARGET DISCSRVC ONLINE
NVME Statistics
LS: Xmt 0000000a71 Cmpl 0000000a71 Abort 00000000
LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
Total FCP Cmpl 00000000558611c6 Issue 000000005578bb69 OutIO fffffffffff2a9a3
abort 0000007a noxri 00000000 nondlp 00000447 qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 00000a8e Err 0000e2a8

NVME Initiator Enabled
XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
NVME LPORT lpfc1 WWPN x100000109b3c0820 WWNN x200000109b3c0820 DID x060200 ONLINE
NVME RPORT       WWPN x2015d039ea2c3e2d WWNN x200fd039ea2c3e2d DID x062e0c TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x2014d039ea2c3e2d WWNN x200fd039ea2c3e2d DID x06290f TARGET DISCSRVC ONLINE
NVME Statistics
LS: Xmt 0000000a69 Cmpl 0000000a69 Abort 00000000
LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
Total FCP Cmpl 0000000055814701 Issue 0000000055744b1c OutIO fffffffffff3041b
abort 00000046 noxri 00000000 nondlp 0000043f qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 00000a89 Err 0000e2f3
Marvell/QLogic

- The native inbox qla2xxx driver included in the OL 8.7 GA kernel has the latest upstream fixes essential for ONTAP support. Verify that you are running the supported adapter driver and firmware versions:
# cat /sys/class/fc_host/host*/symbolic_name
Example output:
QLE2742 FW:v9.10.11 DVR:v10.02.06.200-k
QLE2742 FW:v9.10.11 DVR:v10.02.06.200-k
- Verify that ql2xnvmeenable is set to 1. This enables the Marvell adapter to function as an NVMe/FC initiator:
# cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
1
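If ql2xnvmeenable is not set to 1, one hedged way to enable it persistently (a sketch, not part of the original procedure) is through a modprobe option followed by an initramfs rebuild:
# cat /etc/modprobe.d/qla2xxx.conf
options qla2xxx ql2xnvmeenable=1
# dracut -f
# reboot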
Enable 1MB I/O (Optional)
ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data. This means that the maximum I/O request size can be up to 1 MB. To issue I/O requests of 1 MB on a Broadcom NVMe/FC host, you must increase the value of the lpfc_sg_seg_cnt parameter from the default of 64 to 256.
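The 1 MB figure follows from the NVMe specification: the maximum transfer size is 2^MDTS multiplied by the controller's minimum memory page size, 4 KiB here, so 2^8 x 4 KiB = 1 MiB. If you want to confirm the advertised MDTS after a namespace is connected, you can read it with nvme id-ctrl (a sketch; the device name is an assumption):
# nvme id-ctrl /dev/nvme0 | grep -i mdts
mdts      : 8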
- Set the lpfc_sg_seg_cnt parameter to 256:
# cat /etc/modprobe.d/lpfc.conf
options lpfc lpfc_sg_seg_cnt=256
- Run the dracut -f command, and reboot the host.
- Verify that lpfc_sg_seg_cnt is 256:
# cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
256
This is not applicable to QLogic NVMe/FC hosts.
Configure NVMe/TCP
NVMe/TCP does not have auto-connect functionality. Therefore, if a path goes down and is not reinstated within the default timeout period of 10 minutes, NVMe/TCP cannot automatically reconnect. To prevent a timeout, you should set the retry period for failover events to at least 30 minutes.
- Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:
nvme discover -t tcp -w host-traddr -a traddr
Example output:
# nvme discover -t tcp -w 192.168.6.13 -a 192.168.6.15
Discovery Log Number of Records 6, Generation counter 8
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified
portid:  0
trsvcid: 8009
subnqn:  nqn.1992-08.com.netapp:sn.1c6ac66338e711eda41dd039ea3ad566:discovery
traddr:  192.168.6.17
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified
portid:  1
trsvcid: 8009
subnqn:  nqn.1992-08.com.netapp:sn.1c6ac66338e711eda41dd039ea3ad566:discovery
traddr:  192.168.5.17
sectype: none
=====Discovery Log Entry 2======
trtype:  tcp
adrfam:  ipv4
subtype: unrecognized
treq:    not specified
portid:  2
trsvcid: 8009
subnqn:  nqn.1992-08.com.netapp:sn.1c6ac66338e711eda41dd039ea3ad566:discovery
traddr:  192.168.6.15
sectype: none
=====Discovery Log Entry 3======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  0
trsvcid: 4420
subnqn:  nqn.1992-08.com.netapp:sn.1c6ac66338e711eda41dd039ea3ad566:subsystem.host_95
traddr:  192.168.6.17
sectype: none
..........
- Verify that the other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:
nvme discover -t tcp -w host-traddr -a traddr
Example output:
# nvme discover -t tcp -w 192.168.5.13 -a 192.168.5.15
# nvme discover -t tcp -w 192.168.5.13 -a 192.168.5.17
# nvme discover -t tcp -w 192.168.6.13 -a 192.168.6.15
# nvme discover -t tcp -w 192.168.6.13 -a 192.168.6.17
- Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes, and set the controller loss timeout period to at least 30 minutes (1800 seconds):
nvme connect-all -t tcp -w host-traddr -a traddr -l 1800
Example output:
# nvme connect-all -t tcp -w 192.168.5.13 -a 192.168.5.15 -l 1800
# nvme connect-all -t tcp -w 192.168.5.13 -a 192.168.5.17 -l 1800
# nvme connect-all -t tcp -w 192.168.6.13 -a 192.168.6.15 -l 1800
# nvme connect-all -t tcp -w 192.168.6.13 -a 192.168.6.17 -l 1800
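To confirm that the -l 1800 value took effect, you can read the controller loss timeout back from sysfs (a hedged check; the ctrl_loss_tmo attribute is exposed per fabrics controller on recent kernels, and each connected controller should report 1800):
# cat /sys/class/nvme/nvme*/ctrl_loss_tmo
1800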
Validate NVMe-oF
You can use the following procedure to validate NVMe-oF.
- Verify that in-kernel NVMe multipath is enabled:
# cat /sys/module/nvme_core/parameters/multipath
Y
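If this parameter reports N, in-kernel multipath can typically be enabled by setting the nvme_core.multipath=Y kernel parameter and rebooting. The following is a sketch using grubby, assuming nvme_core is built into the OL 8.7 kernel (on OL 8.7 the parameter is already Y by default, so this is rarely needed):
# grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
# reboot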
- Verify that the appropriate NVMe-oF settings (such as model set to NetApp ONTAP Controller and load-balancing iopolicy set to round-robin) for the respective ONTAP namespaces correctly reflect on the host:
# cat /sys/class/nvme-subsystem/nvme-subsys*/model
NetApp ONTAP Controller
NetApp ONTAP Controller
# cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
round-robin
round-robin
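If a subsystem reports the upstream default of numa instead of round-robin, the policy can be changed at runtime through sysfs (a sketch; nvme-subsys0 is an assumed subsystem name, and the nvme-cli package normally applies the round-robin setting for ONTAP subsystems through a udev rule):
# echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy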
- Verify that the namespaces are created and correctly discovered on the host:
# nvme list
Example output:
Node          SN                    Model                    Namespace  Usage                Format       FW Rev
------------- --------------------- ------------------------ ---------- -------------------- ------------ --------
/dev/nvme0n1  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  1          85.90 GB / 85.90 GB  4 KiB + 0 B  FFFFFFFF
/dev/nvme0n2  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  2          85.90 GB / 85.90 GB  4 KiB + 0 B  FFFFFFFF
/dev/nvme0n3  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  3          85.90 GB / 85.90 GB  4 KiB + 0 B  FFFFFFFF
- Verify that the controller state of each path is live and has the correct ANA status:
NVMe/FC:
# nvme list-subsys /dev/nvme0n1
Example output:
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.5f5f2c4aa73b11e9967e00a098df41bd:subsystem.nvme_ss_ol_1
\
+- nvme0 fc traddr=nn-0x203700a098dfdd91:pn-0x203800a098dfdd91 host_traddr=nn-0x200000109b1c1204:pn-0x100000109b1c1204 live non-optimized
+- nvme1 fc traddr=nn-0x203700a098dfdd91:pn-0x203900a098dfdd91 host_traddr=nn-0x200000109b1c1204:pn-0x100000109b1c1204 live non-optimized
+- nvme2 fc traddr=nn-0x203700a098dfdd91:pn-0x203a00a098dfdd91 host_traddr=nn-0x200000109b1c1205:pn-0x100000109b1c1205 live optimized
+- nvme3 fc traddr=nn-0x203700a098dfdd91:pn-0x203d00a098dfdd91 host_traddr=nn-0x200000109b1c1205:pn-0x100000109b1c1205 live optimized
NVMe/TCP:
# nvme list-subsys /dev/nvme1n40
Example output:
nvme-subsys1 - NQN=nqn.1992-08.com.netapp:sn.68c036aaa3cf11edbb95d039ea243511:subsystem.tcp
\
+- nvme2 tcp traddr=192.168.8.49,trsvcid=4420,host_traddr=192.168.8.1 live non-optimized
+- nvme3 tcp traddr=192.168.8.48,trsvcid=4420,host_traddr=192.168.8.1 live non-optimized
+- nvme6 tcp traddr=192.168.9.49,trsvcid=4420,host_traddr=192.168.9.1 live optimized
+- nvme7 tcp traddr=192.168.9.48,trsvcid=4420,host_traddr=192.168.9.1 live optimized
- Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:
Column:
# nvme netapp ontapdevices -o column
Example output:
Device        Vserver     Namespace Path                     NSID  UUID                                  Size
------------- ----------- ---------------------------------- ----- ------------------------------------- -------
/dev/nvme0n1  vs_ol_nvme  /vol/ol_nvme_vol_1_1_0/ol_nvme_ns  1     72b887b1-5fb6-47b8-be0b-33326e2542e2  85.90GB
/dev/nvme0n2  vs_ol_nvme  /vol/ol_nvme_vol_1_0_0/ol_nvme_ns  2     04bf9f6e-9031-40ea-99c7-a1a61b2d7d08  85.90GB
/dev/nvme0n3  vs_ol_nvme  /vol/ol_nvme_vol_1_1_1/ol_nvme_ns  3     264823b1-8e03-4155-80dd-e904237014a4  85.90GB
JSON:
# nvme netapp ontapdevices -o json
Example output:
{
  "ONTAPdevices" : [
    {
      "Device" : "/dev/nvme0n1",
      "Vserver" : "vs_ol_nvme",
      "Namespace_Path" : "/vol/ol_nvme_vol_1_1_0/ol_nvme_ns",
      "NSID" : 1,
      "UUID" : "72b887b1-5fb6-47b8-be0b-33326e2542e2",
      "Size" : "85.90GB",
      "LBA_Data_Size" : 4096,
      "Namespace_Size" : 20971520
    },
    {
      "Device" : "/dev/nvme0n2",
      "Vserver" : "vs_ol_nvme",
      "Namespace_Path" : "/vol/ol_nvme_vol_1_0_0/ol_nvme_ns",
      "NSID" : 2,
      "UUID" : "04bf9f6e-9031-40ea-99c7-a1a61b2d7d08",
      "Size" : "85.90GB",
      "LBA_Data_Size" : 4096,
      "Namespace_Size" : 20971520
    },
    {
      "Device" : "/dev/nvme0n3",
      "Vserver" : "vs_ol_nvme",
      "Namespace_Path" : "/vol/ol_nvme_vol_1_1_1/ol_nvme_ns",
      "NSID" : 3,
      "UUID" : "264823b1-8e03-4155-80dd-e904237014a4",
      "Size" : "85.90GB",
      "LBA_Data_Size" : 4096,
      "Namespace_Size" : 20971520
    },
  ]
}
Known issues
The NVMe-oF host configuration for OL 8.7 with ONTAP has the following known issue:

| NetApp Bug ID | Title | Description |
| --- | --- | --- |
| 1517321 | Oracle Linux 8.7 NVMe-oF hosts create duplicate Persistent Discovery Controllers | On OL 8.7 NVMe-oF hosts, Persistent Discovery Controllers (PDCs) are created by passing the |