NVMe-oF host configuration for RHEL 9.3 with ONTAP

NVMe over Fabrics (NVMe-oF), including NVMe over Fibre Channel (NVMe/FC) and other transports, is supported with Red Hat Enterprise Linux (RHEL) 9.3 with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is the equivalent of ALUA multipathing in iSCSI and FC environments and is implemented with in-kernel NVMe multipath.

The following support is available for NVMe-oF host configuration for RHEL 9.3 with ONTAP:

  • Support for NVMe over TCP (NVMe/TCP) in addition to NVMe/FC. The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.

  • Support for NVMe and SCSI traffic coexisting on the same host and on the same host bus adapter (HBA), without requiring explicit dm-multipath settings to prevent dm-multipath from claiming NVMe namespaces (a quick sanity check is shown below).

For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.
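
For example, if the dm-multipath packages are installed on the host, you can confirm that dm-multipath is not claiming any NVMe namespaces. This is a minimal sanity check, assuming the multipath tools are present:

    # multipath -ll | grep -i nvme

No output is expected; ONTAP namespaces should be visible only through in-kernel NVMe multipath (for example, in the nvme list output), not as dm-multipath devices.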

Features

RHEL 9.3 has in-kernel NVMe multipath enabled for NVMe namespaces by default; therefore, there is no need for explicit settings.
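
To confirm this default on a given host, you can check the nvme_core multipath parameter (the same check appears later in the validation procedure):

    # cat /sys/module/nvme_core/parameters/multipath
    Y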

Known limitations

SAN booting using the NVMe-oF protocol is currently not supported.

Validate software versions

You can use the following procedure to validate the minimum supported RHEL 9.3 software versions.

Steps
  1. Install RHEL 9.3 on the server. After the installation is complete, verify that you are running the specified RHEL 9.3 kernel:

    # uname -r

    Example output:

    5.14.0-362.8.1.el9_3.x86_64
  2. Verify the version of the installed nvme-cli package:

    # rpm -qa | grep nvme-cli

    Example output:

    nvme-cli-2.4-10.el9.x86_64
  3. Verify the version of the installed libnvme package:

    # rpm -qa | grep libnvme

    Example output:

    libnvme-1.4-7.el9.x86_64
  4. On the RHEL 9.3 host, check the hostnqn string at /etc/nvme/hostnqn:

    # cat /etc/nvme/hostnqn

    Example output

    nqn.2014-08.org.nvmexpress:uuid:060fd513-83be-4c3e-aba1-52e169056dcf
  5. Verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:

    ::> vserver nvme subsystem host show -vserver vs_nvme147

    Example output:

    Vserver     Subsystem          Host NQN
    ----------- --------------- ----------------------------------------------------------
    vs_nvme147   rhel_147_LPe32002    nqn.2014-08.org.nvmexpress:uuid:060fd513-83be-4c3e-aba1-52e169056dcf
    Note If the hostnqn strings do not match, use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.
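
If you need to update the subsystem, the following is one possible command sequence, shown as an illustration only; it assumes the vserver nvme subsystem host remove and add commands are available on your ONTAP release, and <old_hostnqn> is a placeholder for the stale value:

    ::> vserver nvme subsystem host remove -vserver vs_nvme147 -subsystem rhel_147_LPe32002 -host-nqn <old_hostnqn>
    ::> vserver nvme subsystem host add -vserver vs_nvme147 -subsystem rhel_147_LPe32002 -host-nqn nqn.2014-08.org.nvmexpress:uuid:060fd513-83be-4c3e-aba1-52e169056dcf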

Configure NVMe/FC

You can configure NVMe/FC for Broadcom/Emulex adapters or Marvell/QLogic adapters.

Broadcom/Emulex
Steps
  1. Verify that you are using the supported adapter model:

    # cat /sys/class/scsi_host/host*/modelname

    Example output:

    LPe32002-M2
    LPe32002-M2
    # cat /sys/class/scsi_host/host*/modeldesc

    Example output:

    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
  2. Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:

    # cat /sys/class/scsi_host/host*/fwrev
    14.2.539.16, sli-4:2:c
    14.2.539.16, sli-4:2:c
    
    # cat /sys/module/lpfc/version
    0:14.2.0.12

    For the most current list of supported adapter driver and firmware versions, see the NetApp Interoperability Matrix Tool.

  3. Verify that lpfc_enable_fc4_type is set to 3 (if it is not, see the sketch after these steps):

    # cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
    3
  4. Verify that the initiator ports are up and running and that you can see the target LIFs:

    # cat /sys/class/fc_host/host*/port_name
    0x100000109b3c081f
    0x100000109b3c0820
    # cat /sys/class/fc_host/host*/port_state
    Online
    Online
    # cat /sys/class/scsi_host/host*/nvme_info
    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x100000109b3c081f WWNN x200000109b3c081f DID x062300 ONLINE
    NVME RPORT       WWPN x2143d039ea165877 WWNN x2142d039ea165877 DID x061b15 TARGET DISCSRVC ONLINE
    NVME RPORT       WWPN x2145d039ea165877 WWNN x2142d039ea165877 DID x061115 TARGET DISCSRVC ONLINE
    NVME Statistics
    LS: Xmt 000000040b Cmpl 000000040b Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000001f5c4538 Issue 000000001f58da22 OutIO fffffffffffc94ea
    abort 00000630 noxri 00000000 nondlp 00001071 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000630 Err 0001bd4a
    NVME Initiator Enabled
    XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc1 WWPN x100000109b3c0820 WWNN x200000109b3c0820 DID x062c00 ONLINE
    NVME RPORT       WWPN x2144d039ea165877 WWNN x2142d039ea165877 DID x060215 TARGET DISCSRVC ONLINE
    NVME RPORT       WWPN x2146d039ea165877 WWNN x2142d039ea165877 DID x061815 TARGET DISCSRVC ONLINE
    NVME Statistics
    LS: Xmt 000000040b Cmpl 000000040b Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000001f5c3618 Issue 000000001f5967a4 OutIO fffffffffffd318c
    abort 00000629 noxri 00000000 nondlp 0000044e qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000629 Err 0001bd3d
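
If lpfc_enable_fc4_type does not report 3, one common way to set it persistently is through an lpfc module option followed by an initramfs rebuild and a reboot. The following is a sketch under that assumption; no change is needed when the value already reads 3:

    # cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_enable_fc4_type=3
    # dracut -f
    # reboot
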
Marvell/QLogic FC Adapter for NVMe/FC
Steps
  1. The native inbox qla2xxx driver included in the RHEL 9.3 GA kernel has the latest fixes essential for ONTAP support. Verify that you are running the supported adapter driver and firmware versions:

    # cat /sys/class/fc_host/host*/symbolic_name

    Example output

    QLE2772 FW:v9.10.11 DVR:v10.02.08.200-k
    QLE2772 FW:v9.10.11 DVR:v10.02.08.200-k
  2. Verify that ql2xnvmeenable is set to 1. This setting enables the Marvell adapter to function as an NVMe/FC initiator (if it is not set, see the sketch after these steps):

    # cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
    1
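
If ql2xnvmeenable is not set, it can typically be enabled persistently through a qla2xxx module option followed by an initramfs rebuild and a reboot. The following is a minimal sketch; the file name qla2xxx.conf is arbitrary:

    # cat /etc/modprobe.d/qla2xxx.conf
    options qla2xxx ql2xnvmeenable=1
    # dracut -f
    # reboot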

Enable 1MB I/O (Optional)

ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data, which means the maximum I/O request size can be up to 1 MB. However, to issue I/O requests of 1 MB for a Broadcom NVMe/FC host, you must increase the value of the lpfc_sg_seg_cnt parameter to 256 from the default value of 64.
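
One way to put this option in place before following the steps is to append it to a modprobe configuration file, for example (a minimal sketch; the file name lpfc.conf matches the file shown in step 1):

    # echo "options lpfc lpfc_sg_seg_cnt=256" >> /etc/modprobe.d/lpfc.conf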

Steps
  1. Set the lpfc_sg_seg_cnt parameter to 256.

    # cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  2. Run the dracut -f command, and then reboot the host.

  3. Verify that lpfc_sg_seg_cnt is 256.

    # cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
    256
Note This is not applicable to QLogic NVMe/FC hosts.

Configure NVMe/TCP

NVMe/TCP does not have auto-connect functionality. Therefore, if a path goes down and is not reinstated within the default timeout period of 10 minutes, NVMe/TCP cannot automatically reconnect. To prevent a timeout, you should set the retry period for failover events to at least 30 minutes.
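
The retry period is governed by the controller loss timeout, which is set with the -l option of nvme connect-all in step 3 below. After the connections are established, you can confirm the value per controller through sysfs, assuming your kernel exposes the ctrl_loss_tmo attribute for fabrics controllers:

    # cat /sys/class/nvme/nvme*/ctrl_loss_tmo
    1800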

Steps
  1. Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

    nvme discover -t tcp -w host-traddr -a traddr

    Example output:

    # nvme discover -t tcp -w 192.168.167.1 -a 192.168.167.16
    
    Discovery Log Number of Records 8, Generation counter 10
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  0
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.bbfb4ee8dfb611edbd07d039ea165590:discovery
    traddr:  192.168.166.17
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  1
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.bbfb4ee8dfb611edbd07d039ea165590:discovery
    traddr:  192.168.167.17
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 2======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  2
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.bbfb4ee8dfb611edbd07d039ea165590:discovery
    traddr:  192.168.166.16
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 3======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  3
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.bbfb4ee8dfb611edbd07d039ea165590:discovery
    traddr:  192.168.167.16
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    ...
  2. Verify that the other NVMe/TCP initiator-target LIF combinations are able to successfully fetch discovery log page data:

    nvme discover -t tcp -w host-traddr -a traddr

    Example output:

    # nvme discover -t tcp -w 192.168.166.5 -a 192.168.166.22
    # nvme discover -t tcp -w 192.168.166.5 -a 192.168.166.23
    # nvme discover -t tcp -w 192.168.167.5 -a 192.168.167.22
    # nvme discover -t tcp -w 192.168.167.5 -a 192.168.167.23
  3. Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes, and set the controller loss timeout period to at least 30 minutes (1800 seconds):

    nvme connect-all -t tcp -w host-traddr -a traddr -l 1800

    Example output:

    # nvme connect-all -t tcp -w 192.168.166.1 -a 192.168.166.16 -l 1800
    # nvme connect-all -t tcp -w 192.168.166.1 -a 192.168.166.17 -l 1800
    # nvme connect-all -t tcp -w 192.168.167.1 -a 192.168.167.16 -l 1800
    # nvme connect-all -t tcp -w 192.168.167.1 -a 192.168.167.17 -l 1800

Validate NVMe-oF

You can use the following procedure to validate NVMe-oF.

Steps
  1. Verify that the in-kernel NVMe multipath is enabled:

    # cat /sys/module/nvme_core/parameters/multipath
    Y
  2. Verify that the appropriate NVMe-oF settings (such as model set to NetApp ONTAP Controller and load-balancing iopolicy set to round-robin) for the respective ONTAP namespaces are correctly reflected on the host (if iopolicy does not report round-robin, see the sketch at the end of these steps):

    # cat /sys/class/nvme-subsystem/nvme-subsys*/model
    NetApp ONTAP Controller
    NetApp ONTAP Controller
    # cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
    round-robin
    round-robin
  3. Verify that the namespaces are created and correctly discovered on the host:

    # nvme list

    Example output:

    Node           SN                    Model
    -------------- --------------------- -------------------------
    /dev/nvme5n21  81CYrNQlis3WAAAAAAAB  NetApp ONTAP Controller

    Namespace  Usage                Format       FW Rev
    ---------- -------------------- ------------ --------
    1          21.47 GB / 21.47 GB  4 KiB + 0 B  FFFFFFFF
  4. Verify that the controller state of each path is live and has the correct ANA status:

    NVMe/FC
    # nvme list-subsys /dev/nvme5n21

    Example output:

    nvme-subsys4 - NQN=nqn.1992-08.com.netapp:sn.e80cc121ca6911ed8cbdd039ea165590:subsystem.rhel_147_LPE32002
    \
     +- nvme2 fc traddr=nn-0x2142d039ea165877:pn-0x2144d039ea165877,host_traddr=nn-0x200000109b3c0820:pn-0x100000109b3c0820 live optimized
     +- nvme3 fc traddr=nn-0x2142d039ea165877:pn-0x2145d039ea165877,host_traddr=nn-0x200000109b3c081f:pn-0x100000109b3c081f live non-optimized
     +- nvme4 fc traddr=nn-0x2142d039ea165877:pn-0x2146d039ea165877,host_traddr=nn-0x200000109b3c0820:pn-0x100000109b3c0820 live non-optimized
     +- nvme6 fc traddr=nn-0x2142d039ea165877:pn-0x2143d039ea165877,host_traddr=nn-0x200000109b3c081f:pn-0x100000109b3c081f live optimized
    NVMe/TCP
    # nvme list-subsys /dev/nvme1n1

    Example output:

    nvme-subsys1 - NQN=nqn.1992-08.com.netapp:sn.bbfb4ee8dfb611edbd07d039ea165590:subsystem.rhel_tcp_95
    +- nvme1 tcp traddr=192.168.167.16,trsvcid=4420,host_traddr=192.168.167.1,src_addr=192.168.167.1 live
    +- nvme2 tcp traddr=192.168.167.17,trsvcid=4420,host_traddr=192.168.167.1,src_addr=192.168.167.1 live
    +- nvme3 tcp traddr=192.168.167.17,trsvcid=4420,host_traddr=192.168.166.1,src_addr=192.168.166.1 live
    +- nvme4 tcp traddr=192.168.166.16,trsvcid=4420,host_traddr=192.168.166.1,src_addr=192.168.166.1 live
  5. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

    Column
    # nvme netapp ontapdevices -o column

    Example output:

    Device        Vserver  Namespace Path
    ------------- -------- ------------------------------
    /dev/nvme0n1  vs_tcp   /vol/vol1/ns1

    NSID  UUID                                  Size
    ----- ------------------------------------- --------
    1     6fcb8ea0-dc1e-4933-b798-8a62a626cb7f  21.47GB
    JSON
    # nvme netapp ontapdevices -o json

    Example output

    {
      "ONTAPdevices" : [
        {
          "Device" : "/dev/nvme1n1",
          "Vserver" : "vs_tcp_95",
          "Namespace_Path" : "/vol/vol1/ns1",
          "NSID" : 1,
          "UUID" : "6fcb8ea0-dc1e-4933-b798-8a62a626cb7f",
          "Size" : "21.47GB",
          "LBA_Data_Size" : 4096,
          "Namespace_Size" : 5242880
        }
      ]
    }
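
If the iopolicy value in step 2 does not report round-robin, it can usually be changed at runtime by writing to the subsystem's iopolicy attribute. The following is a minimal sketch; nvme-subsys1 is an example subsystem name:

    # echo "round-robin" > /sys/class/nvme-subsystem/nvme-subsys1/iopolicy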

Known issues

There are no known issues for the NVMe-oF host configuration on RHEL 9.3 with ONTAP.