
NVMe-oF host configuration for RHEL 8.9 with ONTAP


NVMe over Fabrics (NVMe-oF), including NVMe over Fibre Channel (NVMe/FC) and other transports, is supported with Red Hat Enterprise Linux (RHEL) 8.9 with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is the equivalent of ALUA multipathing in iSCSI and FC environments and is implemented with in-kernel NVMe multipath.

The following support is available for NVMe-oF host configuration for RHEL 8.9 with ONTAP:

  • Support for NVMe over TCP (NVMe/TCP) in addition to NVMe/FC. The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.

For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.

Known limitations

  • In-kernel NVMe multipath is disabled by default for RHEL 8.9 NVMe-oF hosts. Therefore, you need to enable it manually.

  • On RHEL 8.9 hosts, NVMe/TCP is a technology preview feature due to open issues.

  • SAN booting using the NVMe-oF protocol is currently not supported.

Enable in-kernel multipath

You can use the following procedure to enable in-kernel multipath.

Steps
  1. Install RHEL 8.9 on the host server.

  2. After the installation is complete, verify that you are running the specified RHEL 8.9 kernel:

    # uname -r

    Example output

    4.18.0-513.5.1.el8_9.x86_64
  3. Install the nvme-cli package, and then verify the installed version:

    # rpm -qa | grep nvme-cli

    Example output

    nvme-cli-1.16-9.el8.x86_64
  4. Enable in-kernel NVMe multipath (see the verification sketch after these steps):

    # grubby --args=nvme_core.multipath=Y --update-kernel /boot/vmlinuz-4.18.0-513.5.1.el8_9.x86_64
  5. On the host, check the host NQN string at /etc/nvme/hostnqn:

    # cat /etc/nvme/hostnqn

    Example output

    nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0032-3410-8035-b8c04f4c5132
  6. Verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:

    ::> vserver nvme subsystem host show -vserver vs_fcnvme_141

    Example output

    Vserver     Subsystem          Host NQN
    ----------- ------------------ -----------------------------------------------------------
    vs_nvme101  rhel_101_QLe2772   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0032-3410-8035-b8c04f4c5132
    Note If the host NQN strings do not match, you can use the vserver modify command to update the host NQN string on your corresponding ONTAP NVMe subsystem to match the host NQN string in /etc/nvme/hostnqn on the host.
  7. Reboot the host.
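
To confirm that the multipath argument from step 4 was recorded on the default kernel boot entry, you can query grubby. The following is a minimal sketch, assuming grubby manages the boot entries on your host; nvme_core.multipath=Y should appear in the args line:

# grubby --info="$(grubby --default-kernel)" | grep args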

Note

If you intend to run both NVMe and SCSI traffic on the same host, NetApp recommends using in-kernel NVMe multipath for ONTAP namespaces and dm-multipath for ONTAP LUNs. To do this, exclude the ONTAP namespaces from dm-multipath so that dm-multipath cannot claim these namespace devices. You can do this by adding the enable_foreign setting to the /etc/multipath.conf file:

# cat /etc/multipath.conf
defaults {
  enable_foreign  NONE
}
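
After you add the enable_foreign setting, the running multipath daemon must re-read the configuration. The following is a minimal sketch, assuming multipathd is managed as a systemd service and that your multipath-tools version reports enable_foreign in its configuration dump:

# systemctl restart multipathd.service
# multipath -t | grep enable_foreign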

Configure NVMe/FC

You can configure NVMe/FC for Broadcom/Emulex or Marvell/Qlogic adapters.

Broadcom/Emulex
Steps
  1. Verify that you are using the supported adapter model:

    # cat /sys/class/scsi_host/host*/modelname

    Example output:

    LPe32002-M2
    LPe32002-M2
    # cat /sys/class/scsi_host/host*/modeldesc

    Example output:

    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
  2. Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:

    # cat /sys/class/scsi_host/host*/fwrev
    14.2.539.16, sli-4:2:c
    14.2.539.16, sli-4:2:c
    # cat /sys/module/lpfc/version
    0:14.0.0.21

    For the most current list of supported adapter driver and firmware versions, see the NetApp Interoperability Matrix Tool.

  3. Verify that lpfc_enable_fc4_type is set to 3 (if it isn't, see the sketch after these steps):

    # cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
    3
  4. Verify that the initiator ports are up and running and that you can see the target LIFs:

    # cat /sys/class/fc_host/host*/port_name
    0x10000090fae0ec88
    0x10000090fae0ec89
    # cat /sys/class/fc_host/host*/port_state
    Online
    Online
    # cat /sys/class/scsi_host/host*/nvme_info
    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x10000090fae0ec88 WWNN x20000090fae0ec88 DID x0a1300 ONLINE
    NVME RPORT       WWPN x2049d039ea36a105 WWNN x2048d039ea36a105 DID x0a0c0a TARGET DISCSRVC ONLINE
    NVME Statistics
    LS: Xmt 0000000024 Cmpl 0000000024 Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 00000000000001aa Issue 00000000000001ab OutIO 0000000000000001
            abort 00000002 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000002 Err 00000003
    NVME Initiator Enabled
    XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc1 WWPN x10000090fae0ec89 WWNN x20000090fae0ec89 DID x0a1200 ONLINE
    NVME RPORT       WWPN x204ad039ea36a105 WWNN x2048d039ea36a105 DID x0a080a TARGET DISCSRVC ONLINE
    NVME Statistics
    LS: Xmt 0000000024 Cmpl 0000000024 Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 00000000000001ac Issue 00000000000001ad OutIO 0000000000000001
            abort 00000002 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000002 Err 00000003
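
If lpfc_enable_fc4_type is not set to 3 on your host (step 3), it can typically be set through a modprobe option followed by an initramfs rebuild and a reboot. The following is a sketch of one common approach, not a documented step for this release; it assumes you keep lpfc options in /etc/modprobe.d/lpfc.conf:

# echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf
# dracut -f
# reboot
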
Marvell/QLogic FC Adapter for NVMe/FC

The native inbox qla2xxx driver included in the RHEL 8.9 GA kernel has the latest upstream fixes. These fixes are essential for ONTAP support.

Steps
  1. Verify that you are running the supported adapter driver and firmware versions:

    # cat /sys/class/fc_host/host*/symbolic_name

    Example output

    QLE2742 FW: v9.10.11 DVR: v10.02.08.200-k
    QLE2742 FW: v9.10.11 DVR: v10.02.08.200-k
  2. Verify that ql2xnvmeenable is set. This enables the Marvell adapter to function as an NVMe/FC initiator:

    # cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
    1
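
If ql2xnvmeenable returns 0 on your host, the parameter can typically be enabled through a modprobe option followed by an initramfs rebuild and a reboot. The following is a minimal sketch of that approach; the qla2xxx.conf file name is an assumption, not part of the documented procedure:

# echo "options qla2xxx ql2xnvmeenable=1" > /etc/modprobe.d/qla2xxx.conf
# dracut -f
# reboot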

Enable 1MB I/O (Optional)

ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data, which means that the maximum I/O request size can be up to 1 MB. To issue 1 MB I/O requests from a Broadcom NVMe/FC host, you should increase the value of the lpfc_sg_seg_cnt parameter to 256 from its default of 64.
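
You can read the MDTS value that ONTAP reports from the Identify Controller data of a connected controller. The following is a sketch that assumes an ONTAP namespace is already connected as /dev/nvme0n1 (a hypothetical device name); an mdts value of 8 with a 4 KB minimum memory page size corresponds to a 1 MB maximum transfer size:

# nvme id-ctrl /dev/nvme0n1 | grep -i mdts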

Note These steps don't apply to QLogic NVMe/FC hosts.
Steps
  1. Add the lpfc_sg_seg_cnt=256 parameter to the lpfc.conf file:

    # cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  2. Run the dracut -f command, and reboot the host.

  3. Verify that the expected value of lpfc_sg_seg_cnt is 256:

    # cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
    256

Configure NVMe/TCP

NVMe/TCP does not have auto-connect functionality. Therefore, if a path goes down and is not reinstated within the default timeout period of 10 minutes, NVMe/TCP cannot automatically reconnect. To prevent a timeout, you should set the retry period for failover events to at least 30 minutes.

Steps
  1. Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

    nvme discover -t tcp -w host-traddr -a traddr

    Example output:

    # nvme discover -t tcp -w 192.168.111.79 -a 192.168.111.14 -l 1800
    
    Discovery Log Number of Records 8, Generation counter 18
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: unrecognized
    treq:    not specified.
    portid:  0
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.154a5833c78c11ecb069d039ea359e4b:discovery
    traddr:  192.168.211.15
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: unrecognized
    treq:    not specified.
    portid:  1
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.154a5833c78c11ecb069d039ea359e4b:discovery
    traddr:  192.168.111.15
    sectype: none ..........
  2. Verify that the other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:

    nvme discover -t tcp -w host-traddr -a traddr

    Example output:

    # nvme discover -t tcp -w 192.168.111.79 -a 192.168.111.14
    # nvme discover -t tcp -w 192.168.111.79 -a 192.168.111.15
    # nvme discover -t tcp -w 192.168.211.79 -a 192.168.211.14
    # nvme discover -t tcp -w 192.168.211.79 -a 192.168.211.15
  3. Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes, and set the controller loss timeout period to at least 30 minutes (1800 seconds):

    nvme connect-all -t tcp -w host-traddr -a traddr -l 1800

    Example output:

    # nvme connect-all -t tcp -w 192.168.111.79 -a 192.168.111.14 -l 1800
    # nvme connect-all -t tcp -w 192.168.111.79 -a 192.168.111.15 -l 1800
    # nvme connect-all -t tcp -w 192.168.211.79 -a 192.168.211.14 -l 1800
    # nvme connect-all -t tcp -w 192.168.211.79 -a 192.168.211.15 -l 1800
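
You can optionally confirm that the controller loss timeout was applied to the resulting controllers. The following is a sketch that assumes the fabrics controllers expose the ctrl_loss_tmo attribute under /sys/class/nvme; with the commands above, the expected value is 1800:

# cat /sys/class/nvme/nvme*/ctrl_loss_tmo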

Validate NVMe-oF

You can use the following procedure to validate NVMe-oF.

Steps
  1. Verify that the in-kernel NVMe multipath is enabled:

    # cat /sys/module/nvme_core/parameters/multipath
    Y
  2. Verify that the appropriate NVMe-oF settings (such as model set to NetApp ONTAP Controller and load-balancing iopolicy set to round-robin) for the respective ONTAP namespaces are correctly reflected on the host:

    # cat /sys/class/nvme-subsystem/nvme-subsys*/model
    NetApp ONTAP Controller
    NetApp ONTAP Controller
    # cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
    round-robin
    round-robin
  3. Verify that the namespaces are created and correctly discovered on the host:

    # nvme list

    Example output:

    Node          SN                   Model                    Namespace  Usage                  Format        FW Rev
    ------------- -------------------- ------------------------ ---------- ---------------------- ------------- --------
    /dev/nvme0n1  81Gx7NSiKSQqAAAAAAAB NetApp ONTAP Controller  1          21.47 GB / 21.47 GB    4 KiB + 0 B   FFFFFFFF
  4. Verify that the controller state of each path is live and has the correct ANA status:

    NVMe/FC
    # nvme list-subsys /dev/nvme3n1

    Example output:

    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.8e501f8ebafa11ec9b99d039ea359e4b:subsystem.rhel_163_Qle2742
    +- nvme0 fc traddr=nn-0x204dd039ea36a105:pn-0x2050d039ea36a105 host_traddr=nn-0x20000024ff7f4994:pn-0x21000024ff7f4994 live non-optimized
    +- nvme1 fc traddr=nn-0x204dd039ea36a105:pn-0x2050d039ea36a105 host_traddr=nn-0x20000024ff7f4994:pn-0x21000024ff7f4994 live non-optimized
    +- nvme2 fc traddr=nn-0x204dd039ea36a105:pn-0x204fd039ea36a105 host_traddr=nn-0x20000024ff7f4995:pn-0x21000024ff7f4995 live optimized
    +- nvme3 fc traddr=nn-0x204dd039ea36a105:pn-0x204ed039ea36a105 host_traddr=nn-0x20000024ff7f4994:pn-0x21000024ff7f4994 live optimized
    NVMe/TCP
    # nvme list-subsys /dev/nvme0n1

    Example output:

    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.154a5833c78c11ecb069d039ea359e4b:subsystem.rhel_tcp_165
    +- nvme0 tcp traddr=192.168.111.15 trsvcid=4420 host_traddr=192.168.111.79 live non-optimized
    +- nvme1 tcp traddr=192.168.111.14 trsvcid=4420 host_traddr=192.168.111.79 live optimized
    +- nvme2 tcp traddr=192.168.211.15 trsvcid=4420 host_traddr=192.168.211.79 live non-optimized
    +- nvme3 tcp traddr=192.168.211.14 trsvcid=4420 host_traddr=192.168.211.79 live optimized
  5. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

    Column
    # nvme netapp ontapdevices -o column

    Example output:

    Device        Vserver    Namespace Path    NSID   UUID                                   Size
    ------------- ---------- ----------------- ------ -------------------------------------- --------
    /dev/nvme0n1  vs_tcp79   /vol/vol1/ns      1      aa197984-3f62-4a80-97de-e89436360cec   21.47GB
    JSON
    # nvme netapp ontapdevices -o json

    Example output

    {
      "ONTAPdevices”: [
        {
          "Device”: "/dev/nvme0n1",
          "Vserver”: "vs_tcp79",
          "Namespace Path”: "/vol/vol1/ns",
          "NSID”: 1,
          "UUID”: "aa197984-3f62-4a80-97de-e89436360cec",
          "Size”: "21.47GB",
          "LBA_Data_Size”: 4096,
          "Namespace Size" : 5242880
        },
    ]
    
    }

Known issues

The NVMe-oF host configuration for RHEL 8.9 with ONTAP has the following known issue:

NetApp Bug ID: 1479047

Title: RHEL 8.9 NVMe-oF hosts create duplicate persistent discovery controllers

Description: On NVMe over Fabrics (NVMe-oF) hosts, you can use the "nvme discover -p" command to create persistent discovery controllers (PDCs). When this command is used, only one PDC should be created per initiator-target combination. However, if you are running Red Hat Enterprise Linux (RHEL) 8.9 on an NVMe-oF host, a duplicate PDC is created each time "nvme discover -p" is executed. This leads to unnecessary use of resources on both the host and the target.
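
One way to check a host for duplicate persistent discovery controllers is to list the controllers whose subsystem NQN is the well-known discovery NQN. The following is a sketch that assumes the standard sysfs layout for NVMe controllers; more than one match per initiator-target combination indicates duplicates:

# grep -l nqn.2014-08.org.nvmexpress.discovery /sys/class/nvme/nvme*/subsysnqn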