NVMe-oF Host Configuration for Oracle Linux 8.10 with ONTAP

NetApp SAN host configurations support the NVMe over Fabrics (NVMe-oF) protocol with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is equivalent to asymmetric logical unit access (ALUA) multipathing in iSCSI and FCP environments. ANA is implemented using the in-kernel NVMe multipath feature.

About this task

You can use the following support and features with the NVMe-oF host configuration for Oracle Linux 8.10. You should also review the known limitations before starting the configuration process.

  • Support available:

    • Support for NVMe over TCP (NVMe/TCP) and NVMe over Fibre Channel (NVMe/FC). The NetApp plug-in in the native nvme-cli package can display ONTAP details for both NVMe/FC and NVMe/TCP namespaces.

      Depending on your host configuration, you can configure NVMe/FC, NVMe/TCP, or both protocols.

    • Running NVMe and SCSI traffic simultaneously on the same host. For example, you can configure dm-multipath for SCSI mpath devices on SCSI LUNs and use the in-kernel NVMe multipath feature for NVMe-oF namespace devices on the host.

    For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.

  • Features available:

    • The in-kernel NVMe multipath feature is enabled for NVMe namespaces by default in Oracle Linux 8.10. You don't need to configure explicit settings.

  • Known limitations:

    • SAN booting using the NVMe-oF protocol is currently not supported.

    • NetApp sanlun host utility support isn't available for NVMe-oF on an Oracle Linux 8.10 host. Instead, you can rely on the NetApp plug-in included in the native nvme-cli package for all NVMe-oF transports.

Validate software versions

Validate the minimum supported software versions for Oracle Linux 8.10.

Steps
  1. Install Oracle Linux 8.10 GA on the server. After the installation is complete, verify that you are running the specified Oracle Linux 8.10 GA kernel:

    uname -r
    5.15.0-206.153.7.1.el8uek.x86_64
  2. Verify the version of the installed nvme-cli package:

    rpm -qa|grep nvme-cli
    nvme-cli-1.16-9.el8.x86_64
  3. On the Oracle Linux 8.10 host, check the hostnqn string at /etc/nvme/hostnqn:

    cat /etc/nvme/hostnqn
    nqn.2014-08.org.nvmexpress:uuid:edd38060-00f7-47aa-a9dc-4d8ae0cd969a
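
     Note If /etc/nvme/hostnqn is missing or empty, you can generate a host NQN with nvme-cli (a recovery sketch, not a step in this guide):

       # Generate a new host NQN and write it where the initiator expects it
       nvme gen-hostnqn > /etc/nvme/hostnqn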
  4. Verify that hostnqn on the Oracle Linux 8.10 host matches hostnqn for the corresponding subsystem on the ONTAP array:

    vserver nvme subsystem host show -vserver vs_coexistence_LPE36002
    Vserver Subsystem Priority  Host NQN
    ------- --------- --------  ------------------------------------------------
    vs_coexistence_LPE36002
            nvme
                      regular   nqn.2014-08.org.nvmexpress:uuid:edd38060-00f7-47aa-a9dc-4d8ae0cd969a
            nvme1
                      regular   nqn.2014-08.org.nvmexpress:uuid:edd38060-00f7-47aa-a9dc-4d8ae0cd969a
            nvme2
                      regular   nqn.2014-08.org.nvmexpress:uuid:edd38060-00f7-47aa-a9dc-4d8ae0cd969a
            nvme3
                      regular   nqn.2014-08.org.nvmexpress:uuid:edd38060-00f7-47aa-a9dc-4d8ae0cd969a
    4 entries were displayed.
    Note If the hostnqn strings don't match, use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.
  5. If you intend to run both NVMe and SCSI traffic on the same host, NetApp recommends using in-kernel NVMe multipath for ONTAP namespaces and dm-multipath for ONTAP LUNs. The following settings exclude the ONTAP namespaces from dm-multipath and prevent dm-multipath from claiming the ONTAP namespace devices:

    1. Add the enable_foreign setting to the /etc/multipath.conf file:

      # cat /etc/multipath.conf
      defaults {
        enable_foreign     NONE
      }
    2. Restart the multipathd daemon to apply the new setting:

      systemctl restart multipathd
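
       You can confirm that the setting is active in the running configuration (a quick cross-check, not a step from this guide):

         # Dump the live multipathd configuration and check the foreign setting
         multipathd show config | grep enable_foreign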

Configure NVMe/FC

You can configure NVMe/FC with Broadcom/Emulex FC or Marvell/QLogic FC adapters. For NVMe/FC configured with a Broadcom adapter, you can also enable I/O requests of size 1 MB.

Broadcom/Emulex

Configure NVMe/FC for a Broadcom/Emulex adapter.

Steps
  1. Verify that you are using the supported adapter model:

    1. cat /sys/class/scsi_host/host*/modelname

      LPe36002-M64
      LPe36002-M64
    2. cat /sys/class/scsi_host/host*/modeldesc

      Emulex LPe36002-M64 2-Port 64Gb Fibre Channel Adapter
      Emulex LPe36002-M64 2-Port 64Gb Fibre Channel Adapter
  2. Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:

    1. cat /sys/class/scsi_host/host*/fwrev

      14.4.317.10, sli-4:6:d
      14.4.317.10, sli-4:6:d
    2. cat /sys/module/lpfc/version

      0:14.2.0.13

      For the current list of supported adapter driver and firmware versions, see the NetApp Interoperability Matrix Tool.

  3. Verify that lpfc_enable_fc4_type is set to "3":

    cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type

    3
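
    Note If the parameter isn't set to 3 (both FCP and NVMe enabled), one way to set it is a modprobe option plus an initramfs rebuild, mirroring the lpfc_sg_seg_cnt procedure later in this section (a sketch, not from this guide):

      # Assumed /etc/modprobe.d/lpfc.conf entry; 3 enables both SCSI (FCP) and NVMe
      echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf
      dracut -f    # rebuild the initramfs so the option applies at boot
      reboot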

  4. Verify that the initiator ports are up and running and that you can see the target LIFs:

    1. cat /sys/class/fc_host/host*/port_name

      0x100000109bf0449c
      0x100000109bf0449d
    2. cat /sys/class/fc_host/host*/port_state

      Online
      Online
    3. cat /sys/class/scsi_host/host*/nvme_info

      NVME Initiator Enabled
      XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
      NVME LPORT lpfc0 WWPN x100000109bf0449c WWNN x200000109bf0449c DID x061500 ONLINE
      NVME RPORT       WWPN x200bd039eab31e9c WWNN x2005d039eab31e9c DID x020e06 TARGET DISCSRVC ONLINE
      NVME RPORT       WWPN x2006d039eab31e9c WWNN x2005d039eab31e9c DID x020a0a TARGET DISCSRVC ONLINE
      NVME Statistics
      LS: Xmt 000000002c Cmpl 000000002c Abort 00000000
      LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
      Total FCP Cmpl 000000000008ffe8 Issue 000000000008ffb9 OutIO ffffffffffffffd1
              abort 0000000c noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
      FCP CMPL: xb 0000000c Err 0000000c
      NVME Initiator Enabled
      XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
      NVME LPORT lpfc1 WWPN x100000109bf0449d WWNN x200000109bf0449d DID x062d00 ONLINE
      NVME RPORT       WWPN x201fd039eab31e9c WWNN x2005d039eab31e9c DID x02090a TARGET DISCSRVC ONLINE
      NVME RPORT       WWPN x200cd039eab31e9c WWNN x2005d039eab31e9c DID x020d06 TARGET DISCSRVC ONLINE
      NVME Statistics
      LS: Xmt 0000000041 Cmpl 0000000041 Abort 00000000
      LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
      Total FCP Cmpl 00000000000936bf Issue 000000000009369a OutIO ffffffffffffffdb
              abort 00000016 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
      FCP CMPL: xb 00000016 Err 00000016
Marvell/QLogic

Configure NVMe/FC for a Marvell/QLogic adapter.

Note The native inbox qla2xxx driver included in the Oracle Linux 8.10 GA kernel has the latest upstream fixes. These fixes are essential for ONTAP support.
Steps
  1. Verify that you are running the supported adapter driver and firmware versions:

    cat /sys/class/fc_host/host*/symbolic_name

    QLE2772 FW:v9.15.00 DVR:v10.02.09.100-k
    QLE2772 FW:v9.15.00 DVR:v10.02.09.100-k
  2. Verify that ql2xnvmeenable is set to "1". This enables the Marvell adapter to function as an NVMe/FC initiator:

    cat /sys/module/qla2xxx/parameters/ql2xnvmeenable

    1
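
    Note If the parameter returns 0, you can enable it with a module option and a reboot (a sketch, not from this guide):

      # Assumed /etc/modprobe.d/qla2xxx.conf entry enabling NVMe/FC initiator mode
      echo "options qla2xxx ql2xnvmeenable=1" >> /etc/modprobe.d/qla2xxx.conf
      dracut -f    # rebuild the initramfs so the option applies at boot
      reboot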

Enable 1 MB I/O size (Optional)

ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data, which corresponds to a maximum I/O request size of 1 MB (2^8 x the 4 KiB minimum memory page size). To issue I/O requests of size 1 MB for a Broadcom NVMe/FC host, you should increase the value of the lpfc_sg_seg_cnt parameter from the default of 64 to 256.

Note These steps don't apply to QLogic NVMe/FC hosts.
Steps
  1. Add the lpfc_sg_seg_cnt=256 option to the /etc/modprobe.d/lpfc.conf file, then verify the file contents:

    cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  2. Run the dracut -f command, and reboot the host.

  3. Verify that the expected value of lpfc_sg_seg_cnt is 256:

    cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt

    256
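
    Note As an additional cross-check (a sketch, not a step from this guide; /dev/nvme0n1 is an example namespace), the block layer exposes the largest I/O it will send to the device:

      # Largest hardware-supported I/O size in KiB; 1024 corresponds to 1 MB
      cat /sys/block/nvme0n1/queue/max_hw_sectors_kb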

Configure NVMe/TCP

The NVMe/TCP protocol doesn't support the auto-connect operation. Instead, you can discover the NVMe/TCP subsystems and namespaces by performing the NVMe/TCP connect or connect-all operations manually.
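
These steps assume the nvme-tcp transport module is loaded; you can confirm this before running discovery (an assumption, not a step from this guide):

    modprobe nvme-tcp       # load the NVMe/TCP transport if it isn't already loaded
    lsmod | grep nvme_tcp   # confirm the module is present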

Steps
  1. Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

    nvme discover -t tcp -w <host-traddr> -a <traddr>
    # nvme discover -t tcp -w 192.168.6.1 -a 192.168.6.24
    Discovery Log Number of Records 20, Generation counter 45
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: unrecognized
    treq:    not specified
    portid:  6
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.e6c438e66ac211ef9ab8d039eab31e9d:discovery
    traddr:  192.168.6.25
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: unrecognized
    treq:    not specified
    portid:  1
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.e6c438e66ac211ef9ab8d039eab31e9d:discovery
    traddr:  192.168.5.24
    sectype: none
    =====Discovery Log Entry 2======
    trtype:  tcp
    adrfam:  ipv4
    subtype: unrecognized
    treq:    not specified
    portid:  4
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.e6c438e66ac211ef9ab8d039eab31e9d:discovery
    traddr:  192.168.6.24
    sectype: none
    =====Discovery Log Entry 3======
    trtype:  tcp
    adrfam:  ipv4
    subtype: unrecognized
    treq:    not specified
    portid:  2
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.e6c438e66ac211ef9ab8d039eab31e9d:discovery
    traddr:  192.168.5.25
    sectype: none
    =====Discovery Log Entry 4======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  6
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.e6c438e66ac211ef9ab8d039eab31e9d:subsystem.nvme_tcp_4
    traddr:  192.168.6.25
    sectype: none
    =====Discovery Log Entry 5======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  1
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.e6c438e66ac211ef9ab8d039eab31e9d:subsystem.nvme_tcp_4
    ..........
  2. Verify that all other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:

    nvme discover -t tcp -w <host-traddr> -a <traddr>
    # nvme discover -t tcp -w 192.168.6.1 -a 192.168.6.24
    # nvme discover -t tcp -w 192.168.6.1 -a 192.168.6.25
    # nvme discover -t tcp -w 192.168.5.1 -a 192.168.5.24
    # nvme discover -t tcp -w 192.168.5.1 -a 192.168.5.25
  3. Run the nvme connect-all command for each supported NVMe/TCP initiator-target LIF combination across the nodes:

    nvme connect-all -t tcp -w <host-traddr> -a <traddr> -l <ctrl_loss_timeout_in_seconds>
    # nvme connect-all -t tcp -w 192.168.5.1 -a 192.168.5.24 -l -1
    # nvme connect-all -t tcp -w 192.168.5.1 -a 192.168.5.25 -l -1
    # nvme connect-all -t tcp -w 192.168.6.1 -a 192.168.6.24 -l -1
    # nvme connect-all -t tcp -w 192.168.6.1 -a 192.168.6.25 -l -1
    Note NetApp recommends setting the ctrl-loss-tmo option to "-1" so that the NVMe/TCP initiator attempts to reconnect indefinitely in the event of a path loss.
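
    Note These connections don't persist across a reboot. One common approach (an assumption; verify the file and service names against your nvme-cli version) is to record each initiator-target pair in /etc/nvme/discovery.conf and enable the autoconnect service shipped with nvme-cli:

      # Assumed /etc/nvme/discovery.conf entries, one line per initiator-target pair
      --transport=tcp --traddr=192.168.5.24 --host-traddr=192.168.5.1
      --transport=tcp --traddr=192.168.5.25 --host-traddr=192.168.5.1
      --transport=tcp --traddr=192.168.6.24 --host-traddr=192.168.6.1
      --transport=tcp --traddr=192.168.6.25 --host-traddr=192.168.6.1

      # Enable the nvme-cli autoconnect unit so connect-all runs at boot
      systemctl enable nvmf-autoconnect.service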

Validate NVMe-oF

To support correct operation with ONTAP namespaces, verify that the in-kernel NVMe multipath status, the ANA status, and the ONTAP namespaces are correct for the NVMe-oF configuration.

Steps
  1. Verify that in-kernel NVMe multipath is enabled:

    cat /sys/module/nvme_core/parameters/multipath

    Y

  2. Verify that the NVMe-oF settings (such as model set to "NetApp ONTAP Controller" and load balancing iopolicy set to "round-robin") for the respective ONTAP namespaces correctly display on the host:

    1. cat /sys/class/nvme-subsystem/nvme-subsys*/model

      NetApp ONTAP Controller
      NetApp ONTAP Controller
    2. cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

      round-robin
      round-robin
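
    Note If a subsystem reports a different iopolicy, you can switch it to round-robin at runtime through sysfs (a sketch, not a step from this guide; nvme-subsys0 is an example name):

      # Runtime change; repeat for each nvme-subsysN as needed
      echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy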
  3. Verify that the namespaces are created and correctly discovered on the host:

    nvme list

    Node          SN                    Model                    Namespace Usage                Format       FW Rev
    ------------- --------------------- ------------------------ --------- -------------------- ------------ --------
    /dev/nvme0n1  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  1         85.90 GB / 85.90 GB  4 KiB + 0 B  FFFFFFFF
    /dev/nvme0n2  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  2         85.90 GB / 85.90 GB  4 KiB + 0 B  FFFFFFFF
    /dev/nvme0n3  814vWBNRwf9HAAAAAAAB  NetApp ONTAP Controller  3         85.90 GB / 85.90 GB  4 KiB + 0 B  FFFFFFFF
  4. Verify that the controller state of each path is live and has the correct ANA status:

    NVMe/FC

    nvme list-subsys /dev/nvme0n1

    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.4b4d82566aab11ef9ab8d039eab31e9d:subsystem.nvme
    \
    +- nvme1 fc traddr=nn-0x2038d039eab31e9c:pn-0x203ad039eab31e9c host_traddr=nn-0x200034800d756a89:pn-0x210034800d756a89 live optimized
    +- nvme2 fc traddr=nn-0x2038d039eab31e9c:pn-0x203cd039eab31e9c host_traddr=nn-0x200034800d756a88:pn-0x210034800d756a88 live optimized
    +- nvme3 fc traddr=nn-0x2038d039eab31e9c:pn-0x203ed039eab31e9c host_traddr=nn-0x200034800d756a89:pn-0x210034800d756a89 live non-optimized
    +- nvme7 fc traddr=nn-0x2038d039eab31e9c:pn-0x2039d039eab31e9c host_traddr=nn-0x200034800d756a88:pn-0x210034800d756a88 live non-optimized
    NVMe/TCP

    nvme list-subsys /dev/nvme0n1

    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.e6c438e66ac211ef9ab8d039eab31e9d:subsystem.nvme_tcp_4
    \
    +- nvme1 tcp traddr=192.168.5.25 trsvcid=4420 host_traddr=192.168.5.1 src_addr=192.168.5.1 live optimized
    +- nvme10 tcp traddr=192.168.6.24 trsvcid=4420 host_traddr=192.168.6.1 src_addr=192.168.6.1 live optimized
    +- nvme2 tcp traddr=192.168.5.24 trsvcid=4420 host_traddr=192.168.5.1 src_addr=192.168.5.1 live non-optimized
    +- nvme9 tcp traddr=192.168.6.25 trsvcid=4420 host_traddr=192.168.6.1 src_addr=192.168.6.1 live non-optimized
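
    Note As a cross-check (a sketch, not a step from this guide; sysfs layout can vary by kernel), the per-path ANA state is also exposed under each controller's hidden path devices (named nvmeXcYnZ):

      # Print the ANA state reported for each path namespace
      grep . /sys/class/nvme/nvme*/nvme*n*/ana_state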
  5. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

    Column

    nvme netapp ontapdevices -o column

    Device         Vserver                  Namespace Path                NSID UUID                                  Size
    -------------- ------------------------ ----------------------------- ---- ------------------------------------- ---------
    /dev/nvme0n1   vs_coexistence_QLE2772   /vol/fcnvme_1_1_0/fcnvme_ns   1    159f9f88-be00-4828-aef6-197d289d4bd9  10.74GB
    /dev/nvme0n2   vs_coexistence_QLE2772   /vol/fcnvme_1_1_1/fcnvme_ns   2    2c1ef769-10c0-497d-86d7-e84811ed2df6  10.74GB
    /dev/nvme0n3   vs_coexistence_QLE2772   /vol/fcnvme_1_1_2/fcnvme_ns   3    9b49bf1a-8a08-4fa8-baf0-6ec6332ad5a4  10.74GB
    JSON

    nvme netapp ontapdevices -o json

    {
      "ONTAPdevices" : [
        {
          "Device" : "/dev/nvme0n1",
          "Vserver" : "vs_coexistence_QLE2772",
          "Namespace_Path" : "/vol/fcnvme_1_1_0/fcnvme_ns",
          "NSID" : 1,
          "UUID" : "159f9f88-be00-4828-aef6-197d289d4bd9",
          "Size" : "10.74GB",
          "LBA_Data_Size" : 4096,
          "Namespace_Size" : 2621440
        },
        {
          "Device" : "/dev/nvme0n2",
          "Vserver" : "vs_coexistence_QLE2772",
          "Namespace_Path" : "/vol/fcnvme_1_1_1/fcnvme_ns",
          "NSID" : 2,
          "UUID" : "2c1ef769-10c0-497d-86d7-e84811ed2df6",
          "Size" : "10.74GB",
          "LBA_Data_Size" : 4096,
          "Namespace_Size" : 2621440
        },
        {
          "Device" : "/dev/nvme0n4",
          "Vserver" : "vs_coexistence_QLE2772",
          "Namespace_Path" : "/vol/fcnvme_1_1_3/fcnvme_ns",
          "NSID" : 4,
          "UUID" : "f3572189-2968-41bc-972a-9ee442dfaed7",
          "Size" : "10.74GB",
          "LBA_Data_Size" : 4096,
          "Namespace_Size" : 2621440
        },

Known issues

The NVMe-oF host configuration for Oracle Linux 8.10 with ONTAP has the following known issue:

NetApp Bug ID: CONTAPEXT-1082

Title: Oracle Linux 8.10 NVMe-oF hosts create duplicate PDCs

Description: On Oracle Linux 8.10 NVMe-oF hosts, Persistent Discovery Controllers (PDCs) are created by using the -p option with the nvme discover command. For a given initiator-target combination, each execution of the nvme discover command is expected to create one PDC. However, beginning with Oracle Linux 8.x, NVMe-oF hosts create a duplicate PDC. This wastes resources on both the host and the target.
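
A quick way to check for duplicate PDCs on the host (a sketch, not from the source; discovery controllers carry the :discovery suffix in their subsystem NQN):

    # List controllers bound to a discovery subsystem; more than one per
    # initiator-target pair suggests a duplicate PDC
    grep -l discovery /sys/class/nvme/nvme*/subsysnqn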