NVMe/FC host configuration for RHEL 8.2 with ONTAP

NVMe/FC is supported on ONTAP 9.6 or later for Red Hat Enterprise Linux (RHEL) 8.2. The RHEL 8.2 host runs both NVMe and SCSI traffic through the same Fibre Channel (FC) initiator adapter ports. See the Hardware Universe for a list of supported FC adapters and controllers.

See the NetApp Interoperability Matrix Tool for the most current list of supported configurations.

Features

  • Beginning with RHEL 8.2, nvme-fc auto-connect scripts are included in the native nvme-cli package. You can rely on these native auto-connect scripts instead of installing the external, vendor-provided (outbox) auto-connect scripts.

  • Beginning with RHEL 8.2, a native udev rule that enables round-robin load balancing for NVMe multipath is provided as part of the nvme-cli package. You no longer need to create this rule manually (as was required in RHEL 8.1); for reference, see the sketch after this list.

  • Beginning with RHEL 8.2, both NVMe and SCSI traffic can run on the same coexistent host; this is, in fact, the expected deployed host configuration. For SCSI, you configure dm-multipath as usual for SCSI LUNs, which results in mpath devices, while in-kernel NVMe multipath is used to configure the NVMe-oF multipath devices on the host.

  • Beginning with RHEL 8.2, the NetApp plug-in in the native nvme-cli package is capable of displaying ONTAP details for ONTAP namespaces.
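
For reference, the native udev rule shipped with the nvme-cli package is equivalent to the rule that RHEL 8.1 hosts had to create manually. Its contents look approximately like the following (a sketch; the exact text can vary between package builds):

    # cat /lib/udev/rules.d/71-nvme-iopolicy-netapp-ONTAP.rules
    # Enable round-robin load balancing for NetApp ONTAP
    ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{model}=="NetApp ONTAP Controller", ATTR{iopolicy}="round-robin"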

Known limitations

  • For RHEL 8.2, in-kernel NVMe multipath is disabled by default. Therefore, you need to enable it manually.

  • SAN booting using the NVMe-oF protocol is currently not supported.

Enable NVMe/FC

You can use the following procedure to enable NVMe/FC.

Steps
  1. Install Red Hat Enterprise Linux 8.2 GA on the server.

  2. If you are upgrading from RHEL 8.1 to RHEL 8.2 using yum update/upgrade, your /etc/nvme/host* files might be lost. To avoid losing them, do the following (a consolidated sketch of these sub-steps appears after the list):

    1. Back up your /etc/nvme/host* files.

    2. If you have a manually edited udev rule, remove it:

      /lib/udev/rules.d/71-nvme-iopolicy-netapp-ONTAP.rules
    3. Perform the upgrade.

    4. After the upgrade is complete, run the following command:

      yum remove nvme-cli
    5. Reinstall the nvme-cli package to restore the default files at /etc/nvme/:

      yum install nvme-cli
    6. Copy the original /etc/nvme/host* contents from the backup to the actual host files at /etc/nvme/.
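
    Taken together, these sub-steps amount to the following minimal sketch, where /root/nvme-backup is an assumed example backup location:

      mkdir -p /root/nvme-backup && cp -a /etc/nvme/host* /root/nvme-backup/
      rm -f /lib/udev/rules.d/71-nvme-iopolicy-netapp-ONTAP.rules
      yum upgrade
      yum remove nvme-cli
      yum install nvme-cli
      cp /root/nvme-backup/host* /etc/nvme/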

  3. After the installation is complete, verify that you are running the specified Red Hat Enterprise Linux kernel.

    # uname -r
    4.18.0-193.el8.x86_64

    See the NetApp Interoperability Matrix Tool for the most current list of supported versions.

  4. Install the nvme-cli package.

    # rpm -qa|grep nvme-cli
    nvme-cli-1.9-5.el8.x86_64
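
    If the package is not listed, install it before continuing; for example:

    # yum install nvme-cli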
  5. Enable in-kernel NVMe multipath.

    # grubby --args=nvme_core.multipath=Y --update-kernel /boot/vmlinuz-4.18.0-193.el8.x86_64
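
    As a quick check before rebooting, you can confirm that the argument was recorded on the kernel entry:

    # grubby --info=/boot/vmlinuz-4.18.0-193.el8.x86_64 | grep args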
  6. On the RHEL 8.2 host, check the host NQN string at /etc/nvme/hostnqn and verify that it matches the host NQN string for the corresponding subsystem on the ONTAP array.

    # cat /etc/nvme/hostnqn
    nqn.2014-08.org.nvmexpress:uuid:9ed5b327-b9fc-4cf5-97b3-1b5d986345d1
    
    
    ::> vserver nvme subsystem host show -vserver vs_fcnvme_141
    Vserver      Subsystem        Host           NQN
    ----------- --------------- ----------- ---------------
      vs_fcnvme_141
        nvme_141_1
            nqn.2014-08.org.nvmexpress:uuid:9ed5b327-b9fc-4cf5-97b3-1b5d986345d1

    If the host NQN strings do not match, use the vserver modify command to update the host NQN string on the corresponding ONTAP array subsystem to match the host NQN string from /etc/nvme/hostnqn on the host, as in the sketch below.
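
    The exact ONTAP command depends on your configuration; as a sketch, the host NQN can be registered with the subsystem shown above using vserver nvme subsystem host add:

    ::> vserver nvme subsystem host add -vserver vs_fcnvme_141 -subsystem nvme_141_1 -host-nqn nqn.2014-08.org.nvmexpress:uuid:9ed5b327-b9fc-4cf5-97b3-1b5d986345d1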

  7. Reboot the host.

  8. Update the enable_foreign setting (optional).

    If you intend to run both NVMe and SCSI traffic on the same coexistent RHEL 8.2 host, NetApp recommends using in-kernel NVMe multipath for ONTAP namespaces and dm-multipath for ONTAP LUNs, respectively. You should also blacklist the ONTAP namespaces in dm-multipath to prevent dm-multipath from claiming these namespace devices. You can do this by adding the enable_foreign setting to /etc/multipath.conf, as shown below.

    # cat /etc/multipath.conf
    defaults {
       enable_foreign NONE
    }
  9. Restart the multipathd daemon:
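
    # systemctl restart multipathd

    With enable_foreign set to NONE, multipath -ll should no longer list the ONTAP namespace devices; this is a quick way to confirm that dm-multipath is not claiming them.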

Configure the Broadcom FC adapter for NVMe/FC

You can use the following procedure to configure a Broadcom FC adapter.

For the most current list of supported adapters, see the NetApp Interoperability Matrix Tool.

Steps
  1. Verify that you are using the supported adapter.

    # cat /sys/class/scsi_host/host*/modelname
    LPe32002-M2
    LPe32002-M2
    # cat /sys/class/scsi_host/host*/modeldesc
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
  2. Verify that lpfc_enable_fc4_type is set to "3".

    # cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
    3
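
    A value of 3 enables both the SCSI (FCP) and NVMe FC4 types on the lpfc initiator ports. If the value differs, a common way to set it is through a modprobe option followed by an initramfs rebuild and a reboot (a sketch of standard lpfc module configuration):

    # echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf
    # dracut -f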
  3. Verify that the initiator ports are up and running and can see the target LIFs.

    # cat /sys/class/fc_host/host*/port_name
    0x100000109b1c1204
    0x100000109b1c1205
    # cat /sys/class/fc_host/host*/port_state
    Online
    Online
    # cat /sys/class/scsi_host/host*/nvme_info
    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x100000109b1c1204 WWNN x200000109b1c1204 DID x011d00 ONLINE
    NVME RPORT WWPN x203800a098dfdd91 WWNN x203700a098dfdd91 DID x010c07 TARGET DISCSRVC ONLINE
    NVME RPORT WWPN x203900a098dfdd91 WWNN x203700a098dfdd91 DID x011507 TARGET DISCSRVC ONLINE
    NVME Statistics
    LS: Xmt 0000000f78 Cmpl 0000000f78 Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000002fe29bba Issue 000000002fe29bc4 OutIO 000000000000000a
    abort 00001bc7 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00001e15 Err 0000d906
    NVME Initiator Enabled
    XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc1 WWPN x100000109b1c1205 WWNN x200000109b1c1205 DID x011900 ONLINE
    NVME RPORT WWPN x203d00a098dfdd91 WWNN x203700a098dfdd91 DID x010007 TARGET DISCSRVC ONLINE
    NVME RPORT WWPN x203a00a098dfdd91 WWNN x203700a098dfdd91 DID x012a07 TARGET DISCSRVC ONLINE
    NVME Statistics
    LS: Xmt 0000000fa8 Cmpl 0000000fa8 Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000002e14f170 Issue 000000002e14f17a OutIO 000000000000000a
    abort 000016bb noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00001f50 Err 0000d9f8
  4. Enable 1 MB I/O size (optional).

    The lpfc_sg_seg_cnt parameter needs to be set to 256 for the lpfc driver to issue I/O requests up to 1 MB in size.

    # cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  5. Run the dracut -f command, and then reboot the host:
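
    Running dracut -f rebuilds the initramfs so that the lpfc option set above takes effect at the next boot:

    # dracut -f
    # reboot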

  6. After the host boots up, verify that lpfc_sg_seg_cnt is set to 256.

    # cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
    256
  7. Verify that you are using the recommended Broadcom lpfc firmware as well as the inbox driver.

    # cat /sys/class/scsi_host/host*/fwrev
    12.6.182.8, sli-4:2:c
    12.6.182.8, sli-4:2:c
    # cat /sys/module/lpfc/version
    0:12.6.0.2

Validate NVMe/FC

You can use the following procedure to validate NVMe/FC.

Steps
  1. Verify the following NVMe/FC settings.

    # cat /sys/module/nvme_core/parameters/multipath
    Y
    # cat /sys/class/nvme-subsystem/nvme-subsys*/model
    NetApp ONTAP Controller
    NetApp ONTAP Controller
    # cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
    round-robin
    round-robin
  2. Verify that the namespaces are created.

    # nvme list
    Node           SN                   Model                    Namespace Usage                   Format        FW Rev
    -------------- -------------------- ------------------------ --------- ----------------------- ------------- --------
    /dev/nvme0n1   80BADBKnB/JvAAAAAAAC NetApp ONTAP Controller  1         53.69 GB / 53.69 GB     4 KiB + 0 B   FFFFFFFF
  3. Verify the status of the ANA paths.

    # nvme list-subsys /dev/nvme0n1
    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.341541339b9511e8a9b500a098c80f09:subsystem.rhel_141_nvme_ss_10_0
    \
    +- nvme0 fc traddr=nn-0x202c00a098c80f09:pn-0x202d00a098c80f09 host_traddr=nn-0x20000090fae0ec61:pn-0x10000090fae0ec61 live optimized
    +- nvme1 fc traddr=nn-0x207300a098dfdd91:pn-0x207600a098dfdd91 host_traddr=nn-0x200000109b1c1204:pn-0x100000109b1c1204 live inaccessible
    +- nvme2 fc traddr=nn-0x207300a098dfdd91:pn-0x207500a098dfdd91 host_traddr=nn-0x200000109b1c1205:pn-0x100000109b1c1205 live optimized
    +- nvme3 fc traddr=nn-0x207300a098dfdd91:pn-0x207700a098dfdd91 host_traddr=nn-0x200000109b1c1205:pn-0x100000109b1c1205 live inaccessible
  4. Verify the NetApp plug-in for ONTAP devices.

    # nvme netapp ontapdevices -o column
    Device         Vserver      Namespace Path                            NSID  UUID                                   Size
    -------------- ------------ ----------------------------------------- ----- -------------------------------------- --------
    /dev/nvme0n1   vs_nvme_10   /vol/rhel_141_vol_10_0/rhel_141_ns_10_0   1     55baf453-f629-4a18-9364-b6aee3f50dad   53.69GB
    
    # nvme netapp ontapdevices -o json
    {
      "ONTAPdevices" : [
        {
          "Device" : "/dev/nvme0n1",
          "Vserver" : "vs_nvme_10",
          "Namespace_Path" : "/vol/rhel_141_vol_10_0/rhel_141_ns_10_0",
          "NSID" : 1,
          "UUID" : "55baf453-f629-4a18-9364-b6aee3f50dad",
          "Size" : "53.69GB",
          "LBA_Data_Size" : 4096,
          "Namespace_Size" : 13107200
        }
      ]
    }