
NVMe/FC Host Configuration for RHEL 8.2 with ONTAP


Supportability

NVMe/FC is supported on ONTAP 9.6 or later for RHEL 8.2. The RHEL 8.2 host runs both NVMe and SCSI traffic through the same Fibre Channel (FC) initiator adapter ports. See the Hardware Universe for a list of supported FC adapters and controllers. For the most current list of supported configurations and versions, see the NetApp Interoperability Matrix.

Known limitations

For RHEL 8.2, in-kernel NVMe multipath remains disabled by default. Therefore, you need to enable it manually. Steps for doing so are provided in the next section, "Enabling NVMe/FC".
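
Before enabling it, you can check the current state of in-kernel NVMe multipath by reading the same kernel module parameter that is verified later in this guide; on a default RHEL 8.2 installation this check typically returns N.

    # cat /sys/module/nvme_core/parameters/multipath
    N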

Enabling NVMe/FC

  1. Install Red Hat Enterprise Linux 8.2 GA on the server.

    If you are upgrading from RHEL 8.1 to RHEL 8.2 using yum update/upgrade, you might lose all /etc/nvme/host* files (per BURT 1321617). To work around this, make a backup of these files before the upgrade. In addition, remove the manually edited udev rule at /lib/udev/rules.d/71-nvme-iopolicy-netapp-ONTAP.rules (if it exists). After you have upgraded to RHEL 8.2, run yum remove nvme-cli, and then run yum install nvme-cli to restore the host files at /etc/nvme/. Finally, copy the original /etc/nvme/host* contents from the backup back to the host files at /etc/nvme/. A minimal sketch of this workaround is shown below.
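
    The following sketch illustrates the workaround; the backup directory /root/nvme-backup is only an example, and your upgrade procedure may differ.

    # mkdir -p /root/nvme-backup
    # cp -p /etc/nvme/host* /root/nvme-backup/
    # rm -f /lib/udev/rules.d/71-nvme-iopolicy-netapp-ONTAP.rules

    After the upgrade to RHEL 8.2 completes, reinstall nvme-cli and restore the backed-up files:

    # yum remove nvme-cli
    # yum install nvme-cli
    # cp -p /root/nvme-backup/host* /etc/nvme/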

  2. After the installation is complete, verify that you’re running the specified Red Hat Enterprise Linux kernel.

    # uname -r
    4.18.0-193.el8.x86_64

    See the NetApp Interoperability Matrix for the most current list of supported versions.

  3. Install the nvme-cli package.

    # rpm -qa|grep nvme-cli
    nvme-cli-1.9.5.el8.x86_64
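
    If the package is not already present, you can install it from the standard repositories before checking the version; this is a minimal example, and your repository configuration may differ.

    # yum install nvme-cli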
  4. Enable in-kernel NVMe multipath.

    # grubby --args=nvme_core.multipath=Y --update-kernel /boot/vmlinuz-4.18.0-193.el8.x86_64
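
    Before rebooting, you can optionally confirm that the argument was added to the kernel entry. This check is a sketch that assumes the same kernel path used above; the reported args line should include nvme_core.multipath=Y.

    # grubby --info=/boot/vmlinuz-4.18.0-193.el8.x86_64 | grep args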
  5. On the RHEL 8.2 host, check the host NQN string at /etc/nvme/hostnqn and verify that it matches the host NQN string for the corresponding subsystem on the ONTAP array.

    # cat /etc/nvme/hostnqn
    nqn.2014-08.org.nvmexpress:uuid:9ed5b327-b9fc-4cf5-97b3-1b5d986345d1
    
    
    ::> vserver nvme subsystem host show -vserver vs_fcnvme_141
    Vserver      Subsystem        Host           NQN
    ----------- --------------- ----------- ---------------
      vs_fcnvme_141
        nvme_141_1
            nqn.2014-08.org.nvmexpress:uuid:9ed5b327-b9fc-4cf5-97b3-1b5d986345d1

    If the host NQN strings do not match, use the vserver modify command to update the host NQN string on the corresponding ONTAP array subsystem so that it matches the host NQN string from /etc/nvme/hostnqn on the host.
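
    For example, one way to do this is to add the host NQN reported by the host to the subsystem with the vserver nvme subsystem host add command; the vserver and subsystem names below are taken from the output above, and the exact command form can vary by ONTAP release.

    ::> vserver nvme subsystem host add -vserver vs_fcnvme_141 -subsystem nvme_141_1 -host-nqn nqn.2014-08.org.nvmexpress:uuid:9ed5b327-b9fc-4cf5-97b3-1b5d986345d1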

  6. Reboot the host.

  7. Update the enable_foreign setting (optional).

    If you intend to run both NVMe and SCSI traffic on the same co-existent RHEL 8.2 host, we recommend using in-kernel NVMe multipath for ONTAP namespaces and dm-multipath for ONTAP LUNs, respectively. You should also blacklist the ONTAP namespaces in dm-multipath to prevent dm-multipath from claiming these namespace devices. You do this by adding the enable_foreign setting to /etc/multipath.conf, as shown below.

    # cat /etc/multipath.conf
    defaults {
       enable_foreign NONE
    }
  8. Restart the multipathd daemon by running the systemctl restart multipathd command.
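
    After the restart, you can optionally confirm that dm-multipath no longer claims the ONTAP namespace devices; a minimal check (exact results depend on your configuration) is that no NVMe devices appear in the multipath topology.

    # multipath -ll | grep -i nvme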

Validating NVMe/FC

  1. Verify the following NVMe/FC settings.

    # cat /sys/module/nvme_core/parameters/multipath
    Y
    # cat /sys/class/nvme-subsystem/nvme-subsys*/model
    NetApp ONTAP Controller
    NetApp ONTAP Controller
    # cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
    round-robin
    round-robin
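
    If iopolicy does not report round-robin, you can change it at run time through sysfs. The following is a minimal sketch; the subsystem name nvme-subsys0 is only an example, and the setting does not persist across reboots without a udev rule such as the one referenced earlier.

    # echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy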
  2. Verify that the namespaces are created.

    # nvme list
    Node             SN                   Model                    Namespace Usage                 Format        FW Rev
    ---------------- -------------------- ------------------------ --------- --------------------- ------------- --------
    /dev/nvme0n1     80BADBKnB/JvAAAAAAAC NetApp ONTAP Controller  1         53.69 GB / 53.69 GB   4 KiB + 0 B   FFFFFFFF
  3. Verify the status of the ANA paths.

    # nvme list-subsys /dev/nvme0n1
    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.341541339b9511e8a9b500a098c80f09:subsystem.rhel_141_nvme_ss_10_0
    \
    +- nvme0 fc traddr=nn-0x202c00a098c80f09:pn-0x202d00a098c80f09 host_traddr=nn-0x20000090fae0ec61:pn-0x10000090fae0ec61 live optimized
    +- nvme1 fc traddr=nn-0x207300a098dfdd91:pn-0x207600a098dfdd91 host_traddr=nn-0x200000109b1c1204:pn-0x100000109b1c1204 live inaccessible
    +- nvme2 fc traddr=nn-0x207300a098dfdd91:pn-0x207500a098dfdd91 host_traddr=nn-0x200000109b1c1205:pn-0x100000109b1c1205 live optimized
    +- nvme3 fc traddr=nn-0x207300a098dfdd91:pn-0x207700a098dfdd91 host_traddr=nn-0x200000109b1c1205:pn-0x100000109b1c1205 live inaccessible
  4. Verify the NetApp plug-in for ONTAP devices.

    # nvme netapp ontapdevices -o column
    Device   Vserver  Namespace Path             NSID   UUID   Size
    -------  -------- -------------------------  ------ ----- -----
    /dev/nvme0n1   vs_nvme_10       /vol/rhel_141_vol_10_0/rhel_141_ns_10_0    1        55baf453-f629-4a18-9364-b6aee3f50dad   53.69GB
    
    # nvme netapp ontapdevices -o json
    {
       "ONTAPdevices" : [
          {
             "Device" : "/dev/nvme0n1",
             "Vserver" : "vs_nvme_10",
             "Namespace_Path" : "/vol/rhel_141_vol_10_0/rhel_141_ns_10_0",
             "NSID" : 1,
             "UUID" : "55baf453-f629-4a18-9364-b6aee3f50dad",
             "Size" : "53.69GB",
             "LBA_Data_Size" : 4096,
             "Namespace_Size" : 13107200
          }
       ]
    }
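
    If you want to consume this output in scripts, the JSON form can be filtered with standard tools. The following is a minimal sketch that assumes jq is installed (it is not part of the nvme-cli package).

    # nvme netapp ontapdevices -o json | jq -r '.ONTAPdevices[].Namespace_Path'
    /vol/rhel_141_vol_10_0/rhel_141_ns_10_0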

Configuring the Broadcom FC Adapter

For the most current list of supported adapters, see the NetApp Interoperability Matrix.

  1. Verify that you are using the supported adapter.

    # cat /sys/class/scsi_host/host*/modelname
    LPe32002-M2
    LPe32002-M2
    # cat /sys/class/scsi_host/host*/modeldesc
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
  2. Verify that you are using the recommended Broadcom lpfc firmware as well as the inbox driver.

    # cat /sys/class/scsi_host/host*/fwrev
    12.6.182.8, sli-4:2:c
    12.6.182.8, sli-4:2:c
    # cat /sys/module/lpfc/version
    0:12.6.0.2
  3. Verify that lpfc_enable_fc4_type is set to "3".

    # cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
    3
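
    If the parameter is not set to 3, you can set it persistently through the lpfc module options and rebuild the initramfs, in the same way that later steps in this guide set other lpfc options. This is a sketch; if /etc/modprobe.d/lpfc.conf already contains an options lpfc line, merge the setting into that line instead of appending a new one.

    # echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf
    # dracut -f
    # reboot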
  4. Verify that the initiator ports are up and running and are able to see the target LIFs.

    # cat /sys/class/fc_host/host*/port_name
    0x100000109b1c1204
    0x100000109b1c1205
    # cat /sys/class/fc_host/host*/port_state
    Online
    Online
    # cat /sys/class/scsi_host/host*/nvme_info
    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x100000109b1c1204 WWNN x200000109b1c1204 DID x011d00 ONLINE
    NVME RPORT WWPN x203800a098dfdd91 WWNN x203700a098dfdd91 DID x010c07 TARGET DISCSRVC ONLINE
    NVME RPORT WWPN x203900a098dfdd91 WWNN x203700a098dfdd91 DID x011507 TARGET DISCSRVC ONLINE
    NVME Statistics
    LS: Xmt 0000000f78 Cmpl 0000000f78 Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000002fe29bba Issue 000000002fe29bc4 OutIO 000000000000000a
    abort 00001bc7 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00001e15 Err 0000d906
    NVME Initiator Enabled
    XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc1 WWPN x100000109b1c1205 WWNN x200000109b1c1205 DID x011900 ONLINE
    NVME RPORT WWPN x203d00a098dfdd91 WWNN x203700a098dfdd91 DID x010007 TARGET DISCSRVC ONLINE
    NVME RPORT WWPN x203a00a098dfdd91 WWNN x203700a098dfdd91 DID x012a07 TARGET DISCSRVC ONLINE
    NVME Statistics
    LS: Xmt 0000000fa8 Cmpl 0000000fa8 Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000002e14f170 Issue 000000002e14f17a OutIO 000000000000000a
    abort 000016bb noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00001f50 Err 0000d9f8
  5. Enable 1 MB I/O size (optional).

    The lpfc_sg_seg_cnt parameter needs to be set to 256 for the lpfc driver to issue I/O requests up to 1 MB in size.

    # cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  6. Run a dracut -f command and then reboot the host.

  7. After the host boots up, verify that lpfc_sg_seg_cnt is set to 256.

    # cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
    256
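
    You can also optionally confirm that the NVMe block devices now accept 1 MB requests by checking the queue limits. This is a sketch; the device name nvme0n1 is taken from the earlier examples, and a reported value of 1024 (KiB) indicates that 1 MB I/O is enabled.

    # cat /sys/block/nvme0n1/queue/max_hw_sectors_kb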

LPFC Verbose Logging

  1. You can set the lpfc_log_verbose driver setting to any of the following values to log NVMe/FC events.

    #define LOG_NVME 0x00100000 /* NVME general events. */
    #define LOG_NVME_DISC 0x00200000 /* NVME Discovery/Connect events. */
    #define LOG_NVME_ABTS 0x00400000 /* NVME ABTS events. */
    #define LOG_NVME_IOERR 0x00800000 /* NVME IO Error events. */
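
    These values are bitmask flags that can be combined with a bitwise OR; for example, enabling all four NVMe logging classes gives 0xf00000. The following is a minimal sketch of setting the value persistently; if /etc/modprobe.d/lpfc.conf already contains an options lpfc line (as in the example in step 3), add lpfc_log_verbose to that line instead of appending a new one.

    # echo "options lpfc lpfc_log_verbose=0xf00000" >> /etc/modprobe.d/lpfc.conf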
  2. After setting any of these values, run the dracut -f command and reboot the host.

  3. After rebooting, verify the settings.

    # cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_enable_fc4_type=3 lpfc_log_verbose=0xf00083
    
    # cat /sys/module/lpfc/parameters/lpfc_log_verbose
    15728771