NVMe-oF host configuration for RHEL 9.5 with ONTAP

NetApp SAN host configurations support the NVMe over Fabrics (NVMe-oF) protocol with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is equivalent to asymmetric logical unit access (ALUA) multipathing in iSCSI and FCP environments. ANA is implemented using the in-kernel NVMe multipath feature.

About this task

You can use the following support and features with the NVMe-oF host configuration for Red Hat Enterprise Linux (RHEL) 9.5. You should also review the known limitations before starting the configuration process.

  • Support available:

    • Support for NVMe over TCP (NVMe/TCP) in addition to NVMe over Fibre Channel (NVMe/FC). The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.

    • Running both NVMe and SCSI traffic on the same host. For example, you can configure dm-multipath for SCSI mpath devices on SCSI LUNs and use in-kernel NVMe multipath for the NVMe-oF namespace devices on the host (see the multipath.conf sketch after this list).

      For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.

  • Features available:

    • Beginning with ONTAP 9.12.1, secure in-band authentication is supported for NVMe-oF. You can use it with RHEL 9.5.

    • RHEL 9.5 enables in-kernel NVMe multipath for NVMe namespaces by default, removing the need for explicit settings.

    • Support for SAN booting using the NVMe/FC protocol.

  • Known limitations:

    • There are no known limitations.
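
When SCSI and NVMe traffic coexist on the same host, you might want dm-multipath to ignore NVMe block devices so that only in-kernel NVMe multipath claims the NVMe-oF namespaces. The following /etc/multipath.conf fragment is a minimal sketch, assuming a default RHEL 9.5 multipathd setup; the blacklist pattern is illustrative, not a NetApp-mandated setting:

    # /etc/multipath.conf (illustrative fragment)
    # Keep dm-multipath managing SCSI mpath devices only; in-kernel
    # NVMe multipath already handles NVMe-oF namespaces on RHEL 9.5.
    blacklist {
        devnode "^nvme.*"
    }

After editing the file, apply the change with the multipathd reconfigure command.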

Validate software versions

You can use the following procedure to validate the minimum supported RHEL 9.5 software versions.

Steps
  1. Install RHEL 9.5 on the server. After the installation is complete, verify that you are running the specified RHEL 9.5 kernel:

    uname -r
    5.14.0-503.11.1.el9_5.x86_64
  2. Verify that the nvme-cli package is installed:

    rpm -qa | grep nvme-cli
    nvme-cli-2.9.1-6.el9.x86_64
  3. Verify that the libnvme package is installed:

    rpm -qa | grep libnvme
    libnvme-1.9-3.el9.x86_64
  4. On the RHEL 9.5 host, check the hostnqn string at /etc/nvme/hostnqn:

    cat /etc/nvme/hostnqn
    nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
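
    If /etc/nvme/hostnqn is missing or empty, you can generate a host NQN with the nvme gen-hostnqn command from nvme-cli (a minimal recovery sketch; verify the result before use):

    # Generate a new host NQN and store it where nvme-cli expects it
    nvme gen-hostnqn > /etc/nvme/hostnqn
    cat /etc/nvme/hostnqn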
  5. Verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:

    ::> vserver nvme subsystem host show -vserver vs_coexistence_LPE36002
    Vserver Subsystem Priority  Host NQN
    ------- --------- --------  ------------------------------------------------
    vs_coexistence_LPE36002
            nvme
                      regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
            nvme_1
                      regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
            nvme_2
                      regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
            nvme_3
                      regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
    4 entries were displayed.
    Note If the hostnqn strings do not match, use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.

Configure NVMe/FC

You can configure NVMe/FC with Broadcom/Emulex FC or Marvell/QLogic FC adapters. For NVMe/FC configured with a Broadcom adapter, you can enable I/O requests of size 1 MB.

Broadcom/Emulex
Steps
  1. Verify that you are using the supported adapter model:

    1. cat /sys/class/scsi_host/host*/modelname

      LPe36002-M64
      LPe36002-M64
    2. cat /sys/class/scsi_host/host*/modeldesc

      Emulex LightPulse LPe36002-M64 2-Port 64Gb Fibre Channel Adapter
      Emulex LightPulse LPe36002-M64 2-Port 64Gb Fibre Channel Adapter
  2. Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:

    1. cat /sys/class/scsi_host/host*/fwrev

      14.4.317.10, sli-4:6:d
      14.4.317.10, sli-4:6:d
    2. cat /sys/module/lpfc/version

      0:14.4.0.2

    For the most current list of supported adapter driver and firmware versions, see the NetApp Interoperability Matrix Tool.

  3. Verify that lpfc_enable_fc4_type is set to 3:

    cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type

    3
  4. Verify that you can view your initiator ports:

    cat /sys/class/fc_host/host*/port_name

    0x100000109bf044b1
    0x100000109bf044b2
  5. Verify that your initiator ports are online:

    cat /sys/class/fc_host/host*/port_state

    Online
    Online
  6. Verify that the NVMe/FC initiator ports are enabled and that the target ports are visible:

    cat /sys/class/scsi_host/host*/nvme_info

    NVME Initiator Enabled
    XRI Dist lpfc2 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc2 WWPN x100000109bf044b1 WWNN x200000109bf044b1 DID x022a00 ONLINE
    NVME RPORT       WWPN x202fd039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x021310 TARGET DISCSRVC ONLINE
    NVME RPORT       WWPN x202dd039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x020b10 TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 0000000810 Cmpl 0000000810 Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000007b098f07 Issue 000000007aee27c4 OutIO ffffffffffe498bd
            abort 000013b4 noxri 00000000 nondlp 00000058 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 000013b4 Err 00021443
    
    NVME Initiator Enabled
    XRI Dist lpfc3 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc3 WWPN x100000109bf044b2 WWNN x200000109bf044b2 DID x021b00 ONLINE
    NVME RPORT       WWPN x2033d039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x020110 TARGET DISCSRVC ONLINE
    NVME RPORT       WWPN x2032d039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x022910 TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 0000000840 Cmpl 0000000840 Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000007afd4434 Issue 000000007ae31b83 OutIO ffffffffffe5d74f
            abort 000014a5 noxri 00000000 nondlp 0000006a qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 000014a5 Err 0002149a
Marvell/QLogic

Configure NVMe/FC for a Marvell/QLogic adapter.

Note The native inbox qla2xxx driver included in the RHEL 9.5 GA kernel has the latest fixes. These fixes are essential for ONTAP support.
Steps
  1. Verify that you are running the supported adapter driver and firmware versions:

    cat /sys/class/fc_host/host*/symbolic_name
    QLE2742 FW:v9.14.00 DVR:v10.02.09.200-k
    QLE2742 FW:v9.14.00 DVR:v10.02.09.200-k
  2. Verify that ql2xnvmeenable is set. This enables the Marvell adapter to function as an NVMe/FC initiator:

    cat /sys/module/qla2xxx/parameters/ql2xnvmeenable

    The expected output is 1.
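
    If the output is 0, you can enable the parameter persistently through modprobe.d. A minimal sketch, assuming a standard RHEL module-option workflow (the file name qla2xxx.conf is illustrative):

    # Enable NVMe/FC initiator mode for the qla2xxx driver at boot
    echo "options qla2xxx ql2xnvmeenable=1" > /etc/modprobe.d/qla2xxx.conf
    dracut -f    # rebuild the initramfs so the option is picked up
    reboot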

Enable 1 MB I/O (Optional)

ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data, which means the maximum I/O request size can be up to 1 MB (2^8 × 4 KB, based on the 4 KB minimum memory page size). To issue I/O requests of size 1 MB for a Broadcom NVMe/FC host, you should increase the value of the lpfc_sg_seg_cnt parameter from the default of 64 to 256.

Note These steps don't apply to QLogic NVMe/FC hosts.
Steps
  1. Set the lpfc_sg_seg_cnt parameter to 256 in the /etc/modprobe.d/lpfc.conf file:

    cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  2. Run the dracut -f command, and reboot the host.
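
    For example:

    dracut -f
    reboot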

  3. Verify that the value of lpfc_sg_seg_cnt is 256:

    cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt

    The expected output is 256.

Configure NVMe/TCP

The NVMe/TCP protocol doesn't support the auto-connect operation. Instead, you can discover the NVMe/TCP subsystems and namespaces by performing the NVMe/TCP connect or connect-all operations manually.

Steps
  1. Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

    nvme discover -t tcp -w host-traddr -a traddr
    nvme discover -t tcp -w 192.168.1.31 -a 192.168.1.24
    
    Discovery Log Number of Records 20, Generation counter 25
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  4
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery
    traddr:  192.168.2.25
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  2
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery
    traddr:  192.168.1.25
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 2======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  5
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery
    traddr:  192.168.2.24
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 3======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  1
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:discovery
    traddr:  192.168.1.24
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 4======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  4
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_1
    traddr:  192.168.2.25
    eflags:  none
    sectype: none
    =====Discovery Log Entry 5======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  2
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_1
    traddr:  192.168.1.25
    eflags:  none
    sectype: none
    =====Discovery Log Entry 6======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  5
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_1
    traddr:  192.168.2.24
    eflags:  none
    sectype: none
    =====Discovery Log Entry 7======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  1
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_1
    traddr:  192.168.1.24
    eflags:  none
    sectype: none
    =====Discovery Log Entry 8======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  4
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_4
    traddr:  192.168.2.25
    eflags:  none
    sectype: none
    =====Discovery Log Entry 9======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  2
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_4
    traddr:  192.168.1.25
    eflags:  none
    sectype: none
    =====Discovery Log Entry 10======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  5
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_4
    traddr:  192.168.2.24
    eflags:  none
    sectype: none
    =====Discovery Log Entry 11======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  1
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_4
    traddr:  192.168.1.24
    eflags:  none
    sectype: none
    =====Discovery Log Entry 12======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  4
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
    traddr:  192.168.2.25
    eflags:  none
    sectype: none
    =====Discovery Log Entry 13======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  2
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
    traddr:  192.168.1.25
    eflags:  none
    sectype: none
    =====Discovery Log Entry 14======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  5
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
    traddr:  192.168.2.24
    eflags:  none
    sectype: none
    =====Discovery Log Entry 15======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  1
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
    traddr:  192.168.1.24
    eflags:  none
    sectype: none
    =====Discovery Log Entry 16======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  4
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_2
    traddr:  192.168.2.25
    eflags:  none
    sectype: none
    =====Discovery Log Entry 17======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  2
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_2
    traddr:  192.168.1.25
    eflags:  none
    sectype: none
    =====Discovery Log Entry 18======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  5
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_2
    traddr:  192.168.2.24
    eflags:  none
    sectype: none
    =====Discovery Log Entry 19======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  1
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_2
    traddr:  192.168.1.24
    eflags:  none
    sectype: none
  2. Verify that the other NVMe/TCP initiator-target LIF combinations are able to successfully fetch discovery log page data:

    nvme discover -t tcp -w host-traddr -a traddr
    nvme discover -t tcp -w 192.168.1.31 -a 192.168.1.24
    nvme discover -t tcp -w 192.168.2.31 -a 192.168.2.24
    nvme discover -t tcp -w 192.168.1.31 -a 192.168.1.25
    nvme discover -t tcp -w 192.168.2.31 -a 192.168.2.25
  3. Run the nvme connect-all command for all the supported NVMe/TCP initiator-target LIFs across both nodes:

    nvme connect-all -t tcp -w host-traddr -a traddr
    nvme connect-all -t tcp -w 192.168.1.31 -a 192.168.1.24
    nvme connect-all -t tcp -w 192.168.2.31 -a 192.168.2.24
    nvme connect-all -t tcp -w 192.168.1.31 -a 192.168.1.25
    nvme connect-all -t tcp -w 192.168.2.31 -a 192.168.2.25
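
    If you script the connections, a small loop keeps the invocations consistent across LIF pairs. A minimal bash sketch, reusing the host and target addresses from the preceding example (substitute your own LIFs):

    # Connect every host-traddr/traddr LIF pair shown above.
    for pair in "192.168.1.31 192.168.1.24" \
                "192.168.2.31 192.168.2.24" \
                "192.168.1.31 192.168.1.25" \
                "192.168.2.31 192.168.2.25"; do
        set -- $pair                      # $1 = host-traddr, $2 = traddr
        nvme connect-all -t tcp -w "$1" -a "$2"
    done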
Note Beginning with RHEL 9.5, the NVMe/TCP ctrl_loss_tmo timeout is turned off by default, which means there is no limit on the number of retries (indefinite retry). As a result, you don't need to manually configure a specific ctrl_loss_tmo timeout duration when using the nvme connect or nvme connect-all commands (the -l option). With this default behavior, the NVMe/TCP controllers don't experience timeouts in the event of a path failure and remain connected indefinitely.
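
If you prefer a bounded retry window instead of the indefinite default, you can still pass the -l option explicitly. A hedged example using one of the LIF pairs above; the 1800-second value is illustrative:

    # Limit controller loss handling to 1800 seconds (illustrative value)
    nvme connect-all -t tcp -w 192.168.1.31 -a 192.168.1.24 -l 1800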

Validate NVMe-oF

To support correct operation for ONTAP namespaces, verify that the in-kernel NVMe multipath status, the ANA status, and the ONTAP namespaces are correct for the NVMe-oF configuration.

Steps
  1. Verify that the in-kernel NVMe multipath is enabled:

    cat /sys/module/nvme_core/parameters/multipath
    Y
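
    RHEL 9.5 enables in-kernel NVMe multipath by default. If the output is N on a customized system, one way to re-enable it is through the kernel command line. A hedged sketch using standard RHEL grubby tooling:

    # Add the module parameter to every installed kernel entry, then reboot
    grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
    reboot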
  2. Verify that the appropriate NVMe-oF settings (such as the model set to NetApp ONTAP Controller and the load-balancing iopolicy set to round-robin) for the respective ONTAP namespaces are correctly reflected on the host:

    1. cat /sys/class/nvme-subsystem/nvme-subsys*/model

      NetApp ONTAP Controller
      NetApp ONTAP Controller
    2. cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

      round-robin
      round-robin
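
    If a subsystem reports a different iopolicy, you can switch it at runtime through the same sysfs attribute (an illustrative example; nvme-subsys0 is a placeholder for your subsystem):

    # Set the load-balancing policy for one NVMe subsystem
    echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy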
  3. Verify that the namespaces are created and correctly discovered on the host:

    nvme list
    Node          SN                    Model
    ------------- --------------------- -----------------------
    /dev/nvme4n1  81Ix2BVuekWcAAAAAAAB  NetApp ONTAP Controller

    Namespace  Usage                Format        FW Rev
    ---------- -------------------- ------------- --------
    1          21.47 GB / 21.47 GB  4 KiB + 0 B   FFFFFFFF
  4. Verify that the controller state of each path is live and has the correct ANA status:

    NVMe/FC
    nvme list-subsys /dev/nvme4n5
    nvme-subsys4 - NQN=nqn.1992-08.com.netapp:sn.3a5d31f5502c11ef9f50d039eab6cb6d:subsystem.nvme_1
                   hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6dade64-216d-11ec-b7bb-7ed30a5482c3
    iopolicy=round-robin
    \
    +- nvme1 fc traddr=nn-0x2082d039eaa7dfc8:pn-0x2088d039eaa7dfc8,host_traddr=nn-0x20000024ff752e6d:pn-0x21000024ff752e6d live optimized
    +- nvme12 fc traddr=nn-0x2082d039eaa7dfc8:pn-0x208ad039eaa7dfc8,host_traddr=nn-0x20000024ff752e6d:pn-0x21000024ff752e6d live non-optimized
    +- nvme10 fc traddr=nn-0x2082d039eaa7dfc8:pn-0x2087d039eaa7dfc8,host_traddr=nn-0x20000024ff752e6c:pn-0x21000024ff752e6c live non-optimized
    +- nvme3 fc traddr=nn-0x2082d039eaa7dfc8:pn-0x2083d039eaa7dfc8,host_traddr=nn-0x20000024ff752e6c:pn-0x21000024ff752e6c live optimized
    NVMe/TCP
    nvme list-subsys /dev/nvme1n1
    nvme-subsys5 - NQN=nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.nvme_tcp_3
                   hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b5c04f444d33
    iopolicy=round-robin
    \
    +- nvme13 tcp traddr=192.168.2.25,trsvcid=4420,host_traddr=192.168.2.31,src_addr=192.168.2.31 live optimized
    +- nvme14 tcp traddr=192.168.2.24,trsvcid=4420,host_traddr=192.168.2.31,src_addr=192.168.2.31 live non-optimized
    +- nvme5 tcp traddr=192.168.1.25,trsvcid=4420,host_traddr=192.168.1.31,src_addr=192.168.1.31 live optimized
    +- nvme6 tcp traddr=192.168.1.24,trsvcid=4420,host_traddr=192.168.1.31,src_addr=192.168.1.31 live non-optimized
  5. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

    Column
    nvme netapp ontapdevices -o column
    Device        Vserver             Namespace Path
    ------------- ------------------- -------------------------------
    /dev/nvme1n1  linux_tcnvme_iscsi  /vol/tcpnvme_1_0_0/tcpnvme_ns

    NSID  UUID                                  Size
    ----- ------------------------------------- --------
    1     5f7f630d-8ea5-407f-a490-484b95b15dd6  21.47GB
    JSON
    nvme netapp ontapdevices -o json
    {
      "ONTAPdevices":[
        {
          "Device":"/dev/nvme1n1",
          "Vserver":"linux_tcnvme_iscsi",
          "Namespace_Path":"/vol/tcpnvme_1_0_0/tcpnvme_ns",
          "NSID":1,
          "UUID":"5f7f630d-8ea5-407f-a490-484b95b15dd6",
          "Size":"21.47GB",
          "LBA_Data_Size":4096,
          "Namespace_Size":5242880
        }
      ]
    }

Set up secure in-band authentication

Beginning with ONTAP 9.12.1, secure in-band authentication is supported over NVMe/TCP and NVMe/FC between a RHEL 9.5 host and an ONTAP controller.

To set up secure authentication, each host or controller must be associated with a DH-HMAC-CHAP key, which is a combination of the NQN of the NVMe host or controller and an authentication secret configured by the administrator. To authenticate its peer, an NVMe host or controller must recognize the key associated with the peer.

You can set up secure in-band authentication using the CLI or a config JSON file. If you need to specify different dhchap keys for different subsystems, you must use a config JSON file.

CLI

Set up secure in-band authentication using the CLI.

Steps
  1. Obtain the host NQN:

    cat /etc/nvme/hostnqn
  2. Generate the dhchap key for the RHEL 9.5 host.

    The following describes the gen-dhchap-key command parameters:

    nvme gen-dhchap-key -s optional_secret -l key_length {32|48|64} -m HMAC_function {0|1|2|3} -n host_nqn
    • -s: secret key, in hexadecimal characters, used to initialize the host key
    • -l: length of the resulting key in bytes
    • -m: HMAC function to use for key transformation (0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512)
    • -n: host NQN to use for key transformation

    In the following example, a random dhchap key with HMAC set to 3 (SHA-512) is generated.

    # nvme gen-dhchap-key -m 3 -n nqn.2014-08.org.nvmexpress:uuid:e6dade64-216d-11ec-b7bb-7ed30a5482c3
    DHHC-1:03:1CFivw9ccz58gAcOUJrM7Vs98hd2ZHSr+iw+Amg6xZPl5D2Yk+HDTZiUAg1iGgxTYqnxukqvYedA55Bw3wtz6sJNpR4=:
  3. On the ONTAP controller, add the host and specify both dhchap keys:

    vserver nvme subsystem host add -vserver <svm_name> -subsystem <subsystem> -host-nqn <host_nqn> -dhchap-host-secret <authentication_host_secret> -dhchap-controller-secret <authentication_controller_secret> -dhchap-hash-function {sha-256|sha-512} -dhchap-group {none|2048-bit|3072-bit|4096-bit|6144-bit|8192-bit}
  4. A host supports two authentication methods: unidirectional and bidirectional. On the host, connect to the ONTAP controller and specify the dhchap keys based on the chosen authentication method:

    nvme connect -t tcp -w <host-traddr> -a <tr-addr> -n <host_nqn> -S <authentication_host_secret> -C <authentication_controller_secret>
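
    The preceding command shows the bidirectional form, where both the host secret (-S) and the controller secret (-C) are supplied. For unidirectional authentication, a sketch that omits the controller secret:

    nvme connect -t tcp -w <host-traddr> -a <tr-addr> -n <host_nqn> -S <authentication_host_secret>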
  5. Validate the nvme connect authentication by verifying the host and controller dhchap keys:

    1. Verify the host dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_secret
      Example output for a unidirectional configuration:
      # cat /sys/class/nvme-subsystem/nvme-subsys1/nvme*/dhchap_secret
      DHHC-1:01:iM63E6cX7G5SOKKOju8gmzM53qywsy+C/YwtzxhIt9ZRz+ky:
      DHHC-1:01:iM63E6cX7G5SOKKOju8gmzM53qywsy+C/YwtzxhIt9ZRz+ky:
      DHHC-1:01:iM63E6cX7G5SOKKOju8gmzM53qywsy+C/YwtzxhIt9ZRz+ky:
      DHHC-1:01:iM63E6cX7G5SOKKOju8gmzM53qywsy+C/YwtzxhIt9ZRz+ky:
    2. Verify the controller dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_ctrl_secret
      Example output for a bidirectional configuration:
      # cat /sys/class/nvme-subsystem/nvme-subsys6/nvme*/dhchap_ctrl_secret
      DHHC-1:03:1CFivw9ccz58gAcOUJrM7Vs98hd2ZHSr+iw+Amg6xZPl5D2Yk+HDTZiUAg1iGgxTYqnxukqvYedA55Bw3wtz6sJNpR4=:
      DHHC-1:03:1CFivw9ccz58gAcOUJrM7Vs98hd2ZHSr+iw+Amg6xZPl5D2Yk+HDTZiUAg1iGgxTYqnxukqvYedA55Bw3wtz6sJNpR4=:
      DHHC-1:03:1CFivw9ccz58gAcOUJrM7Vs98hd2ZHSr+iw+Amg6xZPl5D2Yk+HDTZiUAg1iGgxTYqnxukqvYedA55Bw3wtz6sJNpR4=:
      DHHC-1:03:1CFivw9ccz58gAcOUJrM7Vs98hd2ZHSr+iw+Amg6xZPl5D2Yk+HDTZiUAg1iGgxTYqnxukqvYedA55Bw3wtz6sJNpR4=:
JSON file

When multiple NVMe subsystems are available on the ONTAP controller, you can use the /etc/nvme/config.json file with the nvme connect-all command.

To generate the JSON file, you can use the -o option. For more syntax options, see the nvme connect-all man page.

Steps
  1. Configure the JSON file:

    # cat /etc/nvme/config.json
    [
      {
        "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:9796c1ec-0d34-11eb-b6b2-3a68dd3bab57",
        "hostid":"b033cd4fd6db4724adb48655bfb55448",
        "dhchap_key":"DHHC-1:01:zGlgmRyWbplWfUCPMuaP3mAypX0+GHuSczx5vX4Yod9lMPim:"
      },
      {
        "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b5c04f444d33",
        "subsystems":[
          {
            "nqn":"nqn.1992-08.com.netapp:sn.0f4ba1e74eb611ef9f50d039eab6cb6d:subsystem.bidir_DHCP",
            "ports":[
              {
                "transport":"tcp",
                "traddr":"192.168.1.24",
                "host_traddr":"192.168.1.31",
                "trsvcid":"4420",
                "dhchap_ctrl_key":"DHHC-1:03:L52ymUoR32zYvnqZFe5OHhMg4gxD79jIyxSShHansXpVN+WiXE222aVc651JxGZlQCI863iVOz5dNWvgb+14F4B4bTQ=:"
              },
              {
                "transport":"tcp",
                "traddr":"192.168.1.24",
                "host_traddr":"192.168.1.31",
                "trsvcid":"4420",
                "dhchap_ctrl_key":"DHHC-1:03:L52ymUoR32zYvnqZFe5OHhMg4gxD79jIyxSShHansXpVN+WiXE222aVc651JxGZlQCI863iVOz5dNWvgb+14F4B4bTQ=:"
              },
              {
                "transport":"tcp",
                "traddr":"192.168.1.24",
                "host_traddr":"192.168.1.31",
                "trsvcid":"4420",
                "dhchap_ctrl_key":"DHHC-1:03:L52ymUoR32zYvnqZFe5OHhMg4gxD79jIyxSShHansXpVN+WiXE222aVc651JxGZlQCI863iVOz5dNWvgb+14F4B4bTQ=:"
              },
              {
                "transport":"tcp",
                "traddr":"192.168.1.24",
                "host_traddr":"192.168.1.31",
                "trsvcid":"4420",
                "dhchap_ctrl_key":"DHHC-1:03:L52ymUoR32zYvnqZFe5OHhMg4gxD79jIyxSShHansXpVN+WiXE222aVc651JxGZlQCI863iVOz5dNWvgb+14F4B4bTQ=:"
              }
            ]
          }
        ]
      }
    ]
    Note In the preceding example, dhchap_key corresponds to dhchap_secret and dhchap_ctrl_key corresponds to dhchap_ctrl_secret.
  2. Connect to the ONTAP controller using the config JSON file:

    # nvme connect-all -J /etc/nvme/config.json
    traddr=192.168.1.24 is already connected
    traddr=192.168.1.24 is already connected
    traddr=192.168.1.24 is already connected
    traddr=192.168.1.24 is already connected
    traddr=192.168.1.24 is already connected
    traddr=192.168.1.24 is already connected
    traddr=192.168.1.25 is already connected
    traddr=192.168.1.25 is already connected
    traddr=192.168.1.25 is already connected
    traddr=192.168.1.25 is already connected
    traddr=192.168.1.25 is already connected
    traddr=192.168.1.25 is already connected
  3. Verify that the dhchap secrets have been enabled for the respective controllers for each subsystem:

    1. Verify the host dhchap keys:

      # cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_secret
      DHHC-1:01:zGlgmRyWbplWfUCPMuaP3mAypX0+GHuSczx5vX4Yod9lMPim:
    2. Verify the controller dhchap keys:

      # cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_ctrl_secret
      DHHC-1:03:L52ymUoR32zYvnqZFe5OHhMg4gxD79jIyxSShHansXpVN+WiXE222aVc651JxGZlQCI863iVOz5dNWvgb+14F4B4bTQ=:

Known issues

No known issues exist for the NVMe-oF host configuration on RHEL 9.5 with ONTAP.