ONTAP SAN Host Utilities

Configure Rocky Linux 10 for NVMe-oF with ONTAP storage


NetApp SAN host configurations support the NVMe over Fabrics (NVMe-oF) protocol with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is equivalent to asymmetric logical unit access (ALUA) multipathing in iSCSI and FCP environments. ANA is implemented using the in-kernel NVMe multipath feature.

About this task

You can use the following support and features with the NVMe-oF host configuration for Rocky Linux 10. You should also review the known limitations before starting the configuration process.

  • Support available:

    • Support for NVMe over TCP (NVMe/TCP) in addition to NVMe over Fibre Channel (NVMe/FC). The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.

    • Running both NVMe and SCSI traffic on the same host. For example, you can configure dm-multipath for SCSI mpath devices on SCSI LUNs and use NVMe multipath to configure NVMe-oF namespace devices on the host.

    • Beginning with ONTAP 9.12.1, support for secure in-band authentication is introduced for NVMe/TCP. You can use secure in-band authentication for NVMe/TCP with Rocky Linux 10.

    For additional details on supported configurations, see the Interoperability Matrix Tool.

  • Features available:

    • Beginning with Rocky Linux 10, native NVMe multipathing is always enabled, and DM-multipath is not supported for NVMe-oF.

  • Known limitations:

    • Avoid issuing the nvme disconnect-all command on systems booting from SAN over NVMe/TCP or NVMe/FC namespaces because it disconnects both root and data filesystems and might lead to system instability.

Step 1: Optionally, enable SAN booting

You can configure your host to use SAN booting to simplify deployment and improve scalability.

Before you begin

Use the Interoperability Matrix Tool to verify that your Linux OS, host bus adapter (HBA), HBA firmware, HBA boot BIOS, and ONTAP version support SAN booting.

Steps
  1. Create a SAN boot namespace and map it to the host.

  2. Enable SAN booting in the server BIOS for the ports to which the SAN boot namespace is mapped.

    For information on how to enable the HBA BIOS, see your vendor-specific documentation.

  3. Verify that the configuration was successful by rebooting the host and verifying that the OS is up and running.

Step 2: Validate software versions

Use the following procedure to validate the minimum supported Rocky Linux 10 software versions.

Steps
  1. Install Rocky Linux 10 on the server. After the installation is complete, verify that you are running the specified Rocky Linux 10 kernel:

    uname -r

    The following example shows a Rocky Linux kernel version:

    6.12.0-55.9.1.el10_0.x86_64
  2. Verify the installed nvme-cli package version:

    rpm -qa|grep nvme-cli

    The following example shows an nvme-cli package version:

    nvme-cli-2.11-5.el10.x86_64
  3. Verify the installed libnvme package version:

    rpm -qa|grep libnvme

    The following example shows a libnvme package version:

    libnvme-1.11.1-1.el10.x86_64
  4. On the host, check the hostnqn string at /etc/nvme/hostnqn:

    cat /etc/nvme/hostnqn

    The following example shows a hostnqn string:

    nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633
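    If the /etc/nvme/hostnqn file doesn't exist on the host, you can typically generate one with the nvme-cli tool. The following is a minimal sketch; verify the generated NQN before you map it on the ONTAP side:

    nvme gen-hostnqn > /etc/nvme/hostnqn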
  5. Verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:

    ::> vserver nvme subsystem host show -vserver vs_nvme_194_rockylinux10
    Show example
    Vserver Subsystem Priority  Host NQN
    ------- --------- --------  ------------------------------------------------
    vs_nvme_194_rockylinux10
            nvme4
                      regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-c7c04f425633
            nvme_1
                      regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-c7c04f425633
            nvme_2
                      regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-c7c04f425633
            nvme_3
                      regular   nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-c7c04f425633
    4 entries were displayed.
    Note If the hostnqn strings do not match, use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.
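    For example, the following is a hedged sketch of associating the host NQN from /etc/nvme/hostnqn with one of the subsystems, using the vserver nvme subsystem host add syntax shown later in this procedure (the SVM and subsystem names are taken from the example output above; adjust them for your environment):

    ::> vserver nvme subsystem host add -vserver vs_nvme_194_rockylinux10 -subsystem nvme_1 -host-nqn nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-b9c04f425633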

Step 3: Configure NVMe/FC

You can configure NVMe/FC with Broadcom/Emulex FC or Marvell/Qlogic FC adapters. For NVMe/FC configured with a Broadcom adapter, you can enable I/O requests of size 1MB.

Broadcom/Emulex

Configure NVMe/FC for a Broadcom/Emulex adapter.

Steps
  1. Verify that you are using the supported adapter model:

    1. Display the model names:

      cat /sys/class/scsi_host/host*/modelname

      You should see the following output:

      LPe36002-M64
      LPe36002-M64
    2. Display the model descriptions:

      cat /sys/class/scsi_host/host*/modeldesc

      You should see an output similar to the following example:

      Emulex LightPulse LPe36002-M64 2-Port 64Gb Fibre Channel Adapter
      Emulex LightPulse LPe36002-M64 2-Port 64Gb Fibre Channel Adapter
  2. Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:

    1. Display the firmware version:

      cat /sys/class/scsi_host/host*/fwrev

      The following example shows firmware versions:

      14.0.539.16, sli-4:6:d
      14.0.539.16, sli-4:6:d
    2. Display the inbox driver version:

      cat /sys/module/lpfc/version

      The following example shows a driver version:

      0:14.4.0.6

    For the current list of supported adapter driver and firmware versions, see the Interoperability Matrix Tool.

  3. Verify that lpfc_enable_fc4_type is set to 3:

    cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
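    If the value isn't 3, the following is a minimal sketch of setting it through the lpfc module options (this assumes the inbox lpfc driver; rebuild the initramfs and reboot the host for the change to take effect):

    echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf
    dracut -f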
  4. Verify that you can view your initiator ports:

    cat /sys/class/fc_host/host*/port_name

    The following example shows port identities:

    0x2100f4c7aa0cd7c2
    0x2100f4c7aa0cd7c3
  5. Verify that your initiator ports are online:

    cat /sys/class/fc_host/host*/port_state

    You should see the following output:

    Online
    Online
  6. Verify that the NVMe/FC initiator ports are enabled and that the target ports are visible:

    cat /sys/class/scsi_host/host*/nvme_info
    Show example
    NVME Initiator Enabled
    XRI Dist lpfc2 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc2 WWPN x100000109bf044b1 WWNN x200000109bf044b1 DID x022a00 ONLINE
    NVME RPORT       WWPN x202fd039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x021310 TARGET DISCSRVC ONLINE
    NVME RPORT       WWPN x202dd039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x020b10 TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 0000000810 Cmpl 0000000810 Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000007b098f07 Issue 000000007aee27c4 OutIO ffffffffffe498bd
            abort 000013b4 noxri 00000000 nondlp 00000058 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 000013b4 Err 00021443
    
    NVME Initiator Enabled
    XRI Dist lpfc3 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc3 WWPN x100000109bf044b2 WWNN x200000109bf044b2 DID x021b00 ONLINE
    NVME RPORT       WWPN x2033d039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x020110 TARGET DISCSRVC ONLINE
    NVME RPORT       WWPN x2032d039eaa7dfc8 WWNN x202cd039eaa7dfc8 DID x022910 TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 0000000840 Cmpl 0000000840 Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000007afd4434 Issue 000000007ae31b83 OutIO ffffffffffe5d74f
            abort 000014a5 noxri 00000000 nondlp 0000006a qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 000014a5 Err 0002149a
Marvell/QLogic

Configure NVMe/FC for a Marvell/QLogic adapter.

Steps
  1. Verify that you are running the supported adapter driver and firmware versions:

    cat /sys/class/fc_host/host*/symbolic_name

    The following example shows the driver and firmware versions:

    QLE2872 FW:v9.15.00 DVR:v10.02.09.300-k
    QLE2872 FW:v9.15.00 DVR:v10.02.09.300-k
  2. Verify that ql2xnvmeenable is set. This enables the Marvell adapter to function as an NVMe/FC initiator:

    cat /sys/module/qla2xxx/parameters/ql2xnvmeenable

    The expected output is 1.
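    If the parameter isn't set to 1, the following is a minimal sketch of enabling it through the qla2xxx module options (this assumes the inbox qla2xxx driver; rebuild the initramfs and reboot the host for the change to take effect):

    echo "options qla2xxx ql2xnvmeenable=1" >> /etc/modprobe.d/qla2xxx.conf
    dracut -f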

Step 4: Optionally, enable 1MB I/O

ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data, which means the maximum I/O request size can be up to 1MB (2^8 units of the 4KB minimum memory page size). To issue 1MB I/O requests on a Broadcom NVMe/FC host, you should increase the value of the lpfc_sg_seg_cnt parameter from the default of 64 to 256.

Note These steps don't apply to Qlogic NVMe/FC hosts.
Steps
  1. Set the lpfc_sg_seg_cnt parameter to 256 in the /etc/modprobe.d/lpfc.conf file, and then display the file to confirm the setting:

    cat /etc/modprobe.d/lpfc.conf

    You should see an output similar to the following example:

    options lpfc lpfc_sg_seg_cnt=256
  2. Run the dracut -f command, and reboot the host.

  3. Verify that the value for lpfc_sg_seg_cnt is 256:

    cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt

Step 5: Verify NVMe boot services

With Rocky Linux 10, the nvmefc-boot-connections.service and nvmf-autoconnect.service boot services included in the NVMe/FC nvme-cli package are automatically enabled when the system boots.

After booting completes, verify that the nvmefc-boot-connections.service and nvmf-autoconnect.service boot services are enabled.

Steps
  1. Verify that nvmf-autoconnect.service is enabled:

    systemctl status nvmf-autoconnect.service
    Show example output
    nvmf-autoconnect.service - Connect NVMe-oF subsystems automatically during boot
         Loaded: loaded (/usr/lib/systemd/system/nvmf-autoconnect.service; enabled; preset: disabled)
         Active: inactive (dead)
    
    Jun 10 04:06:26 SR630-13-201.lab.eng.btc.netapp.in systemd[1]: Starting Connect NVMe-oF subsystems automatically during boot...
    Jun 10 04:06:26 SR630-13-201.lab.eng.btc.netapp.in systemd[1]: nvmf-autoconnect.service: Deactivated successfully.
    Jun 10 04:06:26 SR630-13-201.lab.eng.btc.netapp.in systemd[1]: Finished Connect NVMe-oF subsystems automatically during boot.
  2. Verify that nvmefc-boot-connections.service is enabled:

    systemctl status nvmefc-boot-connections.service
    Show example output
    nvmefc-boot-connections.service - Auto-connect to subsystems on FC-NVME devices found during boot
         Loaded: loaded (/usr/lib/systemd/system/nvmefc-boot-connections.service; enabled; preset: enabled)
         Active: inactive (dead) since Tue 2025-06-10 01:08:36 EDT; 2h 59min ago
       Main PID: 7090 (code=exited, status=0/SUCCESS)
            CPU: 30ms
    
    Jun 10 01:08:36 localhost systemd[1]: Starting Auto-connect to subsystems on FC-NVME devices found during boot...
    Jun 10 01:08:36 localhost systemd[1]: nvmefc-boot-connections.service: Deactivated successfully.
    Jun 10 01:08:36 localhost systemd[1]: Finished Auto-connect to subsystems on FC-NVME devices found during boot.
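
If either service is reported as disabled, you can enable it so that it runs at the next boot; for example:

    systemctl enable nvmf-autoconnect.service nvmefc-boot-connections.service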

Step 6: Configure NVMe/TCP

The NVMe/TCP protocol doesn't support the auto-connect operation. Instead, you can discover the NVMe/TCP subsystems and namespaces by performing the NVMe/TCP connect or connect-all operations manually.

Steps
  1. Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

    nvme discover -t tcp -w host-traddr -a traddr
    Show example
    nvme discover -t tcp -w 192.168.20.1 -a 192.168.20.20
    
    Discovery Log Number of Records 8, Generation counter 18
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  4
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:discovery
    traddr:  192.168.21.21
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  2
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:discovery
    traddr:  192.168.20.21
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 2======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  3
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:discovery
    traddr:  192.168.21.20
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 3======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  1
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:discovery
    traddr:  192.168.20.20
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 4======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  4
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:subsystem.rockylinux10_tcp_subsystem
    traddr:  192.168.21.21
    eflags:  none
    sectype: none
    =====Discovery Log Entry 5======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  2
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:subsystem.rockylinux10_tcp_subsystem
    traddr:  192.168.20.21
    eflags:  none
    sectype: none
    =====Discovery Log Entry 6======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  3
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:subsystem.rockylinux10_tcp_subsystem
    traddr:  192.168.21.20
    eflags:  none
    sectype: none
    =====Discovery Log Entry 7======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  1
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:subsystem.rockylinux10_tcp_subsystem
    traddr:  192.168.20.20
    eflags:  none
    sectype: none
  2. Verify that the other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:

    nvme discover -t tcp -w host-traddr -a traddr
    Show example
    nvme discover -t tcp -w 192.168.20.1 -a 192.168.20.20
    nvme discover -t tcp -w 192.168.21.1 -a 192.168.21.20
    nvme discover -t tcp -w 192.168.20.1 -a 192.168.20.21
    nvme discover -t tcp -w 192.168.21.1 -a 192.168.21.21
  3. Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes:

    nvme connect-all -t tcp -w host-traddr -a traddr
    Show example
    nvme connect-all -t tcp -w 192.168.20.1 -a 192.168.20.20
    nvme connect-all -t tcp -w 192.168.21.1 -a 192.168.21.20
    nvme connect-all -t tcp -w 192.168.20.1 -a 192.168.20.21
    nvme connect-all -t tcp -w 192.168.21.1 -a 192.168.21.21
Note

Beginning with Rocky Linux 9.4, the setting for the NVMe/TCP ctrl_loss_tmo timeout is automatically set to "off". As a result:

  • There are no limits on the number of retries (indefinite retry).

  • You don't need to manually configure a specific ctrl_loss_tmo timeout duration when using the nvme connect or nvme connect-all commands (option -l ).

  • The NVMe/TCP controllers don't experience timeouts in the event of a path failure and remain connected indefinitely.
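
Optionally, to have the host re-establish these NVMe/TCP connections automatically at boot, you can record the discovery entries in the /etc/nvme/discovery.conf file, which the nvme connect-all command reads when it runs without arguments (for example, from the nvmf-autoconnect.service). The following is a minimal sketch using the addresses from the examples above; verify the option format against your nvme-cli version:

    cat /etc/nvme/discovery.conf
    --transport=tcp --traddr=192.168.20.20 --host-traddr=192.168.20.1
    --transport=tcp --traddr=192.168.21.20 --host-traddr=192.168.21.1
    --transport=tcp --traddr=192.168.20.21 --host-traddr=192.168.20.1
    --transport=tcp --traddr=192.168.21.21 --host-traddr=192.168.21.1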

Step 7: Validate NVMe-oF

Verify that the in-kernel NVMe multipath status, ANA status, and ONTAP namespaces are correct for the NVMe-oF configuration.

Steps
  1. Verify that the in-kernel NVMe multipath is enabled:

    cat /sys/module/nvme_core/parameters/multipath

    You should see the following output:

    Y
  2. Verify that the appropriate NVMe-oF settings (such as model set to NetApp ONTAP Controller and load-balancing iopolicy set to round-robin) for the respective ONTAP namespaces are correctly reflected on the host:

    1. Display the subsystems:

      cat /sys/class/nvme-subsystem/nvme-subsys*/model

      You should see the following output:

      NetApp ONTAP Controller
      NetApp ONTAP Controller
    2. Display the policy:

      cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

      You should see the following output:

      round-robin
      round-robin
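    If iopolicy doesn't show round-robin, you can set it for a subsystem at runtime; the following is a minimal sketch (nvme-subsys0 is an example name, and the runtime setting doesn't persist across reboots):

    echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy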
  3. Verify that the namespaces are created and correctly discovered on the host:

    nvme list
    Show example
    Node         SN                   Model
    ---------------------------------------------------------
    /dev/nvme4n1 81Ix2BVuekWcAAAAAAAB NetApp ONTAP Controller

    Namespace  Usage                Format        FW Rev
    -----------------------------------------------------------
    1          21.47 GB / 21.47 GB  4 KiB + 0 B   FFFFFFFF
  4. Verify that the controller state of each path is live and has the correct ANA status:

    NVMe/FC
    nvme list-subsys /dev/nvme5n1
    Show example
    nvme-subsys5 - NQN=nqn.1992-08.com.netapp:sn.f7565b15a66911ef9668d039ea951c46:subsystem.nvme1
                   hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-c7c04f425633
    \
     +- nvme126 fc traddr=nn-0x2036d039ea951c45:pn-0x2038d039ea951c45,host_traddr=nn-0x2000f4c7aa0cd7c3:pn-0x2100f4c7aa0cd7c3 live optimized
     +- nvme176 fc traddr=nn-0x2036d039ea951c45:pn-0x2037d039ea951c45,host_traddr=nn-0x2000f4c7aa0cd7c2:pn-0x2100f4c7aa0cd7c2 live optimized
     +- nvme5 fc traddr=nn-0x2036d039ea951c45:pn-0x2039d039ea951c45,host_traddr=nn-0x2000f4c7aa0cd7c2:pn-0x2100f4c7aa0cd7c2 live non-optimized
     +- nvme71 fc traddr=nn-0x2036d039ea951c45:pn-0x203ad039ea951c45,host_traddr=nn-0x2000f4c7aa0cd7c3:pn-0x2100f4c7aa0cd7c3 live non-optimized
    NVMe/TCP
    nvme list-subsys /dev/nvme4n2
    Show example
    nvme-subsys4 - NQN=nqn.1992-08.com.netapp:sn.64e65e6caae711ef9668d039ea951c46:subsystem.nvme4
                   hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-c2c04f444d33
    \
    +- nvme102 tcp traddr=192.168.21.20,trsvcid=4420,host_traddr=192.168.21.1,src_addr=192.168.21.1 live non-optimized
    +- nvme151 tcp traddr=192.168.21.21,trsvcid=4420,host_traddr=192.168.21.1,src_addr=192.168.21.1 live optimized
    +- nvme4 tcp traddr=192.168.20.20,trsvcid=4420,host_traddr=192.168.20.1,src_addr=192.168.20.1 live non-optimized
    +- nvme53 tcp traddr=192.168.20.21,trsvcid=4420,host_traddr=192.168.20.1,src_addr=192.168.20.1 live optimized
  5. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

    Column
    nvme netapp ontapdevices -o column
    Show example
    Device         Vserver              Namespace Path
    -------------- -------------------- ----------------------------
    /dev/nvme10n1  vs_tcp_rockylinux10  /vol/vol10/ns10

    NSID  UUID                                   Size
    ----- ------------------------------------   --------
    1     bbf51146-fc64-4197-b8cf-8a24f6f359b3   21.47GB
    JSON
    nvme netapp ontapdevices -o json
    Show example
    {
      "ONTAPdevices":[
        {
          "Device":"/dev/nvme10n1",
          "Vserver":"vs_tcp_rockylinux10",
          "Namespace_Path":"/vol/vol10/ns10",
          "NSID":1,
          "UUID":"bbf51146-fc64-4197-b8cf-8a24f6f359b3",
          "Size":"21.47GB",
          "LBA_Data_Size":4096,
          "Namespace_Size":5242880
        }
      ]
    }

Step 8: Set up secure in-band authentication

Beginning with ONTAP 9.12.1, secure in-band authentication is supported over NVMe/TCP between a Rocky Linux 10 host and an ONTAP controller.

Each host or controller must be associated with a DH-HMAC-CHAP key to set up secure authentication. A DH-HMAC-CHAP key is a combination of the NQN of the NVMe host or controller and an authentication secret configured by the administrator. To authenticate its peer, an NVMe host or controller must recognize the key associated with the peer.

Set up secure in-band authentication using the CLI or a config JSON file. If you need to specify different dhchap keys for different subsystems, you must use a config JSON file.

CLI

Set up secure in-band authentication using the CLI.

Steps
  1. Obtain the host NQN:

    cat /etc/nvme/hostnqn
  2. Generate the dhchap key for the Rocky Linux 10 host.

    The following describes the gen-dhchap-key command parameters:

    nvme gen-dhchap-key -s optional_secret -l key_length {32|48|64} -m HMAC_function {0|1|2|3} -n host_nqn
    • -s secret key in hexadecimal characters to be used to initialize the host key
    • -l length of the resulting key in bytes
    • -m HMAC function to use for key transformation
      (0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512)
    • -n host NQN to use for key transformation

    In the following example, a random dhchap key with HMAC set to 3 (SHA-512) is generated.

    nvme gen-dhchap-key -m 3 -n nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-c2c04f444d33
    DHHC-1:03:7zf8I9gaRcDWH3tCH5vLGaoyjzPIvwNWusBfKdpJa+hia1aKDKJQ2o53pX3wYM9xdv5DtKNNhJInZ7X8wU2RQpQIngc=:
  3. On the ONTAP controller, add the host and specify both dhchap keys:

    vserver nvme subsystem host add -vserver <svm_name> -subsystem <subsystem> -host-nqn <host_nqn> -dhchap-host-secret <authentication_host_secret> -dhchap-controller-secret <authentication_controller_secret> -dhchap-hash-function {sha-256|sha-512} -dhchap-group {none|2048-bit|3072-bit|4096-bit|6144-bit|8192-bit}
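    For example, the following is a hedged illustration using the key generated in step 2 (the SVM and subsystem names are placeholders; replace them with your own values):

    vserver nvme subsystem host add -vserver vs_tcp_rockylinux10 -subsystem nvme_auth -host-nqn nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-c2c04f444d33 -dhchap-host-secret DHHC-1:03:7zf8I9gaRcDWH3tCH5vLGaoyjzPIvwNWusBfKdpJa+hia1aKDKJQ2o53pX3wYM9xdv5DtKNNhJInZ7X8wU2RQpQIngc=: -dhchap-hash-function sha-512 -dhchap-group none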
  4. A host supports two types of authentication methods, unidirectional and bidirectional. On the host, connect to the ONTAP controller and specify dhchap keys based on the chosen authentication method:

    nvme connect -t tcp -w <host-traddr> -a <tr-addr> -n <subsystem_nqn> -S <authentication_host_secret> -C <authentication_controller_secret>
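    For unidirectional authentication, specify only the host secret (-S); for bidirectional authentication, specify both the host secret (-S) and the controller secret (-C). For example, the following is a sketch of a unidirectional connection over one of the initiator-target pairs from the earlier NVMe/TCP examples:

    nvme connect -t tcp -w 192.168.20.1 -a 192.168.20.20 -n <subsystem_nqn> -S <authentication_host_secret>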
  5. Validate the nvme connect authentication command by verifying the host and controller dhchap keys:

    1. Verify the host dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_secret
      Show example output for a unidirectional configuration
      cat /sys/class/nvme-subsystem/nvme-subsys1/nvme*/dhchap_secret
      DHHC-1:03:fMCrJharXUOqRoIsOEaG6m2PH1yYvu5+z3jTmzEKUbcWu26I33b93b
      il2WR09XDho/ld3L45J+0FeCsStBEAfhYgkQU=:
      DHHC-1:03:fMCrJharXUOqRoIsOEaG6m2PH1yYvu5+z3jTmzEKUbcWu26I33b93b
      il2WR09XDho/ld3L45J+0FeCsStBEAfhYgkQU=:
      DHHC-1:03:fMCrJharXUOqRoIsOEaG6m2PH1yYvu5+z3jTmzEKUbcWu26I33b93b
      il2WR09XDho/ld3L45J+0FeCsStBEAfhYgkQU=:
      DHHC-1:03:fMCrJharXUOqRoIsOEaG6m2PH1yYvu5+z3jTmzEKUbcWu26I33b93b
      il2WR09XDho/ld3L45J+0FeCsStBEAfhYgkQU=:
    2. Verify the controller dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_ctrl_secret
      Show example output for a bidirectional configuration
      cat /sys/class/nvme-subsystem/nvme-subsys6/nvme*/dhchap_ctrl_secret
      DHHC-1:03:7zf8I9gaRcDWH3tCH5vLGaoyjzPIvwNWusBfKdpJa+hia
      1aKDKJQ2o53pX3wYM9xdv5DtKNNhJInZ7X8wU2RQpQIngc=:

      DHHC-1:03:7zf8I9gaRcDWH3tCH5vLGaoyjzPIvwNWusBfKdpJa+hia
      1aKDKJQ2o53pX3wYM9xdv5DtKNNhJInZ7X8wU2RQpQIngc=:

      DHHC-1:03:7zf8I9gaRcDWH3tCH5vLGaoyjzPIvwNWusBfKdpJa+hia
      1aKDKJQ2o53pX3wYM9xdv5DtKNNhJInZ7X8wU2RQpQIngc=:

      DHHC-1:03:7zf8I9gaRcDWH3tCH5vLGaoyjzPIvwNWusBfKdpJa+hia
      1aKDKJQ2o53pX3wYM9xdv5DtKNNhJInZ7X8wU2RQpQIngc=:
JSON file

When multiple NVMe subsystems are available on the ONTAP controller configuration, you can use the /etc/nvme/config.json file with the nvme connect-all command.

Use the -o option to generate the JSON file. See the nvme connect-all man page for more syntax options.

Steps
  1. Configure the JSON file.

    Note In the following example, dhchap_key corresponds to dhchap_secret and dhchap_ctrl_key corresponds to dhchap_ctrl_secret.
    Show example
    cat /etc/nvme/config.json
    [
      {
        "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-c2c04f444d33",
        "hostid":"4c4c4544-0035-5910-804b-c2c04f444d33",
        "dhchap_key":"DHHC-1:03:7zf8I9gaRcDWH3tCH5vLGaoyjzPIvwNWusBfKdpJa+hia1aKDKJQ2o53pX3wYM9xdv5DtKNNhJInZ7X8wU2RQpQIngc=:",
        "subsystems":[
          {
            "nqn":"nqn.1992-08.com.netapp:sn.127ade26168811f0a50ed039eab69ad3:subsystem.inband_unidirectional",
            "ports":[
              {
                "transport":"tcp",
                "traddr":"192.168.20.17",
                "host_traddr":"192.168.20.1",
                "trsvcid":"4420"
              },
              {
                "transport":"tcp",
                "traddr":"192.168.20.18",
                "host_traddr":"192.168.20.1",
                "trsvcid":"4420"
              },
              {
                "transport":"tcp",
                "traddr":"192.168.21.18",
                "host_traddr":"192.168.21.1",
                "trsvcid":"4420"
              },
              {
                "transport":"tcp",
                "traddr":"192.168.21.17",
                "host_traddr":"192.168.21.1",
                "trsvcid":"4420"
              }
            ]
          }
        ]
      }
    ]
  2. Connect to the ONTAP controller using the config JSON file:

    nvme connect-all -J /etc/nvme/config.json
    Show example
    traddr=192.168.20.20 is already connected
    traddr=192.168.20.20 is already connected
    traddr=192.168.20.20 is already connected
    traddr=192.168.20.20 is already connected
    traddr=192.168.20.20 is already connected
    traddr=192.168.20.20 is already connected
    traddr=192.168.20.20 is already connected
    traddr=192.168.20.20 is already connected
    traddr=192.168.20.21 is already connected
    traddr=192.168.20.21 is already connected
    traddr=192.168.20.21 is already connected
    traddr=192.168.20.21 is already connected
    traddr=192.168.20.21 is already connected
    traddr=192.168.20.21 is already connected
    traddr=192.168.20.21 is already connected
    traddr=192.168.20.21 is already connected
  3. Verify that the dhchap secrets have been enabled for the respective controllers for each subsystem.

    1. Verify the host dhchap keys:

      cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_secret

      The following example shows a dhchap key:

      DHHC-1:03:7zf8I9gaRcDWH3tCH5vLGaoyjzPIvwNWusBfKdpJa+hia1
      aKDKJQ2o53pX3wYM9xdv5DtKNNhJInZ7X8wU2RQpQIngc=:
    2. Verify the controller dhchap keys:

      cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_ctrl_secret

      You should see an output similar to the following example:

      DHHC-1:03:fMCrJharXUOqRoIsOEaG6m2PH1yYvu5+z3jT
      mzEKUbcWu26I33b93bil2WR09XDho/ld3L45J+0FeCsStBEAfhYgkQU=:

Step 9: Review the known issues

There are no known issues.