
NVMe-oF host configuration for SUSE Linux Enterprise Server 15 SP4 with ONTAP


NVMe over Fabrics (NVMe-oF), including NVMe over Fibre Channel (NVMe/FC) and other transports, is supported with SUSE Linux Enterprise Server (SLES) 15 SP4 with Asymmetric Namespace Access (ANA). In NVMe-oF environments, ANA is the equivalent of ALUA multipathing in iSCSI and FCP environments and is implemented with in-kernel NVMe multipath.

The following support is available for the NVMe-oF host configuration for SLES 15 SP4 with ONTAP:

  • NVMe and SCSI traffic can run on the same host. Therefore, for SCSI LUNs, you can configure dm-multipath for SCSI mpath devices, whereas you can use in-kernel NVMe multipath to configure NVMe-oF namespace devices on the host (a related dm-multipath sketch follows this list).

  • Support for NVMe over TCP (NVMe/TCP) in addition to NVMe/FC. The NetApp plug-in in the native nvme-cli package displays ONTAP details for both NVMe/FC and NVMe/TCP namespaces.
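
If dm-multipath is also configured on the host for SCSI LUNs, a common precaution (a minimal sketch, not a requirement stated by this configuration) is to blacklist NVMe devnodes in /etc/multipath.conf so that dm-multipath never claims NVMe namespace devices, leaving them to in-kernel NVMe multipath:

    # /etc/multipath.conf (illustrative excerpt)
    blacklist {
        devnode "^nvme"
    }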

For additional details on supported configurations, see the NetApp Interoperability Matrix Tool.

Features

  • Support for NVMe secure, in-band authentication

  • Support for persistent discovery controllers (PDCs) using a unique discovery NQN

Known limitations

  • SAN booting using the NVMe-oF protocol is currently not supported.

  • There is no sanlun support for NVMe-oF. Therefore, Host Utilities support is not available for NVMe-oF on an SLES 15 SP4 host. You can rely on the NetApp plug-in included in the native nvme-cli package for all NVMe-oF transports.

Configure NVMe/FC

You can configure NVMe/FC for Broadcom/Emulex FC adapters or Marvell/QLogic FC adapters.

Broadcom/Emulex
Steps
  1. Verify that you are using the recommended adapter model:

    cat /sys/class/scsi_host/host*/modelname

    Example output:

    LPe32002-M2
    LPe32002-M2
  2. Verify the adapter model description:

    cat /sys/class/scsi_host/host*/modeldesc

    Example output:

    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
  3. Verify that you are using the recommended Emulex host bus adapter (HBA) firmware versions:

    cat /sys/class/scsi_host/host*/fwrev

    Example output:

    12.8.351.47, sli-4:2:c
    12.8.351.47, sli-4:2:c
  4. Verify that you are using the recommended LPFC driver version:

    cat /sys/module/lpfc/version

    Example output:

    0:14.2.0.6
  5. Verify that you can view your initiator ports:

    cat /sys/class/fc_host/host*/port_name

    Example output:

    0x100000109b579d5e
    0x100000109b579d5f
  6. Verify that your initiator ports are online:

    cat /sys/class/fc_host/host*/port_state

    Example output:

    Online
    Online
  7. Verify that the NVMe/FC initiator ports are enabled and that the target ports are visible:

    cat /sys/class/scsi_host/host*/nvme_info

    Example output:

    In this example, two initiator ports (lpfc0 and lpfc1) are enabled, and each is connected to two target LIFs.

    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x100000109b579d5e WWNN x200000109b579d5e DID x011c00 ONLINE
    NVME RPORT WWPN x208400a098dfdd91 WWNN x208100a098dfdd91 DID x011503 TARGET DISCSRVC ONLINE
    NVME RPORT WWPN x208500a098dfdd91 WWNN x208100a098dfdd91 DID x010003 TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 0000000e49 Cmpl 0000000e49 Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000003ceb594f Issue 000000003ce65dbe OutIO fffffffffffb046f
    abort 00000bd2 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr
    00000000 err 00000000
    FCP CMPL: xb 000014f4 Err 00012abd
    
    NVME Initiator Enabled
    XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc1 WWPN x100000109b579d5f WWNN x200000109b579d5f DID x011b00 ONLINE
    NVME RPORT WWPN x208300a098dfdd91 WWNN x208100a098dfdd91 DID x010c03 TARGET DISCSRVC ONLINE
    NVME RPORT WWPN x208200a098dfdd91 WWNN x208100a098dfdd91 DID x012a03 TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 0000000e50 Cmpl 0000000e50 Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 000000003c9859ca Issue 000000003c93515e OutIO fffffffffffaf794
    abort 00000b73 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr
    00000000 err 00000000
    FCP CMPL: xb 0000159d Err 000135c3
  8. Reboot the host.
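
After the host comes back up, you can repeat the checks from steps 5 and 6 in a single pass (a sketch that uses the same sysfs attributes):

    # Print each FC host port's WWPN and link state.
    for h in /sys/class/fc_host/host*; do
        echo "$h: $(cat $h/port_name) $(cat $h/port_state)"
    done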

Marvell/QLogic
Steps
  1. The native inbox qla2xxx driver included in the SLES 15 SP4 kernel has the latest fixes essential for ONTAP support. Verify that you are running the supported adapter driver and firmware versions:

    cat /sys/class/fc_host/host*/symbolic_name

    Example output:

    QLE2742 FW:v9.08.02 DVR:v10.02.07.800-k
    QLE2742 FW:v9.08.02 DVR:v10.02.07.800-k
  2. Verify that the ql2xnvmeenable parameter is set to 1:

    cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
    1
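
If ql2xnvmeenable is not set to 1 on your host, one way to persist the setting across reboots (a sketch; the file name is illustrative) mirrors the lpfc pattern shown in the next section:

    # /etc/modprobe.d/qla2xxx.conf (illustrative file name)
    options qla2xxx ql2xnvmeenable=1

Then run dracut -f and reboot the host so that the option takes effect.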

Enable 1MB I/O size (Optional)

ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data, which means that the maximum I/O request size can be up to 1 MB (an MDTS of 8 corresponds to 2^8 × 4 KB memory pages = 1 MB). However, to issue I/O requests of 1 MB size for a Broadcom NVMe/FC host, you must increase the value of the lpfc_sg_seg_cnt parameter to 256 from its default of 64.

Steps
  1. Set the lpfc_sg_seg_cnt parameter to 256.

    # cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  2. Run a dracut -f command, and reboot the host.

  3. Verify that lpfc_sg_seg_cnt is 256.

    # cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
    256
Note This is not applicable to QLogic NVMe/FC hosts.

Enable NVMe services

The nvme-cli package includes two NVMe/FC boot services; however, only nvmefc-boot-connections.service is enabled to start during system boot. nvmf-autoconnect.service is not enabled by default, so you need to enable it manually to start during system boot.

Steps
  1. Enable nvmf-autoconnect.service:

    # systemctl enable nvmf-autoconnect.service
    Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /usr/lib/systemd/system/nvmf-autoconnect.service.
  2. Reboot the host.

  3. Verify that nvmf-autoconnect.service and nvmefc-boot-connections.service ran successfully during system boot (after completing, the services show as inactive (dead)):

    Example output:

    # systemctl status nvmf-autoconnect.service
       nvmf-autoconnect.service - Connect NVMe-oF subsystems automatically during boot
         Loaded: loaded (/usr/lib/systemd/system/nvmf-autoconnect.service; enabled; vendor preset: disabled)
         Active: inactive (dead) since Thu 2023-05-25 14:55:00 IST; 11min ago
        Process: 2108 ExecStartPre=/sbin/modprobe nvme-fabrics (code=exited, status=0/SUCCESS)
        Process: 2114 ExecStart=/usr/sbin/nvme connect-all (code=exited, status=0/SUCCESS)
       Main PID: 2114 (code=exited, status=0/SUCCESS)
    
       systemd[1]: Starting Connect NVMe-oF subsystems automatically during boot...
       nvme[2114]: traddr=nn-0x201700a098fd4ca6:pn-0x201800a098fd4ca6 is already connected
       systemd[1]: nvmf-autoconnect.service: Deactivated successfully.
       systemd[1]: Finished Connect NVMe-oF subsystems automatically during boot.
    
    # systemctl status nvmefc-boot-connections.service
    nvmefc-boot-connections.service - Auto-connect to subsystems on FC-NVME devices found during boot
       Loaded: loaded (/usr/lib/systemd/system/nvmefc-boot-connections.service; enabled; vendor preset: enabled)
       Active: inactive (dead) since Thu 2023-05-25 14:55:00 IST; 11min ago
     Main PID: 1647 (code=exited, status=0/SUCCESS)
    
    systemd[1]: Starting Auto-connect to subsystems on FC-NVME devices found during boot...
    systemd[1]: nvmefc-boot-connections.service: Succeeded.
    systemd[1]: Finished Auto-connect to subsystems on FC-NVME devices found during boot.
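
If you only need the enablement state of both services, a shorter check (a sketch) is:

    systemctl is-enabled nvmf-autoconnect.service nvmefc-boot-connections.service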

Configure NVMe/TCP

You can use the following procedure to configure NVMe/TCP.

Steps
  1. Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

    nvme discover -t tcp -w <host-traddr> -a <traddr>

    Example output:

    # nvme discover -t tcp -w 192.168.1.4 -a 192.168.1.31
    
    Discovery Log Number of Records 8, Generation counter 18
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  0
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:discovery
    traddr:  192.168.2.117
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  1
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:discovery
    traddr:  192.168.1.117
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 2======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  2
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:discovery
    traddr:  192.168.2.116
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 3======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  3
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:discovery
    traddr:  192.168.1.116
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 4======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:subsystem.subsys_CLIENT116
    traddr:  192.168.2.117
    eflags:  not specified
    sectype: none
    =====Discovery Log Entry 5======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  1
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:subsystem.subsys_CLIENT116
    traddr:  192.168.1.117
    eflags:  not specified
    sectype: none
    =====Discovery Log Entry 6======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  2
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:subsystem.subsys_CLIENT116
    traddr:  192.168.2.116
    eflags:  not specified
    sectype: none
    =====Discovery Log Entry 7======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  3
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:subsystem.subsys_CLIENT116
    traddr:  192.168.1.116
    eflags:  not specified
    sectype: none
  2. Verify that all other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:

    nvme discover -t tcp -w <host-traddr> -a <traddr>

    Example output:

    # nvme discover -t tcp -w 192.168.1.4 -a 192.168.1.32
    # nvme discover -t tcp -w 192.168.2.5 -a 192.168.2.36
    # nvme discover -t tcp -w 192.168.2.5 -a 192.168.2.37
  3. Run the nvme connect-all command for all supported NVMe/TCP initiator-target LIF combinations across the nodes:

    nvme connect-all -t tcp -w <host-traddr> -a <traddr> -l <ctrl_loss_timeout_in_seconds>

    Example output:

    # nvme connect-all -t tcp -w 192.168.1.4 -a 192.168.1.31 -l -1
    # nvme connect-all -t tcp -w 192.168.1.4 -a 192.168.1.32 -l -1
    # nvme connect-all -t tcp -w 192.168.2.5 -a 192.168.1.36 -l -1
    # nvme connect-all -t tcp -w 192.168.2.5 -a 192.168.1.37 -l -1
    Note NetApp recommends setting the ctrl-loss-tmo option to -1 so that the NVMe/TCP initiator attempts to reconnect indefinitely in the event of a path loss.
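
To confirm the behavior on controllers that are already connected, you can read the per-controller ctrl_loss_tmo attribute from sysfs (a sketch; the attribute is exposed for NVMe over Fabrics controllers on recent kernels):

    # A value of -1 means the initiator retries indefinitely after path loss.
    cat /sys/class/nvme/nvme*/ctrl_loss_tmo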

Validate NVMe-oF

You can use the following procedure to validate NVMe-oF.

Steps
  1. Verify that in-kernel NVMe multipath is enabled:

    cat /sys/module/nvme_core/parameters/multipath
    Y
  2. Verify that the host has the correct controller model for the ONTAP NVMe namespaces:

    cat /sys/class/nvme-subsystem/nvme-subsys*/model

    Example output:

    NetApp ONTAP Controller
    NetApp ONTAP Controller
  3. Verify the NVMe I/O policy for the respective ONTAP NVMe I/O controller:

    cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

    Example output:

    round-robin
    round-robin
  4. Verify that the ONTAP namespaces are visible to the host:

    nvme list -v

    Example output:

    Subsystem        Subsystem-NQN                                                                         Controllers
    ---------------- ------------------------------------------------------------------------------------ -----------------------
    nvme-subsys0     nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_dhchap    nvme0, nvme1, nvme2, nvme3
    
    
    Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces
    -------- -------------------- ---------------------------------------- -------- ---------------------------------------------
    nvme0    81LGgBUqsI3EAAAAAAAE NetApp ONTAP Controller   FFFFFFFF tcp traddr=192.168.2.214,trsvcid=4420,host_traddr=192.168.2.14 nvme-subsys0 nvme0n1
    nvme1    81LGgBUqsI3EAAAAAAAE NetApp ONTAP Controller   FFFFFFFF tcp traddr=192.168.2.215,trsvcid=4420,host_traddr=192.168.2.14 nvme-subsys0 nvme0n1
    nvme2    81LGgBUqsI3EAAAAAAAE NetApp ONTAP Controller   FFFFFFFF tcp traddr=192.168.1.214,trsvcid=4420,host_traddr=192.168.1.14 nvme-subsys0 nvme0n1
    nvme3    81LGgBUqsI3EAAAAAAAE NetApp ONTAP Controller   FFFFFFFF tcp traddr=192.168.1.215,trsvcid=4420,host_traddr=192.168.1.14 nvme-subsys0 nvme0n1
    
    
    Device       Generic      NSID       Usage                 Format         Controllers
    ------------ ------------ ---------- -------------------------------------------------------------
    /dev/nvme0n1 /dev/ng0n1   0x1     1.07  GB /   1.07  GB    4 KiB +  0 B   nvme0, nvme1, nvme2, nvme3
  5. Verify that the controller state of each path is live and has the correct ANA status:

    nvme list-subsys /dev/<subsystem_name>
    NVMe/FC
    # nvme list-subsys /dev/nvme1n1
    nvme-subsys1 - NQN=nqn.1992-08.com.netapp:sn.04ba0732530911ea8e8300a098dfdd91:subsystem.nvme_145_1
    \
    +- nvme2 fc traddr=nn-0x208100a098dfdd91:pn-0x208200a098dfdd91,host_traddr=nn-0x200000109b579d5f:pn-0x100000109b579d5f live optimized
    +- nvme3 fc traddr=nn-0x208100a098dfdd91:pn-0x208500a098dfdd91,host_traddr=nn-0x200000109b579d5e:pn-0x100000109b579d5e live optimized
    +- nvme4 fc traddr=nn-0x208100a098dfdd91:pn-0x208400a098dfdd91,host_traddr=nn-0x200000109b579d5e:pn-0x100000109b579d5e live non-optimized
    +- nvme6 fc traddr=nn-0x208100a098dfdd91:pn-0x208300a098dfdd91,host_traddr=nn-0x200000109b579d5f:pn-0x100000109b579d5f live non-optimized
    NVMe/TCP
    # nvme list-subsys
    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_dhchap
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:e58eca24-faff-11ea-8fee-3a68dd3b5c5f
    iopolicy=round-robin
    
     +- nvme0 tcp traddr=192.168.2.214,trsvcid=4420,host_traddr=192.168.2.14 live
     +- nvme1 tcp traddr=192.168.2.215,trsvcid=4420,host_traddr=192.168.2.14 live
     +- nvme2 tcp traddr=192.168.1.214,trsvcid=4420,host_traddr=192.168.1.14 live
     +- nvme3 tcp traddr=192.168.1.215,trsvcid=4420,host_traddr=192.168.1.14 live
  6. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

    Column

    nvme netapp ontapdevices -o column

    Example output:

    Device           Vserver                   Namespace Path                               NSID UUID                                   Size
    ---------------- ------------------------- -----------------------------------------------------------------------------------------------
    /dev/nvme0n1     vs_CLIENT114              /vol/CLIENT114_vol_0_10/CLIENT114_ns10       1    c6586535-da8a-40fa-8c20-759ea0d69d33   1.07GB
    JSON

    nvme netapp ontapdevices -o json

    Example output:

    {
      "ONTAPdevices":[
        {
          "Device":"/dev/nvme0n1",
          "Vserver":"vs_CLIENT114",
          "Namespace_Path":"/vol/CLIENT114_vol_0_10/CLIENT114_ns10",
          "NSID":1,
          "UUID":"c6586535-da8a-40fa-8c20-759ea0d69d33",
          "Size":"1.07GB",
          "LBA_Data_Size":4096,
          "Namespace_Size":262144
        }
      ]
    }
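
If you script against this JSON output, fields can be extracted with a standard tool such as jq (a sketch; assumes jq is installed):

    # Print each ONTAP namespace device with its vserver and size.
    nvme netapp ontapdevices -o json | jq -r '.ONTAPdevices[] | "\(.Device) \(.Vserver) \(.Size)"'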

Create a persistent discovery controller

Beginning with ONTAP 9.11.1, you can create a persistent discovery controller (PDC) for your SLES 15 SP4 host by using the following procedure. A PDC is required to automatically detect NVMe subsystem add or remove scenarios and changes to the discovery log page data.

Steps
  1. Verify that the discovery log page data is available and can be retrieved through the initiator port and target LIF combination:

    nvme discover -t <trtype> -w <host-traddr> -a <traddr>
    Example output:
    Discovery Log Number of Records 16, Generation counter 14
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  0
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:discovery
    traddr:  192.168.1.214
    eflags:  explicit discovery connections, duplicate discovery information sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  0
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:discovery
    traddr:  192.168.1.215
    eflags:  explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 2======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  0
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:discovery
    traddr:  192.168.2.215
    eflags:  explicit discovery connections, duplicate discovery information sectype: none
    =====Discovery Log Entry 3======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:    not specified
    portid:  0
    trsvcid: 8009
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:discovery
    traddr:  192.168.2.214
    eflags:  explicit discovery connections, duplicate discovery information sectype: none
    =====Discovery Log Entry 4======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_none
    traddr:  192.168.1.214
    eflags:  none
    sectype: none
    =====Discovery Log Entry 5======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_none
    traddr:  192.168.1.215
    eflags:  none
    sectype: none
    =====Discovery Log Entry 6======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_none
    traddr:  192.168.2.215
    eflags:  none
    sectype: none
    =====Discovery Log Entry 7======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_none
    traddr:  192.168.2.214
    eflags:  none
    sectype: none
    =====Discovery Log Entry 8======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.subsys_CLIENT114
    traddr:  192.168.1.214
    eflags:  none
    sectype: none
    =====Discovery Log Entry 9======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.subsys_CLIENT114
    traddr:  192.168.1.215
    eflags:  none
    sectype: none
    =====Discovery Log Entry 10======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.subsys_CLIENT114
    traddr:  192.168.2.215
    eflags:  none
    sectype: none
    =====Discovery Log Entry 11======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.subsys_CLIENT114
    traddr:  192.168.2.214
    eflags:  none
    sectype: none
    =====Discovery Log Entry 12======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_dhchap
    traddr:  192.168.1.214
    eflags:  none
    sectype: none
    =====Discovery Log Entry 13======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_dhchap
    traddr:  192.168.1.215
    eflags:  none
    sectype: none
    =====Discovery Log Entry 14======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_dhchap
    traddr:  192.168.2.215
    eflags:  none
    sectype: none
    =====Discovery Log Entry 15======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: 4420
    subnqn:  nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_dhchap
    traddr:  192.168.2.214
    eflags:  none
    sectype: none
  2. Create a PDC for the discovery subsystem:

    nvme discover -t <trtype> -w <host-traddr> -a <traddr> -p

    Example:

    nvme discover -t tcp -w 192.168.1.16 -a 192.168.1.116 -p
  3. From the ONTAP controller, verify that the PDC has been created:

    vserver nvme show-discovery-controller -instance -vserver <vserver_name>

    Example output:

    vserver nvme show-discovery-controller -instance -vserver vs_nvme175

    Vserver Name: vs_CLIENT116
    Controller ID: 00C0h
    Discovery Subsystem NQN: nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:discovery
    Logical Interface UUID: d23cbb0a-c0a6-11ec-9731-d039ea165abc
    Logical Interface: CLIENT116_lif_4a_1
    Node: A400-14-124
    Host NQN: nqn.2014-08.org.nvmexpress:uuid:12372496-59c4-4d1b-be09-74362c0c1afc
    Transport Protocol: nvme-tcp
    Initiator Transport Address: 192.168.1.16
    Host Identifier: 59de25be738348f08a79df4bce9573f3
    Admin Queue Depth: 32
    Header Digest Enabled: false
    Data Digest Enabled: false
    Vserver UUID: 48391d66-c0a6-11ec-aaa5-d039ea165514
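
On the host, the PDC appears as an additional controller whose subsystem NQN is the discovery NQN. One way to spot it (a sketch using standard sysfs attributes):

    # List controllers whose subsystem NQN contains "discovery".
    grep -l discovery /sys/class/nvme/nvme*/subsysnqn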

Set up secure in-band authentication

Beginning with ONTAP 9.12.1, secure, in-band authentication is supported over NVMe/TCP and NVMe/FC between your SLES 15 SP4 host and your ONTAP controller.

To set up secure authentication, each host or controller must be associated with a DH-HMAC-CHAP key, which is a combination of the NQN of the NVMe host or controller and an authentication secret configured by the administrator. To authenticate its peer, an NVMe host or controller must recognize the key associated with the peer.

You can set up secure in-band authentication using the CLI or a config JSON file. If you need to specify different dhchap keys for different subsystems, you must use a config JSON file.

CLI
Steps
  1. Obtain the host NQN:

    cat /etc/nvme/hostnqn
  2. Generate the dhchap key for the SLES 15 SP4 host:

    nvme gen-dhchap-key -s <optional_secret> -l <key_length> -m <HMAC_function> -n <host_nqn>

    • -s: secret key in hexadecimal characters used to initialize the host key
    • -l: length of the resulting key in bytes (32, 48, or 64)
    • -m: HMAC function to use for key transformation (0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512)
    • -n: host NQN to use for key transformation

    In the following example, a random dhchap key with HMAC set to 3 (SHA-512) is generated:

    # nvme gen-dhchap-key -m 3 -n nqn.2014-08.org.nvmexpress:uuid:d3ca725a-ac8d-4d88-b46a-174ac235139b
    DHHC-1:03:J2UJQfj9f0pLnpF/ASDJRTyILKJRr5CougGpGdQSysPrLu6RW1fGl5VSjbeDF1n1DEh3nVBe19nQ/LxreSBeH/bx/pU=:
  3. On the ONTAP controller, add the host and specify both dhchap keys:

    vserver nvme subsystem host add -vserver <svm_name> -subsystem <subsystem> -host-nqn <host_nqn> -dhchap-host-secret <authentication_host_secret> -dhchap-controller-secret <authentication_controller_secret> -dhchap-hash-function {sha-256|sha-512} -dhchap-group {none|2048-bit|3072-bit|4096-bit|6144-bit|8192-bit}
  4. A host supports two types of authentication methods: unidirectional and bidirectional. On the host, connect to the ONTAP controller and specify the dhchap keys based on the chosen authentication method:

    nvme connect -t tcp -w <host-traddr> -a <traddr> -n <host_nqn> -S <authentication_host_secret> -C <authentication_controller_secret>
  5. Validate the nvme connect authentication command by verifying the host and controller dhchap keys:

    1. Verify the host dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_secret

      Example output for unidirectional configuration:

      SR650-14-114:~ # cat /sys/class/nvme-subsystem/nvme-subsys1/nvme*/dhchap_secret
      DHHC-1:03:je1nQCmjJLUKD62mpYbzlpuw0OIws86NB96uNO/t3jbvhp7fjyR9bIRjOHg8wQtye1JCFSMkBQH3pTKGdYR1OV9gx00=:
      DHHC-1:03:je1nQCmjJLUKD62mpYbzlpuw0OIws86NB96uNO/t3jbvhp7fjyR9bIRjOHg8wQtye1JCFSMkBQH3pTKGdYR1OV9gx00=:
      DHHC-1:03:je1nQCmjJLUKD62mpYbzlpuw0OIws86NB96uNO/t3jbvhp7fjyR9bIRjOHg8wQtye1JCFSMkBQH3pTKGdYR1OV9gx00=:
      DHHC-1:03:je1nQCmjJLUKD62mpYbzlpuw0OIws86NB96uNO/t3jbvhp7fjyR9bIRjOHg8wQtye1JCFSMkBQH3pTKGdYR1OV9gx00=:
    2. Verify the controller dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_ctrl_secret

      Example output for bidirectional configuration:

      SR650-14-114:~ # cat /sys/class/nvme-subsystem/nvme-subsys6/nvme*/dhchap_ctrl_secret
      DHHC-1:03:WorVEV83eYO53kV4Iel5OpphbX5LAphO3F8fgH3913tlrkSGDBJTt3crXeTUB8fCwGbPsEyz6CXxdQJi6kbn4IzmkFU=:
      DHHC-1:03:WorVEV83eYO53kV4Iel5OpphbX5LAphO3F8fgH3913tlrkSGDBJTt3crXeTUB8fCwGbPsEyz6CXxdQJi6kbn4IzmkFU=:
      DHHC-1:03:WorVEV83eYO53kV4Iel5OpphbX5LAphO3F8fgH3913tlrkSGDBJTt3crXeTUB8fCwGbPsEyz6CXxdQJi6kbn4IzmkFU=:
      DHHC-1:03:WorVEV83eYO53kV4Iel5OpphbX5LAphO3F8fgH3913tlrkSGDBJTt3crXeTUB8fCwGbPsEyz6CXxdQJi6kbn4IzmkFU=:
JSON file

You can use the /etc/nvme/config.json file with the nvme connect-all command when multiple NVMe subsystems are available in the ONTAP controller configuration.

You can generate the JSON file using the -o option. Refer to the nvme connect-all man page for more syntax options.

Steps
  1. Configure the JSON file:

    # cat /etc/nvme/config.json
    [
     {
        "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:12372496-59c4-4d1b-be09-74362c0c1afc",
        "hostid":"3ae10b42-21af-48ce-a40b-cfb5bad81839",
        "dhchap_key":"DHHC-1:03:Cu3ZZfIz1WMlqZFnCMqpAgn/T6EVOcIFHez215U+Pow8jTgBF2UbNk3DK4wfk2EptWpna1rpwG5CndpOgxpRxh9m41w=:"
     },
    
     {
        "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:12372496-59c4-4d1b-be09-74362c0c1afc",
        "subsystems":[
            {
                "nqn":"nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:subsystem.subsys_CLIENT116",
                "ports":[
                   {
                        "transport":"tcp",
                        "traddr":"192.168.1.117",
                        "host_traddr":"192.168.1.16",
                        "trsvcid":"4420",
                        "dhchap_ctrl_key":"DHHC-1:01:0h58bcT/uu0rCpGsDYU6ZHZvRuVqsYKuBRS0Nu0VPx5HEwaZ:"
                   },
                   {
                        "transport":"tcp",
                        "traddr":"192.168.1.116",
                        "host_traddr":"192.168.1.16",
                        "trsvcid":"4420",
                        "dhchap_ctrl_key":"DHHC-1:01:0h58bcT/uu0rCpGsDYU6ZHZvRuVqsYKuBRS0Nu0VPx5HEwaZ:"
                   },
                   {
                        "transport":"tcp",
                        "traddr":"192.168.2.117",
                        "host_traddr":"192.168.2.16",
                        "trsvcid":"4420",
                        "dhchap_ctrl_key":"DHHC-1:01:0h58bcT/uu0rCpGsDYU6ZHZvRuVqsYKuBRS0Nu0VPx5HEwaZ:"
                   },
                   {
                        "transport":"tcp",
                        "traddr":"192.168.2.116",
                        "host_traddr":"192.168.2.16",
                        "trsvcid":"4420",
                        "dhchap_ctrl_key":"DHHC-1:01:0h58bcT/uu0rCpGsDYU6ZHZvRuVqsYKuBRS0Nu0VPx5HEwaZ:"
                   }
               ]
           }
       ]
     }
    ]
    
    Note In the preceding example, dhchap_key corresponds to dhchap_secret and dhchap_ctrl_key corresponds to dhchap_ctrl_secret.
  2. Connect to the ONTAP controller using the config JSON file:

    nvme connect-all -J /etc/nvme/config.json

    Example output:

    traddr=192.168.2.116 is already connected
    traddr=192.168.1.116 is already connected
    traddr=192.168.2.117 is already connected
    traddr=192.168.1.117 is already connected
    traddr=192.168.2.117 is already connected
    traddr=192.168.1.117 is already connected
    traddr=192.168.2.116 is already connected
    traddr=192.168.1.116 is already connected
    traddr=192.168.2.116 is already connected
    traddr=192.168.1.116 is already connected
    traddr=192.168.2.117 is already connected
    traddr=192.168.1.117 is already connected
  3. Verify that the dhchap secrets have been enabled for the respective controllers for each subsystem:

    1. Verify the host dhchap keys:

      # cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_secret

      Example output:

      DHHC-1:01:NunEWY7AZlXqxITGheByarwZdQvU4ebZg9HOjIr6nOHEkxJg:
    2. Verify the controller dhchap keys:

      # cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_ctrl_secret

      Example output:

      DHHC-1:03:2YJinsxa2v3+m8qqCiTnmgBZoH6mIT6G/6f0aGO8viVZB4VLNLH4z8CvK7pVYxN6S5fOAtaU3DNi12rieRMfdbg3704=:
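
To audit the secrets across all subsystems and controllers in one pass, you can loop over the same sysfs attributes (a sketch):

    # Print every controller's host secret and, if set, controller secret.
    for f in /sys/class/nvme-subsystem/nvme-subsys*/nvme*/dhchap_secret \
             /sys/class/nvme-subsystem/nvme-subsys*/nvme*/dhchap_ctrl_secret; do
        printf '%s: %s\n' "$f" "$(cat "$f")"
    done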

Known issues

There are no known issues for the SLES 15 SP4 with ONTAP release.