Configure SUSE Linux Enterprise Server 16 for NVMe-oF with ONTAP storage
The SUSE Linux Enterprise Server 16 host supports the NVMe over Fibre Channel (NVMe/FC) and NVMe over TCP (NVMe/TCP) protocols with Asymmetric Namespace Access (ANA). ANA provides multipathing functionality equivalent to asymmetric logical unit access (ALUA) in iSCSI and FCP environments.
Learn how to configure NVMe over Fabrics (NVMe-oF) hosts for SUSE Linux Enterprise Server 16. For more support and feature information, see ONTAP support and features.
NVMe-oF with SUSE Linux Enterprise Server 16 has the following known limitations:
- The nvme disconnect-all command disconnects both root and data filesystems and might lead to system instability. Do not issue this command on systems that boot from SAN over NVMe/TCP or NVMe/FC namespaces.
- NetApp sanlun host utility support isn't available for NVMe-oF. Instead, you can rely on the NetApp plug-in included in the native nvme-cli package for all NVMe-oF transports.
Step 1: Optionally, enable SAN booting
You can configure your host to use SAN booting to simplify deployment and improve scalability. Use the Interoperability Matrix Tool to verify that your Linux OS, host bus adapter (HBA), HBA firmware, HBA boot BIOS, and ONTAP version support SAN booting.
- Enable SAN booting in the server BIOS for the ports to which the SAN boot namespace is mapped.
  For information on how to enable the HBA BIOS, see your vendor-specific documentation.
- Reboot the host and verify that the OS is up and running.
Step 2: Install SUSE Linux Enterprise Server and NVMe software and verify your configuration
To configure your host for NVMe-oF, you need to install the required software packages, enable multipathing, and verify your host NQN configuration.
- Install SUSE Linux Enterprise Server 16 on the server. After the installation is complete, verify that you are running the specified SUSE Linux Enterprise Server 16 kernel:
  uname -r
  Example SUSE Linux Enterprise Server kernel version:
  6.12.0-160000.6-default
- Install the nvme-cli package, and verify the installed version:
  rpm -qa | grep nvme-cli
  The following example shows an nvme-cli package version:
  nvme-cli-2.11+29.g35e62868-160000.1.1.x86_64
- Install the libnvme package, and verify the installed version:
  rpm -qa | grep libnvme
  The following example shows a libnvme package version:
  libnvme1-1.11+17.g6d55624d-160000.1.1.x86_64
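If either package is missing, you can typically install it from the SUSE repositories. The following sketch assumes the package names shown in the examples above (nvme-cli and libnvme1):
  # install the NVMe management tooling; libnvme1 is normally pulled in as a dependency
  zypper install nvme-cli libnvme1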
- On the host, check the hostnqn string at /etc/nvme/hostnqn:
  cat /etc/nvme/hostnqn
  The following example shows a hostnqn string:
  nqn.2014-08.org.nvmexpress:uuid:d3b581b4-c975-11e6-8425-0894ef31a074
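If /etc/nvme/hostnqn is missing or empty, one possible way to create it (an assumption, not part of the original procedure) is to generate a new host NQN with nvme-cli and review it before mapping any subsystems to it:
  # generate a host NQN and store it where nvme-cli expects it
  nvme gen-hostnqn > /etc/nvme/hostnqn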
- On the ONTAP system, verify that the hostnqn string matches the hostnqn string for the corresponding subsystem on the ONTAP array:
  ::> vserver nvme subsystem host show -vserver vs_coexistence_emulex
  Show example
  Vserver               Subsystem  Priority  Host NQN
  -------               ---------  --------  ------------------------------------------------
  vs_coexistence_emulex nvme1      regular   nqn.2014-08.org.nvmexpress:uuid:d3b581b4-c975-11e6-8425-0894ef31a074
                        nvme10     regular   nqn.2014-08.org.nvmexpress:uuid:d3b581b4-c975-11e6-8425-0894ef31a074
                        nvme11     regular   nqn.2014-08.org.nvmexpress:uuid:d3b581b4-c975-11e6-8425-0894ef31a074
                        nvme12     regular   nqn.2014-08.org.nvmexpress:uuid:d3b581b4-c975-11e6-8425-0894ef31a074
  4 entries were displayed.
  If the hostnqn strings do not match, use the vserver modify command to update the hostnqn string on your corresponding ONTAP array subsystem to match the hostnqn string from /etc/nvme/hostnqn on the host.
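As a sketch of one way to correct a mismatch (the subsystem name nvme1 and the old host NQN placeholder are illustrative assumptions), you can remove the stale host entry and add the host NQN taken from /etc/nvme/hostnqn:
  ::> vserver nvme subsystem host remove -vserver vs_coexistence_emulex -subsystem nvme1 -host-nqn <old_host_nqn>
  ::> vserver nvme subsystem host add -vserver vs_coexistence_emulex -subsystem nvme1 -host-nqn nqn.2014-08.org.nvmexpress:uuid:d3b581b4-c975-11e6-8425-0894ef31a074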
Step 3: Configure NVMe/FC and NVMe/TCP
Configure NVMe/FC with Broadcom/Emulex or Marvell/QLogic adapters, or configure NVMe/TCP using manual discovery and connect operations.
Configure NVMe/FC for a Broadcom/Emulex FC adapter.
- Verify that you are using the supported adapter model:
  - Display the model names:
    cat /sys/class/scsi_host/host*/modelname
    You should see the following output:
    SN37A92079
    SN37A92079
  - Display the model descriptions:
    cat /sys/class/scsi_host/host*/modeldesc
    You should see the following output:
    Emulex SN37A92079 32Gb 2-Port Fibre Channel Adapter
    Emulex SN37A92079 32Gb 2-Port Fibre Channel Adapter
- Verify that you are using the recommended Broadcom lpfc firmware and inbox driver:
  - Display the firmware version:
    cat /sys/class/scsi_host/host*/fwrev
    The following example shows firmware versions:
    14.4.393.53, sli-4:6:d
    14.4.393.53, sli-4:6:d
  - Display the inbox driver version:
    cat /sys/module/lpfc/version
    The following example shows a driver version:
    0:14.4.0.11
  For the current list of supported adapter driver and firmware versions, see the Interoperability Matrix Tool.
- Verify that lpfc_enable_fc4_type is set to 3:
  cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
  The expected output is 3.
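If the parameter isn't 3, one common way to set it (an assumption about your setup, not part of this procedure) is through a modprobe options file, followed by an initrd rebuild and a reboot:
  # append the option so both SCSI (FCP) and NVMe (FC-NVMe) support are enabled in the lpfc driver;
  # appending preserves any other lpfc options already present in the file
  echo "options lpfc lpfc_enable_fc4_type=3" >> /etc/modprobe.d/lpfc.conf
  dracut -f
  reboot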
- Verify that you can view your initiator ports:
  cat /sys/class/fc_host/host*/port_name
  You should see an output similar to:
  0x100000109bdacc75
  0x100000109bdacc76
- Verify that your initiator ports are online:
  cat /sys/class/fc_host/host*/port_state
  You should see the following output:
  Online
  Online
- Verify that the NVMe/FC initiator ports are enabled and that the target ports are visible:
  cat /sys/class/scsi_host/host*/nvme_info
  Show example output
NVME Initiator Enabled XRI Dist lpfc0 Total 6144 IO 5894 ELS 250 NVME LPORT lpfc0 WWPN x100000109bdacc75 WWNN x200000109bdacc75 DID x060100 ONLINE NVME RPORT WWPN x2001d039ea951c45 WWNN x2000d039ea951c45 DID x080801 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2003d039ea951c45 WWNN x2000d039ea951c45 DID x080d01 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2024d039eab31e9c WWNN x2023d039eab31e9c DID x020a09 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2026d039eab31e9c WWNN x2023d039eab31e9c DID x020a08 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2003d039ea5cfc90 WWNN x2002d039ea5cfc90 DID x061b01 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2012d039ea5cfc90 WWNN x2011d039ea5cfc90 DID x061b05 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2005d039ea5cfc90 WWNN x2002d039ea5cfc90 DID x061201 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2014d039ea5cfc90 WWNN x2011d039ea5cfc90 DID x061205 TARGET DISCSRVC ONLINE NVME Statistics LS: Xmt 0000017242 Cmpl 0000017242 Abort 00000000 LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000 Total FCP Cmpl 0000000000378362 Issue 00000000003783c7 OutIO 0000000000000065 abort 00000409 noxri 00000000 nondlp 0000003a qdepth 00000000 wqerr 00000000 err 00000000 FCP CMPL: xb 00000409 Err 0000040a NVME Initiator Enabled XRI Dist lpfc1 Total 6144 IO 5894 ELS 250 NVME LPORT lpfc1 WWPN x100000109bdacc76 WWNN x200000109bdacc76 DID x062800 ONLINE NVME RPORT WWPN x2002d039ea951c45 WWNN x2000d039ea951c45 DID x080701 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2004d039ea951c45 WWNN x2000d039ea951c45 DID x081501 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2025d039eab31e9c WWNN x2023d039eab31e9c DID x020913 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2027d039eab31e9c WWNN x2023d039eab31e9c DID x020912 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2006d039ea5cfc90 WWNN x2002d039ea5cfc90 DID x061401 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2015d039ea5cfc90 WWNN x2011d039ea5cfc90 DID x061405 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2004d039ea5cfc90 WWNN x2002d039ea5cfc90 DID x061301 TARGET DISCSRVC ONLINE NVME RPORT WWPN x2013d039ea5cfc90 WWNN x2011d039ea5cfc90 DID x061305 TARGET DISCSRVC ONLINE NVME Statistics LS: Xmt 0000017428 Cmpl 0000017428 Abort 00000000 LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000 Total FCP Cmpl 00000000003443be Issue 000000000034442a OutIO 000000000000006c abort 00000491 noxri 00000000 nondlp 00000086 qdepth 00000000 wqerr 00000000 err 00000000 FCP CMPL: xb 00000491 Err 00000494
Configure NVMe/FC for a Marvell/QLogic adapter.
- Verify that you are running the supported adapter driver and firmware versions:
  cat /sys/class/fc_host/host*/symbolic_name
  The following example shows driver and firmware versions:
  QLE2772 FW:v9.15.06 DVR:v10.02.09.400-k-debug
  QLE2772 FW:v9.15.06 DVR:v10.02.09.400-k-debug
- Verify that ql2xnvmeenable is set. This enables the Marvell adapter to function as an NVMe/FC initiator:
  cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
  The expected output is 1.
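If the output is 0, one possible way to enable the parameter (an assumption about your setup, not part of this procedure) is through a modprobe options file, followed by an initrd rebuild and a reboot:
  # enable NVMe/FC initiator mode in the qla2xxx driver
  echo "options qla2xxx ql2xnvmeenable=1" > /etc/modprobe.d/qla2xxx.conf
  dracut -f
  reboot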
The NVMe/TCP protocol doesn't support the auto-connect operation. Instead, you can discover the NVMe/TCP subsystems and namespaces by performing the NVMe/TCP connect or connect-all operations manually.
- Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:
nvme discover -t tcp -w <host-traddr> -a <traddr>
Show example output
nvme discover -t tcp -w 192.168.38.20 -a 192.168.38.10 Discovery Log Number of Records 8, Generation counter 42 =====Discovery Log Entry 0====== trtype: tcp adrfam: ipv4 subtype: current discovery subsystem treq: not specified portid: 4 trsvcid: 8009 subnqn: nqn.1992-08.com.netapp:sn.f8e2af201b7211f0ac2bd039eab67a95:discovery traddr: 192.168.211.71 eflags: explicit discovery connections, duplicate discovery information sectype: none =====Discovery Log Entry 1====== trtype: tcp adrfam: ipv4 subtype: current discovery subsystem treq: not specified portid: 3 trsvcid: 8009 subnqn: nqn.1992-08.com.netapp:sn.f8e2af201b7211f0ac2bd039eab67a95:discovery traddr: 192.168.111.71 eflags: explicit discovery connections, duplicate discovery information sectype: none =====Discovery Log Entry 2====== trtype: tcp adrfam: ipv4 subtype: current discovery subsystem treq: not specified portid: 2 trsvcid: 8009 subnqn: nqn.1992-08.com.netapp:sn.f8e2af201b7211f0ac2bd039eab67a95:discovery traddr: 192.168.211.70 eflags: explicit discovery connections, duplicate discovery information sectype: none =====Discovery Log Entry 3====== trtype: tcp adrfam: ipv4 subtype: current discovery subsystem treq: not specified portid: 1 trsvcid: 8009 subnqn: nqn.1992-08.com.netapp:sn.f8e2af201b7211f0ac2bd039eab67a95:discovery traddr: 192.168.111.70 eflags: explicit discovery connections, duplicate discovery information sectype: none =====Discovery Log Entry 4====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified portid: 4 trsvcid: 4420 subnqn: nqn.1992-08.com.netapp:sn.f8e2af201b7211f0ac2bd039eab67a95:subsystem.sample_tcp_sub traddr: 192.168.211.71 eflags: none sectype: none =====Discovery Log Entry 5====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified portid: 3 trsvcid: 4420 subnqn: nqn.1992-08.com.netapp:sn.f8e2af201b7211f0ac2bd039eab67a95:subsystem.sample_tcp_sub traddr: 192.168.111.71 eflags: none sectype: none =====Discovery Log Entry 6====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified portid: 2 trsvcid: 4420 subnqn: nqn.1992-08.com.netapp:sn.f8e2af201b7211f0ac2bd039eab67a95:subsystem.sample_tcp_sub traddr: 192.168.211.70 eflags: none sectype: none =====Discovery Log Entry 7====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified portid: 1 trsvcid: 4420 subnqn: nqn.1992-08.com.netapp:sn.f8e2af201b7211f0ac2bd039eab67a95:subsystem.sample_tcp_sub traddr: 192.168.111.70 eflags: none sectype: none localhost:~ #
- Verify that all other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:
nvme discover -t tcp -w <host-traddr> -a <traddr>
Show example
nvme discover -t tcp -w 192.168.38.20 -a 192.168.38.10
nvme discover -t tcp -w 192.168.38.20 -a 192.168.38.11
nvme discover -t tcp -w 192.168.39.20 -a 192.168.39.10
nvme discover -t tcp -w 192.168.39.20 -a 192.168.39.11
- Run the nvme connect-all command across all the supported NVMe/TCP initiator-target LIFs across the nodes:
  nvme connect-all -t tcp -w <host-traddr> -a <traddr>
Show example
nvme connect-all -t tcp -w 192.168.38.20 -a 192.168.38.10
nvme connect-all -t tcp -w 192.168.38.20 -a 192.168.38.11
nvme connect-all -t tcp -w 192.168.39.20 -a 192.168.39.10
nvme connect-all -t tcp -w 192.168.39.20 -a 192.168.39.11
Note: The setting for the NVMe/TCP
Step 4: Optionally, modify the iopolicy in the udev rules
Beginning with SUSE Linux Enterprise Server 16, the default iopolicy for NVMe-oF is set to queue-depth. If you want to change the iopolicy to round-robin, modify the udev rules file as follows:
- Open the udev rules file /usr/lib/udev/rules.d/71-nvmf-netapp.rules in a text editor with root privileges:
  vi /usr/lib/udev/rules.d/71-nvmf-netapp.rules
- Find the line that sets the iopolicy for the NetApp ONTAP Controller, as shown in the following example rule:
  ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="NetApp ONTAP Controller", ATTR{iopolicy}="queue-depth"
- Modify the rule so that queue-depth becomes round-robin:
  ACTION=="add", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="NetApp ONTAP Controller", ATTR{iopolicy}="round-robin"
- Reload the udev rules and apply the changes:
  udevadm control --reload
  udevadm trigger --subsystem-match=nvme-subsystem
- Verify the current iopolicy for your subsystem. Replace <subsystem> with your subsystem name, for example, nvme-subsys0:
  cat /sys/class/nvme-subsystem/<subsystem>/iopolicy
  You should see the following output:
  round-robin
Note: The new iopolicy applies automatically to matching NetApp ONTAP Controller devices. You don't need to reboot.
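If you want to try a policy on a running system before editing the udev rule, writing the value directly to sysfs is one possible approach. This is a sketch that assumes your subsystem is nvme-subsys0; the change is not persistent across reboots:
  # switch the load-balancing policy for one subsystem at runtime
  echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
  # confirm the change
  cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy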
Step 5: Optionally, enable 1MB I/O for NVMe/FC
ONTAP reports a Max Data Transfer Size (MDTS) of 8 in the Identify Controller data. This means that the maximum I/O request size can be up to 1MB. To issue I/O requests of 1MB size for a Broadcom NVMe/FC host, you must increase the lpfc_sg_seg_cnt parameter value to 256 from the default value of 64.
Note: These steps don't apply to QLogic NVMe/FC hosts.
- Set the lpfc_sg_seg_cnt parameter to 256 in the /etc/modprobe.d/lpfc.conf file, and then verify the setting:
  cat /etc/modprobe.d/lpfc.conf
  You should see an output similar to the following example:
  options lpfc lpfc_sg_seg_cnt=256
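One possible way to add the option (an assumption about how you manage modprobe files) is to append it, so that any existing lpfc options in the file are preserved:
  # append the scatter-gather segment count option for the lpfc driver
  echo "options lpfc lpfc_sg_seg_cnt=256" >> /etc/modprobe.d/lpfc.conf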
- Run the dracut -f command, and reboot the host.
- Verify that the value for lpfc_sg_seg_cnt is 256:
  cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
Step 6: Verify NVMe boot services
The nvmefc-boot-connections.service and nvmf-autoconnect.service boot services included in the NVMe/FC nvme-cli package are automatically enabled when the system boots.
After booting completes, verify that the nvmefc-boot-connections.service and nvmf-autoconnect.service boot services are enabled.
- Verify that nvmf-autoconnect.service is enabled:
  systemctl status nvmf-autoconnect.service
  Show example output
nvmf-autoconnect.service - Connect NVMe-oF subsystems automatically during boot
   Loaded: loaded (/usr/lib/systemd/system/nvmf-autoconnect.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Thu 2024-05-25 14:55:00 IST; 11min ago
  Process: 2108 ExecStartPre=/sbin/modprobe nvme-fabrics (code=exited, status=0/SUCCESS)
  Process: 2114 ExecStart=/usr/sbin/nvme connect-all (code=exited, status=0/SUCCESS)
 Main PID: 2114 (code=exited, status=0/SUCCESS)

systemd[1]: Starting Connect NVMe-oF subsystems automatically during boot...
nvme[2114]: traddr=nn-0x201700a098fd4ca6:pn-0x201800a098fd4ca6 is already connected
systemd[1]: nvmf-autoconnect.service: Deactivated successfully.
systemd[1]: Finished Connect NVMe-oF subsystems automatically during boot.
- Verify that nvmefc-boot-connections.service is enabled:
  systemctl status nvmefc-boot-connections.service
  Show example output
nvmefc-boot-connections.service - Auto-connect to subsystems on FC-NVME devices found during boot
   Loaded: loaded (/usr/lib/systemd/system/nvmefc-boot-connections.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2024-05-25 14:55:00 IST; 11min ago
 Main PID: 1647 (code=exited, status=0/SUCCESS)

systemd[1]: Starting Auto-connect to subsystems on FC-NVME devices found during boot...
systemd[1]: nvmefc-boot-connections.service: Succeeded.
systemd[1]: Finished Auto-connect to subsystems on FC-NVME devices found during boot.
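If either service reports as disabled, you can typically enable it for subsequent boots with systemctl (shown here as a sketch; the service names are taken from the output above):
  # enable both NVMe-oF boot-time connection services
  systemctl enable nvmf-autoconnect.service
  systemctl enable nvmefc-boot-connections.service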
Step 7: Verify the multipathing configuration
Verify that the in-kernel NVMe multipath status, ANA status, and ONTAP namespaces are correct for the NVMe-oF configuration.
- Verify that the in-kernel NVMe multipath is enabled:
  cat /sys/module/nvme_core/parameters/multipath
  You should see the following output:
  Y
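If the output is N, native NVMe multipathing is disabled on your host. One common way to enable it (an assumption about your boot configuration, not part of this procedure) is to set the module parameter and rebuild the initrd:
  # enable native NVMe multipathing; the file name is illustrative
  echo "options nvme_core multipath=Y" > /etc/modprobe.d/50-nvme_core.conf
  dracut -f
  reboot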
- Verify that the appropriate NVMe-oF settings (such as the model set to NetApp ONTAP Controller and the load-balancing iopolicy set to queue-depth) for the respective ONTAP namespaces are correctly reflected on the host:
  - Display the subsystems:
    cat /sys/class/nvme-subsystem/nvme-subsys*/model
    You should see the following output:
    NetApp ONTAP Controller
    NetApp ONTAP Controller
  - Display the policy:
    cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
    You should see the following output:
    queue-depth
    queue-depth
- Verify that the namespaces are created and correctly discovered on the host:
  nvme list
  Show example
Node          SN                    Model
------------- --------------------- ------------------------
/dev/nvme7n1  81Ix2BVuekWcAAAAAAAB  NetApp ONTAP Controller

Namespace  Usage                Format       FW Rev
---------- -------------------- ------------ --------
           21.47 GB / 21.47 GB  4 KiB + 0 B  FFFFFFFF
- Verify that the controller state of each path is live and has the correct ANA status:
  nvme list-subsys /dev/<controller_ID>
  Beginning with ONTAP 9.16.1, NVMe/FC and NVMe/TCP report all paths as optimized on ASA r2 systems.
  NVMe/FC
  The following example outputs show a namespace hosted on a two-node ONTAP controller for AFF, FAS, and ASA systems and for an ASA r2 system with NVMe/FC.
Show AFF, FAS, and ASA example output
nvme-subsys114 - NQN=nqn.1992-08.com.netapp:sn.9e30b9760a4911f08c87d039eab67a95:subsystem.sles_161_27
                 hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6517cae-3133-11e8-bbff-7ed30aef123f
                 iopolicy=round-robin
\
 +- nvme114 fc traddr=nn-0x234ed039ea359e4a:pn-0x2360d039ea359e4a,host_traddr=nn-0x20000090fae0ec88:pn-0x10000090fae0ec88 live optimized
 +- nvme115 fc traddr=nn-0x234ed039ea359e4a:pn-0x2362d039ea359e4a,host_traddr=nn-0x20000090fae0ec88:pn-0x10000090fae0ec88 live non-optimized
 +- nvme116 fc traddr=nn-0x234ed039ea359e4a:pn-0x2361d039ea359e4a,host_traddr=nn-0x20000090fae0ec89:pn-0x10000090fae0ec89 live optimized
 +- nvme117 fc traddr=nn-0x234ed039ea359e4a:pn-0x2363d039ea359e4a,host_traddr=nn-0x20000090fae0ec89:pn-0x10000090fae0ec89 live non-optimized
Show ASA r2 example output
nvme-subsys96 - NQN=nqn.1992-08.com.netapp:sn.b351b2b6777b11f0b3c2d039ea5cfc91:subsystem.nvme24
                hostnqn=nqn.2014-08.org.nvmexpress:uuid:d3b581b4-c975-11e6-8425-0894ef31a074
\
 +- nvme203 fc traddr=nn-0x2011d039ea5cfc90:pn-0x2015d039ea5cfc90,host_traddr=nn-0x200000109bdacc76:pn-0x100000109bdacc76 live optimized
 +- nvme25 fc traddr=nn-0x2011d039ea5cfc90:pn-0x2014d039ea5cfc90,host_traddr=nn-0x200000109bdacc75:pn-0x100000109bdacc75 live optimized
 +- nvme30 fc traddr=nn-0x2011d039ea5cfc90:pn-0x2012d039ea5cfc90,host_traddr=nn-0x200000109bdacc75:pn-0x100000109bdacc75 live optimized
 +- nvme32 fc traddr=nn-0x2011d039ea5cfc90:pn-0x2013d039ea5cfc90,host_traddr=nn-0x200000109bdacc76:pn-0x100000109bdacc76 live optimized
NVMe/TCP
The following example outputs show a namespace hosted on a two-node ONTAP controller for AFF, FAS, and ASA systems and for ASA r2 systems with NVMe/TCP.
Show AFF, FAS, and ASA example output
nvme-subsys9 - NQN=nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme10
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b7c04f444d33
\
 +- nvme105 tcp traddr=192.168.39.10,trsvcid=4420,host_traddr=192.168.39.20,src_addr=192.168.39.20 live optimized
 +- nvme153 tcp traddr=192.168.39.11,trsvcid=4420,host_traddr=192.168.39.20,src_addr=192.168.39.20 live non-optimized
 +- nvme57 tcp traddr=192.168.38.11,trsvcid=4420,host_traddr=192.168.38.20,src_addr=192.168.38.20 live non-optimized
 +- nvme9 tcp traddr=192.168.38.10,trsvcid=4420,host_traddr=192.168.38.20,src_addr=192.168.38.20 live optimized
Show ASA r2 example output
nvme-subsys4 - NQN=nqn.1992-08.com.netapp:sn.17e32b6e8c7f11f09545d039eac03c33:subsystem.Bidirectional_DHCP_1_0
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0054-5110-8039-c3c04f523034
\
 +- nvme4 tcp traddr=192.168.20.28,trsvcid=4420,host_traddr=192.168.20.21,src_addr=192.168.20.21 live optimized
 +- nvme5 tcp traddr=192.168.20.29,trsvcid=4420,host_traddr=192.168.20.21,src_addr=192.168.20.21 live optimized
 +- nvme6 tcp traddr=192.168.21.28,trsvcid=4420,host_traddr=192.168.21.21,src_addr=192.168.21.21 live optimized
 +- nvme7 tcp traddr=192.168.21.29,trsvcid=4420,host_traddr=192.168.21.21,src_addr=192.168.21.21 live optimized
- Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:
  Column
  nvme netapp ontapdevices -o column
  Show example
  Device           Vserver                 Namespace Path  NSID  UUID                                   Size
  ---------------- ----------------------- --------------- ----- -------------------------------------- ---------
  /dev/nvme0n1     vs_coexistence_emulex   ns1             1     79510f05-7784-11f0-b3c2-d039ea5cfc91   21.47GB
  JSON
  nvme netapp ontapdevices -o json
  Show example
  {
    "ONTAPdevices":[
      {
        "Device":"/dev/nvme0n1",
        "Vserver":"vs_coexistence_emulex",
        "Namespace_Path":"ns1",
        "NSID":1,
        "UUID":"79510f05-7784-11f0-b3c2-d039ea5cfc91",
        "Size":"21.47GB",
        "LBA_Data_Size":4096,
        "Namespace_Size":5242880
      }
    ]
  }
Step 8: Create a persistent discovery controller
You can create a persistent discovery controller (PDC) for a SUSE Linux Enterprise Server 16 host. A PDC is required to automatically detect an NVMe subsystem add or remove operation and changes to the discovery log page data.
- Verify that the discovery log page data is available and can be retrieved through the initiator port and target LIF combination:
  nvme discover -t <trtype> -w <host-traddr> -a <traddr>
  Show example output
Discovery Log Number of Records 8, Generation counter 10 =====Discovery Log Entry 0====== trtype: tcp adrfam: ipv4 subtype: current discovery subsystem treq: not specified portid: 3 trsvcid: 8009 subnqn: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:discovery traddr: 192.168.39.10 eflags: explicit discovery connections, duplicate discovery information sectype: none =====Discovery Log Entry 1====== trtype: tcp adrfam: ipv4 subtype: current discovery subsystem treq: not specified portid: 1 trsvcid: 8009 subnqn: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:discovery traddr: 192.168.38.10 eflags: explicit discovery connections, duplicate discovery information sectype: none =====Discovery Log Entry 2====== trtype: tcp adrfam: ipv4 subtype: current discovery subsystem treq: not specified portid: 4 trsvcid: 8009 subnqn: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:discovery traddr: 192.168.39.11 eflags: explicit discovery connections, duplicate discovery information sectype: none =====Discovery Log Entry 3====== trtype: tcp adrfam: ipv4 subtype: current discovery subsystem treq: not specified portid: 2 trsvcid: 8009 subnqn: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:discovery traddr: 192.168.38.11 eflags: explicit discovery connections, duplicate discovery information sectype: none =====Discovery Log Entry 4====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified portid: 3 trsvcid: 4420 subnqn: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1 traddr: 192.168.39.10 eflags: none sectype: none =====Discovery Log Entry 5====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified portid: 1 trsvcid: 4420 subnqn: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1 traddr: 192.168.38.10 eflags: none sectype: none =====Discovery Log Entry 6====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified portid: 4 trsvcid: 4420 subnqn: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1 traddr: 192.168.39.11 eflags: none sectype: none =====Discovery Log Entry 7====== trtype: tcp adrfam: ipv4 subtype: nvme subsystem treq: not specified portid: 2 trsvcid: 4420 subnqn: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1 traddr: 192.168.38.11 eflags: none sectype: none
- Create a PDC for the discovery subsystem:
  nvme discover -t <trtype> -w <host-traddr> -a <traddr> -p
  The following example shows the command for an NVMe/TCP LIF:
  nvme discover -t tcp -w 192.168.39.20 -a 192.168.39.11 -p
- From the ONTAP controller, verify that the PDC has been created:
  vserver nvme show-discovery-controller -instance -vserver <vserver_name>
  Show example output
vserver nvme show-discovery-controller -instance -vserver vs_tcp_sles16

                Vserver Name: vs_tcp_sles16
               Controller ID: 0180h
     Discovery Subsystem NQN: nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:discovery
           Logical Interface: lif3
                        Node: A400-12-171
                    Host NQN: nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b7c04f444d33
          Transport Protocol: nvme-tcp
 Initiator Transport Address: 192.168.39.20
Transport Service Identifier: 8009
             Host Identifier: 4c4c454400355910804bb7c04f444d33
           Admin Queue Depth: 32
       Header Digest Enabled: false
         Data Digest Enabled: false
   Keep-Alive Timeout (msec): 30000
Step 9: Set up secure in-band authentication
Secure in-band authentication is supported over NVMe/TCP between a SUSE Linux Enterprise Server 16 host and an ONTAP controller.
Each host or controller must be associated with a DH-HMAC-CHAP key to set up secure authentication. A DH-HMAC-CHAP key is a combination of the NQN of the NVMe host or controller and an authentication secret configured by the administrator. To authenticate its peer, an NVMe host or controller must recognize the key associated with the peer.
Set up secure in-band authentication using the CLI or a config JSON file. If you need to specify different dhchap keys for different subsystems, you must use a config JSON file.
Set up secure in-band authentication using the CLI.
- Obtain the host NQN:
  cat /etc/nvme/hostnqn
- Generate the dhchap key for the host.
  The following output describes the gen-dhchap-key command parameters:
  nvme gen-dhchap-key -s optional_secret -l key_length {32|48|64} -m HMAC_function {0|1|2|3} -n host_nqn
  • -s secret key in hexadecimal characters to be used to initialize the host key
  • -l length of the resulting key in bytes
  • -m HMAC function to use for key transformation: 0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512
  • -n host NQN to use for key transformation
  In the following example, a random dhchap key with HMAC set to 3 (SHA-512) is generated.
  nvme gen-dhchap-key -m 3 -n nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b7c04f444d33
  DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:
- On the ONTAP controller, add the host and specify both dhchap keys:
  vserver nvme subsystem host add -vserver <svm_name> -subsystem <subsystem> -host-nqn <host_nqn> -dhchap-host-secret <authentication_host_secret> -dhchap-controller-secret <authentication_controller_secret> -dhchap-hash-function {sha-256|sha-512} -dhchap-group {none|2048-bit|3072-bit|4096-bit|6144-bit|8192-bit}
- A host supports two types of authentication methods, unidirectional and bidirectional. On the host, connect to the ONTAP controller and specify dhchap keys based on the chosen authentication method, as shown in the sketch below:
  nvme connect -t tcp -w <host-traddr> -a <tr-addr> -n <subsystem_nqn> -S <authentication_host_secret> -C <authentication_controller_secret>
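As a sketch with illustrative addresses and placeholder secrets (all of them assumptions, not values from this procedure): for unidirectional authentication you pass only the host secret with -S; for bidirectional authentication you also pass the controller secret with -C.
  # unidirectional: the controller authenticates the host
  nvme connect -t tcp -w 192.168.38.20 -a 192.168.38.10 -n <subsystem_nqn> -S DHHC-1:01:<host_secret>:
  # bidirectional: the host and the controller authenticate each other
  nvme connect -t tcp -w 192.168.38.20 -a 192.168.38.10 -n <subsystem_nqn> -S DHHC-1:01:<host_secret>: -C DHHC-1:03:<controller_secret>: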
- Validate that the nvme connect authentication succeeded by verifying the host and controller dhchap keys:
  - Verify the host dhchap keys:
    cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_secret
    Show example output for a unidirectional configuration
    # cat /sys/class/nvme-subsystem/nvme-subsys1/nvme*/dhchap_secret
    DHHC-1:01:wkwAKk8r9Ip7qECKt7V5aIo/7Y1CH7DWkUfLfMxmseg39DFb:
    DHHC-1:01:wkwAKk8r9Ip7qECKt7V5aIo/7Y1CH7DWkUfLfMxmseg39DFb:
    DHHC-1:01:wkwAKk8r9Ip7qECKt7V5aIo/7Y1CH7DWkUfLfMxmseg39DFb:
    DHHC-1:01:wkwAKk8r9Ip7qECKt7V5aIo/7Y1CH7DWkUfLfMxmseg39DFb:
  - Verify the controller dhchap keys:
    cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_ctrl_secret
    Show example output for a bidirectional configuration
    # cat /sys/class/nvme-subsystem/nvme-subsys6/nvme*/dhchap_ctrl_secret
    DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:
    DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:
    DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:
    DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:
Set up secure in-band authentication using a config JSON file.
When multiple NVMe subsystems are available in the ONTAP controller configuration, you can use the /etc/nvme/config.json file with the nvme connect-all command.
Use the -o option to generate the JSON file. Refer to the nvme connect-all man pages for more syntax options.
- Configure the JSON file:
  Show example output
  # cat /etc/nvme/config.json
  [
    {
      "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b7c04f444d33",
      "hostid":"4c4c4544-0035-5910-804b-b7c04f444d33",
      "dhchap_key":"DHHC-1:01:wkwAKk8r9Ip7qECKt7V5aIo/7Y1CH7DWkUfLfMxmseg39DFb:",
      "subsystems":[
        {
          "nqn":"nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.inband_bidirectional",
          "ports":[
            {
              "transport":"tcp",
              "traddr":"192.168.38.10",
              "host_traddr":"192.168.38.20",
              "trsvcid":"4420",
              "dhchap_ctrl_key":"DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:"
            },
            {
              "transport":"tcp",
              "traddr":"192.168.38.11",
              "host_traddr":"192.168.38.20",
              "trsvcid":"4420",
              "dhchap_ctrl_key":"DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:"
            },
            {
              "transport":"tcp",
              "traddr":"192.168.39.11",
              "host_traddr":"192.168.39.20",
              "trsvcid":"4420",
              "dhchap_ctrl_key":"DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:"
            },
            {
              "transport":"tcp",
              "traddr":"192.168.39.10",
              "host_traddr":"192.168.39.20",
              "trsvcid":"4420",
              "dhchap_ctrl_key":"DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:"
            }
          ]
        }
      ]
    }
  ]
  In this example, dhchap_key corresponds to dhchap_secret and dhchap_ctrl_key corresponds to dhchap_ctrl_secret.
- Connect to the ONTAP controller using the config JSON file:
  nvme connect-all -J /etc/nvme/config.json
  Show example output
traddr=192.168.38.10 is already connected
traddr=192.168.39.10 is already connected
traddr=192.168.38.11 is already connected
traddr=192.168.39.11 is already connected
traddr=192.168.38.10 is already connected
traddr=192.168.39.10 is already connected
traddr=192.168.38.11 is already connected
traddr=192.168.39.11 is already connected
traddr=192.168.38.10 is already connected
traddr=192.168.39.10 is already connected
traddr=192.168.38.11 is already connected
traddr=192.168.39.11 is already connected
- Verify that the dhchap secrets have been enabled for the respective controllers for each subsystem:
  - Verify the host dhchap keys:
    cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_secret
    The following example shows a dhchap key:
    DHHC-1:01:wkwAKk8r9Ip7qECKt7V5aIo/7Y1CH7DWkUfLfMxmseg39DFb:
  - Verify the controller dhchap keys:
    cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_ctrl_secret
    You should see an output similar to the following example:
    DHHC-1:03:ohdxI1yIS8gBLwIOubcwl57rXcozYuRgBsoWaBvxEvpDlQHn/7dQ4JjFGwmhgwdJWmVoripbWbMJy5eMAbCahN4hhYU=:
Step 10: Configure Transport Layer Security
Transport Layer Security (TLS) provides secure end-to-end encryption for NVMe connections between NVMe-oF hosts and an ONTAP array. You can configure TLS 1.3 using the CLI and a configured pre-shared key (PSK).
Note: Perform the following steps on the SUSE Linux Enterprise Server host, except where it specifies that you perform a step on the ONTAP controller.
- Check that you have the ktls-utils, openssl, and libopenssl packages installed on the host:
  - Verify the ktls-utils package:
    rpm -qa | grep ktls
    You should see the following output displayed:
    ktls-utils-0.10+33.g311d943-160000.2.2.x86_64
  - Verify the SSL packages:
    rpm -qa | grep ssl
    Show example output
    libopenssl3-3.5.0-160000.3.2.x86_64
    openssl-3.5.0-160000.2.2.noarch
    openssl-3-3.5.0-160000.3.2.x86_64
    libopenssl3-x86-64-v3-3.5.0-160000.3.2.x86_64
- Verify that you have the correct setup for /etc/tlshd.conf:
  cat /etc/tlshd.conf
  Show example output
[debug]
loglevel=0
tls=0
nl=0
[authenticate]
#keyrings= <keyring>;<keyring>;<keyring>
[authenticate.client]
#x509.truststore= <pathname>
#x509.certificate= <pathname>
#x509.private_key= <pathname>
[authenticate.server]
#x509.truststore= <pathname>
#x509.certificate= <pathname>
#x509.private_key= <pathname>
- Enable tlshd to start at system boot:
  systemctl enable tlshd
- Verify that the tlshd daemon is running:
  systemctl status tlshd
  Show example output
tlshd.service - Handshake service for kernel TLS consumers
   Loaded: loaded (/usr/lib/systemd/system/tlshd.service; enabled; preset: disabled)
   Active: active (running) since Wed 2024-08-21 15:46:53 IST; 4h 57min ago
     Docs: man:tlshd(8)
 Main PID: 961 (tlshd)
    Tasks: 1
      CPU: 46ms
   CGroup: /system.slice/tlshd.service
           └─961 /usr/sbin/tlshd

Aug 21 15:46:54 RX2530-M4-17-153 tlshd[961]: Built from ktls-utils 0.11-dev on Mar 21 2024 12:00:00
- Generate the TLS PSK by using the nvme gen-tls-key command:
  - Verify the host NQN:
    cat /etc/nvme/hostnqn
    You should see the following output:
    nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b7c04f444d33
  - Generate the TLS key:
    nvme gen-tls-key --hmac=1 --identity=1 --subsysnqn=nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1
    You should see the following output:
    NVMeTLSkey-1:01:C50EsaGtuOp8n5fGE9EuWjbBCtshmfoHx4XTqTJUmydf0gIj:
- On the ONTAP controller, add the TLS PSK to the ONTAP subsystem:
  Show example
nvme subsystem host add -vserver vs_iscsi_tcp -subsystem nvme1 -host-nqn nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b2c04f444d33 -tls-configured-psk NVMeTLSkey-1:01:C50EsaGtuOp8n5fGE9EuWjbBCtshmfoHx4XTqTJUmydf0gIj:
- Insert the TLS PSK into the host kernel keyring:
  nvme check-tls-key --identity=1 --subsysnqn=nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1 --keydata=NVMeTLSkey-1:01:C50EsaGtuOp8n5fGE9EuWjbBCtshmfoHx4XTqTJUmydf0gIj: --insert
  You should see the following TLS key:
  Inserted TLS key 069f56bb
  The PSK shows as NVMe1R01 because it uses identity v1 from the TLS handshake algorithm. Identity v1 is the only version that ONTAP supports.
- Verify that the TLS PSK is inserted correctly:
  cat /proc/keys | grep NVMe
  Show example output
069f56bb I-Q-- 5 perm 3b010000 0 0 psk NVMe1R01 nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b2c04f444d33 nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1 oYVLelmiOwnvDjXKBmrnIgGVpFIBDJtc4hmQXE/36Sw=: 32
- Connect to the ONTAP subsystem using the inserted TLS PSK:
  - Connect using the TLS key:
    nvme connect -t tcp -w 192.168.38.20 -a 192.168.38.10 -n nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1 --tls_key=0x069f56bb --tls
    You should see the following output:
connecting to device: nvme0
  - Verify the subsystem list:
    nvme list-subsys
    Show example output
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b2c04f444d33
\
 +- nvme0 tcp traddr=192.168.38.10,trsvcid=4420,host_traddr=192.168.38.20,src_addr=192.168.38.20 live
- Add the target, and verify the TLS connection to the specified ONTAP subsystem:
  nvme subsystem controller show -vserver vs_tcp_sles16 -subsystem nvme1 -instance
  Show example output
(vserver nvme subsystem controller show)
                        Vserver Name: vs_tcp_sles16
                           Subsystem: nvme1
                       Controller ID: 0040h
                   Logical Interface: lif1
                                Node: A400-12-171
                            Host NQN: nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b2c04f444d33
                  Transport Protocol: nvme-tcp
         Initiator Transport Address: 192.168.38.20
                     Host Identifier: 4c4c454400355910804bb2c04f444d33
                Number of I/O Queues: 2
                    I/O Queue Depths: 128, 128
                   Admin Queue Depth: 32
               Max I/O Size in Bytes: 1048576
           Keep-Alive Timeout (msec): 5000
                      Subsystem UUID: 62203cfd-826a-11f0-966e-d039eab31e9d
               Header Digest Enabled: false
                 Data Digest Enabled: false
        Authentication Hash Function: sha-256
Authentication Diffie-Hellman Group: 3072-bit
                 Authentication Mode: unidirectional
        Transport Service Identifier: 4420
                        TLS Key Type: configured
                    TLS PSK Identity: NVMe1R01 nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b2c04f444d33 nqn.1992-08.com.netapp:sn.9927e165694211f0b4f4d039eab31e9d:subsystem.nvme1 oYVLelmiOwnvDjXKBmrnIgGVpFIBDJtc4hmQXE/36Sw=
                          TLS Cipher: TLS-AES-128-GCM-SHA256
Step 11: Review the known issues
There are no known issues.