
NVMe-oF host configuration for SUSE Linux Enterprise Server 15 SP6 with ONTAP


SUSE Linux Enterprise Server 15 SP6 with Asymmetric Namespace Access (ANA) supports NVMe over Fabrics (NVMe-oF), including NVMe over Fibre Channel (NVMe/FC) and other transports. In NVMe-oF environments, ANA is the equivalent of ALUA multipathing in iSCSI and FCP environments and is implemented with in-kernel NVMe multipath.
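
As a quick illustration of how ANA surfaces on the host (a sketch; `/dev/nvme0n1` is a placeholder namespace device, not taken from this guide), you can read the per-path ANA state with the same `nvme list-subsys` command used later in the verification section:

    # Each ONTAP path reports an ANA state such as "optimized" or "non-optimized";
    # in-kernel NVMe multipath uses these states the way dm-multipath uses ALUA states.
    nvme list-subsys /dev/nvme0n1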

The following support is available for the NVMe-oF host configuration for SUSE Linux Enterprise Server 15 SP6 with ONTAP:

  • Support for running NVMe and SCSI traffic on the same host. For example, you can configure dm-multipath for mpath devices on SCSI LUNs and use NVMe multipath to configure NVMe-oF namespace devices on the host (see the sketch after this list).

  • Support for NVMe over TCP (NVMe/TCP) in addition to NVMe/FC. The NetApp plug-in in the native `nvme-cli` package can display ONTAP details for both NVMe/FC and NVMe/TCP namespaces.

For additional details on supported configurations, see the "NetApp Interoperability Matrix Tool".
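
A minimal sketch of what this coexistence looks like on the host (device names are illustrative): dm-multipath keeps aggregating the SCSI LUN paths, while in-kernel NVMe multipath presents each NVMe-oF namespace as a single /dev/nvmeXnY device:

    # SCSI LUNs from ONTAP are still managed by dm-multipath ...
    multipath -ll
    # ... while NVMe-oF namespaces are handled by in-kernel NVMe multipath
    # and appear once per namespace, for example /dev/nvme0n1.
    nvme list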

Features

  • Supports NVMe secure, in-band authentication

  • Supports persistent discovery controllers (PDCs) using a unique discovery NQN

  • Supports TLS 1.3 encryption for NVMe/TCP

Known limitations

  • SAN booting using the NVMe-oF protocol is currently not supported.

  • The NetApp `sanlun` host utility is not supported for NVMe-oF on a SUSE Linux Enterprise Server 15 SP6 host. Instead, you can rely on the NetApp plug-in included in the native `nvme-cli` package for all NVMe-oF transports.

Configure NVMe/FC

You can configure NVMe/FC for Broadcom/Emulex FC or Marvell/QLogic FC adapters for SUSE Linux Enterprise Server 15 SP6 with ONTAP.

Broadcom/Emulex

Configure NVMe/FC for a Broadcom/Emulex FC adapter.

Steps
  1. Verify that you are using the recommended adapter model:

    cat /sys/class/scsi_host/host*/modelname
    Example output
    LPe32002-M2
    LPe32002-M2
  2. Verify the adapter model description:

    cat /sys/class/scsi_host/host*/modeldesc
    Example output
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
    Emulex LightPulse LPe32002-M2 2-Port 32Gb Fibre Channel Adapter
  3. Verify that you are using the recommended Emulex host bus adapter (HBA) firmware version:

    cat /sys/class/scsi_host/host*/fwrev
    Example output
    14.2.673.40, sli-4:2:c
    14.2.673.40, sli-4:2:c
  4. Verify that you are using the recommended lpfc driver version:

    cat /sys/module/lpfc/version
    Example output
    0:14.4.0.1
  5. Verify that you can view your initiator ports:

    cat /sys/class/fc_host/host*/port_name
    Example output
    0x10000090fae0ec88
    0x10000090fae0ec89
  6. Verify that your initiator ports are online:

    cat /sys/class/fc_host/host*/port_state
    Example output
    Online
    Online
  7. Verify that the NVMe/FC initiator ports are enabled and the target ports are visible:

    cat /sys/class/scsi_host/host*/nvme_info

    In the following example, one initiator port is enabled and connected with two target LIFs.

    Show example output
    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x10000090fae0ec88 WWNN x20000090fae0ec88 DID x0a1300 ONLINE
    NVME RPORT WWPN x2070d039ea359e4a WWNN x206bd039ea359e4a DID x0a0a05 TARGET DISCSRVC
    ONLINE
    NVME Statistics
    LS: Xmt 00000003ba Cmpl 00000003ba Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 0000000014e3dfb8 Issue 0000000014e308db OutIO ffffffffffff2923
     abort 00000845 noxri 00000000 nondlp 00000063 qdepth 00000000 wqerr 00000003 err 00000000
    FCP CMPL: xb 00000847 Err 00027f33
    NVME Initiator Enabled
    XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc1 WWPN x10000090fae0ec89 WWNN x20000090fae0ec89 DID x0a1200 ONLINE
    NVME RPORT WWPN x2071d039ea359e4a WWNN x206bd039ea359e4a DID x0a0305 TARGET DISCSRVC
    ONLINE
    NVME Statistics
    LS: Xmt 00000003ba Cmpl 00000003ba Abort 00000000
    LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 0000000014e39f78 Issue 0000000014e2b832 OutIO ffffffffffff18ba
     abort 0000082d noxri 00000000 nondlp 00000028 qdepth 00000000 wqerr 00000007 err 00000000
    FCP CMPL: xb 0000082d Err 000283bb
Marvell/QLogic

The native inbox qla2xxx driver included in the SUSE Linux Enterprise Server 15 SP6 kernel has the latest fixes. These fixes are essential for ONTAP support.

Configure NVMe/FC for a Marvell/QLogic adapter.

Steps
  1. Verify that you are running the supported adapter driver and firmware versions:

    cat /sys/class/fc_host/host*/symbolic_name
    Example output
    QLE2742 FW:v9.14.01 DVR: v10.02.09.200-k
    QLE2742 FW:v9.14.01 DVR: v10.02.09.200-k
  2. Verify that the ql2xnvmeenable parameter is set to 1 (if it is not, see the sketch after these steps):

    cat /sys/module/qla2xxx/parameters/ql2xnvmeenable

    The expected value is 1.
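
If the parameter reads 0, the following is a minimal sketch of one way to set it persistently (the file name qla2xxx.conf is an assumption; any file in /etc/modprobe.d works):

    # Set the qla2xxx NVMe enable option, rebuild the initramfs, and reboot
    # so that the driver loads with the new value.
    echo "options qla2xxx ql2xnvmeenable=1" > /etc/modprobe.d/qla2xxx.conf
    dracut -f
    reboot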

Enable 1 MB I/O size (optional)

ONTAP reports an MDTS (Max Data Transfer Size) of 8 in the Identify Controller data. This means that the maximum I/O request size can be up to 1 MB (MDTS is a power-of-two multiple of the 4 KB minimum page size, so 2^8 x 4 KB = 1 MB). To issue I/O requests of size 1 MB for a Broadcom NVMe/FC host, you should increase the `lpfc_sg_seg_cnt` parameter value of the `lpfc` driver to 256 from the default value of 64.

Note: These steps don't apply to Qlogic NVMe/FC hosts.
Steps
  1. Set the `lpfc_sg_seg_cnt` parameter to 256 (one way to create this file is sketched after this procedure):

    cat /etc/modprobe.d/lpfc.conf
    options lpfc lpfc_sg_seg_cnt=256
  2. Run the `dracut -f` command, and then reboot the host.

  3. Verify that the value of `lpfc_sg_seg_cnt` is 256:

    cat /sys/module/lpfc/parameters/lpfc_sg_seg_cnt
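
A minimal sketch of creating the modprobe configuration referenced in step 1 (assumes root privileges; the option line matches the one shown above):

    # Write the lpfc option, rebuild the initramfs, and reboot so that the new
    # scatter-gather segment count takes effect.
    echo "options lpfc lpfc_sg_seg_cnt=256" > /etc/modprobe.d/lpfc.conf
    dracut -f
    reboot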

Verify the NVMe services

Beginning with SUSE Linux Enterprise Server 15 SP6, the `nvmefc-boot-connections.service` and `nvmf-autoconnect.service` boot services included in the NVMe/FC `nvme-cli` package are automatically enabled to start during system boot. After the system boot completes, you should verify that the boot services are enabled (if either service is disabled, see the sketch after these steps).

Steps
  1. Verify that `nvmf-autoconnect.service` is enabled:

    # systemctl status nvmf-autoconnect.service

    Show example output
    nvmf-autoconnect.service - Connect NVMe-oF subsystems automatically during boot
      Loaded: loaded (/usr/lib/systemd/system/nvmf-autoconnect.service; enabled; vendor preset: disabled)
      Active: inactive (dead) since Thu 2024-05-25 14:55:00 IST; 11min ago
    Process: 2108 ExecStartPre=/sbin/modprobe nvme-fabrics (code=exited, status=0/SUCCESS)
    Process: 2114 ExecStart=/usr/sbin/nvme connect-all (code=exited, status=0/SUCCESS)
    Main PID: 2114 (code=exited, status=0/SUCCESS)
    
    systemd[1]: Starting Connect NVMe-oF subsystems automatically during boot...
    nvme[2114]: traddr=nn-0x201700a098fd4ca6:pn-0x201800a098fd4ca6 is already connected
    systemd[1]: nvmf-autoconnect.service: Deactivated successfully.
    systemd[1]: Finished Connect NVMe-oF subsystems automatically during boot.
  2. Verify that `nvmefc-boot-connections.service` is enabled:

    # systemctl status nvmefc-boot-connections.service

    Show example output
    nvmefc-boot-connections.service - Auto-connect to subsystems on FC-NVME devices found during boot
       Loaded: loaded (/usr/lib/systemd/system/nvmefc-boot-connections.service; enabled; vendor preset: enabled)
       Active: inactive (dead) since Thu 2024-05-25 14:55:00 IST; 11min ago
     Main PID: 1647 (code=exited, status=0/SUCCESS)
    
    systemd[1]: Starting Auto-connect to subsystems on FC-NVME devices found during boot...
    systemd[1]: nvmefc-boot-connections.service: Succeeded.
    systemd[1]: Finished Auto-connect to subsystems on FC-NVME devices found during boot.
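
If either service reports "disabled", the following is a minimal sketch of enabling both so they run at the next boot (standard systemd usage, not specific to this guide):

    # Enable both NVMe-oF boot services so connections are re-established at boot.
    systemctl enable nvmf-autoconnect.service
    systemctl enable nvmefc-boot-connections.service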

Configure NVMe/TCP

NVMe/TCP doesn't have an auto-connect functionality. Instead, you discover the NVMe/TCP subsystems and namespaces by performing the NVMe/TCP `connect` or `connect-all` operations manually (a single-subsystem `connect` sketch follows this paragraph).
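
The following is a minimal sketch of connecting to a single subsystem with `nvme connect` (the addresses and subsystem NQN are placeholders; `connect-all`, used in the steps below, connects to every subsystem returned by discovery):

    # Connect one NVMe/TCP path to a single subsystem NQN found during discovery.
    nvme connect -t tcp -n <subsystem_nqn> -w <host-traddr> -a <traddr> -s 4420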

Steps
  1. Verify that the initiator port can fetch the discovery log page data across the supported NVMe/TCP LIFs:

    nvme discover -t tcp -w <host-traddr> -a <traddr>
    Show example output
    Discovery Log Number of Records 8, Generation counter 18
    =====Discovery Log Entry 0======
    trtype: tcp
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified
    portid: 4
    trsvcid: 8009
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:discovery
    traddr: 192.168.211.67
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 1======
    trtype: tcp
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified
    portid: 2
    trsvcid: 8009
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:discovery
    traddr: 192.168.111.67
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 2======
    trtype: tcp
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified
    portid: 3
    trsvcid: 8009
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:discovery
    traddr: 192.168.211.66
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 3======
    trtype: tcp
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified
    portid: 1
    trsvcid: 8009
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:discovery
    traddr: 192.168.111.66
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 4======
    trtype: tcp
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified
    portid: 4
    trsvcid: 4420
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
    traddr: 192.168.211.67
    eflags: none
    sectype: none
    =====Discovery Log Entry 5======
    trtype: tcp
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified
    portid: 2
    trsvcid: 4420
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
    traddr: 192.168.111.67
    eflags: none
    sectype: none
    =====Discovery Log Entry 6======
    trtype: tcp
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified
    portid: 3
    trsvcid: 4420
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
    traddr: 192.168.211.66
    eflags: none
    sectype: none
    =====Discovery Log Entry 7======
    trtype: tcp
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified
    portid: 1
    trsvcid: 4420
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
    traddr: 192.168.111.66
    eflags: none
    sectype: none
  2. Verify that all other NVMe/TCP initiator-target LIF combinations can successfully fetch discovery log page data:

    nvme discover -t tcp -w <host-traddr> -a <traddr>
    Example output
    #nvme discover -t tcp -w 192.168.111.79 -a 192.168.111.66
    #nvme discover -t tcp -w 192.168.111.79 -a 192.168.111.67
    #nvme discover -t tcp -w 192.168.211.79 -a 192.168.211.66
    #nvme discover -t tcp -w 192.168.211.79 -a 192.168.211.67
  3. Run the `nvme connect-all` command across all the supported NVMe/TCP initiator-target LIFs on the node:

    nvme connect-all -t tcp -w <host-traddr> -a <traddr>
    Example output
    # nvme connect-all -t tcp -w 192.168.111.79 -a 192.168.111.66
    # nvme connect-all -t tcp -w 192.168.111.79 -a 192.168.111.67
    # nvme connect-all -t tcp -w 192.168.211.79 -a 192.168.211.66
    # nvme connect-all -t tcp -w 192.168.211.79 -a 192.168.211.67
    Note: Beginning with SUSE Linux Enterprise Server 15 SP6, the default setting for the NVMe/TCP `ctrl-loss-tmo` timeout is turned off. This means there is no limit on the number of retries (indefinite retry), and you don't need to manually configure a specific `ctrl-loss-tmo` timeout duration with the `-l` option when you use the `nvme connect` or `nvme connect-all` commands. As a result, the NVMe/TCP controllers don't experience timeouts in the event of a path failure and remain connected indefinitely. If you want a finite timeout instead, see the sketch after this procedure.
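
A minimal sketch of overriding the default with an explicit timeout (the 1800-second value is only an illustration):

    # Limit controller-loss retries to 30 minutes instead of retrying indefinitely.
    nvme connect-all -t tcp -w <host-traddr> -a <traddr> -l 1800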

Verify NVMe-oF

Use the following procedure to verify the NVMe-oF configuration for SUSE Linux Enterprise Server 15 SP6 with ONTAP.

Steps
  1. Verify that in-kernel NVMe multipath is enabled:

    cat /sys/module/nvme_core/parameters/multipath

    The expected value is Y. If it is not, see the sketch after these steps.

  2. Verify that the host has the correct controller model for the ONTAP NVMe namespaces:

    cat /sys/class/nvme-subsystem/nvme-subsys*/model
    Example output
    NetApp ONTAP Controller
    NetApp ONTAP Controller
  3. Verify the NVMe I/O policy for the respective ONTAP NVMe I/O controller:

    cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
    Example output
    round-robin
    round-robin
  4. Verify that the ONTAP namespaces are visible to the host:

    nvme list -v
    Show example output
    Subsystem        Subsystem-NQN                                                                         Controllers
    ---------------- ------------------------------------------------------------------------------------- ---------------------
    nvme-subsys0     nqn.1992-08.com.netapp:sn.0501daf15dda11eeab68d039eaa7a232:subsystem.unidir_dhchap  nvme0, nvme1, nvme2, nvme3
    
    Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces
    -------- -------------------- ---------------------------------------- -------- ---------------------------------------------
    nvme0    81LGgBUqsI3EAAAAAAAE NetApp ONTAP Controller   FFFFFFFF tcp traddr=192.168.111.66,trsvcid=4420,host_traddr=192.168.111.79 nvme-subsys0 nvme0n1
    nvme1    81LGgBUqsI3EAAAAAAAE NetApp ONTAP Controller   FFFFFFFF tcp traddr=192.168.111.67,trsvcid=4420,host_traddr=192.168.111.79 nvme-subsys0 nvme0n1
    nvme2    81LGgBUqsI3EAAAAAAAE NetApp ONTAP Controller   FFFFFFFF tcp traddr=192.168.211.66,trsvcid=4420,host_traddr=192.168.211.79 nvme-subsys0 nvme0n1
    nvme3    81LGgBUqsI3EAAAAAAAE NetApp ONTAP Controller   FFFFFFFF tcp traddr=192.168.211.67,trsvcid=4420,host_traddr=192.168.211.79 nvme-subsys0 nvme0n1
    Device        Generic     NSID       Usage                 Format         Controllers
    ------------ ------------ ---------- -------------------------------------------------------------
    /dev/nvme0n1 /dev/ng0n1   0x1     1.07  GB /   1.07  GB    4 KiB +  0 B   nvme0, nvme1, nvme2, nvme3
  5. Verify that the controller state of each path is live and has the correct ANA status:

    nvme list-subsys /dev/<subsystem_name>
    NVMe/FC
    nvme list-subsys /dev/nvme2n1
    Show example output
    nvme-subsys2 - NQN=nqn.1992-08.com.netapp:sn.06303c519d8411eea468d039ea36a106:subsystem.nvme
     hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-5410-8048-c6c04f425633
     iopolicy=round-robin
    \
    +- nvme4 fc traddr=nn-0x208fd039ea359e4a:pn-0x210dd039ea359e4a,host_traddr=nn-0x2000f4c7aa0cd7ab:pn-0x2100f4c7aa0cd7ab live optimized
    +- nvme6 fc traddr=nn-0x208fd039ea359e4a:pn-0x210ad039ea359e4a,host_traddr=nn-0x2000f4c7aa0cd7aa:pn-0x2100f4c7aa0cd7aa live optimized
    NVMe/TCP
    nvme list-subsys
    Show example output
    nvme-subsys1 - NQN=nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
     hostnqn=nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0035-5910-804b-b2c04f444d33
     iopolicy=round-robin
    \
    +- nvme4 tcp traddr=192.168.111.66,trsvcid=4420,host_traddr=192.168.111.79,src_addr=192.168.111.79 live
    +- nvme3 tcp traddr=192.168.211.66,trsvcid=4420,host_traddr=192.168.211.79,src_addr=192.168.111.79 live
    +- nvme2 tcp traddr=192.168.111.67,trsvcid=4420,host_traddr=192.168.111.79,src_addr=192.168.111.79 live
    +- nvme1 tcp traddr=192.168.211.67,trsvcid=4420,host_traddr=192.168.211.79,src_addr=192.168.111.79 live
  6. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

    nvme netapp ontapdevices -o column
    Example output
    Device           Vserver    Namespace Path                       NSID UUID                                   Size
    ---------------- ---------- ------------------------------------ ------------------------------------------- --------
    /dev/nvme0n1     vs_192     /vol/fcnvme_vol_1_1_0/fcnvme_ns      1    c6586535-da8a-40fa-8c20-759ea0d69d33   20GB
    JSON
    nvme netapp ontapdevices -o json
    Show example output
    {
    "ONTAPdevices":[
    {
    "Device":"/dev/nvme0n1",
    "Vserver":"vs_192",
    "Namespace_Path":"/vol/fcnvme_vol_1_1_0/fcnvme_ns",
    "NSID":1,
    "UUID":"c6586535-da8a-40fa-8c20-759ea0d69d33",
    "Size":"20GB",
    "LBA_Data_Size":4096,
    "Namespace_Size":262144
    }
    ]
    }
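
If step 1 shows N instead of Y, the following is a minimal sketch of enabling in-kernel NVMe multipath through a kernel boot parameter (the GRUB workflow shown is the usual one on SUSE systems; adjust it to your boot loader setup):

    # Append nvme_core.multipath=Y to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
    # then regenerate the GRUB configuration and reboot.
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot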

Create a persistent discovery controller

Beginning with ONTAP 9.11.1, you can create a persistent discovery controller (PDC) for a SUSE Linux Enterprise Server 15 SP6 host. A PDC is required to automatically detect NVMe subsystem add or remove operations and changes to the discovery log page data.

Steps
  1. Verify that discovery log page data is available and can be retrieved through the initiator port and target LIF combination:

    nvme discover -t <trtype> -w <host-traddr> -a <traddr>
    Show example output
    Discovery Log Number of Records 8, Generation counter 18
    =====Discovery Log Entry 0======
    trtype: tcp
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified
    portid: 4
    trsvcid: 8009
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:discovery
    traddr: 192.168.211.67
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 1======
    trtype: tcp
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified
    portid: 2
    trsvcid: 8009
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:discovery
    traddr: 192.168.111.67
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 2======
    trtype: tcp
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified
    portid: 3
    trsvcid: 8009
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:discovery
    traddr: 192.168.211.66
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 3======
    trtype: tcp
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified
    portid: 1
    trsvcid: 8009
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:discovery
    traddr: 192.168.111.66
    eflags: explicit discovery connections, duplicate discovery information
    sectype: none
    =====Discovery Log Entry 4======
    trtype: tcp
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified
    portid: 4
    trsvcid: 4420
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
    traddr: 192.168.211.67
    eflags: none
    sectype: none
    =====Discovery Log Entry 5======
    trtype: tcp
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified
    portid: 2
    trsvcid: 4420
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
    traddr: 192.168.111.67
    eflags: none
    sectype: none
    =====Discovery Log Entry 6======
    trtype: tcp
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified
    portid: 3
    trsvcid: 4420
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
    traddr: 192.168.211.66
    eflags: none
    sectype: none
    =====Discovery Log Entry 7======
    trtype: tcp
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified
    portid: 1
    trsvcid: 4420
    subnqn: nqn.1992-08.com.netapp:sn.8b5ee9199ff411eea468d039ea36a106:subsystem.nvme_tcp_1
    traddr: 192.168.111.66
    eflags: none
    sectype: none
  2. Create a PDC for the discovery subsystem:

    nvme discover -t <trtype> -w <host-traddr> -a <traddr> -p
    Example output
    nvme discover -t tcp -w 192.168.111.79 -a 192.168.111.66 -p
  3. From the ONTAP controller, verify that the PDC was created:

    vserver nvme show-discovery-controller -instance -vserver <vserver_name>
    Show example output
    vserver nvme show-discovery-controller -instance -vserver vs_nvme79
    Vserver Name: vs_CLIENT116 Controller ID: 00C0h
    Discovery Subsystem NQN: nqn.1992-
    08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:discovery Logical Interface UUID: d23cbb0a-c0a6-11ec-9731-d039ea165abc Logical Interface:
    CLIENT116_lif_4a_1
    Node: A400-14-124
    Host NQN: nqn.2014-08.org.nvmexpress:uuid:12372496-59c4-4d1b-be09-74362c0c1afc
    Transport Protocol: nvme-tcp
    Initiator Transport Address: 192.168.1.16
    Host Identifier: 59de25be738348f08a79df4bce9573f3 Admin Queue Depth: 32
    Header Digest Enabled: false Data Digest Enabled: false
    Vserver UUID: 48391d66-c0a6-11ec-aaa5-d039ea165514

Set up secure in-band authentication

Beginning with ONTAP 9.12.1, secure in-band authentication is supported over NVMe/TCP and NVMe/FC between a SUSE Linux Enterprise Server 15 SP6 host and an ONTAP controller.

To set up secure authentication, each host or controller must be associated with a DH-HMAC-CHAP key, which is a combination of the NQN of the NVMe host or controller and an authentication secret configured by the administrator. To authenticate its peer, an NVMe host or controller must recognize the key associated with the peer.

You can set up secure in-band authentication using the CLI or a config JSON file. If you need to specify different dhchap keys for different subsystems, you must use a config JSON file.

CLI

Set up secure in-band authentication using the CLI.

Steps
  1. Obtain the host NQN:

    cat /etc/nvme/hostnqn
  2. Generate the dhchap key for the SUSE Linux Enterprise Server 15 SP6 host.

    The following output describes the `gen-dhchap-key` command parameters:

    nvme gen-dhchap-key -s optional_secret -l key_length {32|48|64} -m HMAC_function {0|1|2|3} -n host_nqn
    • -s secret key in hexadecimal characters to be used to initialize the host key
    • -l length of the resulting key in bytes
    • -m HMAC function to use for key transformation
      0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512
    • -n host NQN to use for key transformation

    In the following example, a random dhchap key is generated with the HMAC set to 3 (SHA-512).

    # nvme gen-dhchap-key -m 3 -n nqn.2014-08.org.nvmexpress:uuid:d3ca725a- ac8d-4d88-b46a-174ac235139b
    DHHC-1:03:J2UJQfj9f0pLnpF/ASDJRTyILKJRr5CougGpGdQSysPrLu6RW1fGl5VSjbeDF1n1DEh3nVBe19nQ/LxreSBeH/bx/pU=:
  3. On the ONTAP controller, add the host and specify both dhchap keys:

    vserver nvme subsystem host add -vserver <svm_name> -subsystem <subsystem> -host-nqn <host_nqn> -dhchap-host-secret <authentication_host_secret> -dhchap-controller-secret <authentication_controller_secret> -dhchap-hash-function {sha-256|sha-512} -dhchap-group {none|2048-bit|3072-bit|4096-bit|6144-bit|8192-bit}
  4. The host supports two types of authentication methods, unidirectional and bidirectional. On the host, connect to the ONTAP controller and specify dhchap keys based on the chosen authentication method:

    nvme connect -t tcp -w <host-traddr> -a <tr-addr> -n <host_nqn> -S <authentication_host_secret> -C <authentication_controller_secret>
  5. Validate the `nvme connect authentication` command by verifying the host and controller dhchap keys:

    1. Verify the host dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_secret
      Show example output for a unidirectional configuration
      # cat /sys/class/nvme-subsystem/nvme-subsys1/nvme*/dhchap_secret
      DHHC-1:03:je1nQCmjJLUKD62mpYbzlpuw0OIws86NB96uNO/t3jbvhp7fjyR9bIRjOHg8wQtye1JCFSMkBQH3pTKGdYR1OV9gx00=:
      DHHC-1:03:je1nQCmjJLUKD62mpYbzlpuw0OIws86NB96uNO/t3jbvhp7fjyR9bIRjOHg8wQtye1JCFSMkBQH3pTKGdYR1OV9gx00=:
      DHHC-1:03:je1nQCmjJLUKD62mpYbzlpuw0OIws86NB96uNO/t3jbvhp7fjyR9bIRjOHg8wQtye1JCFSMkBQH3pTKGdYR1OV9gx00=:
      DHHC-1:03:je1nQCmjJLUKD62mpYbzlpuw0OIws86NB96uNO/t3jbvhp7fjyR9bIRjOHg8wQtye1JCFSMkBQH3pTKGdYR1OV9gx00=:
    2. Verify the controller dhchap keys:

      cat /sys/class/nvme-subsystem/<nvme-subsysX>/nvme*/dhchap_ctrl_secret
      Show example output for a bidirectional configuration
      # cat /sys/class/nvme-subsystem/nvme-subsys6/nvme*/dhchap_ctrl_secret
      DHHC-1:03:WorVEV83eYO53kV4Iel5OpphbX5LAphO3F8fgH3913tlrkSGDBJTt3crXeTUB8fCwGbPsEyz6CXxdQJi6kbn4IzmkFU=:
      DHHC-1:03:WorVEV83eYO53kV4Iel5OpphbX5LAphO3F8fgH3913tlrkSGDBJTt3crXeTUB8fCwGbPsEyz6CXxdQJi6kbn4IzmkFU=:
      DHHC-1:03:WorVEV83eYO53kV4Iel5OpphbX5LAphO3F8fgH3913tlrkSGDBJTt3crXeTUB8fCwGbPsEyz6CXxdQJi6kbn4IzmkFU=:
      DHHC-1:03:WorVEV83eYO53kV4Iel5OpphbX5LAphO3F8fgH3913tlrkSGDBJTt3crXeTUB8fCwGbPsEyz6CXxdQJi6kbn4IzmkFU=:
JSON file

You can use the `/etc/nvme/config.json` file with the `nvme connect-all` command when multiple NVMe subsystems are available in the ONTAP controller configuration.

You can use the `-o` option to generate the JSON file. See the NVMe connect-all man page for more syntax options.
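
Before using the file, a quick sketch of checking that it is valid JSON (assumes the `jq` utility is installed; any JSON validator works):

    # Pretty-print the config file; jq exits with a non-zero status if the JSON is malformed.
    jq . /etc/nvme/config.json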

Steps
  1. Configure the JSON file:

    Show example output
    # cat /etc/nvme/config.json
    [
     {
        "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:12372496-59c4-4d1b-be09-74362c0c1afc",
        "hostid":"3ae10b42-21af-48ce-a40b-cfb5bad81839",
        "dhchap_key":"DHHC-1:03:Cu3ZZfIz1WMlqZFnCMqpAgn/T6EVOcIFHez215U+Pow8jTgBF2UbNk3DK4wfk2EptWpna1rpwG5CndpOgxpRxh9m41w=:"
     },
     {
        "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:12372496-59c4-4d1b-be09-74362c0c1afc",
        "subsystems":[
            {
                "nqn":"nqn.1992-08.com.netapp:sn.48391d66c0a611ecaaa5d039ea165514:subsystem.subsys_CLIENT116",
                "ports":[
                   {
                        "transport":"tcp",
                        "traddr":" 192.168.111.66 ",
                        "host_traddr":" 192.168.111.79",
                        "trsvcid":"4420",
                        "dhchap_ctrl_key":"DHHC-
    1:01:0h58bcT/uu0rCpGsDYU6ZHZvRuVqsYKuBRS0Nu0VPx5HEwaZ:"
                   },
                   {
                        "transport":"tcp",
                        "traddr":" 192.168.111.66 ",
                        "host_traddr":" 192.168.111.79",
                        "trsvcid":"4420",
                        "dhchap_ctrl_key":"DHHC-
    1:01:0h58bcT/uu0rCpGsDYU6ZHZvRuVqsYKuBRS0Nu0VPx5HEwaZ:"
                   },
                   {
                        "transport":"tcp",
                       "traddr":" 192.168.111.66 ",
                        "host_traddr":" 192.168.111.79",
                        "trsvcid":"4420",
                        "dhchap_ctrl_key":"DHHC-
    1:01:0h58bcT/uu0rCpGsDYU6ZHZvRuVqsYKuBRS0Nu0VPx5HEwaZ:"
                   },
                   {
                        "transport":"tcp",
                        "traddr":" 192.168.111.66 ",
                        "host_traddr":" 192.168.111.79",
                        "trsvcid":"4420",
                        "dhchap_ctrl_key":"DHHC-
    1:01:0h58bcT/uu0rCpGsDYU6ZHZvRuVqsYKuBRS0Nu0VPx5HEwaZ:"
                   }
               ]
           }
       ]
     }
    ]


    Note: In the above example, `dhchap_key` corresponds to `dhchap_secret` and `dhchap_ctrl_key` corresponds to `dhchap_ctrl_secret`.
  2. Connect to the ONTAP controller using the config JSON file:

    # nvme connect-all -J /etc/nvme/config.json
    Show example output
    traddr=192.168.111.66 is already connected
    traddr=192.168.211.66 is already connected
    traddr=192.168.111.66 is already connected
    traddr=192.168.211.66 is already connected
    traddr=192.168.111.66 is already connected
    traddr=192.168.211.66 is already connected
    traddr=192.168.111.67 is already connected
    traddr=192.168.211.67 is already connected
    traddr=192.168.111.67 is already connected
    traddr=192.168.211.67 is already connected
    traddr=192.168.111.67 is already connected
    traddr=192.168.111.67 is already connected
  3. Verify that the dhchap secrets have been enabled for the respective controllers for each subsystem:

    1. Verify the host dhchap keys:

      # cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_secret
      Example output
      DHHC-1:01:NunEWY7AZlXqxITGheByarwZdQvU4ebZg9HOjIr6nOHEkxJg:
    2. Verify the controller dhchap keys:

      # cat /sys/class/nvme-subsystem/nvme-subsys0/nvme0/dhchap_ctrl_secret
      Example output
      DHHC-
      1:03:2YJinsxa2v3+m8qqCiTnmgBZoH6mIT6G/6f0aGO8viVZB4VLNLH4z8CvK7pVYxN6S5fOAtaU3DNi12rieRMfdbg3704=:

Configure Transport Layer Security

Transport Layer Security (TLS) provides secure end-to-end encryption for NVMe connections between NVMe-oF hosts and an ONTAP array. Beginning with ONTAP 9.16.1, you can configure TLS 1.3 using a configured pre-shared key (PSK).

About this task

You perform the steps in this procedure on the SUSE Linux Enterprise Server 15 SP6 host, except where it is specified that you perform a step on the ONTAP controller.

Steps
  1. Check that you have the following ktls-utils, openssl, and libopenssl packages installed on the host:

    1. rpm -qa | grep ktls

      Example output
      ktls-utils-0.10+12.gc3923f7-150600.1.2.x86_64
    2. rpm -qa | grep ssl

      Example output
      openssl-3-3.1.4-150600.5.7.1.x86_64
      libopenssl1_1-1.1.1w-150600.5.3.1.x86_64
      libopenssl3-3.1.4-150600.5.7.1.x86_64
  2. Verify that `/etc/tlshd.conf` is set up correctly:

    # cat /etc/tlshd.conf
    Show example output
    [debug]
    loglevel=0
    tls=0
    nl=0
    [authenticate]
    keyrings=.nvme
    [authenticate.client]
    #x509.truststore= <pathname>
    #x509.certificate= <pathname>
    #x509.private_key= <pathname>
    [authenticate.server]
    #x509.truststore= <pathname>
    #x509.certificate= <pathname>
    #x509.private_key= <pathname>
  3. Enable `tlshd` to start at system boot:

    # systemctl enable tlshd
  4. Verify that the `tlshd` daemon is running:

    # systemctl status tlshd
    Show example output
    tlshd.service - Handshake service for kernel TLS consumers
       Loaded: loaded (/usr/lib/systemd/system/tlshd.service; enabled; preset: disabled)
       Active: active (running) since Wed 2024-08-21 15:46:53 IST; 4h 57min ago
         Docs: man:tlshd(8)
    Main PID: 961 (tlshd)
       Tasks: 1
         CPU: 46ms
       CGroup: /system.slice/tlshd.service
           └─961 /usr/sbin/tlshd
    Aug 21 15:46:54 RX2530-M4-17-153 tlshd[961]: Built from ktls-utils 0.11-dev on Mar 21 2024 12:00:00
  5. Generate the TLS PSK using `nvme gen-tls-key`:

    1. # cat /etc/nvme/hostnqn

      Example output
      nqn.2014-08.org.nvmexpress:uuid:e58eca24-faff-11ea-8fee-3a68dd3b5c5f
    2. # nvme gen-tls-key --hmac=1 --identity=1 --subsysnqn=nqn.1992-08.com.netapp:sn.1d59a6b2416b11ef9ed5d039ea50acb3:subsystem.sles15

      Example output
      NVMeTLSkey-1:01:dNcby017axByCko8GivzOO9zGlgHDXJCN6KLzvYoA+NpT1uD:
  6. On the ONTAP controller, add the TLS PSK to the ONTAP subsystem:

    # nvme subsystem host add -vserver sles15_tls -subsystem sles15 -host-nqn nqn.2014-08.org.nvmexpress:uuid:ffa0c815-e28b-4bb1-8d4c-7c6d5e610bfc -tls-configured-psk NVMeTLSkey-1:01:dNcby017axByCko8GivzOO9zGlgHDXJCN6KLzvYoA+NpT1uD:
  7. Insert the TLS PSK into the kernel keyring of the host:

    # nvme check-tls-key --identity=1 --subsysnqn=nqn.2014-08.org.nvmexpress:uuid:ffa0c815-e28b-4bb1-8d4c-7c6d5e610bf --keydata=NVMeTLSkey-1:01:dNcby017axByCko8GivzOO9zGlgHDXJCN6KLzvYoA+NpT1uD: --insert
    Example output
    Inserted TLS key 22152a7e
    Note: The PSK displays as NVMe1R01 because it uses identity v1 from the TLS handshake algorithm. Identity v1 is the only version that ONTAP supports.
  8. Verify that the TLS PSK was inserted correctly:

    # cat /proc/keys | grep NVMe
    Example output
    22152a7e I--Q---     1 perm 3b010000     0     0 psk       NVMe1R01 nqn.2014-08.org.nvmexpress:uuid:ffa0c815-e28b-4bb1-8d4c-7c6d5e610bfc nqn.1992-08.com.netapp:sn.1d59a6b2416b11ef9ed5d039ea50acb3:subsystem.sles15 UoP9dEfvuCUzzpS0DYxnshKDapZYmvA0/RJJ8JAqmAo=: 32
  9. Connect to the ONTAP subsystem using the inserted TLS PSK:

    1. # nvme connect -t tcp -w 20.20.10.80 -a 20.20.10.14 -n nqn.1992-08.com.netapp:sn.1d59a6b2416b11ef9ed5d039ea50acb3:subsystem.sles15 --tls_key=0x22152a7e --tls

      Example output
      connecting to device: nvme0
    2. # nvme list-subsys

      Example output
      nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.1d59a6b2416b11ef9ed5d039ea50acb3:subsystem.sles15
                     hostnqn=nqn.2014-08.org.nvmexpress:uuid:ffa0c815-e28b-4bb1-8d4c-7c6d5e610bfc
                     iopolicy=round-robin
      \
       +- nvme0 tcp traddr=20.20.10.14,trsvcid=4420,host_traddr=20.20.10.80,src_addr=20.20.10.80 live
  10. Add the target, and verify the TLS connectivity to the specified ONTAP subsystem:

    # nvme subsystem controller show -vserver sles15_tls -subsystem sles15 -instance

    Show example output
      (vserver nvme subsystem controller show)
                           Vserver Name: sles15_tls
                              Subsystem: sles15
                          Controller ID: 0040h
                      Logical Interface: sles15t_e1a_1
                                   Node: A900-17-174
                               Host NQN: nqn.2014-08.org.nvmexpress:uuid:ffa0c815-e28b-4bb1-8d4c-7c6d5e610bfc
                     Transport Protocol: nvme-tcp
            Initiator Transport Address: 20.20.10.80
                        Host Identifier: ffa0c815e28b4bb18d4c7c6d5e610bfc
                   Number of I/O Queues: 4
                       I/O Queue Depths: 128, 128, 128, 128
                      Admin Queue Depth: 32
                  Max I/O Size in Bytes: 1048576
              Keep-Alive Timeout (msec): 5000
                           Vserver UUID: 1d59a6b2-416b-11ef-9ed5-d039ea50acb3
                         Subsystem UUID: 9b81e3c5-5037-11ef-8a90-d039ea50ac83
                 Logical Interface UUID: 8185dcac-5035-11ef-8abb-d039ea50acb3
                  Header Digest Enabled: false
                    Data Digest Enabled: false
           Authentication Hash Function: -
    Authentication Diffie-Hellman Group: -
                    Authentication Mode: none
           Transport Service Identifier: 4420
                           TLS Key Type: configured
                       TLS PSK Identity: NVMe1R01 nqn.2014-08.org.nvmexpress:uuid:ffa0c815-e28b-4bb1-8d4c-7c6d5e610bfc nqn.1992-08.com.netapp:sn.1d59a6b2416b11ef9ed5d039ea50acb3:subsystem.sles15 UoP9dEfvuCUzzpS0DYxnshKDapZYmvA0/RJJ8JAqmAo=
                             TLS Cipher: TLS-AES-128-GCM-SHA256

Known issues

There are no known issues for the NVMe-oF host configuration for SUSE Linux Enterprise Server 15 SP6 with ONTAP.