
NVMe-oF Host Configuration for ESXi 7.x with ONTAP


You can configure NVMe over Fabrics (NVMe-oF) on initiator hosts running ESXi 7.x, with ONTAP as the target.

Supportability

  • Beginning with ONTAP 9.7, NVMe over Fibre Channel (NVMe/FC) is supported with VMware vSphere releases.

  • Beginning with ESXi 7.0U3c, the NVMe/TCP feature is supported on the ESXi hypervisor.

  • Beginning with ONTAP 9.10.1, the NVMe/TCP feature is supported in ONTAP.

Features

  • The ESXi initiator host can run both NVMe/FC and FCP traffic through the same adapter ports. See the Hardware Universe for a list of supported FC adapters and controllers, and the NetApp Interoperability Matrix for the most current list of supported configurations and versions.

  • Beginning with ONTAP 9.9.1 P3, the NVMe/FC feature is supported with ESXi 7.0 update 3.

  • For ESXi 7.0 and later releases, the high-performance plugin (HPP) is the default plugin for NVMe devices (see the example after this list).
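
If you want to confirm that HPP has claimed the NVMe devices on a host, you can list the devices it owns. This is a minimal check, not part of the required configuration; the command returns entries only after namespaces are visible to the host, and the output fields can vary by ESXi build.

# esxcli storage hpp device list

Each claimed namespace appears under its uuid.* identifier, together with details such as the path selection scheme in use.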

Known limitations

The following configurations are not supported:

  • RDM mapping

  • VVols

Enable NVMe/FC

  1. Check the ESXi host NQN string and verify that it matches the host NQN string for the corresponding subsystem on the ONTAP array:

    # esxcli nvme info get
    Host NQN: nqn.2014-08.com.vmware:nvme:nvme-esx
    
    # vserver nvme subsystem host show -vserver vserver_nvme
      Vserver      Subsystem       Host NQN
      ------------ --------------- ----------------------------------------
      vserver_nvme ss_vserver_nvme nqn.2014-08.com.vmware:nvme:nvme-esx
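
If the host NQN is missing from the subsystem, you can add it on the ONTAP array before proceeding. The following is a sketch that reuses the vserver and subsystem names from the example above; substitute your own names and the NQN reported by esxcli nvme info get.

# vserver nvme subsystem host add -vserver vserver_nvme -subsystem ss_vserver_nvme -host-nqn nqn.2014-08.com.vmware:nvme:nvme-esx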

Configure Broadcom/Emulex

  1. Check whether the configuration is supported with the required driver/firmware by referring to the NetApp Interoperability Matrix (a driver version check example follows this procedure).

  2. Set the lpfc driver parameter lpfc_enable_fc4_type=3 to enable NVMe/FC support in the lpfc driver, and then reboot the host.

Note Beginning with vSphere 7.0 update 3, the brcmnvmefc driver is no longer available; the lpfc driver now includes the NVMe over Fibre Channel (NVMe/FC) functionality previously delivered with the brcmnvmefc driver.
Note The lpfc_enable_fc4_type=3 parameter is set by default for the LPe35000-series adapters. You must run the following command to set it manually for the LPe32000-series and LPe31000-series adapters.
# esxcli system module parameters set -m lpfc -p lpfc_enable_fc4_type=3

# esxcli system module parameters list -m lpfc | grep lpfc_enable_fc4_type
lpfc_enable_fc4_type              int     3      Defines what FC4 types are supported

# esxcli storage core adapter list
HBA Name  Driver   Link State  UID                                   Capabilities         Description
--------  -------  ----------  ------------------------------------  -------------------  -----------
vmhba1    lpfc     link-up     fc.200000109b95456f:100000109b95456f  Second Level Lun ID  (0000:86:00.0) Emulex Corporation Emulex LPe36000 Fibre Channel Adapter    FC HBA
vmhba2    lpfc     link-up     fc.200000109b954570:100000109b954570  Second Level Lun ID  (0000:86:00.1) Emulex Corporation Emulex LPe36000 Fibre Channel Adapter    FC HBA
vmhba64   lpfc     link-up     fc.200000109b95456f:100000109b95456f                       (0000:86:00.0) Emulex Corporation Emulex LPe36000 Fibre Channel Adapter   NVMe HBA
vmhba65   lpfc     link-up     fc.200000109b954570:100000109b954570                       (0000:86:00.1) Emulex Corporation Emulex LPe36000 Fibre Channel Adapter   NVMe HBA
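
To compare the installed lpfc driver version against the versions listed in the NetApp Interoperability Matrix (step 1), you can query the installed VIBs. This is a sketch; the exact VIB or component name can differ depending on how the driver was delivered.

# esxcli software vib list | grep lpfc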

Configure Marvell/QLogic

Steps
  1. Check whether the configuration is supported with the required driver/firmware by referring to the NetApp Interoperability Matrix.

  2. Set the qlnativefc driver parameter ql2xnvmesupport=1 to enable NVMe/FC support in the qlnativefc driver, and then reboot the host (a parameter verification example follows this procedure).

    # esxcfg-module -s 'ql2xnvmesupport=1' qlnativefc

    Note The ql2xnvmesupport parameter is set by default for the QLE 277x-series adapters. For other supported QLogic adapters, run the preceding command to set it manually.

    You can confirm that the qlnativefc module is loaded:

    # esxcfg-module -l | grep qlnativefc
    qlnativefc               4    1912
  3. Check whether NVMe is enabled on the adapter:

    # esxcli storage core adapter list
    HBA Name  Driver      Link State  UID                                   Capabilities         Description
    --------  ----------  ----------  ------------------------------------  -------------------  -----------
    vmhba3    qlnativefc  link-up     fc.20000024ff1817ae:21000024ff1817ae  Second Level Lun ID  (0000:5e:00.0) QLogic Corp QLE2742 Dual Port 32Gb Fibre Channel to PCIe Adapter  FC Adapter
    vmhba4    qlnativefc  link-up     fc.20000024ff1817af:21000024ff1817af  Second Level Lun ID  (0000:5e:00.1) QLogic Corp QLE2742 Dual Port 32Gb Fibre Channel to PCIe Adapter  FC Adapter
    vmhba64   qlnativefc  link-up     fc.20000024ff1817ae:21000024ff1817ae                       (0000:5e:00.0) QLogic Corp QLE2742 Dual Port 32Gb Fibre Channel to PCIe Adapter  NVMe FC Adapter
    vmhba65   qlnativefc  link-up     fc.20000024ff1817af:21000024ff1817af                       (0000:5e:00.1) QLogic Corp QLE2742 Dual Port 32Gb Fibre Channel to PCIe Adapter  NVMe FC Adapter
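
After the reboot in step 2, you can verify that the ql2xnvmesupport parameter is applied to the qlnativefc module, mirroring the lpfc check shown earlier. A value of 1 in the output indicates that NVMe support is enabled in the driver.

# esxcli system module parameters list -m qlnativefc | grep ql2xnvmesupport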

Validate NVMe/FC

  1. Verify that the NVMe/FC adapters are listed on the ESXi host:

    # esxcli nvme adapter list
    
    Adapter  Adapter Qualified Name           Transport Type  Driver      Associated Devices
    -------  -------------------------------  --------------  ----------  ------------------
    vmhba64  aqn:qlnativefc:21000024ff1817ae  FC              qlnativefc
    vmhba65  aqn:qlnativefc:21000024ff1817af  FC              qlnativefc
    vmhba66  aqn:lpfc:100000109b579d9c        FC              lpfc
    vmhba67  aqn:lpfc:100000109b579d9d        FC              lpfc
  2. Verify that the NVMe/FC namespaces are properly created:

    The UUIDs in the following example represent the NVMe/FC namespace devices.

    # esxcfg-mpath -b
    uuid.5084e29a6bb24fbca5ba076eda8ecd7e : NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
       vmhba65:C0:T0:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:34:80:0d:6d:72:69 WWPN: 21:00:34:80:0d:6d:72:69  Target: WWNN: 20:17:00:a0:98:df:e3:d1 WWPN: 20:2f:00:a0:98:df:e3:d1
       vmhba65:C0:T1:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:34:80:0d:6d:72:69 WWPN: 21:00:34:80:0d:6d:72:69  Target: WWNN: 20:17:00:a0:98:df:e3:d1 WWPN: 20:1a:00:a0:98:df:e3:d1
       vmhba64:C0:T0:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:34:80:0d:6d:72:68 WWPN: 21:00:34:80:0d:6d:72:68  Target: WWNN: 20:17:00:a0:98:df:e3:d1 WWPN: 20:18:00:a0:98:df:e3:d1
       vmhba64:C0:T1:L1 LUN:1 state:active fc Adapter: WWNN: 20:00:34:80:0d:6d:72:68 WWPN: 21:00:34:80:0d:6d:72:68  Target: WWNN: 20:17:00:a0:98:df:e3:d1 WWPN: 20:19:00:a0:98:df:e3:d1
    Note In ONTAP 9.7, the default block size for an NVMe/FC namespace is 4K. This default size is not compatible with ESXi. Therefore, when you create namespaces for ESXi, you must set the namespace block size to 512B. You can do this with the vserver nvme namespace create command.
    Example

    vserver nvme namespace create -vserver vs_1 -path /vol/nsvol/namespace1 -size 100g -ostype vmware -block-size 512B

    Refer to the ONTAP 9 Command man pages for additional details.

  3. Verify the status of the individual ANA paths of the respective NVMe/FC namespace devices:

    # esxcli storage hpp path list -d uuid.5084e29a6bb24fbca5ba076eda8ecd7e
    fc.200034800d6d7268:210034800d6d7268-fc.201700a098dfe3d1:201800a098dfe3d1-uuid.5084e29a6bb24fbca5ba076eda8ecd7e
       Runtime Name: vmhba64:C0:T0:L1
       Device: uuid.5084e29a6bb24fbca5ba076eda8ecd7e
       Device Display Name: NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
       Path State: active
       Path Config: {TPG_id=0,TPG_state=AO,RTP_id=0,health=UP}
    
    fc.200034800d6d7269:210034800d6d7269-fc.201700a098dfe3d1:201a00a098dfe3d1-uuid.5084e29a6bb24fbca5ba076eda8ecd7e
       Runtime Name: vmhba65:C0:T1:L1
       Device: uuid.5084e29a6bb24fbca5ba076eda8ecd7e
       Device Display Name: NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
       Path State: active
       Path Config: {TPG_id=0,TPG_state=AO,RTP_id=0,health=UP}
    
    fc.200034800d6d7269:210034800d6d7269-fc.201700a098dfe3d1:202f00a098dfe3d1-uuid.5084e29a6bb24fbca5ba076eda8ecd7e
       Runtime Name: vmhba65:C0:T0:L1
       Device: uuid.5084e29a6bb24fbca5ba076eda8ecd7e
       Device Display Name: NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
       Path State: active unoptimized
       Path Config: {TPG_id=0,TPG_state=ANO,RTP_id=0,health=UP}
    
    fc.200034800d6d7268:210034800d6d7268-fc.201700a098dfe3d1:201900a098dfe3d1-uuid.5084e29a6bb24fbca5ba076eda8ecd7e
       Runtime Name: vmhba64:C0:T1:L1
       Device: uuid.5084e29a6bb24fbca5ba076eda8ecd7e
       Device Display Name: NVMe Fibre Channel Disk (uuid.5084e29a6bb24fbca5ba076eda8ecd7e)
       Path State: active unoptimized
       Path Config: {TPG_id=0,TPG_state=ANO,RTP_id=0,health=UP}
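
As an additional cross-check, you can list the NVMe controllers and namespaces that the host has discovered and confirm that the namespace UUID shown above is present. This is a minimal sketch; the column layout varies between ESXi builds.

# esxcli nvme controller list
# esxcli nvme namespace list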

Configure NVMe/TCP

Beginning with ESXi 7.0U3c, the required NVMe/TCP modules are loaded by default. To configure the network and the NVMe/TCP adapter, refer to the VMware vSphere documentation.
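
As a minimal sketch of what that configuration typically involves, you can create the software NVMe/TCP adapter on a vmnic and then discover and connect to the ONTAP subsystem from the ESXi shell, after the VMkernel networking described in the vSphere documentation is in place. The vmnic name, IP address, and subsystem NQN below are placeholders taken from the validation example that follows; verify the exact workflow and options against the VMware vSphere documentation for your ESXi release.

# esxcli nvme fabrics enable -p TCP -d vmnic2
# esxcli nvme fabrics discover -a vmhba64 -i 192.168.100.11
# esxcli nvme fabrics connect -a vmhba64 -i 192.168.100.11 -s nqn.1992-08.com.netapp:sn.5e347cf68e0511ec9ec2d039ea13e6ed:subsystem.vs_name_tcp_ss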

Validate NVMe/TCP

Steps
  1. Verify the status of the NVMe/TCP adapter:

    [root@R650-8-45:~] esxcli nvme adapter list
    Adapter  Adapter Qualified Name           Transport Type  Driver   Associated Devices
    -------  -------------------------------  --------------  -------  ------------------
    vmhba64  aqn:nvmetcp:34-80-0d-30-ca-e0-T  TCP             nvmetcp  vmnic2
    vmhba65  aqn:nvmetcp:34-80-0d-30-ca-e1-T  TCP             nvmetcp  vmnic3
  2. To list the NVMe/TCP connections, use the following command:

    [root@R650-8-45:~] esxcli nvme controller list
    Name                                                                                                            Controller Number  Adapter  Transport Type  Is Online
    --------------------------------------------------------------------------------------------------------------  -----------------  -------  --------------  ---------
    nqn.1992-08.com.netapp:sn.5e347cf68e0511ec9ec2d039ea13e6ed:subsystem.vs_name_tcp_ss#vmhba64#192.168.100.11:4420  1580               vmhba64  TCP             true
    nqn.1992-08.com.netapp:sn.5e347cf68e0511ec9ec2d039ea13e6ed:subsystem.vs_name_tcp_ss#vmhba64#192.168.101.11:4420  1588               vmhba65  TCP             true
  3. To list the number of paths to an NVMe namespace, use the following command:

    [root@R650-8-45:~] esxcli storage hpp path list -d uuid.400bf333abf74ab8b96dc18ffadc3f99
    tcp.vmnic2:34:80:0d:30:ca:e0-tcp.unknown-uuid.400bf333abf74ab8b96dc18ffadc3f99
       Runtime Name: vmhba64:C0:T0:L3
       Device: uuid.400bf333abf74ab8b96dc18ffadc3f99
       Device Display Name: NVMe TCP Disk (uuid.400bf333abf74ab8b96dc18ffadc3f99)
       Path State: active unoptimized
       Path config: {TPG_id=0,TPG_state=ANO,RTP_id=0,health=UP}
    
    tcp.vmnic3:34:80:0d:30:ca:e1-tcp.unknown-uuid.400bf333abf74ab8b96dc18ffadc3f99
       Runtime Name: vmhba65:C0:T1:L3
       Device: uuid.400bf333abf74ab8b96dc18ffadc3f99
       Device Display Name: NVMe TCP Disk (uuid.400bf333abf74ab8b96dc18ffadc3f99)
       Path State: active
       Path config: {TPG_id=0,TPG_state=AO,RTP_id=0,health=UP}
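
You can also confirm the same connections from the ONTAP side by listing the controllers that the subsystem reports for this host. This is a sketch; the vserver name shown here is a placeholder, and the subsystem name is taken from the subsystem NQN in the example above.

# vserver nvme subsystem controller show -vserver vs_name_tcp -subsystem vs_name_tcp_ss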

Known issues

The NVMe-oF host configuration for ESXi 7.x with ONTAP has the following known issues:

NetApp Bug ID: 1420654
Title: ONTAP node non-operational when NVMe/FC protocol is used with ONTAP version 9.9.1
Workaround: Check and rectify any network issues in the host fabric. If this does not help, upgrade to a patch that fixes this issue.