NetApp virtualization solutions

Configure networking for NVMe/TCP on ESXi hosts in a VCF VI workload domain

Contributors netapp-jsnyder netapp-lhalbert

Configure networking for NVMe over TCP (NVMe/TCP) storage on ESXi hosts in a VI workload domain. You'll create distributed port groups for NVMe traffic, set up VMkernel adapters on each ESXi host, and add an NVMe/TCP adapter to enable reliable connectivity and multipathing.

Perform the following steps on the VI workload domain cluster using the vSphere client. In this case, vCenter Single Sign-On is in use, so the vSphere client is shared between the management and workload domains.

Step 1: Create distributed port groups for NVMe/TCP traffic

Complete the following steps to create a new distributed port group for each NVMe/TCP network.

Steps
  1. From the vSphere client, navigate to Inventory > Networking for the workload domain. Navigate to the existing Distributed Switch and choose the action to create New Distributed Port Group….

    Choose to create new port group

  2. In the New Distributed Port Group wizard, fill in a name for the new port group and click Next to continue.

  3. On the Configure settings page, fill out all settings. If VLANs are being used be sure to provide the correct VLAN ID. Click Next to continue.

    Fill out VLAN ID

  4. On the Ready to complete page, review the changes and click Finish to create the new distributed port group.

  5. Repeat this process to create a distributed port group for the second NVMe/TCP network, and make sure to enter the correct VLAN ID.

  6. When both port groups have been created, navigate to the first port group and select the action to Edit settings…​.

    DPG - edit settings

  7. On the Distributed Port Group - Edit Settings page, navigate to Teaming and failover in the left-hand menu and click uplink2 to move it down to Unused uplinks.

    move uplink2 to unused

  8. Repeat this step for the second NVMe/TCP port group. This time, move uplink1 down to Unused uplinks.

    move uplink 1 to unused
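If you prefer automation over the wizard, the port groups above can also be created with the open-source govc CLI. This is a minimal sketch, not part of the official procedure: the switch name (DSwitch), port group names, and VLAN IDs are placeholders for your environment, and the uplink failover changes from steps 6 through 8 still need to be made in the vSphere client.

```shell
# Hypothetical example: create one distributed port group per NVMe/TCP
# network on the workload domain's existing distributed switch.
# GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD must point at the workload
# domain vCenter before running; all names and VLAN IDs are placeholders.
govc dvs.portgroup.add -dvs DSwitch -vlan 3374 NVMe-TCP-A
govc dvs.portgroup.add -dvs DSwitch -vlan 3375 NVMe-TCP-B

# Confirm the port groups and their VLAN assignments
govc dvs.portgroup.info DSwitch
```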

Step 2: Create the VMkernel adapters on each ESXi host

Create the VMkernel adapters on each ESXi host in the workload domain.

Steps
  1. From the vSphere client, navigate to one of the ESXi hosts in the workload domain inventory. From the Configure tab select VMkernel adapters and click Add Networking…​ to start.

    Start add networking wizard

  2. On the Select connection type window, choose VMkernel Network Adapter and click Next to continue.

    Choose VMkernel Network Adapter

  3. On the Select target device page, choose one of the distributed port groups for NVMe/TCP that was created previously.

    Choose target port group

  4. On the Port properties page, select the checkbox for NVMe over TCP and click Next to continue.

    VMkernel port properties

  5. On the IPv4 settings page, fill in the IP address and subnet mask, and provide a gateway IP address only if required. Click Next to continue.

    VMkernel IPv4 settings

  6. Review your selections on the Ready to complete page and click Finish to create the VMkernel adapter.

    Review VMkernel selections

  7. Repeat this process to create a VMkernel adapter for the second NVMe/TCP network.
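If you prefer the ESXi command line, the VMkernel configuration above can be sketched with esxcli, run over SSH on each host. This is a sketch under assumptions: the vmk number, IP address, and netmask below are placeholders for your environment. The NVMeTCP tag corresponds to the NVMe/TCP service checkbox on the Port properties page.

```shell
# Hypothetical example: assign a static IPv4 address to the VMkernel
# adapter created for the first NVMe/TCP network (vmk2 is a placeholder)
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.10.21 -N 255.255.255.0 -t static

# Enable the NVMe/TCP service on the adapter
esxcli network ip interface tag add -i vmk2 -t NVMeTCP

# Verify the adapter configuration
esxcli network ip interface list
```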

Step 3: Add NVMe/TCP adapter

Each ESXi host in the workload domain cluster must have an NVMe/TCP software adapter installed for every established NVMe/TCP network dedicated to storage traffic.

To install NVMe/TCP adapters and discover the NVMe controllers, complete the following steps:

  1. In the vSphere client, navigate to one of the ESXi hosts in the workload domain cluster. From the Configure tab, click Storage Adapters in the menu.

  2. From the Add Software Adapter drop-down menu, select Add NVMe over TCP adapter.

    Add NVMe/TCP adapter

  3. In the Add Software NVMe over TCP adapter window, open the Physical Network Adapter drop-down menu and select the physical network adapter on which to enable the NVMe adapter.

    Select physical adapter

  4. Repeat this process for the second network assigned to NVMe/TCP traffic, assigning the correct physical adapter.

  5. Select one of the newly installed NVMe/TCP adapters. On the Controllers tab, select Add Controller.

    Add Controller

  6. In the Add controller window, select the Automatically tab and complete the following steps.

    1. Enter an IP address for one of the SVM logical interfaces on the same network as the physical adapter assigned to this NVMe/TCP adapter.

    2. Click the Discover Controllers button.

    3. From the list of discovered controllers, select the checkboxes for the two controllers whose network addresses align with this NVMe/TCP adapter.

  7. Click OK to add the selected controllers.

    Discover and add controllers

  8. After a few seconds you should see the NVMe namespace appear on the Devices tab.

    NVMe namespace listed under devices

  9. Repeat this procedure to create an NVMe/TCP adapter for the second network established for NVMe/TCP traffic.
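The adapter creation and controller discovery above can also be sketched with esxcli on each host. This is a hedged sketch, not the official procedure: the vmnic and vmhba names, IP address, and subsystem NQN below are placeholders for your environment. NVMe/TCP software adapters typically appear as new vmhba devices once enabled.

```shell
# Hypothetical example: enable an NVMe/TCP software adapter on each
# physical NIC carrying NVMe/TCP traffic (vmnic names are placeholders)
esxcli nvme fabrics enable --protocol TCP --device vmnic0
esxcli nvme fabrics enable --protocol TCP --device vmnic1

# List the resulting NVMe adapters (for example, vmhba65 and vmhba66)
esxcli nvme adapter list

# Discover controllers behind an SVM logical interface on the matching
# network, then connect to a discovered subsystem (the IP address and
# NQN shown are placeholders)
esxcli nvme fabrics discover -a vmhba65 -i 192.168.10.100
esxcli nvme fabrics connect -a vmhba65 -i 192.168.10.100 -s nqn.1992-08.com.netapp:placeholder

# Confirm the NVMe namespace is visible
esxcli nvme namespace list
```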

What's next?

After configuring networking, configure storage for NVMe vVols.