E-Series storage systems

Perform NVMe over RoCE-specific tasks in E-Series - VMware

Contributors netapp-driley

For the NVMe over RoCE protocol, you configure the switches and determine the host port identifiers.

Step 1: Record your configuration

You can generate and print a PDF of this page, and then use the following worksheet to record your protocol-specific storage configuration information. You need this information to perform provisioning tasks.

Recommended configurations consist of two initiator ports and four target ports with one or more VLANs.

NVMe over RoCE port identifiers

Host identifiers

Callout No.  Host port connections  Software initiator NQN
1            Host (initiator) 1
1            Host (initiator) 2

Target identifiers

Callout No.  Array port connections            Target NQN
2            Array controller (target) port 1
2            Array controller (target) port 2
2            Array controller (target) port 3
2            Array controller (target) port 4

Mapping host

Mapping host name

Host OS type

This can vary based on the array. The EF300, EF600, and EF50 use 2 initiator ports with up to 4 target ports and 1 or more VLANs. The EF80 uses 2 initiator ports with up to 6 target ports and 1 or more VLANs.

Step 2: Configure the NVMe/RoCE switches

You configure the switches according to the vendor's recommendations for NVMe over RoCE. These recommendations might include both configuration directives as well as code updates.

About this task

This task describes the general steps for configuring the switches for NVMe over RoCE. For specific instructions, see your switch vendor's documentation.

Before you begin, make sure you have the following:

  • Two separate networks for high availability. Make sure that you isolate your NVMe over RoCE traffic to separate network segments.

Steps

Consult your switch vendor's documentation.

Step 3: Configure networking - NVMe/RoCE, VMware

You can set up your NVMe over RoCE network in many ways, depending on your data storage requirements. Consult your network administrator for tips on selecting the best configuration for your environment.

About this task

This task describes the general steps for configuring the network for NVMe over RoCE. For specific instructions, see your switch vendor's documentation.

Before you begin, make sure you have the following:

  • Switch configured for Lossless Ethernet for NVMe over RDMA.

While planning your NVMe over RoCE networking, remember that the VMware Configuration Maximums guide states that the maximum supported RDMA NVMe initiator ports per server is 2. You must consider this requirement to avoid configuring too many paths.

To ensure a good multipathing configuration, use multiple network segments for the NVMe over RoCE network. Place at least one host-side port and at least one port from each array controller on one network segment, and an identical group of host-side and array-side ports on another network segment. Where possible, use multiple Ethernet switches to provide additional redundancy.
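As a sketch of the layout described above, a two-segment plan pairs one host port and one port from each controller on each segment. All adapter names and addresses here are hypothetical examples, not values from this procedure:

```shell
# Hypothetical two-segment NVMe over RoCE addressing plan.
# Each segment carries one host port plus one port from each controller,
# giving two paths per segment and four paths per host in total.
print_nvme_paths() {
  echo "segment A: host vmnic1 (192.168.1.10) -> controller A port 1 (192.168.1.101)"
  echo "segment A: host vmnic1 (192.168.1.10) -> controller B port 1 (192.168.1.102)"
  echo "segment B: host vmnic2 (192.168.2.10) -> controller A port 2 (192.168.2.101)"
  echo "segment B: host vmnic2 (192.168.2.10) -> controller B port 2 (192.168.2.102)"
}
print_nvme_paths
```

With a plan like this, losing either network segment or either controller still leaves the host at least one working path.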

Steps

Consult your switch vendor's documentation.

Step 4: Configure array-side networking - NVMe/RoCE, VMware

You use the SANtricity System Manager interface to configure NVMe over RoCE networking on the array side.

About this task

This task describes how to access the NVMe over RoCE port configuration from the Controllers & components page in SANtricity System Manager. You can also access it from the Configure NVMe over RoCE ports page.

Before you begin, make sure you have the following:

  • The IP address or domain name for one of the storage array controllers.

  • The password for the System Manager GUI, or Role-Based Access Control (RBAC), LDAP, and a directory service configured for the appropriate security access to the storage array. See the SANtricity System Manager online help for more information about Access Management.

Steps
  1. From your browser, enter the following URL: https://<DomainNameOrIPAddress>

    DomainNameOrIPAddress is the domain name or IP address for one of the storage array controllers.

    The first time SANtricity System Manager is opened on an array that has not been configured, the Set Administrator Password prompt appears. Role-based access management configures four local roles: admin, support, security, and monitor. The latter three roles have random passwords that cannot be guessed. After you set a password for the admin role, you can change all of the passwords using the admin credentials. See the SANtricity System Manager online help for more information on the four local user roles.

  2. Enter the System Manager password for the admin role in the Set Administrator Password and Confirm Password fields, and then click Set Password.

    The Setup wizard launches if there are no pools, volume groups, workloads, or notifications configured.

  3. Close the Setup wizard.

    You will use the wizard later to complete additional setup tasks.

  4. Select Hardware > Controllers and components.

  5. Click the controller with the NVMe over RoCE ports you want to configure.

    The controller's context menu appears.

  6. Select Configure NVMe over RoCE ports.

    The Configure NVMe over RoCE Ports dialog box opens.

  7. In the drop-down list, select the port you want to configure, and then click Next.

  8. Select the configuration port settings, and then click Next.

    To see all port settings, click the Show more port settings link on the right of the dialog box.

    Port Setting Description

    Configured ethernet port speed

    Select the desired speed. The options that appear in the drop-down list depend on the maximum speed that your network can support (for example, 200 Gb/s).

    Enable IPv4 / Enable IPv6

    Select one or both options to enable support for IPv4 and IPv6 networks.

    MTU size (Available by clicking Show more port settings.)

    If necessary, enter a new size in bytes for the Maximum Transmission Unit (MTU).

    The default Maximum Transmission Unit (MTU) size is 4200 bytes per frame. You must enter a value between 1500 and 9000.

    If you selected Enable IPv4, a dialog box opens for selecting IPv4 settings after you click Next. If you selected Enable IPv6, a dialog box opens for selecting IPv6 settings after you click Next. If you selected both options, the dialog box for IPv4 settings opens first, and then after you click Next, the dialog box for IPv6 settings opens.

    Configure the IPv4 and/or IPv6 settings, either automatically or manually. To see all port settings, click the Show more settings link on the right of the dialog box.

    Port setting Description

    Automatically obtain configuration

    Select this option to obtain the configuration automatically.

    Manually specify static configuration

    Select this option, and then enter a static address in the fields. For IPv4, include the network subnet mask and gateway. For IPv6, include the routable IP address and router IP address.

  9. Click Finish.

  10. Close System Manager.

Step 5: Configure host-side networking - NVMe over RoCE, VMware

Configuring NVMe over RoCE networking on the host side enables the VMware NVMe over RDMA storage adapter initiator to establish a session with the array.

About this task

This configuration enables a lossless network using Differentiated Services Code Point (DSCP) based Priority Flow Control (PFC).

Steps
  1. Identify RDMA Network Adapters and record the vmnic paired uplink.

    For more information, see View RDMA Network Adapters.

  2. Configure VMkernel port binding for the RDMA adapter using a vSphere standard switch.

  3. Add the software NVMe over RDMA adapter.

  4. Add NVMe Controllers for NVMe over RDMA.

    For more information, see Add Controllers for NVMe over Fabrics.

  5. Configure lossless Ethernet for NVMe over RDMA.

    You configure a lossless network using Differentiated Services Code Point (DSCP) based Priority Flow Control (PFC).

    To configure DSCP-based PFC, see your network adapter vendor's documentation.
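The steps above can be sketched as a sequence of ESXi shell commands. This is a non-authoritative outline: vmrdma0, vmhba64, 192.168.1.101, and <target_NQN> are placeholders for your environment, and exact options should be verified with esxcli nvme fabrics --help on your ESXi release:

```shell
# List RDMA-capable devices and note each device's paired vmnic uplink (step 1).
esxcli rdma device list

# Create a software NVMe over RDMA adapter on the RDMA device (step 3).
# vmrdma0 is a placeholder; use a device name from the previous command.
esxcli nvme fabrics enable -p RDMA -d vmrdma0

# Confirm the new vmhba adapter exists.
esxcli nvme adapter list

# Discover and connect NVMe controllers on the array (step 4).
# vmhba64, the target IP address, and <target_NQN> are placeholders.
esxcli nvme fabrics discover -a vmhba64 -i 192.168.1.101 -p 4420
esxcli nvme fabrics connect -a vmhba64 -i 192.168.1.101 -p 4420 -s <target_NQN>
```

The vSphere Client can perform the same adapter creation and controller connection steps; the command form is convenient when configuring several hosts.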

Step 6: Verify IP network connections - NVMe over RoCE, VMware

You verify Internet Protocol (IP) network connections by using ping tests to ensure the host and array are able to communicate.

Steps
  1. On the host, issue the following command:

    vmkping <NVMe_over_RoCE_target_IP_address>

    In this example, the NVMe over RoCE target IP address is 192.6.21.231.

    vmkping -d 192.6.21.231
    PING 192.6.21.231 (192.6.21.231): 56 data bytes
    64 bytes from 192.6.21.231: icmp_seq=0 ttl=64 time=0.902 ms
    64 bytes from 192.6.21.231: icmp_seq=1 ttl=64 time=0.406 ms
    64 bytes from 192.6.21.231: icmp_seq=2 ttl=64 time=0.855 ms
    --- 192.6.21.231 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.406/0.721/0.902 ms
  2. Issue a vmkping command from each host's initiator address (the IP address of the host Ethernet port used for NVMe over RoCE) to each controller NVMe over RoCE port. Perform this action from each host server in the configuration, changing the IP addresses as necessary.

    Note If the command fails with the message sendto() failed (Message too long), verify the MTU size for the Ethernet interfaces on the host server, storage controller, and switch ports.
  3. Return to the NVMe over RoCE Configuration procedure to finish target discovery.
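Steps 1 and 2 above can be scripted as a loop over the target ports. In this sketch, 192.6.21.231 is the example target address from step 1; the other addresses and the vmk1 VMkernel port are placeholders for your environment:

```shell
# Ping each NVMe over RoCE target port from this host's initiator port.
# 192.6.21.231 is the example address above; the others are placeholders.
for target in 192.6.21.231 192.6.21.232 192.6.21.233 192.6.21.234
do
  # -I vmk1  : send from the VMkernel port used for NVMe over RoCE (placeholder)
  # -d       : set don't-fragment, so an MTU mismatch fails instead of fragmenting
  # -s 4172  : 4172-byte payload + 28 header bytes = the default 4200-byte MTU
  vmkping -I vmk1 -d -s 4172 "$target" || echo "path to $target failed"
done
```

Repeat the loop from each host, switching -I to the VMkernel port on the other network segment for the targets that segment serves. If pings succeed at the default size but fail at the large payload size, recheck the MTU on the host, switch, and array ports as described in the note above.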