
Perform iSCSI-specific tasks


For the iSCSI protocol, you configure the switches and configure networking on the array side and the host side. Then you verify the IP network connections.

Step 1: Configure the switches—​iSCSI, VMware

You configure the switches according to the vendor's recommendations for iSCSI. These recommendations might include configuration directives as well as code updates.

Before you begin

Make sure you have the following:

  • Two separate networks for high availability. Make sure that you isolate your iSCSI traffic to separate network segments.

  • Enabled send and receive hardware flow control end to end.

  • Disabled priority flow control.

  • If appropriate, enabled jumbo frames.

Note Port channels/LACP is not supported on the controller's switch ports. Host-side LACP is not recommended; multipathing provides equivalent or better benefits.
Steps

Consult your switch vendor's documentation.
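
The switch commands themselves are vendor specific, but you can confirm from the ESXi side that the physical NICs linked up and negotiated flow control. This is a minimal sketch; vmnic0 is an example uplink name, and the pauseParams namespace is available only on recent ESXi releases.

  # List the physical NICs with their link state, speed, and driver
  esxcli network nic list

  # Show link and driver details for one uplink (vmnic0 is an example name)
  esxcli network nic get -n vmnic0

  # Show the send/receive pause (flow control) settings for each NIC
  esxcli network nic pauseParams list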

Step 2: Configure networking—iSCSI, VMware

You can set up your iSCSI network in many ways, depending on your data storage requirements. Consult your network administrator for tips on selecting the best configuration for your environment.

Before you begin

Make sure you have the following:

  • Enabled send and receive hardware flow control end to end.

  • Disabled priority flow control.

  • If appropriate, enabled jumbo frames.

    If you are using jumbo frames within the IP SAN for performance reasons, make sure to configure the array, switches, and hosts to use jumbo frames. Consult your operating system and switch documentation for information on how to enable jumbo frames on the hosts and on the switches; a host-side sketch follows this list. To enable jumbo frames on the array, complete the steps in Step 3.
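
On the host side, the following esxcli sketch shows one way to raise the MTU after the iSCSI VMkernel ports exist (they are created in Step 4). vSwitch1 and vmk1 are example object names; use these commands only if jumbo frames are enabled end to end.

  # Raise the MTU on the standard vSwitch that carries iSCSI traffic (example name: vSwitch1)
  esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

  # Raise the MTU on the iSCSI VMkernel port (example name: vmk1)
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000

  # Confirm the new MTU values
  esxcli network ip interface list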

About this task

While planning your iSCSI networking, remember that the VMware Configuration Maximums guide states that the maximum number of supported iSCSI storage paths is 8. You must consider this limit to avoid configuring too many paths.

By default, the VMware iSCSI software initiator creates a single session per iSCSI target when you are not using iSCSI port binding.

Note VMware iSCSI port binding is a feature that forces all bound VMkernel ports to log in to all target ports that are accessible on the configured network segments. It is meant to be used with arrays that present a single network address for the iSCSI target. NetApp recommends that you do not use iSCSI port binding. For additional information, see the VMware Knowledge Base article about considerations for using software iSCSI port binding in ESX/ESXi. If the ESXi host is attached to another vendor's storage, NetApp recommends that you use separate iSCSI VMkernel ports to avoid any conflict with port binding.

As a best practice, do NOT use port binding with E-Series storage arrays.

To ensure a good multipathing configuration, use multiple network segments for the iSCSI network. Place at least one host-side port and at least one port from each array controller on one network segment, and an identical group of host-side and array-side ports on another network segment. Where possible, use multiple Ethernet switches to provide additional redundancy.
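
After the array-side and host-side configuration later in this procedure is complete, you can check from the ESXi shell how many sessions and paths each E-Series volume actually received. This is a minimal sketch; the naa identifier is a placeholder, and the path count you expect depends on how many ports and segments you configured (no more than the 8-path maximum noted above).

  # List the iSCSI sessions established by the software initiator
  esxcli iscsi session list

  # List every storage path; count the paths per E-Series device
  esxcli storage core path list

  # Show the multipathing (NMP) details for a single device
  # (the naa identifier below is a placeholder; use a real one from the list above)
  esxcli storage nmp device list -d naa.600a0980xxxxxxxxxxxxxxxxxxxxxxxx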

Steps

Consult your switch vendor's documentation.

Note Many network switches must be configured above 9,000 bytes to allow for IP overhead. Consult your switch documentation for more information.

Step 3: Configure array-side networking—​iSCSI, VMware

You use the SANtricity System Manager GUI to configure iSCSI networking on the array side.

Before you begin

Make sure you have the following:

  • The IP address or domain name for one of the storage array controllers.

  • The password for the System Manager GUI, or Role-Based Access Control (RBAC) or LDAP and a directory service configured for the appropriate security access to the storage array. See the SANtricity System Manager online help for more information about Access Management.

About this task

This task describes how to access the iSCSI port configuration from the Hardware page. You can also access the configuration from System › Settings › Configure iSCSI ports.

Note For additional information on how to set up the array-side networking on your VMware configuration, see the following technical report: VMware Configuration Guide for E-Series SANtricity iSCSI Integration with ESXi 6.x and 7.x.
Steps
  1. From your browser, enter the following URL: https://<DomainNameOrIPAddress>

    DomainNameOrIPAddress is the domain name or IP address for one of the storage array controllers.

    The first time SANtricity System Manager is opened on an array that has not been configured, the Set Administrator Password prompt appears. Role-based access management configures four local roles: admin, support, security, and monitor. The latter three roles have random passwords that cannot be guessed. After you set a password for the admin role, you can change all of the passwords using the admin credentials. See the SANtricity System Manager online help for more information on the four local user roles.

  2. Enter the System Manager password for the admin role in the Set Administrator Password and Confirm Password fields, and then click Set Password.

    The Setup wizard launches if there are no pools, volume groups, workloads, or notifications configured.

  3. Close the Setup wizard.

    You will use the wizard later to complete additional setup tasks.

  4. Select Hardware.

  5. If the graphic shows the drives, click Show back of shelf.

    The graphic changes to show the controllers instead of the drives.

  6. Click the controller with the iSCSI ports you want to configure.

    The controller's context menu appears.

  7. Select Configure iSCSI ports.

    The Configure iSCSI Ports dialog box opens.

  8. In the drop-down list, select the port you want to configure, and then click Next.

  9. Select the configuration port settings, and then click Next.

    To see all port settings, click the Show more port settings link on the right of the dialog box.

    Port settings:

    • Configured Ethernet port speed: Select the desired speed. The options that appear in the drop-down list depend on the maximum speed that your network can support (for example, 10 Gbps).

      Note The optional 25Gb iSCSI host interface cards available on the controllers do not auto-negotiate speeds. You must set the speed for each port to either 10 Gb or 25 Gb. All ports must be set to the same speed.

    • Enable IPv4 / Enable IPv6: Select one or both options to enable support for IPv4 and IPv6 networks.

    • TCP listening port (available by clicking Show more port settings): If necessary, enter a new port number. The listening port is the TCP port number that the controller uses to listen for iSCSI logins from host iSCSI initiators. The default listening port is 3260. You must enter 3260 or a value between 49152 and 65535. (A quick reachability check from a host appears after these steps.)

    • MTU size (available by clicking Show more port settings): If necessary, enter a new size in bytes for the maximum transmission unit (MTU). The default MTU size is 1500 bytes per frame. You must enter a value between 1500 and 9000.

    • Enable ICMP PING responses: Select this option to enable the Internet Control Message Protocol (ICMP). The operating systems of networked computers use this protocol to send messages. These ICMP messages determine whether a host is reachable and how long it takes to get packets to and from that host.

    If you selected Enable IPv4, a dialog box opens for selecting IPv4 settings after you click Next. If you selected Enable IPv6, a dialog box opens for selecting IPv6 settings after you click Next. If you selected both options, the dialog box for IPv4 settings opens first, and then after you click Next, the dialog box for IPv6 settings opens.

  10. Configure the IPv4 and/or IPv6 settings, either automatically or manually. To see all port settings, click the Show more settings link on the right of the dialog box.

    Port settings:

    • Automatically obtain configuration: Select this option to obtain the configuration automatically.

    • Manually specify static configuration: Select this option, and then enter a static address in the fields. For IPv4, include the network subnet mask and gateway. For IPv6, include the routable IP address and router IP address.

  11. Click Finish.

  12. Close System Manager.
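
Once the array ports are configured, you can confirm from an ESXi host that each port is listening on the iSCSI TCP port (3260 by default). This is a quick, optional check; the address below is an example.

  # Test the TCP connection to one array iSCSI port (example address, default port 3260)
  nc -z 192.0.2.8 3260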

Step 4: Configure host-side networking—​iSCSI

Configuring iSCSI networking on the host side enables the VMware iSCSI initiator to establish a session with the array.

About this task

In this express method for configuring iSCSI networking on the host side, you allow the ESXi host to carry iSCSI traffic over four redundant paths to the storage.

After you complete this task, the host is configured with a single vSwitch containing both VMkernel ports and both VMNICs.

For additional information on configuring iSCSI networking for VMware, see the VMware vSphere Documentation for your version of vSphere.

Steps
  1. Configure the switches that will be used to carry iSCSI storage traffic.

  2. Enable send and receive hardware flow control end to end.

  3. Disable priority flow control.

  4. Complete the array-side iSCSI configuration.

  5. Use two NIC ports for iSCSI traffic.

  6. Use either the vSphere Client or the vSphere Web Client to perform the host-side configuration.

    The two interfaces differ in functionality, so the exact workflow varies. A command-line sketch using esxcli follows these steps.
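
If you prefer the command line to the vSphere clients, the following sketch shows one way to build the configuration described above on a standard vSwitch. All object names (vSwitch1, iSCSI-A, iSCSI-B, vmnic1, vmnic2, vmk1, vmk2) and addresses are examples, and the two port groups are assumed to sit on the two separate network segments recommended in Step 2.

  # Create a vSwitch for iSCSI and attach the two physical uplinks (example names)
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

  # Create one port group and one VMkernel port per iSCSI network segment
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-B
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
  esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B

  # Assign a static address on each iSCSI segment (example addresses)
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.0.2.50 --netmask=255.255.255.0 --type=static
  esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=198.51.100.50 --netmask=255.255.255.0 --type=static

  # Enable the software iSCSI initiator (target discovery is covered in a later procedure)
  esxcli iscsi software set --enabled=true

Because each VMkernel port sits on its own network segment and port binding is not used, each port logs in only to the array ports it can reach, which helps keep the path count within the VMware maximum.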

Step 5: Verify IP network connections—​iSCSI, VMware

You verify Internet Protocol (IP) network connections by using ping tests to ensure the host and array are able to communicate.

Steps
  1. On the host, run one of the following commands, depending on whether jumbo frames are enabled:

    • If jumbo frames are not enabled, run this command:

      vmkping <iSCSI_target_IP_address>
    • If jumbo frames are enabled, run the ping command with a payload size of 8,972 bytes. The IP and ICMP combined headers are 28 bytes, which, when added to the payload, equals 9,000 bytes. The -s switch sets the packet size, and the -d switch sets the DF (Don't Fragment) bit on the IPv4 packet. These options allow jumbo frames of 9,000 bytes to be transmitted successfully between the iSCSI initiator and the target.

      vmkping -s 8972 -d <iSCSI_target_IP_address>

      In this example, the iSCSI target IP address is 192.0.2.8.

      vmkping -s 8972 -d 192.0.2.8
      Pinging 192.0.2.8 with 8972 bytes of data:
      Reply from 192.0.2.8: bytes=8972 time=2ms TTL=64
      Reply from 192.0.2.8: bytes=8972 time=2ms TTL=64
      Reply from 192.0.2.8: bytes=8972 time=2ms TTL=64
      Reply from 192.0.2.8: bytes=8972 time=2ms TTL=64
      Ping statistics for 192.0.2.8:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
        Minimum = 2ms, Maximum = 2ms, Average = 2ms
  2. Issue a vmkping command from each host's initiator address (the IP address of the host Ethernet port used for iSCSI) to each controller iSCSI port. Perform this action from each host server in the configuration, changing the IP addresses as necessary. A sketch that targets a specific VMkernel interface follows these steps.

    Note If the command fails with the message sendto() failed (Message too long), verify the MTU size (jumbo frame support) for the Ethernet interfaces on the host server, storage controller, and switch ports.
  3. Return to the iSCSI Configuration procedure to finish target discovery.
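
Because the host has more than one VMkernel port dedicated to iSCSI, it is useful to force the ping out of a specific interface with the -I option so that each path gets exercised. vmk1 and the target address below are examples.

  # Ping one array iSCSI port from a specific VMkernel interface (example names and address)
  vmkping -I vmk1 192.0.2.8

  # Repeat with jumbo-sized, non-fragmented packets if jumbo frames are enabled
  vmkping -I vmk1 -s 8972 -d 192.0.2.8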

Step 6: Record your configuration

You can generate and print a PDF of this page, and then use the following worksheet to record your protocol-specific storage configuration information. You need this information to perform provisioning tasks.

Recommended configurations consist of two initiator ports and four target ports with one or more VLANs.

(Figure: recommended VMware iSCSI configuration; the callout numbers in the worksheet below refer to this figure.)

Target IQN

  Callout No.   Target port connection   IQN
  2             Target port              _______________________

Mapping host name

  Callout No.   Host information         Name and type
  1             Mapping host name        _______________________
                Host OS type             _______________________