VMware vSphere: NetApp HCI with Cisco ACI


VMware vSphere is an industry-leading virtualization platform that provides a way to build a resilient and reliable virtual infrastructure. vSphere contains virtualization, management, and interface layers. Its two core components are the ESXi server and vCenter Server. VMware ESXi is hypervisor software installed on a physical machine that hosts VMs and virtual appliances. vCenter Server is the service through which you manage multiple ESXi hosts connected in a network and pool host resources. For more information on VMware vSphere, see the documentation here.

Workflow

The following workflow was used to set up the virtual environment. Each of these steps might involve several individual tasks.

  1. Install and configure Nexus 9000 switches in ACI mode and APIC software on the UCS C-series server. See the Install and Upgrade documentation for detailed steps.

  2. Configure and set up the ACI fabric by referring to the documentation.

  3. Configure the tenants, application profiles, bridge domains, and EPGs required for NetApp HCI nodes. NetApp recommends using a one-BD-to-one-EPG framework, except for iSCSI. See the documentation here for more details. The minimum set of required EPGs is in-band management, iSCSI, iSCSI-A, iSCSI-B, VM motion, VM-data network, and native. A minimal API sketch for this step follows the notes below.

    Note iSCSI multipathing requires two iSCSI EPGs: iSCSI-A and iSCSI-B, each with one active uplink.
    Note NetApp mNode requires an iSCSI EPG with both uplinks active.
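
    The following is a minimal sketch of this step using the APIC REST API, assuming the Python requests package. The APIC address, credentials, and object names (NetApp-HCI, HCI-vrf, iSCSI-bd, HCI-app) are placeholders, not values mandated by this solution.

      import requests

      APIC = "https://apic.example.com"            # placeholder APIC address
      session = requests.Session()
      session.verify = False                       # lab only; use valid certificates in production

      # Authenticate; the APIC returns a session cookie that the Session object retains.
      session.post(f"{APIC}/api/aaaLogin.json", json={
          "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

      # One BD per EPG, except iSCSI: iSCSI, iSCSI-A, and iSCSI-B share a BD.
      tenant = {
          "fvTenant": {
              "attributes": {"name": "NetApp-HCI"},
              "children": [
                  {"fvCtx": {"attributes": {"name": "HCI-vrf"}}},
                  {"fvBD": {"attributes": {"name": "iSCSI-bd"}, "children": [
                      {"fvRsCtx": {"attributes": {"tnFvCtxName": "HCI-vrf"}}}]}},
                  {"fvAp": {"attributes": {"name": "HCI-app"}, "children": [
                      {"fvAEPg": {"attributes": {"name": "iSCSI"}, "children": [
                          {"fvRsBd": {"attributes": {"tnFvBDName": "iSCSI-bd"}}}]}}]}},
              ],
          }
      }
      # Repeat the fvAEPg block for the remaining EPGs (in-band management, VM motion, and so on).
      session.post(f"{APIC}/api/mo/uni.json", json=tenant).raise_for_status()
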
  4. Create the VLAN pool, physical domain, and AEP based on the requirements. Create the switch and interface profiles for individual ports. Then attach the physical domain and configure the static paths to the EPGs. See the configuration guide for more details; a minimal API sketch follows the note below.


    Note Use an access port policy group for interfaces connecting to NetApp HCI compute nodes, and use a vPC policy group for interfaces connecting to NetApp HCI storage nodes.
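
    A minimal sketch of the physical domain attachment and a static path binding for this step, again assuming the requests package; the EPG DN, pod and node IDs, vPC policy group name, and VLAN ID are placeholders.

      import requests

      APIC = "https://apic.example.com"
      session = requests.Session()
      session.verify = False
      session.post(f"{APIC}/api/aaaLogin.json", json={
          "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

      epg_dn = "uni/tn-NetApp-HCI/ap-HCI-app/epg-iSCSI"
      payload = {
          "fvAEPg": {
              "attributes": {"dn": epg_dn},
              "children": [
                  # Physical domain created earlier (VLAN pool and AEP behind it).
                  {"fvRsDomAtt": {"attributes": {"tDn": "uni/phys-HCI-phys"}}},
                  # Static path to the vPC policy group used for one storage node.
                  {"fvRsPathAtt": {"attributes": {
                      "tDn": "topology/pod-1/protpaths-101-102/pathep-[HCI-Storage-01_vPC]",
                      "encap": "vlan-1182",
                      "mode": "regular"}}},   # use mode "native" for the 802.1P native VLAN EPG
              ],
          }
      }
      session.post(f"{APIC}/api/mo/{epg_dn}.json", json=payload).raise_for_status()
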
  5. Create and assign contracts for tightly controlled access between workloads. For more information on configuring the contracts, see the guide here. A minimal sketch of an iSCSI-only contract follows.
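
    The following sketch creates a contract that permits only iSCSI (TCP/3260), assuming the requests package; tenant and object names are placeholders. The contract is then provided and consumed by the appropriate EPGs (fvRsProv/fvRsCons).

      import requests

      APIC = "https://apic.example.com"
      session = requests.Session()
      session.verify = False
      session.post(f"{APIC}/api/aaaLogin.json", json={
          "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

      contract = {
          "fvTenant": {
              "attributes": {"dn": "uni/tn-NetApp-HCI"},
              "children": [
                  # Filter matching only the iSCSI TCP port.
                  {"vzFilter": {"attributes": {"name": "iscsi-3260"}, "children": [
                      {"vzEntry": {"attributes": {"name": "tcp-3260", "etherT": "ip",
                                                  "prot": "tcp", "dFromPort": "3260",
                                                  "dToPort": "3260"}}}]}},
                  # Contract with a single subject referencing the filter.
                  {"vzBrCP": {"attributes": {"name": "allow-iscsi"}, "children": [
                      {"vzSubj": {"attributes": {"name": "iscsi"}, "children": [
                          {"vzRsSubjFiltAtt": {"attributes": {
                              "tnVzFilterName": "iscsi-3260"}}}]}}]}},
              ],
          }
      }
      session.post(f"{APIC}/api/mo/uni.json", json=contract).raise_for_status()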

  6. Install and configure NetApp HCI using NDE. NDE configures all the required parameters, including VDS port groups for networking, and also installs the mNode VM. See the deployment guide for more information.

  7. Although VMM integration of Cisco ACI with VMware VDS is optional, using the VMM integration feature is a best practice. When not using VMM integration, the NDE-installed VDS can be used for networking with a physical domain attachment on Cisco ACI.

  8. If you are using VMM integration, the NDE-installed VDS cannot be fully managed by ACI; it can only be added as a read-only VMM domain. To avoid that scenario and make efficient use of Cisco ACI’s VMM networking feature, create a new VMware VMM domain in ACI with a new VMware vSphere Distributed Switch (vDS) and an explicit dynamic VLAN pool (a minimal API sketch follows the sub-steps below). The VMM domain can integrate with any supported virtual switch.

    1. Integrate with VDS. If you want to integrate ACI with VDS, select VMware Distributed Switch as the virtual switch type. Consider the configuration best practices noted in the VMware VDS table later in this section. See the configuration guide for more details.


    2. Integrate with Cisco AVE. If you are integrating Cisco AVE with Cisco ACI, select Cisco AVE as the virtual switch type. Cisco AVE requires a unique VLAN pool of type Internal for communicating between internal and external port groups. Follow the configuration best practices noted in the Cisco AVE table later in this section. See the installation guide to install and configure Cisco AVE.

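    A minimal sketch of the dynamic VLAN pool and VMware VMM domain described in this step, assuming the requests package; the pool range, vCenter address, credentials, and datacenter name are placeholders, and the class and attribute names follow the APIC REST object model (verify them against your ACI release).

      import requests

      APIC = "https://apic.example.com"
      session = requests.Session()
      session.verify = False
      session.post(f"{APIC}/api/aaaLogin.json", json={
          "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

      # Dynamic VLAN pool dedicated to the VMM domain.
      pool = {"fvnsVlanInstP": {"attributes": {"name": "HCI-vmm-pool", "allocMode": "dynamic"},
                                "children": [{"fvnsEncapBlk": {"attributes": {
                                    "from": "vlan-1101", "to": "vlan-1150",
                                    "allocMode": "dynamic"}}}]}}
      session.post(f"{APIC}/api/mo/uni/infra.json", json=pool).raise_for_status()

      # VMM domain pointing at vCenter; ACI creates a new vDS named after the domain.
      dom = {"vmmDomP": {"attributes": {"dn": "uni/vmmp-VMware/dom-HCI-vds"},
                         "children": [
                             {"infraRsVlanNs": {"attributes": {
                                 "tDn": "uni/infra/vlanns-[HCI-vmm-pool]-dynamic"}}},
                             {"vmmUsrAccP": {"attributes": {"name": "vcenter-creds",
                                                            "usr": "administrator@vsphere.local",
                                                            "pwd": "password"}}},
                             {"vmmCtrlrP": {"attributes": {"name": "vcenter",
                                                           "hostOrIp": "vcenter.example.com",
                                                           "rootContName": "HCI-Datacenter"},
                                            "children": [{"vmmRsAcc": {"attributes": {
                                                "tDn": "uni/vmmp-VMware/dom-HCI-vds/usracc-vcenter-creds"}}}]}},
                         ]}}
      session.post(f"{APIC}/api/mo/uni/vmmp-VMware.json", json=dom).raise_for_status()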

  9. Attach the VMM domain to the EPGs using Pre-Provision Resolution Immediacy (see the sketch after the note below). Then migrate all the VMNICs, VMkernel ports, and VNICs from the NDE-created VDS to the ACI-created VDS or AVE. Configure the uplink failover and teaming policy for iSCSI-A and iSCSI-B to have one active uplink each. VMs can now attach their VNICs to ACI-created port groups to access network resources. The port groups on a VDS managed by Cisco ACI are named in the format <tenant-name>|<application-profile-name>|<epg-name>.

    Note Pre-Provision Resolution Immediacy is required to ensure the port policies are downloaded to the leaf switch even before the VMM controller is attached to the virtual switch.

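    A minimal sketch of the VMM domain association with Pre-Provision Resolution Immediacy, assuming the requests package; the EPG and domain DNs are placeholders.

      import requests

      APIC = "https://apic.example.com"
      session = requests.Session()
      session.verify = False
      session.post(f"{APIC}/api/aaaLogin.json", json={
          "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

      attach = {"fvRsDomAtt": {"attributes": {
          "dn": "uni/tn-NetApp-HCI/ap-HCI-app/epg-iSCSI/rsdomAtt-[uni/vmmp-VMware/dom-HCI-vds]",
          "resImedcy": "pre-provision",     # push policies to the leaf before the vDS attaches
          "instrImedcy": "immediate"}}}
      session.post(f"{APIC}/api/mo/uni.json", json=attach).raise_for_status()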

  10. If you intend to use micro-segmentation, create micro-segment (uSeg) EPGs attached to the correct BD. Create attributes in VMware vSphere and attach them to the required VMs. Ensure that the VMM domain has Enable Tag Collection enabled. Configure the uSeg EPGs with the corresponding attribute and attach the VMM domain to them. This provides more granular control of communication on the endpoint VMs. A minimal sketch follows.

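    The following sketch creates a uSeg EPG that matches VMs by a vSphere attribute, assuming the requests package; the BD, domain, attribute type, and values are placeholders, and the fvVmAttr type and operator options should be verified against your ACI release.

      import requests

      APIC = "https://apic.example.com"
      session = requests.Session()
      session.verify = False
      session.post(f"{APIC}/api/aaaLogin.json", json={
          "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

      useg = {"fvAEPg": {"attributes": {"dn": "uni/tn-NetApp-HCI/ap-HCI-app/epg-web-useg",
                                        "isAttrBasedEPg": "yes"},
                         "children": [
                             {"fvRsBd": {"attributes": {"tnFvBDName": "VM-data-bd"}}},
                             {"fvRsDomAtt": {"attributes": {"tDn": "uni/vmmp-VMware/dom-HCI-vds"}}},
                             # Match VMs whose custom attribute equals "web" (assumed type value).
                             {"fvCrtrn": {"attributes": {"name": "default"}, "children": [
                                 {"fvVmAttr": {"attributes": {"name": "match-web",
                                                              "type": "custom-label",
                                                              "operator": "equals",
                                                              "value": "web"}}}]}},
                         ]}}
      session.post(f"{APIC}/api/mo/uni.json", json=useg).raise_for_status()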

    The networking functionality for VMware vSphere on NetApp HCI in this solution is provided by either VMware VDS or Cisco AVE.

VMware VDS

VMware vSphere Distributed Switch (VDS) is a virtual switch that connects multiple ESXi hosts in a cluster or set of clusters, allowing virtual machines to maintain a consistent network configuration as they migrate across hosts. VDS also provides centralized management of network configurations in a vSphere environment. For more details, see the VDS documentation.


The following table outlines the necessary parameters and best practices for configuring and integrating Cisco ACI with VMware VDS.

Resource | Configuration Considerations | Best Practices

Endpoint groups

  • Separate EPG for native VLANs

  • Static binding of interfaces to HCI storage and compute nodes in the native VLAN EPG uses 802.1P mode. This is required for node discovery when running NDE.

  • Separate EPGs for iSCSI, iSCSI-A, and iSCSI-B with a common BD

  • iSCSI-A and iSCSI-B are for iSCSI multipathing and are used for VMkernel ports on ESXi hosts

  • Physical domain to be attached to iSCSI EPG before running NDE

  • VMM domain to be attached to iSCSI, iSCSI-A, and iSCSI-B EPGs

  • Contracts between EPGs to be well defined. Allow only required ports for communication.

  • Use unique native VLAN for NDE node discovery

  • For EPGs corresponding to port-groups being attached to VMkernel ports, VMM domain to be attached with Pre-Provision for Resolution Immediacy

Interface policy

  • A common leaf access port policy group for all ESXi hosts

  • One vPC policy group per NetApp HCI storage node

  • LLDP enabled, CDP disabled

  • Separate VLAN pool for VMM domain with dynamic allocation turned on

  • Recommended to use vPC with LACP Active port-channel policy for interfaces towards NetApp HCI storage nodes

  • Recommended to use individual interfaces for compute nodes, no LACP.

VMM Integration

  • Local switching preference

  • Access mode is Read Write.

  • MAC-Pinning-Physical-NIC-Load for vSwitch policy

  • LLDP for discovery policy

  • Enable Tag collection if micro-segmentation is used

VDS

  • Both uplinks active for iSCSI port-group

  • One uplink each for iSCSI-A and iSCSI-B

  • Load balancing method for all port-groups to be ‘Route based on physical NIC load’

  • iSCSI VMkernel port migration to be done one at a time from NDE deployed VDS to ACI integrated VDS

Easy Scale

  • Run NDE scale by attaching the same leaf access port policy group for ESXi hosts to be added

  • One vPC policy group per NetApp HCI storage node

  • Individual interfaces (for ESXi hosts) and vPCs (for storage nodes) should be attached to native, in-band management, iSCSI, VM motion EPGs for successful NDE scale

  • LLDP enabled, CDP disabled

  • Recommended to use vPC with LACP Active port-channel policy for interfaces towards NetApp HCI storage nodes

  • Recommended to use individual interfaces for compute nodes, no LACP.

Note For traffic load-balancing, port channels with vPCs can be used on Cisco ACI along with LAGs on VDS with LACP in active mode. However, using LACP can affect storage performance when compared to iSCSI multipathing.
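
The uplink settings in the preceding table can also be applied programmatically through vCenter. The following is a minimal pyVmomi sketch, assuming the pyvmomi package, placeholder vCenter credentials, a port group named iSCSI-A, and uplinks named uplink1 and uplink2; it leaves only one uplink active. On an ACI-managed vDS, prefer driving these settings from the APIC where your ACI release supports it.

  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                    pwd="password", sslContext=ssl._create_unverified_context())
  content = si.RetrieveContent()

  # Locate the distributed port group by name.
  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
  pg = next(p for p in view.view if p.name == "iSCSI-A")

  # One active uplink, no standby; the remaining uplink stays unused.
  order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
      activeUplinkPort=["uplink1"], standbyUplinkPort=[])
  teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
      uplinkPortOrder=order)

  spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
      configVersion=pg.config.configVersion,
      defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
          uplinkTeamingPolicy=teaming))
  pg.ReconfigureDVPortgroup_Task(spec)

  Disconnect(si)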

Cisco AVE

Cisco ACI Virtual Edge (AVE) is a virtual switch offering from Cisco that extends the Cisco ACI policy model to virtual infrastructure. It is a hypervisor-independent distributed network service that sits on top of the native virtual switch of the hypervisor. It leverages the underlying virtual switch using a VM-based solution to provide network visibility into virtual environments. For more details on Cisco AVE, see the documentation. The following figure depicts the internal networking of Cisco AVE on an ESXi host (as tested).

[Figure: Internal networking of Cisco AVE on an ESXi host, as tested]

The following table lists the necessary parameters and best practices for configuring and integrating Cisco ACI with Cisco AVE on VMware ESXi. Cisco AVE is currently only supported with VMware vSphere.

Resource | Configuration Considerations | Best Practices

Endpoint Groups

  • Separate EPG for native VLANs

  • Static binding of interfaces to HCI storage and compute nodes in the native VLAN EPG uses 802.1P mode. This is required for node discovery when running NDE.

  • Separate EPGs for iSCSI, iSCSI-A and iSCSI-B with a common BD

  • iSCSI-A and iSCSI-B are for iSCSI multipathing and are used for VMkernel ports on ESXi hosts

  • Physical domain to be attached to iSCSI EPG before running NDE

  • VMM domain is attached to iSCSI, iSCSI-A, and iSCSI-B EPGs

  • Separate VLAN pool for VMM domain with dynamic allocation turned on

  • Contracts between EPGs to be well defined. Allow only required ports for communication.

  • Use unique native VLAN for NDE node discovery

  • Use native switching mode in VMM domain for EPGs that correspond to port groups being attached to host’s VMkernel adapters

  • Use AVE switching mode in VMM domain for EPGs corresponding to port groups carrying user VM traffic

  • For EPGs corresponding to port-groups being attached to VMkernel ports, VMM domain is attached with Pre-Provision for Resolution Immediacy

Interface Policy

  • One vPC policy group per NetApp HCI storage node

  • LLDP enabled, CDP disabled

  • Before running NDE, for NDE discovery:

    • Leaf Access port policy group for all ESXi hosts

  • After running NDE, for Cisco AVE:

    • One vPC policy group per ESXi host

  • NetApp recommends using vPCs to ESXi hosts for Cisco AVE

  • Use static mode on port-channel policy for vPCs to ESXi

  • Use Layer-4 SRC port load balancing hashing method for port-channel policy

  • NetApp recommends using vPC with LACP active port-channel policy for interfaces to NetApp HCI storage nodes

VMM Integration

  • Create a new VLAN range (encap block) with role Internal and Dynamic allocation, attached to the VLAN pool intended for the VMM domain

  • Create a pool of multicast addresses (one address per EPG)

  • Reserve another multicast address, distinct from the pool of multicast addresses, to serve as the AVE fabric-wide multicast address

  • Local switching preference

  • Access mode to be Read Write

  • Static mode on for vSwitch policy

  • Ensure that vSwitch port-channel policy and interface policy group’s port-channel policy are using the same mode

  • LLDP for discovery policy

  • Enable Tag collection if using micro-segmentation

  • Recommended option for Default Encap mode is VXLAN

VDS

  • Both uplinks active for iSCSI port-group

  • One uplink each for iSCSI-A and iSCSI-B

  • iSCSI VMkernel port migration is done one at a time from NDE deployed VDS to ACI integrated VDS

  • Load balancing method for all port-groups to be Route based on IP hash

Cisco AVE

  • Run NDE with access port interface policy groups towards ESXi hosts. Individual interfaces towards ESXi hosts should be attached to native, in-band management, iSCSI, VM motion EPGs for successful NDE run.

  • Once the environment is up, place the host in maintenance mode, migrate interface policy group to vPC with static mode on, assign vPC to all required EPGs and remove the host from maintenance mode. Repeat the same process for all hosts.

  • Run the AVE installation process to install AVE control VM on all hosts

  • Use local datastore on the hosts for installing AVE control VM. Each host should have one AVE control VM installed on it

  • Use network protocol profile on the in-band management VLAN if DHCP is not available on that network

Easy Scale

  • Run NDE scale with access port interface policy group for ESXi hosts to be added. Individual interfaces should be attached to native, in-band management, iSCSI, VM motion EPGs for successful NDE run.

  • Once the ESXi host is added to the vSphere cluster, place the host in maintenance mode and migrate the interface policy group to vPC with static mode on. Then attach the vPC to required EPGs.

  • Run AVE installation process on the new host for installing AVE control VM on that host

  • One vPC policy group per NetApp HCI storage node to be added to the cluster

  • LLDP enabled, CDP disabled

  • Use local datastore on the host for installing AVE control VM

  • Use network protocol profile on the in-band management VLAN if DHCP is not available on that network

  • Recommended to use vPC with LACP Active port-channel policy for interfaces towards NetApp HCI storage nodes

Note For traffic load balancing, port channel with vPCs can be used on Cisco ACI along with LAGs on ESXi hosts with LACP in active mode. However, using LACP can affect storage performance when compared to iSCSI multipathing.
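
For reference, the Internal VLAN range that Cisco AVE requires can be added to the VMM VLAN pool through the APIC REST API. The following is a minimal sketch assuming the requests package; the pool name and VLAN range are placeholders, and the role attribute value follows the APIC object model.

  import requests

  APIC = "https://apic.example.com"
  session = requests.Session()
  session.verify = False
  session.post(f"{APIC}/api/aaaLogin.json", json={
      "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

  # Internal, dynamically allocated encap block appended to the existing VMM VLAN pool.
  block = {"fvnsEncapBlk": {"attributes": {
      "dn": "uni/infra/vlanns-[HCI-vmm-pool]-dynamic/from-[vlan-1151]-to-[vlan-1170]",
      "from": "vlan-1151", "to": "vlan-1170",
      "allocMode": "dynamic", "role": "internal"}}}
  session.post(f"{APIC}/api/mo/uni/infra.json", json=block).raise_for_status()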