Migration requirements

Cisco Nexus 3132Q-V switches can be used as cluster switches in your AFF or FAS cluster. Cluster switches allow you to build ONTAP clusters with more than two nodes.

Note

The procedure requires the use of both ONTAP commands and Cisco Nexus 3000 Series Switches commands; ONTAP commands are used unless otherwise indicated.
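For example, ONTAP commands are entered at the cluster shell prompt and Cisco commands at the switch prompt. The lines below are illustrative only (cluster1 is a hypothetical cluster name; CL2 is a switch name from the examples in this procedure):

    cluster1::> network interface show -role cluster     (ONTAP)
    CL2# show interface brief                            (Cisco NX-OS)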

Cisco Nexus 5596 requirements

The cluster switches use the following ports for connections to nodes:

  • Nexus 5596: ports e1/1-40 (10 GbE)

  • Nexus 3132Q-V: ports e1/1-30 (10/40/100 GbE)

The cluster switches use the following Inter-Switch Link (ISL) ports:

  • Nexus 5596: ports e1/41-48 (10 GbE)

  • Nexus 3132Q-V: ports e1/31-32 (40/100 GbE)
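For illustration only: the ISL ports are typically bundled into a port channel by the switch's reference configuration file (RCF), so you do not normally enter this by hand. A minimal sketch of that kind of NX-OS configuration on the Nexus 3132Q-V, assuming port channel 1 and LACP, might look like the following:

    C2# configure terminal
    C2(config)# interface ethernet 1/31-32
    C2(config-if-range)# channel-group 1 mode active
    C2(config-if-range)# exit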

The Hardware Universe contains information about supported cabling to Nexus 3132Q-V switches:

  • Nodes with 10 GbE cluster connections require QSFP to SFP+ optical fiber breakout cables or QSFP to SFP+ copper breakout cables.

  • Nodes with 40/100 GbE cluster connections require supported QSFP/QSFP28 optical modules with fiber cables or QSFP/QSFP28 copper direct-attach cables.

The cluster switches use the appropriate ISL cabling at each stage of the migration:

  • Beginning: Nexus 5596 (SFP+ to SFP+)

    • 8x SFP+ fiber or copper direct-attach cables

  • Interim: Nexus 5596 to Nexus 3132Q-V (QSFP to 4xSFP+ break-out)

    • 1x QSFP to SFP+ fiber break-out or copper break-out cables

  • Final: Nexus 3132Q-V to Nexus 3132Q-V (QSFP28 to QSFP28)

    • 2x QSFP28 fiber or copper direct-attach cables

  • On Nexus 3132Q-V switches, you can operate QSFP/QSFP28 ports in either 40/100 Gigabit Ethernet or 4x 10 Gigabit Ethernet modes.

    By default, all 32 ports operate in 40/100 Gigabit Ethernet mode. These 40 Gigabit Ethernet ports are numbered in a 2-tuple naming convention. For example, the second 40 Gigabit Ethernet port is numbered as 1/2.

    The process of changing the configuration from 40 Gigabit Ethernet to 10 Gigabit Ethernet is called breakout and the process of changing the configuration from 10 Gigabit Ethernet to 40 Gigabit Ethernet is called breakin.

    When you break out a 40/100 Gigabit Ethernet port into 10 Gigabit Ethernet ports, the resulting ports are numbered using a 3-tuple naming convention. For example, the break-out ports of the second 40/100 Gigabit Ethernet port are numbered as 1/2/1, 1/2/2, 1/2/3, and 1/2/4.

  • On the left side of Nexus 3132Q-V switches are two SFP+ ports, numbered 1/33 and 1/34.

  • You have configured some of the ports on Nexus 3132Q-V switches to run at 10 GbE or 40/100 GbE.

    Note

    You can break out the first six ports into 4x 10 GbE mode by using the interface breakout module 1 port 1-6 map 10g-4x command. Similarly, you can regroup the first six QSFP+ ports from the breakout configuration by using the no interface breakout module 1 port 1-6 map 10g-4x command. (An example follows this list.)

  • You have planned the migration and read the required documentation on 10 GbE and 40/100 GbE connectivity from nodes to Nexus 3132Q-V cluster switches.

  • The ONTAP and NX-OS versions supported for this procedure are listed on the Cisco Ethernet Switches page.
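The following is a minimal example of the breakout and breakin commands described in the note above, entered from NX-OS configuration mode on switch C1; show interface brief is one way to verify the resulting port list:

    C1# configure terminal
    C1(config)# interface breakout module 1 port 1-6 map 10g-4x
    C1(config)# exit
    C1# show interface brief

After the breakout, ports 1/1 through 1/6 are presented using the 3-tuple convention, for example 1/1/1 through 1/1/4 for the first port.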

About the examples used

The examples in this procedure describe replacing Cisco Nexus 5596 switches with Cisco Nexus 3132Q-V switches. You can use these steps (with modifications) for other older Cisco switches.

The procedure also uses the following switch and node nomenclature:

  • Command outputs might vary among different releases of ONTAP.

  • The Nexus 5596 switches to be replaced are CL1 and CL2.

  • The Nexus 3132Q-V switches to replace the Nexus 5596 switches are C1 and C2.

  • n1_clus1 is the first cluster logical interface (LIF) connected to cluster switch 1 (CL1 or C1) for node n1.

  • n1_clus2 is the first cluster LIF connected to cluster switch 2 (CL2 or C2) for node n1.

  • n1_clus3 is the second cluster LIF connected to cluster switch 2 (CL2 or C2) for node n1.

  • n1_clus4 is the second cluster LIF connected to cluster switch 1 (CL1 or C1) for node n1.

  • The number of 10 GbE and 40/100 GbE ports is defined in the reference configuration files (RCFs) available on the Cisco® Cluster Network Switch Reference Configuration File Download page.

  • The nodes are n1, n2, n3, and n4.

The examples in this procedure use four nodes:

  • Two nodes use four 10 GbE cluster interconnect ports: e0a, e0b, e0c, and e0d.

  • The other two nodes use two 40 GbE cluster interconnect ports: e4a and e4e.

    The Hardware Universe lists the actual cluster ports on your platforms.
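For example, you can display the cluster LIFs and cluster ports from ONTAP before and during the migration; exact parameters and output vary by ONTAP release:

    cluster::> network interface show -role cluster
    cluster::> network port show -role cluster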

Scenarios covered

This procedure covers the following scenarios:

  • The cluster starts with two nodes connected and functioning with two Nexus 5596 cluster switches.

  • Cluster switch CL2 is replaced by C2 (steps 1 to 19):

    • Traffic on all cluster ports and LIFs on all nodes connected to CL2 is migrated onto the first cluster ports and LIFs connected to CL1. (The migrate and revert commands are sketched after this list.)

    • Cabling is disconnected from all cluster ports on all nodes connected to CL2, and the ports are then reconnected to the new cluster switch C2 using supported break-out cabling.

    • Cabling between the ISL ports on CL1 and CL2 is disconnected, and the ports are then reconnected from CL1 to C2 using supported break-out cabling.

    • Traffic on all cluster ports and LIFs connected to C2 on all nodes is reverted.

  • Cluster switch CL1 is replaced by C1:

    • Traffic on all cluster ports or LIFs on all nodes connected to CL1 is migrated onto the second cluster ports or LIFs connected to C2.

    • Cabling is disconnected from all cluster ports on all nodes connected to CL1, and the ports are then reconnected to the new cluster switch C1 using supported break-out cabling.

    • Cabling between the ISL ports on CL1 and C2 is disconnected, and the ports are then reconnected from C1 to C2 using supported cabling.

    • Traffic on all cluster ports or LIFs connected to C1 on all nodes is reverted.

  • Two FAS9000 nodes are added to the cluster, with examples showing the cluster details.
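The following is a hedged sketch of the migrate and revert steps referenced in the scenarios above, assuming the cluster LIFs are in the Cluster vserver and using the nomenclature from this procedure; exact parameters vary by ONTAP release:

    Migrate a cluster LIF away from the switch being replaced:
    cluster::> network interface migrate -vserver Cluster -lif n1_clus2 -destination-node n1 -destination-port e0a

    After recabling, revert the LIF to its home port:
    cluster::> network interface revert -vserver Cluster -lif n1_clus2

    Verify cluster health, including any newly added nodes:
    cluster::> cluster show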

What's next?

Prepare for migration.