Migrate from CN1610 cluster switches to Cisco Nexus 3132Q-V cluster switches
Follow this procedure to replace the existing CN1610 cluster switches with Cisco Nexus 3132Q-V cluster switches.
Review requirements
Review the NetApp CN1610 requirements in Requirements for replacing Cisco Nexus 3132Q-V cluster switches.
Replace the switch
The examples in this procedure use the following switch and node nomenclature:
- The command outputs might vary depending on your release of ONTAP software.
- The CN1610 switches to be replaced are CL1 and CL2.
- The Nexus 3132Q-V switches to replace the CN1610 switches are C1 and C2.
- n1_clus1 is the first cluster logical interface (LIF) that is connected to cluster switch 1 (CL1 or C1) for node n1.
- n1_clus2 is the first cluster LIF that is connected to cluster switch 2 (CL2 or C2) for node n1.
- n1_clus3 is the second LIF that is connected to cluster switch 2 (CL2 or C2) for node n1.
- n1_clus4 is the second LIF that is connected to cluster switch 1 (CL1 or C1) for node n1.
- The nodes are n1, n2, n3, and n4.
- The number of 10 GbE and 40/100 GbE ports is defined in the reference configuration files (RCFs) available on the Cisco® Cluster Network Switch Reference Configuration File Download page.
The examples in this procedure use four nodes:
- Two nodes use four 10 GbE cluster interconnect ports: e0a, e0b, e0c, and e0d.
- The other two nodes use two 40/100 GbE cluster interconnect fiber cables: e4a and e4e.
The Hardware Universe has information about the cluster fiber cables on your platforms.
This procedure covers the following scenario:
- The cluster starts with two nodes connected to two CN1610 cluster switches.
- Cluster switch CL2 is replaced by C2:
  - Traffic on all cluster ports and LIFs on all nodes connected to CL2 is migrated onto the first cluster ports and LIFs connected to CL1.
  - Cabling is disconnected from all cluster ports on all nodes connected to CL2, and supported breakout cabling is then used to reconnect the ports to the new cluster switch C2.
  - Cabling is disconnected between the ISL ports on CL1 and CL2, and supported breakout cabling is then used to reconnect the ports from CL1 to C2.
  - Traffic on all cluster ports and LIFs connected to C2 on all nodes is reverted.
- Cluster switch CL1 is replaced by C1:
  - Traffic on all cluster ports and LIFs on all nodes connected to CL1 is migrated onto the second cluster ports and LIFs connected to C2.
  - Cabling is disconnected from all cluster ports on all nodes connected to CL1, and supported breakout cabling is then used to reconnect the ports to the new cluster switch C1.
  - Cabling is disconnected between the ISL ports on CL1 and C2, and supported breakout cabling is then used to reconnect the ports from C1 to C2.
  - Traffic on all migrated cluster ports and LIFs connected to C1 on all nodes is reverted.
The procedure requires the use of both ONTAP commands and Cisco Nexus 3000 Series switch commands; ONTAP commands are used unless otherwise indicated.
Step 1: Prepare for replacement
- If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:
  system node autosupport invoke -node * -type all -message MAINT=xh
  x is the duration of the maintenance window in hours.
  The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.
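For example, the following invocation (substituting a three-hour window for x) suppresses case creation for three hours:

cluster::> system node autosupport invoke -node * -type all -message MAINT=3h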
- Display information about the devices in your configuration:
network device-discovery show
The following example displays how many cluster interconnect interfaces have been configured in each node for each cluster interconnect switch:
cluster::> network device-discovery show
            Local  Discovered
Node        Port   Device       Interface  Platform
------      ------ ------------ ---------- ----------
n1         /cdp
            e0a    CL1          0/1        CN1610
            e0b    CL2          0/1        CN1610
            e0c    CL2          0/2        CN1610
            e0d    CL1          0/2        CN1610
n2         /cdp
            e0a    CL1          0/3        CN1610
            e0b    CL2          0/3        CN1610
            e0c    CL2          0/4        CN1610
            e0d    CL1          0/4        CN1610
8 entries were displayed.
- Determine the administrative or operational status for each cluster interface:
  - Display the cluster network port attributes:
    network port show
The following example displays the network port attributes on a system:
cluster::*> network port show -role Cluster
  (network port show)
Node: n1
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e0a   cluster   cluster     up    9000  auto/10000   -        -
e0b   cluster   cluster     up    9000  auto/10000   -        -
e0c   cluster   cluster     up    9000  auto/10000   -        -
e0d   cluster   cluster     up    9000  auto/10000   -        -

Node: n2
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e0a   cluster   cluster     up    9000  auto/10000   -        -
e0b   cluster   cluster     up    9000  auto/10000   -        -
e0c   cluster   cluster     up    9000  auto/10000   -        -
e0d   cluster   cluster     up    9000  auto/10000   -        -
8 entries were displayed.
  - Display information about the logical interfaces:
    network interface show
The following example displays the general information about all of the LIFs on your system:
cluster::*> network interface show -role Cluster
  (network interface show)
            Logical    Status     Network        Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask   Node     Port    Home
----------- ---------- ---------- -------------- -------- ------- -----
Cluster
            n1_clus1   up/up      10.10.0.1/24   n1       e0a     true
            n1_clus2   up/up      10.10.0.2/24   n1       e0b     true
            n1_clus3   up/up      10.10.0.3/24   n1       e0c     true
            n1_clus4   up/up      10.10.0.4/24   n1       e0d     true
            n2_clus1   up/up      10.10.0.5/24   n2       e0a     true
            n2_clus2   up/up      10.10.0.6/24   n2       e0b     true
            n2_clus3   up/up      10.10.0.7/24   n2       e0c     true
            n2_clus4   up/up      10.10.0.8/24   n2       e0d     true
8 entries were displayed.
  - Display information about the discovered cluster switches:
    system cluster-switch show
The following example displays the cluster switches that are known to the cluster, along with their management IP addresses:
cluster::> system cluster-switch show
Switch                        Type             Address       Model
----------------------------- ---------------- ------------- --------
CL1                           cluster-network  10.10.1.101   CN1610
     Serial Number: 01234567
      Is Monitored: true
            Reason:
  Software Version: 1.2.0.7
    Version Source: ISDP
CL2                           cluster-network  10.10.1.102   CN1610
     Serial Number: 01234568
      Is Monitored: true
            Reason:
  Software Version: 1.2.0.7
    Version Source: ISDP
2 entries were displayed.
- Set the -auto-revert parameter to false on cluster LIFs clus1 and clus4 on both nodes:
  network interface modify
cluster::*> network interface modify -vserver node1 -lif clus1 -auto-revert false
cluster::*> network interface modify -vserver node1 -lif clus4 -auto-revert false
cluster::*> network interface modify -vserver node2 -lif clus1 -auto-revert false
cluster::*> network interface modify -vserver node2 -lif clus4 -auto-revert false
- Verify that the appropriate RCF and image are installed on the new 3132Q-V switches as necessary for your requirements, and make any essential site customizations, such as users and passwords, network addresses, and so on.
  You must prepare both switches at this time. If you need to upgrade the RCF and image, follow these steps:
  - See the Cisco Ethernet Switches page on the NetApp Support Site.
  - Note your switch and the required software versions in the table on that page.
  - Download the appropriate version of the RCF.
  - Click CONTINUE on the Description page, accept the license agreement, and then follow the instructions on the Download page to download the RCF.
  - Download the appropriate version of the image software.
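As a minimal NX-OS sketch, assuming the downloaded RCF has already been copied to bootflash and using an illustrative filename (use the file you actually downloaded), the general pattern on each new switch looks like this:

C2# show version
C2# copy bootflash:Nexus_3132QV_RCF_v1.1.txt running-config
C2# copy running-config startup-config

Here, show version confirms the running NX-OS release against the table on the download page; the first copy applies the downloaded RCF, and the second persists the configuration.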
- Migrate the LIFs associated with the second CN1610 switch to be replaced:
  network interface migrate
  You must migrate the cluster LIFs from a connection to the node that owns them, using either the Service Processor or the node management interface.
The following example shows n1 and n2, but LIF migration must be done on all the nodes:
cluster::*> network interface migrate -vserver Cluster -lif n1_clus2 -destination-node n1 -destination-port e0a
cluster::*> network interface migrate -vserver Cluster -lif n1_clus3 -destination-node n1 -destination-port e0d
cluster::*> network interface migrate -vserver Cluster -lif n2_clus2 -destination-node n2 -destination-port e0a
cluster::*> network interface migrate -vserver Cluster -lif n2_clus3 -destination-node n2 -destination-port e0d
- Verify the cluster's health:
  network interface show
The following example shows the result of the previous network interface migrate command:

cluster::*> network interface show -role Cluster
  (network interface show)
            Logical    Status     Network        Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask   Node     Port    Home
----------- ---------- ---------- -------------- -------- ------- -----
Cluster
            n1_clus1   up/up      10.10.0.1/24   n1       e0a     true
            n1_clus2   up/up      10.10.0.2/24   n1       e0a     false
            n1_clus3   up/up      10.10.0.3/24   n1       e0d     false
            n1_clus4   up/up      10.10.0.4/24   n1       e0d     true
            n2_clus1   up/up      10.10.0.5/24   n2       e0a     true
            n2_clus2   up/up      10.10.0.6/24   n2       e0a     false
            n2_clus3   up/up      10.10.0.7/24   n2       e0d     false
            n2_clus4   up/up      10.10.0.8/24   n2       e0d     true
8 entries were displayed.
- Shut down the cluster interconnect ports that are physically connected to switch CL2:
  network port modify
The following commands shut down the specified ports on n1 and n2, but the ports must be shut down on all nodes:
cluster::*> network port modify -node n1 -port e0b -up-admin false
cluster::*> network port modify -node n1 -port e0c -up-admin false
cluster::*> network port modify -node n2 -port e0b -up-admin false
cluster::*> network port modify -node n2 -port e0c -up-admin false
- Verify the connectivity of the remote cluster interfaces:
  You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
  network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait for a number of seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                  Source     Destination    Packet
Node   Date                       LIF        LIF            Loss
------ -------------------------- ---------- -------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2   n1_clus1       none
       3/5/2022 19:21:20 -06:00   n1_clus2   n2_clus2       none
n2
       3/5/2022 19:21:18 -06:00   n2_clus2   n1_clus1       none
       3/5/2022 19:21:20 -06:00   n2_clus2   n1_clus2       none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>
cluster::*> cluster ping-cluster -node n1
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1       e0a    10.10.0.1
Cluster n1_clus2 n1       e0b    10.10.0.2
Cluster n1_clus3 n1       e0c    10.10.0.3
Cluster n1_clus4 n1       e0d    10.10.0.4
Cluster n2_clus1 n2       e0a    10.10.0.5
Cluster n2_clus2 n2       e0b    10.10.0.6
Cluster n2_clus3 n2       e0c    10.10.0.7
Cluster n2_clus4 n2       e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 1500 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
- Shut down the ISL ports 13 through 16 on the active CN1610 switch CL1:
  shutdown
The following example shows how to shut down ISL ports 13 through 16 on the CN1610 switch CL1:
(CL1)# configure
(CL1)(Config)# interface 0/13-0/16
(CL1)(Interface 0/13-0/16)# shutdown
(CL1)(Interface 0/13-0/16)# exit
(CL1)(Config)# exit
(CL1)#
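Optionally, before recabling, you can confirm that the LAG on CL1 went down; this reuses the FASTPATH command shown in a later verification step, where "Link State" should now read Down:

(CL1)# show port-channel 3/1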
- Build a temporary ISL between CL1 and C2:
The following example builds a temporary ISL between CL1 (ports 13-16) and C2 (ports e1/24/1-4):
C2# configure
C2(config)# interface port-channel 2
C2(config-if)# switchport mode trunk
C2(config-if)# spanning-tree port type network
C2(config-if)# mtu 9216
C2(config-if)# interface breakout module 1 port 24 map 10g-4x
C2(config)# interface e1/24/1-4
C2(config-if-range)# switchport mode trunk
C2(config-if-range)# mtu 9216
C2(config-if-range)# channel-group 2 mode active
C2(config-if-range)# exit
C2(config-if)# exit
Step 2: Configure ports
- On all nodes, remove the cables that are attached to the CN1610 switch CL2.
With supported cabling, you must reconnect the disconnected ports on all of the nodes to the Nexus 3132Q-V switch C2.
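After the recabling, you can confirm that each node now discovers C2 on the reconnected ports by rerunning the discovery command used earlier; as a sketch, ports e0b and e0c on each node should now list C2 in the Discovered Device column:

cluster::> network device-discovery show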
- Remove the four ISL cables from ports 13 to 16 on the CN1610 switch CL1.
  You must attach appropriate Cisco QSFP to SFP+ breakout cables connecting port 1/24 on the new Cisco 3132Q-V switch C2 to ports 13 to 16 on the existing CN1610 switch CL1.
  When reconnecting any cables to the new Cisco 3132Q-V switch, you must use either optical fiber or Cisco twinax cables.
- To make the ISL dynamic, configure the ISL interface 3/1 on the active CN1610 switch to disable the static mode:
  no port-channel static
  This configuration matches the ISL configuration on the 3132Q-V switch C2 when the ISLs are brought up on both switches in a later step.
The following example shows the configuration of the ISL interface 3/1 using the no port-channel static command to make the ISL dynamic:

(CL1)# configure
(CL1)(Config)# interface 3/1
(CL1)(Interface 3/1)# no port-channel static
(CL1)(Interface 3/1)# exit
(CL1)(Config)# exit
(CL1)#
- Bring up ISLs 13 through 16 on the active CN1610 switch CL1.
The following example illustrates the process of bringing up ISL ports 13 through 16 on the port-channel interface 3/1:
(CL1)# configure
(CL1)(Config)# interface 0/13-0/16,3/1
(CL1)(Interface 0/13-0/16,3/1)# no shutdown
(CL1)(Interface 0/13-0/16,3/1)# exit
(CL1)(Config)# exit
(CL1)#
- Verify that the ISLs are up on the CN1610 switch CL1:
  show port-channel
  The "Link State" should be Up, "Type" should be Dynamic, and the "Port Active" column should be True for ports 0/13 to 0/16.
(CL1)# show port-channel 3/1
Local Interface................................ 3/1
Channel Name................................... ISL-LAG
Link State..................................... Up
Admin Mode..................................... Enabled
Type........................................... Dynamic
Load Balance Option............................ 7
(Enhanced hashing mode)

Mbr     Device/      Port       Port
Ports   Timeout      Speed      Active
------  ------------ ---------- -------
0/13    actor/long   10 Gb Full True
        partner/long
0/14    actor/long   10 Gb Full True
        partner/long
0/15    actor/long   10 Gb Full True
        partner/long
0/16    actor/long   10 Gb Full True
        partner/long
- Verify that the ISLs are up on the 3132Q-V switch C2:
  show port-channel summary
Ports Eth1/24/1 through Eth1/24/4 should indicate (P), meaning that all four ISL ports are up in the port-channel. Eth1/31 and Eth1/32 should indicate (D) as they are not connected:

C2# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/31(D)   Eth1/32(D)
2     Po2(SU)     Eth      LACP      Eth1/24/1(P) Eth1/24/2(P) Eth1/24/3(P)
                                     Eth1/24/4(P)
- Bring up all of the cluster interconnect ports that are connected to the 3132Q-V switch C2 on all of the nodes:
  network port modify
The following example shows how to bring up the cluster interconnect ports connected to the 3132Q-V switch C2:
cluster::*> network port modify -node n1 -port e0b -up-admin true
cluster::*> network port modify -node n1 -port e0c -up-admin true
cluster::*> network port modify -node n2 -port e0b -up-admin true
cluster::*> network port modify -node n2 -port e0c -up-admin true
- Revert all of the migrated cluster interconnect LIFs that are connected to C2 on all of the nodes:
  network interface revert
cluster::*> network interface revert -vserver Cluster -lif n1_clus2
cluster::*> network interface revert -vserver Cluster -lif n1_clus3
cluster::*> network interface revert -vserver Cluster -lif n2_clus2
cluster::*> network interface revert -vserver Cluster -lif n2_clus3
- Verify that all of the cluster interconnect ports are reverted to their home ports:
  network interface show
The following example shows that the LIFs on clus2 are reverted to their home ports; the LIFs are successfully reverted if the ports in the "Current Port" column have a status of true in the "Is Home" column. If the "Is Home" value is false, the LIF is not reverted.

cluster::*> network interface show -role cluster
  (network interface show)
            Logical    Status     Network        Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask   Node     Port    Home
----------- ---------- ---------- -------------- -------- ------- -----
Cluster
            n1_clus1   up/up      10.10.0.1/24   n1       e0a     true
            n1_clus2   up/up      10.10.0.2/24   n1       e0b     true
            n1_clus3   up/up      10.10.0.3/24   n1       e0c     true
            n1_clus4   up/up      10.10.0.4/24   n1       e0d     true
            n2_clus1   up/up      10.10.0.5/24   n2       e0a     true
            n2_clus2   up/up      10.10.0.6/24   n2       e0b     true
            n2_clus3   up/up      10.10.0.7/24   n2       e0c     true
            n2_clus4   up/up      10.10.0.8/24   n2       e0d     true
8 entries were displayed.
- Verify that all of the cluster ports are connected:
  network port show
The following example shows the result of the previous network port modify command, verifying that all of the cluster interconnects are up:

cluster::*> network port show -role Cluster
  (network port show)
Node: n1
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e0a   cluster   cluster     up    9000  auto/10000   -        -
e0b   cluster   cluster     up    9000  auto/10000   -        -
e0c   cluster   cluster     up    9000  auto/10000   -        -
e0d   cluster   cluster     up    9000  auto/10000   -        -

Node: n2
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e0a   cluster   cluster     up    9000  auto/10000   -        -
e0b   cluster   cluster     up    9000  auto/10000   -        -
e0c   cluster   cluster     up    9000  auto/10000   -        -
e0d   cluster   cluster     up    9000  auto/10000   -        -
8 entries were displayed.
- Verify the connectivity of the remote cluster interfaces:
  You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
  network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait for a number of seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                  Source     Destination    Packet
Node   Date                       LIF        LIF            Loss
------ -------------------------- ---------- -------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2   n1_clus1       none
       3/5/2022 19:21:20 -06:00   n1_clus2   n2_clus2       none
n2
       3/5/2022 19:21:18 -06:00   n2_clus2   n1_clus1       none
       3/5/2022 19:21:20 -06:00   n2_clus2   n1_clus2       none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>
cluster::*> cluster ping-cluster -node n1
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1       e0a    10.10.0.1
Cluster n1_clus2 n1       e0b    10.10.0.2
Cluster n1_clus3 n1       e0c    10.10.0.3
Cluster n1_clus4 n1       e0d    10.10.0.4
Cluster n2_clus1 n2       e0a    10.10.0.5
Cluster n2_clus2 n2       e0b    10.10.0.6
Cluster n2_clus3 n2       e0c    10.10.0.7
Cluster n2_clus4 n2       e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 1500 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
- On each node in the cluster, migrate the interfaces that are associated with the first CN1610 switch CL1 to be replaced:
  network interface migrate
The following example shows the ports or LIFs being migrated on nodes n1 and n2:
cluster::*> network interface migrate -vserver Cluster -lif n1_clus1 -destination-node n1 -destination-port e0b
cluster::*> network interface migrate -vserver Cluster -lif n1_clus4 -destination-node n1 -destination-port e0c
cluster::*> network interface migrate -vserver Cluster -lif n2_clus1 -destination-node n2 -destination-port e0b
cluster::*> network interface migrate -vserver Cluster -lif n2_clus4 -destination-node n2 -destination-port e0c
- Verify the cluster status:
  network interface show
The following example shows that the required cluster LIFs have been migrated to the appropriate cluster ports hosted on cluster switch C2:
cluster::*> network interface show -role Cluster
  (network interface show)
            Logical    Status     Network        Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask   Node     Port    Home
----------- ---------- ---------- -------------- -------- ------- -----
Cluster
            n1_clus1   up/up      10.10.0.1/24   n1       e0b     false
            n1_clus2   up/up      10.10.0.2/24   n1       e0b     true
            n1_clus3   up/up      10.10.0.3/24   n1       e0c     true
            n1_clus4   up/up      10.10.0.4/24   n1       e0c     false
            n2_clus1   up/up      10.10.0.5/24   n2       e0b     false
            n2_clus2   up/up      10.10.0.6/24   n2       e0b     true
            n2_clus3   up/up      10.10.0.7/24   n2       e0c     true
            n2_clus4   up/up      10.10.0.8/24   n2       e0c     false
8 entries were displayed.
- Shut down the node ports that are connected to CL1 on all of the nodes:
  network port modify
The following example shows how to shut down the specified ports on nodes n1 and n2:
cluster::*> network port modify -node n1 -port e0a -up-admin false
cluster::*> network port modify -node n1 -port e0d -up-admin false
cluster::*> network port modify -node n2 -port e0a -up-admin false
cluster::*> network port modify -node n2 -port e0d -up-admin false
- Shut down the ISL ports 24, 31, and 32 on the active 3132Q-V switch C2:
  shutdown
The following example shows how to shut down ISLs 24, 31, and 32 on the active 3132Q-V switch C2:
C2# configure
C2(config)# interface ethernet 1/24/1-4
C2(config-if-range)# shutdown
C2(config-if-range)# exit
C2(config)# interface ethernet 1/31-32
C2(config-if-range)# shutdown
C2(config-if-range)# exit
C2(config)# exit
C2#
- Remove the cables that are attached to the CN1610 switch CL1 on all of the nodes.
With supported cabling, you must reconnect the disconnected ports on all of the nodes to the Nexus 3132Q-V switch C1.
- Remove the QSFP cables from Nexus 3132Q-V C2 port e1/24.
  You must connect ports e1/31 and e1/32 on C1 to ports e1/31 and e1/32 on C2 using supported Cisco QSFP optical fiber or direct-attach cables.
- Restore the configuration on port 24 and remove the temporary port-channel 2 on C2 by copying the running-configuration file to the startup-configuration file.
  The following example copies the running-configuration file to the startup-configuration file:

C2# configure
C2(config)# no interface breakout module 1 port 24 map 10g-4x
C2(config)# no interface port-channel 2
C2(config-if)# interface e1/24
C2(config-if)# description 40GbE Node Port
C2(config-if)# spanning-tree port type edge
C2(config-if)# spanning-tree bpduguard enable
C2(config-if)# mtu 9216
C2(config-if-range)# exit
C2(config)# exit
C2# copy running-config startup-config
[########################################] 100%
Copy Complete.
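You can optionally verify that the breakout removal and the restored port settings took effect; this is a hedged check (interface naming can vary by NX-OS release):

C2# show running-config interface ethernet 1/24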
- Bring up ISL ports 31 and 32 on C2, the active 3132Q-V switch:
  no shutdown
The following example shows how to bring up ISLs 31 and 32 on the 3132Q-V switch C2:
C2# configure
C2(config)# interface ethernet 1/31-32
C2(config-if-range)# no shutdown
C2(config-if-range)# exit
C2(config)# exit
C2# copy running-config startup-config
[########################################] 100%
Copy Complete.
Step 3: Verify the configuration
- Verify that the ISL connections are up on the 3132Q-V switch C2:
  show port-channel summary
  Ports Eth1/31 and Eth1/32 should indicate (P), meaning that both ISL ports are up in the port-channel.
C2# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
- Bring up all of the cluster interconnect ports connected to the new 3132Q-V switch C1 on all of the nodes:
  network port modify
The following example shows how to bring up all of the cluster interconnect ports connected to the new 3132Q-V switch C1:
cluster::*> network port modify -node n1 -port e0a -up-admin true
cluster::*> network port modify -node n1 -port e0d -up-admin true
cluster::*> network port modify -node n2 -port e0a -up-admin true
cluster::*> network port modify -node n2 -port e0d -up-admin true
- Verify the status of the cluster node ports:
  network port show
The following example verifies that all of the cluster interconnect ports on n1 and n2 on the new 3132Q-V switch C1 are up:

cluster::*> network port show -role Cluster
  (network port show)
Node: n1
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e0a   cluster   cluster     up    9000  auto/10000   -        -
e0b   cluster   cluster     up    9000  auto/10000   -        -
e0c   cluster   cluster     up    9000  auto/10000   -        -
e0d   cluster   cluster     up    9000  auto/10000   -        -

Node: n2
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e0a   cluster   cluster     up    9000  auto/10000   -        -
e0b   cluster   cluster     up    9000  auto/10000   -        -
e0c   cluster   cluster     up    9000  auto/10000   -        -
e0d   cluster   cluster     up    9000  auto/10000   -        -
8 entries were displayed.
- Revert all of the migrated cluster interconnect LIFs that were originally connected to C1 on all of the nodes:
  network interface revert
The following example shows how to revert the migrated cluster LIFs to their home ports:
cluster::*> network interface revert -vserver Cluster -lif n1_clus1
cluster::*> network interface revert -vserver Cluster -lif n1_clus4
cluster::*> network interface revert -vserver Cluster -lif n2_clus1
cluster::*> network interface revert -vserver Cluster -lif n2_clus4
- Verify that the interfaces are now home:
  network interface show
The following example shows that the status of the cluster interconnect interfaces is up and "Is Home" is true for n1 and n2:

cluster::*> network interface show -role Cluster
  (network interface show)
            Logical    Status     Network        Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask   Node     Port    Home
----------- ---------- ---------- -------------- -------- ------- -----
Cluster
            n1_clus1   up/up      10.10.0.1/24   n1       e0a     true
            n1_clus2   up/up      10.10.0.2/24   n1       e0b     true
            n1_clus3   up/up      10.10.0.3/24   n1       e0c     true
            n1_clus4   up/up      10.10.0.4/24   n1       e0d     true
            n2_clus1   up/up      10.10.0.5/24   n2       e0a     true
            n2_clus2   up/up      10.10.0.6/24   n2       e0b     true
            n2_clus3   up/up      10.10.0.7/24   n2       e0c     true
            n2_clus4   up/up      10.10.0.8/24   n2       e0d     true
8 entries were displayed.
- Verify the connectivity of the remote cluster interfaces:
  You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
  network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait for a number of seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                  Source     Destination    Packet
Node   Date                       LIF        LIF            Loss
------ -------------------------- ---------- -------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2   n1_clus1       none
       3/5/2022 19:21:20 -06:00   n1_clus2   n2_clus2       none
n2
       3/5/2022 19:21:18 -06:00   n2_clus2   n1_clus1       none
       3/5/2022 19:21:20 -06:00   n2_clus2   n1_clus2       none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>
cluster::*> cluster ping-cluster -node n1
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1       e0a    10.10.0.1
Cluster n1_clus2 n1       e0b    10.10.0.2
Cluster n1_clus3 n1       e0c    10.10.0.3
Cluster n1_clus4 n1       e0d    10.10.0.4
Cluster n2_clus1 n2       e0a    10.10.0.5
Cluster n2_clus2 n2       e0b    10.10.0.6
Cluster n2_clus3 n2       e0c    10.10.0.7
Cluster n2_clus4 n2       e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 1500 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
- Expand the cluster by adding nodes to the Nexus 3132Q-V cluster switches; a sketch of the join prompt follows.
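Joining new nodes is done from each new node's console through the cluster setup wizard; the following is only a sketch of the join prompt, not the full expansion procedure:

::> cluster setup
Do you want to create a new cluster or join an existing cluster? {create, join}: join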
- Display the information about the devices in your configuration:
  - network device-discovery show
  - network port show -role cluster
  - network interface show -role cluster
  - system cluster-switch show
The following examples show nodes n3 and n4 with 40 GbE cluster ports connected to ports e1/7 and e1/8, respectively, on both Nexus 3132Q-V cluster switches; both nodes have joined the cluster. The 40 GbE cluster interconnect ports used are e4a and e4e.
cluster::*> network device-discovery show
            Local  Discovered
Node        Port   Device       Interface       Platform
------      ------ ------------ --------------- -------------
n1         /cdp
            e0a    C1           Ethernet1/1/1   N3K-C3132Q-V
            e0b    C2           Ethernet1/1/1   N3K-C3132Q-V
            e0c    C2           Ethernet1/1/2   N3K-C3132Q-V
            e0d    C1           Ethernet1/1/2   N3K-C3132Q-V
n2         /cdp
            e0a    C1           Ethernet1/1/3   N3K-C3132Q-V
            e0b    C2           Ethernet1/1/3   N3K-C3132Q-V
            e0c    C2           Ethernet1/1/4   N3K-C3132Q-V
            e0d    C1           Ethernet1/1/4   N3K-C3132Q-V
n3         /cdp
            e4a    C1           Ethernet1/7     N3K-C3132Q-V
            e4e    C2           Ethernet1/7     N3K-C3132Q-V
n4         /cdp
            e4a    C1           Ethernet1/8     N3K-C3132Q-V
            e4e    C2           Ethernet1/8     N3K-C3132Q-V
12 entries were displayed.
cluster::*> network port show -role cluster
  (network port show)
Node: n1
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e0a   cluster   cluster     up    9000  auto/10000   -        -
e0b   cluster   cluster     up    9000  auto/10000   -        -
e0c   cluster   cluster     up    9000  auto/10000   -        -
e0d   cluster   cluster     up    9000  auto/10000   -        -

Node: n2
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e0a   cluster   cluster     up    9000  auto/10000   -        -
e0b   cluster   cluster     up    9000  auto/10000   -        -
e0c   cluster   cluster     up    9000  auto/10000   -        -
e0d   cluster   cluster     up    9000  auto/10000   -        -

Node: n3
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e4a   cluster   cluster     up    9000  auto/40000   -        -
e4e   cluster   cluster     up    9000  auto/40000   -        -

Node: n4
                Broadcast                    Speed (Mbps) Health   Ignore
Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
----- --------- ----------- ----- ----- ------------ -------- -------------
e4a   cluster   cluster     up    9000  auto/40000   -        -
e4e   cluster   cluster     up    9000  auto/40000   -        -
12 entries were displayed.
cluster::*> network interface show -role cluster
  (network interface show)
            Logical    Status     Network        Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask   Node     Port    Home
----------- ---------- ---------- -------------- -------- ------- -----
Cluster
            n1_clus1   up/up      10.10.0.1/24   n1       e0a     true
            n1_clus2   up/up      10.10.0.2/24   n1       e0b     true
            n1_clus3   up/up      10.10.0.3/24   n1       e0c     true
            n1_clus4   up/up      10.10.0.4/24   n1       e0d     true
            n2_clus1   up/up      10.10.0.5/24   n2       e0a     true
            n2_clus2   up/up      10.10.0.6/24   n2       e0b     true
            n2_clus3   up/up      10.10.0.7/24   n2       e0c     true
            n2_clus4   up/up      10.10.0.8/24   n2       e0d     true
            n3_clus1   up/up      10.10.0.9/24   n3       e4a     true
            n3_clus2   up/up      10.10.0.10/24  n3       e4e     true
            n4_clus1   up/up      10.10.0.11/24  n4       e4a     true
            n4_clus2   up/up      10.10.0.12/24  n4       e4e     true
12 entries were displayed.
cluster::> system cluster-switch show
Switch                      Type             Address       Model
--------------------------- ---------------- ------------- ---------
C1                          cluster-network  10.10.1.103   NX3132V
     Serial Number: FOX000001
      Is Monitored: true
            Reason:
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I4(1)
    Version Source: CDP
C2                          cluster-network  10.10.1.104   NX3132V
     Serial Number: FOX000002
      Is Monitored: true
            Reason:
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I4(1)
    Version Source: CDP
CL1                         cluster-network  10.10.1.101   CN1610
     Serial Number: 01234567
      Is Monitored: true
            Reason:
  Software Version: 1.2.0.7
    Version Source: ISDP
CL2                         cluster-network  10.10.1.102   CN1610
     Serial Number: 01234568
      Is Monitored: true
            Reason:
  Software Version: 1.2.0.7
    Version Source: ISDP
4 entries were displayed.
- Remove the replaced CN1610 switches if they are not automatically removed:
  system cluster-switch delete
The following example shows how to remove the CN1610 switches:
cluster::> system cluster-switch delete -device CL1
cluster::> system cluster-switch delete -device CL2
- Set the -auto-revert parameter to true on cluster LIFs clus1 and clus4 on each node and confirm:
  network interface modify
cluster::*> network interface modify -vserver node1 -lif clus1 -auto-revert true
cluster::*> network interface modify -vserver node1 -lif clus4 -auto-revert true
cluster::*> network interface modify -vserver node2 -lif clus1 -auto-revert true
cluster::*> network interface modify -vserver node2 -lif clus4 -auto-revert true
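To confirm the setting, you can display the auto-revert field for the modified LIFs; this sketch uses the same vserver and LIF names as the preceding example:

cluster::*> network interface show -vserver node1 -lif clus1,clus4 -fields auto-revert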
- Verify that the proper cluster switches are monitored:
  system cluster-switch show
cluster::> system cluster-switch show
Switch                      Type             Address       Model
--------------------------- ---------------- ------------- ---------
C1                          cluster-network  10.10.1.103   NX3132V
     Serial Number: FOX000001
      Is Monitored: true
            Reason:
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I4(1)
    Version Source: CDP
C2                          cluster-network  10.10.1.104   NX3132V
     Serial Number: FOX000002
      Is Monitored: true
            Reason:
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I4(1)
    Version Source: CDP
2 entries were displayed.
- If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message:
system node autosupport invoke -node * -type all -message MAINT=END
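Optionally, confirm that AutoSupport is active again on all nodes:

cluster::> system node autosupport show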