Replace a Cisco Nexus 3232C cluster switch
Follow these steps to replace a defective Cisco Nexus 3232C switch in a cluster. This is a non-disruptive procedure.
Review requirements
Make sure that the existing cluster and network configuration has the following characteristics:
- The Nexus 3232C cluster infrastructure is redundant and fully functional on both switches.
- The latest RCF and NX-OS versions from the Cisco Ethernet Switches page are installed on your switches.
- All cluster ports are in the up state.
- Management connectivity exists on both switches.
- All cluster logical interfaces (LIFs) are in the up state and are not migrated.
The replacement Cisco Nexus 3232C switch has the following characteristics:
- Management network connectivity is functional.
- Console access to the replacement switch is in place.
- The appropriate RCF and NX-OS operating system image is loaded onto the switch.
- Initial customization of the switch is complete.
For more information, see the NetApp Support Site, the Cisco Ethernet Switches page, and the Hardware Universe.
Enable console logging
NetApp strongly recommends that you enable console logging on the devices that you are using and take the following actions when replacing your switch:
- Leave AutoSupport enabled during maintenance.
- Trigger a maintenance AutoSupport before and after maintenance to disable case creation for the duration of the maintenance. See the Knowledge Base article SU92: How to suppress automatic case creation during scheduled maintenance windows for further details.
- Enable session logging for any CLI sessions. For instructions on how to enable session logging, review the "Logging Session Output" section in the Knowledge Base article How to configure PuTTY for optimal connectivity to ONTAP systems.
Replace the switch
This replacement procedure describes the following scenario:
- The cluster initially has four nodes connected to two Nexus 3232C cluster switches, CL1 and CL2.
- You plan to replace cluster switch CL2 with C2 (steps 1 to 21):
  - On each node, you migrate the cluster LIFs connected to cluster switch CL2 to cluster ports connected to cluster switch CL1.
  - You disconnect the cabling from all ports on cluster switch CL2 and reconnect the cabling to the same ports on the replacement cluster switch C2.
  - You revert the migrated cluster LIFs on each node.
  - This replacement procedure replaces the second Nexus 3232C cluster switch CL2 with the new 3232C switch C2.
The examples in this procedure use the following switch and node nomenclature:
- The four nodes are n1, n2, n3, and n4.
- n1_clus1 is the first cluster logical interface (LIF) connected to cluster switch CL1 for node n1.
- n1_clus2 is the first cluster LIF connected to cluster switch CL2 or C2 for node n1.
- n1_clus3 is the second LIF connected to cluster switch CL2 or C2 for node n1.
- n1_clus4 is the second LIF connected to cluster switch CL1 for node n1.
The number of 10 GbE and 40/100 GbE ports is defined in the reference configuration files (RCFs) available on the Cisco® Cluster Network Switch Reference Configuration File Download page.
The examples in this replacement procedure use four nodes. Two of the nodes use four 10 GbE cluster interconnect ports: e0a, e0b, e0c, and e0d. The other two nodes use two 40 GbE cluster interconnect ports: e4a and e4e. See the Hardware Universe to verify the correct cluster ports for your platform.
Step 1: Display and migrate the cluster ports to switch CL1
- If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:
system node autosupport invoke -node * -type all -message MAINT=xh
where x is the duration of the maintenance window in hours.
The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.
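For example, the following invocation (a sample assuming a two-hour maintenance window) suppresses case creation for two hours:

cluster::*> system node autosupport invoke -node * -type all -message MAINT=2h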
- Display information about the devices in your configuration:
network device-discovery show
Example:

cluster::> network device-discovery show
            Local  Discovered
Node        Port   Device              Interface        Platform
----------- ------ ------------------- ---------------- ----------------
n1         /cdp
            e0a    CL1                 Ethernet1/1/1    N3K-C3232C
            e0b    CL2                 Ethernet1/1/1    N3K-C3232C
            e0c    CL2                 Ethernet1/1/2    N3K-C3232C
            e0d    CL1                 Ethernet1/1/2    N3K-C3232C
n2         /cdp
            e0a    CL1                 Ethernet1/1/3    N3K-C3232C
            e0b    CL2                 Ethernet1/1/3    N3K-C3232C
            e0c    CL2                 Ethernet1/1/4    N3K-C3232C
            e0d    CL1                 Ethernet1/1/4    N3K-C3232C
n3         /cdp
            e4a    CL1                 Ethernet1/7      N3K-C3232C
            e4e    CL2                 Ethernet1/7      N3K-C3232C
n4         /cdp
            e4a    CL1                 Ethernet1/8      N3K-C3232C
            e4e    CL2                 Ethernet1/8      N3K-C3232C
- Determine the administrative or operational status for each cluster interface:
  - Display the network port attributes:
network port show -role cluster
Example:

cluster::*> network port show -role cluster
  (network port show)
Node: n1
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e0a       Cluster      Cluster          up   9000    auto/10000   -        -
e0b       Cluster      Cluster          up   9000    auto/10000   -        -
e0c       Cluster      Cluster          up   9000    auto/10000   -        -
e0d       Cluster      Cluster          up   9000    auto/10000   -        -

Node: n2
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e0a       Cluster      Cluster          up   9000    auto/10000   -        -
e0b       Cluster      Cluster          up   9000    auto/10000   -        -
e0c       Cluster      Cluster          up   9000    auto/10000   -        -
e0d       Cluster      Cluster          up   9000    auto/10000   -        -

Node: n3
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e4a       Cluster      Cluster          up   9000    auto/40000   -        -
e4e       Cluster      Cluster          up   9000    auto/40000   -        -

Node: n4
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e4a       Cluster      Cluster          up   9000    auto/40000   -        -
e4e       Cluster      Cluster          up   9000    auto/40000   -        -
- Display information about the logical interfaces (LIFs):
network interface show -role cluster
Example:

cluster::*> network interface show -role cluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
            n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
            n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
            n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
            n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
            n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
            n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
            n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
            n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
            n3_clus2   up/up      10.10.0.10/24      n3            e4e     true
            n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
            n4_clus2   up/up      10.10.0.12/24      n4            e4e     true
- Display the discovered cluster switches:
system cluster-switch show
Example:

The following output example displays the cluster switches:

cluster::> system cluster-switch show
Switch                      Type               Address          Model
--------------------------- ------------------ ---------------- ---------------
CL1                         cluster-network    10.10.1.101      NX3232C
     Serial Number: FOX000001
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
CL2                         cluster-network    10.10.1.102      NX3232C
     Serial Number: FOX000002
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
- Verify that the appropriate RCF and image are installed on the new Nexus 3232C switch and make any necessary site customizations (a sample version check follows these substeps):
  - Go to the NetApp Support Site.
  - Go to the Cisco Ethernet Switches page and note the required software versions in the table.
  - Download the appropriate version of the RCF.
  - Click CONTINUE on the Description page, accept the license agreement, and then navigate to the Download page.
  - Download the correct version of the image software from the Cisco® Cluster and Management Network Switch Reference Configuration File Download page.
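As a quick sanity check (a sketch, not an official step), you can confirm the NX-OS release, and the banner if the RCF configures one, from the replacement switch console. The prompt C2 is this procedure's example switch name, and the output shown is illustrative:

C2# show version | include NXOS
  NXOS: version 7.0(3)I6(1)
C2# show banner motd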
- Migrate the cluster LIFs away from the ports connected to cluster switch CL2 to the physical node ports connected to cluster switch CL1:
network interface migrate -vserver vserver-name -lif lif-name -source-node node-name -destination-node node-name -destination-port port-name
Example:

You must migrate all the cluster LIFs individually, as shown in the following example:

cluster::*> network interface migrate -vserver Cluster -lif n1_clus2 -source-node n1 -destination-node n1 -destination-port e0a
cluster::*> network interface migrate -vserver Cluster -lif n1_clus3 -source-node n1 -destination-node n1 -destination-port e0d
cluster::*> network interface migrate -vserver Cluster -lif n2_clus2 -source-node n2 -destination-node n2 -destination-port e0a
cluster::*> network interface migrate -vserver Cluster -lif n2_clus3 -source-node n2 -destination-node n2 -destination-port e0d
cluster::*> network interface migrate -vserver Cluster -lif n3_clus2 -source-node n3 -destination-node n3 -destination-port e4a
cluster::*> network interface migrate -vserver Cluster -lif n4_clus2 -source-node n4 -destination-node n4 -destination-port e4a
- Verify the status of the cluster ports and their home designations:
network interface show -role cluster
Example:

cluster::*> network interface show -role cluster
 (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
            n1_clus2   up/up      10.10.0.2/24       n1            e0a     false
            n1_clus3   up/up      10.10.0.3/24       n1            e0d     false
            n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
            n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
            n2_clus2   up/up      10.10.0.6/24       n2            e0a     false
            n2_clus3   up/up      10.10.0.7/24       n2            e0d     false
            n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
            n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
            n3_clus2   up/up      10.10.0.10/24      n3            e4a     false
            n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
            n4_clus2   up/up      10.10.0.12/24      n4            e4a     false
- Shut down the cluster interconnect ports that are physically connected to the original switch CL2:
network port modify -node node-name -port port-name -up-admin false
Example:

The following example shows the cluster interconnect ports being shut down on all nodes:

cluster::*> network port modify -node n1 -port e0b -up-admin false
cluster::*> network port modify -node n1 -port e0c -up-admin false
cluster::*> network port modify -node n2 -port e0b -up-admin false
cluster::*> network port modify -node n2 -port e0c -up-admin false
cluster::*> network port modify -node n3 -port e4e -up-admin false
cluster::*> network port modify -node n4 -port e4e -up-admin false
- Verify the connectivity of the remote cluster interfaces.
You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
network interface check cluster-connectivity start
network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                   Source           Destination      Packet
Node   Date                        LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none
       .
       .
n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
       .
       .
n3
       .
       .
n4
       .
       .
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1 e0a 10.10.0.1
Cluster n1_clus2 n1 e0b 10.10.0.2
Cluster n1_clus3 n1 e0c 10.10.0.3
Cluster n1_clus4 n1 e0d 10.10.0.4
Cluster n2_clus1 n2 e0a 10.10.0.5
Cluster n2_clus2 n2 e0b 10.10.0.6
Cluster n2_clus3 n2 e0c 10.10.0.7
Cluster n2_clus4 n2 e0d 10.10.0.8
Cluster n3_clus1 n3 e4a 10.10.0.9
Cluster n3_clus2 n3 e4e 10.10.0.10
Cluster n4_clus1 n4 e4a 10.10.0.11
Cluster n4_clus2 n4 e4e 10.10.0.12
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8 10.10.0.9 10.10.0.10 10.10.0.11 10.10.0.12
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 32 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 32 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.1 to Remote 10.10.0.9
    Local 10.10.0.1 to Remote 10.10.0.10
    Local 10.10.0.1 to Remote 10.10.0.11
    Local 10.10.0.1 to Remote 10.10.0.12
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.9
    Local 10.10.0.2 to Remote 10.10.0.10
    Local 10.10.0.2 to Remote 10.10.0.11
    Local 10.10.0.2 to Remote 10.10.0.12
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.9
    Local 10.10.0.3 to Remote 10.10.0.10
    Local 10.10.0.3 to Remote 10.10.0.11
    Local 10.10.0.3 to Remote 10.10.0.12
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.9
    Local 10.10.0.4 to Remote 10.10.0.10
    Local 10.10.0.4 to Remote 10.10.0.11
    Local 10.10.0.4 to Remote 10.10.0.12
Larger than PMTU communication succeeds on 32 path(s)
RPC status:
8 paths up, 0 paths down (tcp check)
8 paths up, 0 paths down (udp check)
Step 2: Migrate the ISLs to switches CL1 and C2
- Shut down ports 1/31 and 1/32 on cluster switch CL1.
For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.
Example:

(CL1)# configure
(CL1)(Config)# interface e1/31-32
(CL1)(config-if-range)# shutdown
(CL1)(config-if-range)# exit
(CL1)(Config)# exit
(CL1)#
- Remove all the cables attached to cluster switch CL2 and reconnect them to the replacement switch C2 for all the nodes.
- Remove the inter-switch link (ISL) cables from ports e1/31 and e1/32 on cluster switch CL2 and reconnect them to the same ports on the replacement switch C2.
- Bring up ISL ports 1/31 and 1/32 on cluster switch CL1.
For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.
Example:

(CL1)# configure
(CL1)(Config)# interface e1/31-32
(CL1)(config-if-range)# no shutdown
(CL1)(config-if-range)# exit
(CL1)(Config)# exit
(CL1)#
- Verify that the ISLs are up on CL1.
For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.
Ports Eth1/31 and Eth1/32 should indicate (P), which means that the ISL ports are up in the port-channel.
Example:
CL1# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
- Verify that the ISLs are up on cluster switch C2.
For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.
Ports Eth1/31 and Eth1/32 should indicate (P), which means that both ISL ports are up in the port-channel.
Example:
C2# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
- On all nodes, bring up all the cluster interconnect ports connected to the replacement switch C2:
network port modify -node node-name -port port-name -up-admin true
Example:

cluster::*> network port modify -node n1 -port e0b -up-admin true
cluster::*> network port modify -node n1 -port e0c -up-admin true
cluster::*> network port modify -node n2 -port e0b -up-admin true
cluster::*> network port modify -node n2 -port e0c -up-admin true
cluster::*> network port modify -node n3 -port e4e -up-admin true
cluster::*> network port modify -node n4 -port e4e -up-admin true
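At this point you can optionally confirm from the replacement switch that the recabled node ports and the ISL peer are visible. This quick check is not part of the official procedure; it uses the standard NX-OS CDP command on the example switch C2:

C2# show cdp neighbors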
Step 3: Revert all LIFs to their originally assigned ports
- Revert all the migrated cluster interconnect LIFs on all the nodes:
network interface revert -vserver cluster -lif lif-name
Example:

You must revert all the cluster interconnect LIFs individually, as shown in the following example:

cluster::*> network interface revert -vserver cluster -lif n1_clus2
cluster::*> network interface revert -vserver cluster -lif n1_clus3
cluster::*> network interface revert -vserver cluster -lif n2_clus2
cluster::*> network interface revert -vserver cluster -lif n2_clus3
cluster::*> network interface revert -vserver cluster -lif n3_clus2
cluster::*> network interface revert -vserver cluster -lif n4_clus2
- Verify that the cluster interconnect ports are now reverted to their home ports:
network interface show
Example:

The following example shows that all the LIFs have been successfully reverted, because the ports listed under the Current Port column have a status of true in the Is Home column. If a port has a value of false, the LIF has not been reverted.

cluster::*> network interface show -role cluster
 (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
            n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
            n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
            n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
            n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
            n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
            n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
            n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
            n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
            n3_clus2   up/up      10.10.0.10/24      n3            e4e     true
            n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
            n4_clus2   up/up      10.10.0.12/24      n4            e4e     true
- Verify that the cluster ports are connected:
network port show -role cluster
Example:

cluster::*> network port show -role cluster
  (network port show)
Node: n1
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e0a       Cluster      Cluster          up   9000    auto/10000   -        -
e0b       Cluster      Cluster          up   9000    auto/10000   -        -
e0c       Cluster      Cluster          up   9000    auto/10000   -        -
e0d       Cluster      Cluster          up   9000    auto/10000   -        -

Node: n2
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e0a       Cluster      Cluster          up   9000    auto/10000   -        -
e0b       Cluster      Cluster          up   9000    auto/10000   -        -
e0c       Cluster      Cluster          up   9000    auto/10000   -        -
e0d       Cluster      Cluster          up   9000    auto/10000   -        -

Node: n3
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e4a       Cluster      Cluster          up   9000    auto/40000   -        -
e4e       Cluster      Cluster          up   9000    auto/40000   -        -

Node: n4
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e4a       Cluster      Cluster          up   9000    auto/40000   -        -
e4e       Cluster      Cluster          up   9000    auto/40000   -        -
- Verify the connectivity of the remote cluster interfaces.
You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
network interface check cluster-connectivity start
network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                   Source           Destination      Packet
Node   Date                        LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none
       .
       .
n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
       .
       .
n3
       .
       .
n4
       .
       .
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1 e0a 10.10.0.1
Cluster n1_clus2 n1 e0b 10.10.0.2
Cluster n1_clus3 n1 e0c 10.10.0.3
Cluster n1_clus4 n1 e0d 10.10.0.4
Cluster n2_clus1 n2 e0a 10.10.0.5
Cluster n2_clus2 n2 e0b 10.10.0.6
Cluster n2_clus3 n2 e0c 10.10.0.7
Cluster n2_clus4 n2 e0d 10.10.0.8
Cluster n3_clus1 n3 e4a 10.10.0.9
Cluster n3_clus2 n3 e4e 10.10.0.10
Cluster n4_clus1 n4 e4a 10.10.0.11
Cluster n4_clus2 n4 e4e 10.10.0.12
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8 10.10.0.9 10.10.0.10 10.10.0.11 10.10.0.12
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 32 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 32 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.1 to Remote 10.10.0.9
    Local 10.10.0.1 to Remote 10.10.0.10
    Local 10.10.0.1 to Remote 10.10.0.11
    Local 10.10.0.1 to Remote 10.10.0.12
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.9
    Local 10.10.0.2 to Remote 10.10.0.10
    Local 10.10.0.2 to Remote 10.10.0.11
    Local 10.10.0.2 to Remote 10.10.0.12
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.9
    Local 10.10.0.3 to Remote 10.10.0.10
    Local 10.10.0.3 to Remote 10.10.0.11
    Local 10.10.0.3 to Remote 10.10.0.12
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.9
    Local 10.10.0.4 to Remote 10.10.0.10
    Local 10.10.0.4 to Remote 10.10.0.11
    Local 10.10.0.4 to Remote 10.10.0.12
Larger than PMTU communication succeeds on 32 path(s)
RPC status:
8 paths up, 0 paths down (tcp check)
8 paths up, 0 paths down (udp check)
Step 4: Verify that all ports and LIFs are correctly migrated
- Display information about the devices in your configuration by entering the following commands. You can execute them in any order:
  - network device-discovery show
  - network port show -role cluster
  - network interface show -role cluster
  - system cluster-switch show
Example:

cluster::> network device-discovery show
            Local  Discovered
Node        Port   Device              Interface        Platform
----------- ------ ------------------- ---------------- ----------------
n1         /cdp
            e0a    CL1                 Ethernet1/1/1    N3K-C3232C
            e0b    C2                  Ethernet1/1/1    N3K-C3232C
            e0c    C2                  Ethernet1/1/2    N3K-C3232C
            e0d    CL1                 Ethernet1/1/2    N3K-C3232C
n2         /cdp
            e0a    CL1                 Ethernet1/1/3    N3K-C3232C
            e0b    C2                  Ethernet1/1/3    N3K-C3232C
            e0c    C2                  Ethernet1/1/4    N3K-C3232C
            e0d    CL1                 Ethernet1/1/4    N3K-C3232C
n3         /cdp
            e4a    CL1                 Ethernet1/7      N3K-C3232C
            e4e    C2                  Ethernet1/7      N3K-C3232C
n4         /cdp
            e4a    CL1                 Ethernet1/8      N3K-C3232C
            e4e    C2                  Ethernet1/8      N3K-C3232C

cluster::*> network port show -role cluster
  (network port show)
Node: n1
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e0a       Cluster      Cluster          up   9000    auto/10000   -        -
e0b       Cluster      Cluster          up   9000    auto/10000   -        -
e0c       Cluster      Cluster          up   9000    auto/10000   -        -
e0d       Cluster      Cluster          up   9000    auto/10000   -        -

Node: n2
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e0a       Cluster      Cluster          up   9000    auto/10000   -        -
e0b       Cluster      Cluster          up   9000    auto/10000   -        -
e0c       Cluster      Cluster          up   9000    auto/10000   -        -
e0d       Cluster      Cluster          up   9000    auto/10000   -        -

Node: n3
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e4a       Cluster      Cluster          up   9000    auto/40000   -        -
e4e       Cluster      Cluster          up   9000    auto/40000   -        -

Node: n4
                                                              Ignore
                                        Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU     Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ----    -----------  -------- ------
e4a       Cluster      Cluster          up   9000    auto/40000   -        -
e4e       Cluster      Cluster          up   9000    auto/40000   -        -

cluster::*> network interface show -role cluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
            n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
            n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
            n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
            n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
            n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
            n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
            n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
            n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
            n3_clus2   up/up      10.10.0.10/24      n3            e4e     true
            n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
            n4_clus2   up/up      10.10.0.12/24      n4            e4e     true

cluster::*> system cluster-switch show
Switch                      Type               Address          Model
--------------------------- ------------------ ---------------- ---------------
CL1                         cluster-network    10.10.1.101      NX3232C
     Serial Number: FOX000001
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
CL2                         cluster-network    10.10.1.102      NX3232C
     Serial Number: FOX000002
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
C2                          cluster-network    10.10.1.103      NX3232C
     Serial Number: FOX000003
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
3 entries were displayed.
- Delete the replaced cluster switch CL2 if it has not been removed automatically:
system cluster-switch delete -device cluster-switch-name
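For example, using this procedure's switch name:

cluster::*> system cluster-switch delete -device CL2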
- Verify that the proper cluster switches are monitored:
system cluster-switch show
Example:

The following example shows that the cluster switches are monitored because the Is Monitored state is true.

cluster::> system cluster-switch show
Switch                      Type               Address          Model
--------------------------- ------------------ ---------------- ---------------
CL1                         cluster-network    10.10.1.101      NX3232C
     Serial Number: FOX000001
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
C2                          cluster-network    10.10.1.103      NX3232C
     Serial Number: FOX000003
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
    Version Source: CDP
- If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message:
system node autosupport invoke -node * -type all -message MAINT=END
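For example:

cluster::*> system node autosupport invoke -node * -type all -message MAINT=END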