Configure your ports for migration from Nexus 5596 switches to Nexus 3232C switches
Follow these steps to configure your ports for migration from the Nexus 5596 switches to the new Nexus 3232C switches.
-
Shut down the cluster interconnect ports that are physically connected to switch CL2:
network port modify -node node-name -port port-name -up-admin false
Show example
The following commands shut down the specified ports on n1 and n2, but the ports must be shut down on all nodes:
cluster::*> network port modify -node n1 -port e0b -up-admin false
cluster::*> network port modify -node n1 -port e0c -up-admin false
cluster::*> network port modify -node n2 -port e0b -up-admin false
cluster::*> network port modify -node n2 -port e0c -up-admin false
-
Verify the connectivity of the remote cluster interfaces:
You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
network interface check cluster-connectivity start
network interface check cluster-connectivity show
cluster1::*> network interface check cluster-connectivity start
NOTE: Wait several seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none
n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check connectivity:
cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1        e0a    10.10.0.1
Cluster n1_clus2 n1        e0b    10.10.0.2
Cluster n1_clus3 n1        e0c    10.10.0.3
Cluster n1_clus4 n1        e0d    10.10.0.4
Cluster n2_clus1 n2        e0a    10.10.0.5
Cluster n2_clus2 n2        e0b    10.10.0.6
Cluster n2_clus3 n2        e0c    10.10.0.7
Cluster n2_clus4 n2        e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
-
Shut down ISLs 41 through 48 on CL1, the active Nexus 5596 switch, using the Cisco shutdown command.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command References.
Show example
The following example shows ISLs 41 through 48 being shut down on the Nexus 5596 switch CL1:
(CL1)# configure
(CL1)(Config)# interface e1/41-48
(CL1)(config-if-range)# shutdown
(CL1)(config-if-range)# exit
(CL1)(Config)# exit
(CL1)#
-
Build a temporary ISL between CL1 and C2 using the appropriate Cisco commands.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command References.
Show example
The following example shows a temporary ISL being set up between CL1 and C2:
C2# configure
C2(config)# interface port-channel 2
C2(config-if)# switchport mode trunk
C2(config-if)# spanning-tree port type network
C2(config-if)# mtu 9216
C2(config-if)# interface breakout module 1 port 24 map 10g-4x
C2(config)# interface e1/24/1-4
C2(config-if-range)# switchport mode trunk
C2(config-if-range)# mtu 9216
C2(config-if-range)# channel-group 2 mode active
C2(config-if-range)# exit
C2(config-if)# exit
-
On all nodes, remove all cables attached to the Nexus 5596 switch CL2.
With supported cabling, reconnect disconnected ports on all nodes to the Nexus 3232C switch C2.
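Once the recabled ports are brought back up later in this procedure, you can optionally spot-check the new cabling from ONTAP with the network device-discovery show command; ports e0b and e0c on each node should then list C2 as the discovered device. This is a suggested sanity check, not part of the original procedure:
cluster::*> network device-discovery show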
-
Remove all the cables from the Nexus 5596 switch CL2.
Attach the appropriate Cisco QSFP-to-SFP+ breakout cables to connect port 1/24 on the new Cisco 3232C switch, C2, to ports 45 through 48 on the existing Nexus 5596 switch, CL1.
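After the ISL ports are brought up in the next step, you can optionally confirm the breakout links from the switch side with the Cisco show cdp neighbors command on C2; CL1 should appear as a neighbor on interfaces e1/24/1 through e1/24/4. This is a suggested check, not part of the original procedure:
C2# show cdp neighbors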
-
Bring up ISL ports 45 through 48 on the active Nexus 5596 switch CL1.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command References.
Show example
The following example shows ISL ports 45 through 48 being brought up:
(CL1)# configure
(CL1)(Config)# interface e1/45-48
(CL1)(config-if-range)# no shutdown
(CL1)(config-if-range)# exit
(CL1)(Config)# exit
(CL1)#
-
Verify that the ISLs are up on the Nexus 5596 switch CL1.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command References.
Show example
The following example shows ports eth1/45 through eth1/48 indicating (P), meaning that the ISL ports are up in the port-channel:
CL1# show port-channel summary
Flags: D - Down        P - Up in port-channel (members)
       I - Individual  H - Hot-standby (LACP only)
       s - Suspended   r - Module-removed
       S - Switched    R - Routed
       U - Up (port-channel)
       M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/41(D)   Eth1/42(D)   Eth1/43(D)
                                     Eth1/44(D)   Eth1/45(P)   Eth1/46(P)
                                     Eth1/47(P)   Eth1/48(P)
-
Verify that interfaces eth1/45-48 already have `channel-group 1 mode active` in their running configuration.
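If you need to confirm the channel-group membership, you can display it with the Cisco show running-config interface command. The output below is an abbreviated sketch; the rest of each interface's configuration depends on your environment, but all four interfaces should include the channel-group line:
CL1# show running-config interface e1/45-48

interface Ethernet1/45
  channel-group 1 mode active
interface Ethernet1/46
  channel-group 1 mode active
interface Ethernet1/47
  channel-group 1 mode active
interface Ethernet1/48
  channel-group 1 mode active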
-
On all nodes, bring up all the cluster interconnect ports connected to the 3232C switch C2:
network port modify -node node-name -port port-name -up-admin true
Show example
The following example shows the specified ports being brought up on nodes n1 and n2:
cluster::*> network port modify -node n1 -port e0b -up-admin true
cluster::*> network port modify -node n1 -port e0c -up-admin true
cluster::*> network port modify -node n2 -port e0b -up-admin true
cluster::*> network port modify -node n2 -port e0c -up-admin true
-
On all nodes, revert all of the migrated cluster interconnect LIFs connected to C2:
network interface revert -vserver Cluster -lif lif-name
Show example
The following example shows the migrated cluster LIFs being reverted to their home ports:
cluster::*> network interface revert -vserver Cluster -lif n1_clus2
cluster::*> network interface revert -vserver Cluster -lif n1_clus3
cluster::*> network interface revert -vserver Cluster -lif n2_clus2
cluster::*> network interface revert -vserver Cluster -lif n2_clus3
-
Verify that all the cluster interconnect ports have reverted to their home ports:
network interface show -role cluster
Show example
The following example shows that the LIFs on clus2 have reverted to their home ports. The LIFs are successfully reverted if the ports in the Current Port column have a status of true in the Is Home column. If the Is Home value is false, the LIF has not been reverted.
cluster::*> network interface show -role cluster
 (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
            n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
            n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
            n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
            n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
            n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
            n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
            n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
8 entries were displayed.
-
Verify that the clustered ports are connected:
network port show -role cluster
Show example
The following example shows the result of the previous network port modify command, verifying that all the cluster interconnects are up:
cluster::*> network port show -role cluster
  (network port show)
Node: n1
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  -        -
e0b       Cluster      Cluster          up   9000 auto/10000  -        -
e0c       Cluster      Cluster          up   9000 auto/10000  -        -
e0d       Cluster      Cluster          up   9000 auto/10000  -        -
Node: n2
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  -        -
e0b       Cluster      Cluster          up   9000 auto/10000  -        -
e0c       Cluster      Cluster          up   9000 auto/10000  -        -
e0d       Cluster      Cluster          up   9000 auto/10000  -        -
8 entries were displayed.
-
Verify the connectivity of the remote cluster interfaces:
You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
network interface check cluster-connectivity start
network interface check cluster-connectivity show
cluster1::*> network interface check cluster-connectivity start
NOTE: Wait several seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none
n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check connectivity:
cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1        e0a    10.10.0.1
Cluster n1_clus2 n1        e0b    10.10.0.2
Cluster n1_clus3 n1        e0c    10.10.0.3
Cluster n1_clus4 n1        e0d    10.10.0.4
Cluster n2_clus1 n2        e0a    10.10.0.5
Cluster n2_clus2 n2        e0b    10.10.0.6
Cluster n2_clus3 n2        e0c    10.10.0.7
Cluster n2_clus4 n2        e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
-
On each node in the cluster, migrate the interfaces associated with CL1, the first Nexus 5596 switch to be replaced:
network interface migrate -vserver vserver-name -lif lif-name -source-node source-node-name -destination-node destination-node-name -destination-port destination-port-name
Show example
The following example shows the cluster LIFs being migrated on nodes n1 and n2:
cluster::*> network interface migrate -vserver Cluster -lif n1_clus1 -source-node n1 -destination-node n1 -destination-port e0b
cluster::*> network interface migrate -vserver Cluster -lif n1_clus4 -source-node n1 -destination-node n1 -destination-port e0c
cluster::*> network interface migrate -vserver Cluster -lif n2_clus1 -source-node n2 -destination-node n2 -destination-port e0b
cluster::*> network interface migrate -vserver Cluster -lif n2_clus4 -source-node n2 -destination-node n2 -destination-port e0c
-
Verify the cluster's status:
network interface show
Show example
The following example shows that the required cluster LIFs have been migrated to the appropriate cluster ports hosted on cluster switch C2:
cluster::*> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            n1_clus1   up/up      10.10.0.1/24       n1            e0b     false
            n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
            n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
            n1_clus4   up/up      10.10.0.4/24       n1            e0c     false
            n2_clus1   up/up      10.10.0.5/24       n2            e0b     false
            n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
            n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
            n2_clus4   up/up      10.10.0.8/24       n2            e0c     false
8 entries were displayed.
-
On all the nodes, shut down the node ports that are connected to CL1:
network port modify -node node-name -port port-name -up-admin false
Show example
The following example shows the specified ports being shut down on nodes n1 and n2:
cluster::*> network port modify -node n1 -port e0a -up-admin false
cluster::*> network port modify -node n1 -port e0d -up-admin false
cluster::*> network port modify -node n2 -port e0a -up-admin false
cluster::*> network port modify -node n2 -port e0d -up-admin false
-
Shut down ISLs 24, 31, and 32 on the active 3232C switch C2.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command References.
Show example
The following example shows the ISLs being shut down:
C2# configure
C2(config)# interface e1/24/1-4
C2(config-if-range)# shutdown
C2(config-if-range)# exit
C2(config)# interface e1/31-32
C2(config-if-range)# shutdown
C2(config-if-range)# exit
C2(config-if)# exit
C2#
-
On all nodes, remove all cables attached to the Nexus 5596 switch CL1.
With supported cabling, reconnect disconnected ports on all nodes to the Nexus 3232C switch C1.
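As with the earlier recabling step, after the recabled ports are brought back up you can optionally run the network device-discovery show command and confirm that ports e0a and e0d on each node now report C1 as the discovered device. This is a suggested check, not part of the original procedure:
cluster::*> network device-discovery show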
-
Remove the QSFP breakout cable from Nexus 3232C C2 port e1/24.
Connect ports e1/31 and e1/32 on C1 to ports e1/31 and e1/32 on C2 using supported Cisco QSFP optical fiber or direct-attach cables.
-
Restore the configuration on port 24 and remove the temporary Port Channel 2 on C2.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command References.
Show example
The following example shows the configuration on port 24 being restored using the appropriate Cisco commands:
C2# configure
C2(config)# no interface breakout module 1 port 24 map 10g-4x
C2(config)# no interface port-channel 2
C2(config-if)# int e1/24
C2(config-if)# description 40GbE Node Port
C2(config-if)# spanning-tree port type edge
C2(config-if)# spanning-tree bpduguard enable
C2(config-if)# mtu 9216
C2(config-if-range)# exit
C2(config)# exit
C2# copy running-config startup-config
[] 100% Copy Complete.
-
Bring up ISL ports 31 and 32 on C2, the active 3232C switch, by entering the following Cisco command:
no shutdown
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command References.
Show example
The following example shows ISL ports 31 and 32 being brought up on the 3232C switch C2:
C2# configure
C2(config)# interface ethernet 1/31-32
C2(config-if-range)# no shutdown
-
Verify that the ISL connections are up on the 3232C switch C2.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command References.
Ports eth1/31 and eth1/32 should indicate (P), meaning that both ISL ports are up in the port-channel.
Show example
C2# show port-channel summary
Flags: D - Down        P - Up in port-channel (members)
       I - Individual  H - Hot-standby (LACP only)
       s - Suspended   r - Module-removed
       S - Switched    R - Routed
       U - Up (port-channel)
       M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
-
On all nodes, bring up all the cluster interconnect ports connected to the new 3232C switch C1:
network port modify -node node-name -port port-name -up-admin true
Show example
The following example shows all the cluster interconnect ports being brought up for n1 and n2 on the 3232C switch C1:
cluster::*> network port modify -node n1 -port e0a -up-admin true
cluster::*> network port modify -node n1 -port e0d -up-admin true
cluster::*> network port modify -node n2 -port e0a -up-admin true
cluster::*> network port modify -node n2 -port e0d -up-admin true
-
Verify the status of the cluster node port:
network port show
Show example
The following example verifies that all the cluster interconnect ports on all nodes connected to the new 3232C switch C1 are up:
cluster::*> network port show -role cluster
  (network port show)
Node: n1
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  -        -
e0b       Cluster      Cluster          up   9000 auto/10000  -        -
e0c       Cluster      Cluster          up   9000 auto/10000  -        -
e0d       Cluster      Cluster          up   9000 auto/10000  -        -
Node: n2
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  -        -
e0b       Cluster      Cluster          up   9000 auto/10000  -        -
e0c       Cluster      Cluster          up   9000 auto/10000  -        -
e0d       Cluster      Cluster          up   9000 auto/10000  -        -
8 entries were displayed.
-
On all nodes, revert the specific cluster LIFs to their home ports:
network interface revert -vserver Cluster -lif lif-name
Show example
The following example shows the specific cluster LIFs being reverted to their home ports on nodes n1 and n2:
cluster::*> network interface revert -vserver Cluster -lif n1_clus1
cluster::*> network interface revert -vserver Cluster -lif n1_clus4
cluster::*> network interface revert -vserver Cluster -lif n2_clus1
cluster::*> network interface revert -vserver Cluster -lif n2_clus4
-
Verify that the interface is home:
network interface show -role cluster
Show example
The following example shows that the status of the cluster interconnect interfaces is up and Is Home is true for n1 and n2:
cluster::*> network interface show -role cluster
 (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
            n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
            n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
            n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
            n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
            n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
            n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
            n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
8 entries were displayed.
-
Verify the connectivity of the remote cluster interfaces:
You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
network interface check cluster-connectivity start
network interface check cluster-connectivity show
cluster1::*> network interface check cluster-connectivity start
NOTE: Wait several seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none
n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check connectivity:
cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1        e0a    10.10.0.1
Cluster n1_clus2 n1        e0b    10.10.0.2
Cluster n1_clus3 n1        e0c    10.10.0.3
Cluster n1_clus4 n1        e0d    10.10.0.4
Cluster n2_clus1 n2        e0a    10.10.0.5
Cluster n2_clus2 n2        e0b    10.10.0.6
Cluster n2_clus3 n2        e0c    10.10.0.7
Cluster n2_clus4 n2        e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
-
Expand the cluster by adding nodes to the Nexus 3232C cluster switches.
The following examples show that nodes n3 and n4 have 40 GbE cluster ports connected to ports e1/7 and e1/8, respectively, on both Nexus 3232C cluster switches, and that both nodes have joined the cluster. The 40 GbE cluster interconnect ports used are e4a and e4e.
Display the information about the devices in your configuration:
-
network device-discovery show
-
network port show -role cluster
-
network interface show -role cluster
-
system cluster-switch show
Show example
cluster::> network device-discovery show
            Local  Discovered
Node        Port   Device              Interface        Platform
----------- ------ ------------------- ---------------- ----------------
n1         /cdp
            e0a    C1                  Ethernet1/1/1    N3K-C3232C
            e0b    C2                  Ethernet1/1/1    N3K-C3232C
            e0c    C2                  Ethernet1/1/2    N3K-C3232C
            e0d    C1                  Ethernet1/1/2    N3K-C3232C
n2         /cdp
            e0a    C1                  Ethernet1/1/3    N3K-C3232C
            e0b    C2                  Ethernet1/1/3    N3K-C3232C
            e0c    C2                  Ethernet1/1/4    N3K-C3232C
            e0d    C1                  Ethernet1/1/4    N3K-C3232C
n3         /cdp
            e4a    C1                  Ethernet1/7      N3K-C3232C
            e4e    C2                  Ethernet1/7      N3K-C3232C
n4         /cdp
            e4a    C1                  Ethernet1/8      N3K-C3232C
            e4e    C2                  Ethernet1/8      N3K-C3232C
12 entries were displayed.
+
cluster::*> network port show -role cluster
  (network port show)
Node: n1
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  -        -
e0b       Cluster      Cluster          up   9000 auto/10000  -        -
e0c       Cluster      Cluster          up   9000 auto/10000  -        -
e0d       Cluster      Cluster          up   9000 auto/10000  -        -
Node: n2
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  -        -
e0b       Cluster      Cluster          up   9000 auto/10000  -        -
e0c       Cluster      Cluster          up   9000 auto/10000  -        -
e0d       Cluster      Cluster          up   9000 auto/10000  -        -
Node: n3
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e4a       Cluster      Cluster          up   9000 auto/40000  -        -
e4e       Cluster      Cluster          up   9000 auto/40000  -        -
Node: n4
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e4a       Cluster      Cluster          up   9000 auto/40000  -        -
e4e       Cluster      Cluster          up   9000 auto/40000  -        -
12 entries were displayed.
+
cluster::*> network interface show -role cluster
 (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
            n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
            n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
            n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
            n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
            n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
            n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
            n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
            n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
            n3_clus2   up/up      10.10.0.10/24      n3            e4e     true
            n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
            n4_clus2   up/up      10.10.0.12/24      n4            e4e     true
12 entries were displayed.
+
cluster::*> system cluster-switch show
Switch                      Type               Address          Model
--------------------------- ------------------ ---------------- ---------------
C1                          cluster-network    10.10.1.103      NX3232C
     Serial Number: FOX000001
      Is Monitored: true
            Reason:
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                    7.0(3)I4(1)
    Version Source: CDP
C2                          cluster-network    10.10.1.104      NX3232C
     Serial Number: FOX000002
      Is Monitored: true
            Reason:
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                    7.0(3)I4(1)
    Version Source: CDP
CL1                         cluster-network    10.10.1.101      NX5596
     Serial Number: 01234567
      Is Monitored: true
            Reason:
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                    7.1(1)N1(1)
    Version Source: CDP
CL2                         cluster-network    10.10.1.102      NX5596
     Serial Number: 01234568
      Is Monitored: true
            Reason:
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                    7.1(1)N1(1)
    Version Source: CDP
4 entries were displayed.
-
Remove the replaced Nexus 5596 switches by using the system cluster-switch delete command, if they are not automatically removed:
system cluster-switch delete -device switch-name
Show example
cluster::> system cluster-switch delete -device CL1
cluster::> system cluster-switch delete -device CL2