Replace a Cisco Nexus 92300YC switch
Replacing a defective Nexus 92300YC switch in a cluster network is a nondisruptive procedure (NDU).
Review requirements
Before performing the switch replacement, ensure that:
- In the existing cluster and network infrastructure:
  - The existing cluster is verified as completely functional, with at least one fully connected cluster switch.
  - All cluster ports are up.
  - All cluster logical interfaces (LIFs) are up and on their home ports.
  - The ONTAP cluster ping-cluster -node node1 command must indicate that basic connectivity and larger than PMTU communication are successful on all paths.
- For the Nexus 92300YC replacement switch:
  - Management network connectivity on the replacement switch is functional.
  - Console access to the replacement switch is in place.
  - The node connections are ports 1/1 through 1/64.
  - All Inter-Switch Link (ISL) ports are disabled on ports 1/65 and 1/66.
  - The desired reference configuration file (RCF) and NX-OS operating system image are loaded onto the switch.
  - Initial customization of the switch is complete, as detailed in: Configure the Cisco Nexus 92300YC switch.
  - Any previous site customizations, such as STP, SNMP, and SSH, are copied to the new switch (see the sketch after this list).
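If the defective switch is still reachable, one way to collect its site customizations for re-application on newcs2 is to filter its running configuration. This is a minimal sketch only, and the filter strings are illustrative, not an exhaustive list of customizations:

cs2# show running-config | include snmp-server
cs2# show running-config | include spanning-tree
cs2# show running-config | include ssh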
Enable console logging
NetApp strongly recommends that you enable console logging on the devices that you are using and take the following actions when replacing your switch:
- Leave AutoSupport enabled during maintenance.
- Trigger a maintenance AutoSupport message before the maintenance to suppress automatic case creation for the duration of the maintenance window, and again afterward to re-enable case creation (a command sketch follows this list). See the Knowledge Base article SU92: How to suppress automatic case creation during scheduled maintenance windows for further details.
- Enable session logging for any CLI sessions. For instructions on how to enable session logging, review the "Logging Session Output" section in the Knowledge Base article How to configure PuTTY for optimal connectivity to ONTAP systems.
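A minimal sketch of the maintenance AutoSupport invocations described above; the 2-hour window (MAINT=2h) is an assumed value, so adjust it to the length of your own maintenance window:

cluster1::*> system node autosupport invoke -node * -type all -message MAINT=2h
(perform the switch replacement)
cluster1::*> system node autosupport invoke -node * -type all -message MAINT=end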
Replace the switch
The examples in this procedure use the following switch and node nomenclature:
- The names of the existing Nexus 92300YC switches are cs1 and cs2.
- The name of the new Nexus 92300YC switch is newcs2.
- The node names are node1 and node2.
- The cluster ports on each node are named e0a and e0b.
- The cluster LIF names are node1_clus1 and node1_clus2 for node1, and node2_clus1 and node2_clus2 for node2.
- The prompt for changes to all cluster nodes is cluster1::*>
You must run the command to migrate a cluster LIF from the node on which the cluster LIF is hosted, as in the sketch below.
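As a sketch only, migrating node1_clus2 off its current port would look like the following; the destination node and port are assumptions chosen from the nomenclature above:

cluster1::*> network interface migrate -vserver Cluster -lif node1_clus2 -destination-node node1 -destination-port e0a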
The following procedure is based on the following cluster network topology:
Show topology
cluster1::*> network port show -ipspace Cluster

Node: node1
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false

Node: node2
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false
4 entries were displayed.

cluster1::*> network interface show -vserver Cluster
            Logical     Status     Network            Current       Current Is
Vserver     Interface   Admin/Oper Address/Mask       Node          Port    Home
----------- ----------- ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1 up/up      169.254.209.69/16  node1         e0a     true
            node1_clus2 up/up      169.254.49.125/16  node1         e0b     true
            node2_clus1 up/up      169.254.47.194/16  node2         e0a     true
            node2_clus2 up/up      169.254.19.183/16  node2         e0b     true
4 entries were displayed.

cluster1::*> network device-discovery show -protocol cdp
Node/       Local  Discovered
Protocol    Port   Device (LLDP: ChassisID)  Interface        Platform
----------- ------ ------------------------- ---------------- ----------------
node2      /cdp
            e0a    cs1                       Eth1/2           N9K-C92300YC
            e0b    cs2                       Eth1/2           N9K-C92300YC
node1      /cdp
            e0a    cs1                       Eth1/1           N9K-C92300YC
            e0b    cs2                       Eth1/1           N9K-C92300YC
4 entries were displayed.

cs1# show cdp neighbors

Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater,
                  V - VoIP-Phone, D - Remotely-Managed-Device,
                  s - Supports-STP-Dispute

Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
node1              Eth1/1         144    H           FAS2980       e0a
node2              Eth1/2         145    H           FAS2980       e0a
cs2(FDO220329V5)   Eth1/65        176    R S I s     N9K-C92300YC  Eth1/65
cs2(FDO220329V5)   Eth1/66        176    R S I s     N9K-C92300YC  Eth1/66

Total entries displayed: 4

cs2# show cdp neighbors

Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater,
                  V - VoIP-Phone, D - Remotely-Managed-Device,
                  s - Supports-STP-Dispute

Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
node1              Eth1/1         139    H           FAS2980       e0b
node2              Eth1/2         124    H           FAS2980       e0b
cs1(FDO220329KU)   Eth1/65        178    R S I s     N9K-C92300YC  Eth1/65
cs1(FDO220329KU)   Eth1/66        178    R S I s     N9K-C92300YC  Eth1/66

Total entries displayed: 4
Step 1: Prepare for replacement
1. Install the appropriate RCF and image on the switch, newcs2, and make any necessary site preparations.
   If necessary, verify, download, and install the appropriate versions of the RCF and NX-OS software for the new switch. If you have verified that the new switch is correctly set up and does not need updates to the RCF and NX-OS software, continue to step 2.
   a. Go to the NetApp Cluster and Management Network Switches Reference Configuration File Description Page on the NetApp Support Site.
   b. Click the link for the Cluster Network and Management Network Compatibility Matrix, and then note the required switch software version.
   c. Click your browser's back arrow to return to the Description page, click CONTINUE, accept the license agreement, and then go to the Download page.
   d. Follow the steps on the Download page to download the correct RCF and NX-OS files for the version of ONTAP software you are installing. (A version-check sketch follows these sub-steps.)
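As a sketch, you can confirm which NX-OS release the new switch is running before you continue; the version string in the output below is an assumed example, not a requirement:

newcs2# show version | include NXOS
  NXOS: version 9.2(2)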
2. On the new switch, log in as admin and shut down all of the ports that will be connected to the node cluster interfaces (ports 1/1 to 1/64).
   If the switch that you are replacing is not functional and is powered down, go to Step 4. The LIFs on the cluster nodes should have already failed over to the other cluster port for each node.
Show example
newcs2# config
Enter configuration commands, one per line. End with CNTL/Z.
newcs2(config)# interface e1/1-64
newcs2(config-if-range)# shutdown
3. Verify that all cluster LIFs have auto-revert enabled (a remediation sketch follows the example):
   network interface show -vserver Cluster -fields auto-revert
Show example
cluster1::> network interface show -vserver Cluster -fields auto-revert

          Logical
Vserver   Interface     Auto-revert
--------- ------------- -------------
Cluster
          node1_clus1   true
          node1_clus2   true
          node2_clus1   true
          node2_clus2   true
4 entries were displayed.
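If any LIF reports false in this output, a minimal sketch for enabling auto-revert on all cluster LIFs follows; the wildcard LIF name is an assumption, so scope the command to individual LIFs if you prefer:

cluster1::> network interface modify -vserver Cluster -lif * -auto-revert true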
4. Verify the connectivity of the remote cluster interfaces.
   You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
   network interface check cluster-connectivity start
   network interface check cluster-connectivity show
cluster1::*> network interface check cluster-connectivity start

Note: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                   Source           Destination      Packet
Node   Date                        LIF              LIF              Loss
------ --------------------------- ---------------- ---------------- -----------
node1  3/5/2022 19:21:18 -06:00    node1_clus2      node2_clus1      none
       3/5/2022 19:21:20 -06:00    node1_clus2      node2_clus2      none
node2  3/5/2022 19:21:18 -06:00    node2_clus2      node1_clus1      none
       3/5/2022 19:21:20 -06:00    node2_clus2      node1_clus2      none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local
Host is node2
Getting addresses from network interface table...
Cluster node1_clus1 169.254.209.69 node1     e0a
Cluster node1_clus2 169.254.49.125 node1     e0b
Cluster node2_clus1 169.254.47.194 node2     e0a
Cluster node2_clus2 169.254.19.183 node2     e0b
Local = 169.254.47.194 169.254.19.183
Remote = 169.254.209.69 169.254.49.125
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 4 path(s):
    Local 169.254.47.194 to Remote 169.254.209.69
    Local 169.254.47.194 to Remote 169.254.49.125
    Local 169.254.19.183 to Remote 169.254.209.69
    Local 169.254.19.183 to Remote 169.254.49.125
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
Step 2: Configure cables and ports
1. Shut down the ISL ports 1/65 and 1/66 on the Nexus 92300YC switch cs1:
Show example
cs1# configure
Enter configuration commands, one per line. End with CNTL/Z.
cs1(config)# interface e1/65-66
cs1(config-if-range)# shutdown
cs1(config-if-range)#
2. Remove all of the cables from the Nexus 92300YC cs2 switch, and then connect them to the same ports on the Nexus 92300YC newcs2 switch.
3. Bring up ISL ports 1/65 and 1/66 between the cs1 and newcs2 switches, and then verify the port channel operation status.
   Port-Channel should indicate Po1(SU), and Member Ports should indicate Eth1/65(P) and Eth1/66(P).
Show example
This example enables ISL ports 1/65 and 1/66 and displays the port channel summary on switch cs1:
cs1# configure
Enter configuration commands, one per line. End with CNTL/Z.
cs1(config)# int e1/65-66
cs1(config-if-range)# no shutdown
cs1(config-if-range)# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        b - BFD Session Wait
        S - Switched    R - Routed
        U - Up (port-channel)
        p - Up in delay-lacp mode (member)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/65(P)   Eth1/66(P)

cs1(config-if-range)#
4. Verify that port e0b is up on all nodes:
   network port show -ipspace Cluster
Show example
The output should be similar to the following:
cluster1::*> network port show -ipspace Cluster

Node: node1
                                                                        Ignore
                                                  Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ------------ -------- -------
e0a       Cluster      Cluster          up   9000 auto/10000   healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000   healthy  false

Node: node2
                                                                        Ignore
                                                  Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ------------ -------- -------
e0a       Cluster      Cluster          up   9000 auto/10000   healthy  false
e0b       Cluster      Cluster          up   9000 auto/auto    -        false
4 entries were displayed.
5. On the same node you used in the previous step, revert the cluster LIF associated with that port by using the network interface revert command (see the sketch after the example).
Show example
In this example, LIF node1_clus2 on node1 is successfully reverted if the Home value is true and the port is e0b.
The following commands return LIF node1_clus2 on node1 to home port e0b and display information about the LIFs on both nodes. Bringing up the first node is successful if the Is Home column is true for both cluster interfaces and they show the correct port assignments, in this example e0a and e0b on node1.

cluster1::*> network interface show -vserver Cluster
            Logical      Status     Network            Current    Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node       Port    Home
----------- ------------ ---------- ------------------ ---------- ------- -----
Cluster
            node1_clus1  up/up      169.254.209.69/16  node1      e0a     true
            node1_clus2  up/up      169.254.49.125/16  node1      e0b     true
            node2_clus1  up/up      169.254.47.194/16  node2      e0a     true
            node2_clus2  up/up      169.254.19.183/16  node2      e0a     false
4 entries were displayed.
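A minimal sketch of the revert itself for this example, assuming only the LIF and vserver names from the nomenclature above:

cluster1::*> network interface revert -vserver Cluster -lif node1_clus2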
6. Display information about the nodes in a cluster:
   cluster show
Show example
This example shows that the node health for node1 and node2 in this cluster is true:

cluster1::*> cluster show
Node           Health  Eligibility
-------------- ------- ------------
node1          true    true
node2          true    true
7. Verify that all physical cluster ports are up:
   network port show -ipspace Cluster
Show example
cluster1::*> network port show -ipspace Cluster

Node: node1
                                                                        Ignore
                                                  Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ------------ -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000   healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000   healthy  false

Node: node2
                                                                        Ignore
                                                  Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ------------ -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000   healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000   healthy  false
4 entries were displayed.
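Although not part of the verification output above, it is standard NX-OS practice to persist the configuration changes made during this step on both switches; a minimal sketch:

cs1# copy running-config startup-config
newcs2# copy running-config startup-config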
Step 3: Complete the procedure
1. Verify the connectivity of the remote cluster interfaces.
   You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
   network interface check cluster-connectivity start
   network interface check cluster-connectivity show
cluster1::*> network interface check cluster-connectivity start

Note: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                   Source           Destination      Packet
Node   Date                        LIF              LIF              Loss
------ --------------------------- ---------------- ---------------- -----------
node1  3/5/2022 19:21:18 -06:00    node1_clus2      node2_clus1      none
       3/5/2022 19:21:20 -06:00    node1_clus2      node2_clus2      none
node2  3/5/2022 19:21:18 -06:00    node2_clus2      node1_clus1      none
       3/5/2022 19:21:20 -06:00    node2_clus2      node1_clus2      none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local
Host is node2
Getting addresses from network interface table...
Cluster node1_clus1 169.254.209.69 node1     e0a
Cluster node1_clus2 169.254.49.125 node1     e0b
Cluster node2_clus1 169.254.47.194 node2     e0a
Cluster node2_clus2 169.254.19.183 node2     e0b
Local = 169.254.47.194 169.254.19.183
Remote = 169.254.209.69 169.254.49.125
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 4 path(s):
    Local 169.254.47.194 to Remote 169.254.209.69
    Local 169.254.47.194 to Remote 169.254.49.125
    Local 169.254.19.183 to Remote 169.254.209.69
    Local 169.254.19.183 to Remote 169.254.49.125
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
2. Confirm the following cluster network configuration:
   network port show
Show example
cluster1::*> network port show -ipspace Cluster

Node: node1
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false

Node: node2
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false
4 entries were displayed.

cluster1::*> network interface show -vserver Cluster
            Logical     Status     Network            Current       Current Is
Vserver     Interface   Admin/Oper Address/Mask       Node          Port    Home
----------- ----------- ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1 up/up      169.254.209.69/16  node1         e0a     true
            node1_clus2 up/up      169.254.49.125/16  node1         e0b     true
            node2_clus1 up/up      169.254.47.194/16  node2         e0a     true
            node2_clus2 up/up      169.254.19.183/16  node2         e0b     true
4 entries were displayed.

cluster1::> network device-discovery show -protocol cdp
Node/       Local  Discovered
Protocol    Port   Device (LLDP: ChassisID)  Interface        Platform
----------- ------ ------------------------- ---------------- ----------------
node2      /cdp
            e0a    cs1                       0/2              N9K-C92300YC
            e0b    newcs2                    0/2              N9K-C92300YC
node1      /cdp
            e0a    cs1                       0/1              N9K-C92300YC
            e0b    newcs2                    0/1              N9K-C92300YC
4 entries were displayed.

cs1# show cdp neighbors

Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater,
                  V - VoIP-Phone, D - Remotely-Managed-Device,
                  s - Supports-STP-Dispute

Device-ID            Local Intrfce  Hldtme Capability  Platform      Port ID
node1                Eth1/1         144    H           FAS2980       e0a
node2                Eth1/2         145    H           FAS2980       e0a
newcs2(FDO296348FU)  Eth1/65        176    R S I s     N9K-C92300YC  Eth1/65
newcs2(FDO296348FU)  Eth1/66        176    R S I s     N9K-C92300YC  Eth1/66

Total entries displayed: 4

newcs2# show cdp neighbors

Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater,
                  V - VoIP-Phone, D - Remotely-Managed-Device,
                  s - Supports-STP-Dispute

Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
node1              Eth1/1         139    H           FAS2980       e0b
node2              Eth1/2         124    H           FAS2980       e0b
cs1(FDO220329KU)   Eth1/65        178    R S I s     N9K-C92300YC  Eth1/65
cs1(FDO220329KU)   Eth1/66        178    R S I s     N9K-C92300YC  Eth1/66

Total entries displayed: 4