Migrate CN1610 cluster switches to BES-53248 cluster switches
To migrate the CN1610 cluster switches in a cluster to Broadcom-supported BES-53248 cluster switches, review the migration requirements and then follow the migration procedure.
The following cluster switches are supported:
- CN1610
- BES-53248
Review requirements
Verify that your configuration meets the following requirements:
- Some of the ports on BES-53248 switches are configured to run at 10GbE.
- The 10GbE connectivity from the nodes to the BES-53248 cluster switches has been planned, migrated, and documented.
- The cluster is fully functioning (there should be no errors in the logs or similar issues).
- Initial customization of the BES-53248 switches is complete (see the example checks after this list), so that:
  - BES-53248 switches are running the latest recommended version of EFOS software.
  - Reference Configuration Files (RCFs) have been applied to the switches.
  - Any site customization, such as DNS, NTP, SMTP, SNMP, and SSH, is configured on the new switches.
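You can spot-check the first two of these customization items directly from each switch prompt. This is a minimal sketch, assuming the standard EFOS CLI on the BES-53248: show version reports the running EFOS release, and show running-config lets you review the applied configuration, including the RCF settings:
(cs1)# show version
(cs1)# show running-config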
Node connections
The cluster switches support the following node connections:
- NetApp CN1610: ports 0/1 through 0/12 (10GbE)
- BES-53248: ports 0/1 through 0/16 (10GbE/25GbE)
Additional ports can be activated by purchasing port licenses.
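If you are not sure which ports are licensed on a BES-53248 switch, you can list the installed licenses from the switch prompt. A minimal check, assuming the standard EFOS CLI:
(cs1)# show license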
ISL ports
The cluster switches use the following inter-switch link (ISL) ports:
- NetApp CN1610: ports 0/13 through 0/16 (10GbE)
- BES-53248: ports 0/55 and 0/56 (100GbE)
The NetApp Hardware Universe contains information about ONTAP compatibility, supported EFOS firmware, and cabling to BES-53248 cluster switches.
ISL cabling
The appropriate ISL cabling is as follows:
- Beginning: For CN1610 to CN1610 (SFP+ to SFP+), four SFP+ optical fiber or copper direct-attach cables.
- Final: For BES-53248 to BES-53248 (QSFP28 to QSFP28), two QSFP28 optical transceivers/fiber or copper direct-attach cables.
Migrate the switches
Follow this procedure to migrate CN1610 cluster switches to BES-53248 cluster switches.
The examples in this procedure use the following switch and node nomenclature:
- The examples use two nodes, each deploying two 10GbE cluster interconnect ports: e0a and e0b.
- The command outputs might vary depending on different releases of ONTAP software.
- The CN1610 switches to be replaced are CL1 and CL2.
- The BES-53248 switches that replace the CN1610 switches are cs1 and cs2.
- The nodes are node1 and node2.
- Switch CL2 is replaced by cs2 first, and then CL1 is replaced by cs1.
- The BES-53248 switches are pre-loaded with the supported versions of the Reference Configuration File (RCF) and Ethernet Fabric OS (EFOS), with ISL cables connected on ports 55 and 56.
- The cluster LIF names are node1_clus1 and node1_clus2 for node1, and node2_clus1 and node2_clus2 for node2.
This procedure covers the following scenario:
- The cluster starts with two nodes connected to two CN1610 cluster switches.
- CN1610 switch CL2 is replaced by BES-53248 switch cs2:
  - Shut down the ports to the cluster nodes. All ports must be shut down simultaneously to avoid cluster instability.
  - Disconnect the cables from all cluster ports on all nodes connected to CL2, and then use supported cables to reconnect the ports to the new cluster switch cs2.
- CN1610 switch CL1 is replaced by BES-53248 switch cs1:
  - Shut down the ports to the cluster nodes. All ports must be shut down simultaneously to avoid cluster instability.
  - Disconnect the cables from all cluster ports on all nodes connected to CL1, and then use supported cables to reconnect the ports to the new cluster switch cs1.
No operational inter-switch link (ISL) is needed during this procedure. This is by design because RCF version changes can affect ISL connectivity temporarily. To ensure non-disruptive cluster operations, the following procedure migrates all of the cluster LIFs to the operational partner switch while performing the steps on the target switch.
Step 1: Prepare for migration
- If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:
system node autosupport invoke -node * -type all -message MAINT=xh
where x is the duration of the maintenance window in hours.
The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window. The following command suppresses automatic case creation for two hours:
cluster1::*> system node autosupport invoke -node * -type all -message MAINT=2h
- Change the privilege level to advanced, entering y when prompted to continue:
set -privilege advanced
The advanced prompt (*>) appears.
Step 2: Configure ports and cabling
- On the new switches, confirm that the ISL is cabled and healthy between switches cs1 and cs2:
show port-channel
The following example shows that the ISL ports are up on switch cs1:
(cs1)# show port-channel 1/1
Local Interface................................ 1/1
Channel Name................................... Cluster-ISL
Link State..................................... Up
Admin Mode..................................... Enabled
Type........................................... Dynamic
Port channel Min-links......................... 1
Load Balance Option............................ 7
(Enhanced hashing mode)

Mbr     Device/      Port      Port
Ports   Timeout      Speed     Active
------- ------------ --------- -------
0/55    actor/long   100G Full True
        partner/long
0/56    actor/long   100G Full True
        partner/long
(cs1) #
The following example shows that the ISL ports are up on switch cs2:
(cs2)# show port-channel 1/1
Local Interface................................ 1/1
Channel Name................................... Cluster-ISL
Link State..................................... Up
Admin Mode..................................... Enabled
Type........................................... Dynamic
Port channel Min-links......................... 1
Load Balance Option............................ 7
(Enhanced hashing mode)

Mbr     Device/      Port      Port
Ports   Timeout      Speed     Active
------- ------------ --------- -------
0/55    actor/long   100G Full True
        partner/long
0/56    actor/long   100G Full True
        partner/long
- Display the cluster ports on each node that is connected to the existing cluster switches:
network device-discovery show -protocol cdp
The following example displays how many cluster interconnect interfaces have been configured in each node for each cluster interconnect switch:
cluster1::*> network device-discovery show -protocol cdp
Node/       Local  Discovered
Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
----------- ------ ------------------------- ----------------  ----------------
node2      /cdp
             e0a    CL1                       0/2               CN1610
             e0b    CL2                       0/2               CN1610
node1      /cdp
             e0a    CL1                       0/1               CN1610
             e0b    CL2                       0/1               CN1610
- Determine the administrative or operational status for each cluster interface:
  - Verify that all the cluster ports are up with a healthy status:
    network port show -ipspace Cluster
cluster1::*> network port show -ipspace Cluster

Node: node1
                                                                       Ignore
                                                 Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false

Node: node2
                                                                       Ignore
                                                 Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
  - Verify that all the cluster interfaces (LIFs) are on their home ports:
    network interface show -vserver Cluster
cluster1::*> network interface show -vserver Cluster

            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1  up/up      169.254.209.69/16  node1         e0a     true
            node1_clus2  up/up      169.254.49.125/16  node1         e0b     true
            node2_clus1  up/up      169.254.47.194/16  node2         e0a     true
            node2_clus2  up/up      169.254.19.183/16  node2         e0b     true
- Verify that the cluster displays information for both cluster switches:
Beginning with ONTAP 9.8, use the command: system switch ethernet show -is-monitoring-enabled-operational true
cluster1::*> system switch ethernet show -is-monitoring-enabled-operational true

Switch                        Type             Address        Model
----------------------------- ---------------- -------------  --------
CL1                           cluster-network  10.10.1.101    CN1610
     Serial Number: 01234567
      Is Monitored: true
            Reason:
  Software Version: 1.3.0.3
    Version Source: ISDP

CL2                           cluster-network  10.10.1.102    CN1610
     Serial Number: 01234568
      Is Monitored: true
            Reason:
  Software Version: 1.3.0.3
    Version Source: ISDP

cluster1::*>
For ONTAP 9.7 and earlier, use the command: system cluster-switch show -is-monitoring-enabled-operational true
cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true

Switch                        Type             Address        Model
----------------------------- ---------------- -------------  --------
CL1                           cluster-network  10.10.1.101    CN1610
     Serial Number: 01234567
      Is Monitored: true
            Reason:
  Software Version: 1.3.0.3
    Version Source: ISDP

CL2                           cluster-network  10.10.1.102    CN1610
     Serial Number: 01234568
      Is Monitored: true
            Reason:
  Software Version: 1.3.0.3
    Version Source: ISDP

cluster1::*>
- Disable auto-revert on the cluster LIFs:
cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert false
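Optionally, confirm that auto-revert is now disabled on every cluster LIF; -fields is a standard ONTAP parameter that limits the output to the columns of interest:
cluster1::*> network interface show -vserver Cluster -fields auto-revert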
- On cluster switch CL2, shut down the ports connected to the cluster ports of the nodes in order to fail over the cluster LIFs:
(CL2)# configure
(CL2)(Config)# interface 0/1-0/16
(CL2)(Interface 0/1-0/16)# shutdown
(CL2)(Interface 0/1-0/16)# exit
(CL2)(Config)# exit
(CL2)#
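Optionally, confirm from CL2 that the node-facing ports are now administratively down. This assumes the CN1610's FASTPATH-based CLI, which supports the same show port all command used later in this procedure on the new switches:
(CL2)# show port all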
- Verify that the cluster LIFs have failed over to the ports hosted on cluster switch CL1. This might take a few seconds:
network interface show -vserver Cluster
cluster1::*> network interface show -vserver Cluster

            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1  up/up      169.254.209.69/16  node1         e0a     true
            node1_clus2  up/up      169.254.49.125/16  node1         e0a     false
            node2_clus1  up/up      169.254.47.194/16  node2         e0a     true
            node2_clus2  up/up      169.254.19.183/16  node2         e0a     false
- Verify that the cluster is healthy:
cluster show
cluster1::*> cluster show

Node       Health  Eligibility   Epsilon
---------- ------- ------------- -------
node1      true    true          false
node2      true    true          false
- Move all cluster node connection cables from the old CL2 switch to the new cs2 switch.
- Confirm the health of the network connections moved to cs2:
network port show -ipspace Cluster
cluster1::*> network port show -ipspace Cluster

Node: node1
                                                                       Ignore
                                                 Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false

Node: node2
                                                                       Ignore
                                                 Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
All cluster ports that were moved should be up.
- Check neighbor information on the cluster ports:
network device-discovery show -protocol cdp
cluster1::*> network device-discovery show -protocol cdp
Node/       Local  Discovered
Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
----------- ------ ------------------------- ----------------  ----------------
node2      /cdp
             e0a    CL1                       0/2               CN1610
             e0b    cs2                       0/2               BES-53248
node1      /cdp
             e0a    CL1                       0/1               CN1610
             e0b    cs2                       0/1               BES-53248
- Confirm that the switch port connections are healthy from switch cs2's perspective:
cs2# show port all
cs2# show isdp neighbors
- On cluster switch CL1, shut down the ports connected to the cluster ports of the nodes in order to fail over the cluster LIFs:
(CL1)# configure
(CL1)(Config)# interface 0/1-0/16
(CL1)(Interface 0/1-0/16)# shutdown
(CL1)(Interface 0/1-0/16)# exit
(CL1)(Config)# exit
(CL1)#
All cluster LIFs fail over to the cs2 switch.
- Verify that the cluster LIFs have failed over to the ports hosted on switch cs2. This might take a few seconds:
network interface show -vserver Cluster
cluster1::*> network interface show -vserver Cluster

            Logical      Status     Network            Current       Current Is
Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
----------- ------------ ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1  up/up      169.254.209.69/16  node1         e0b     false
            node1_clus2  up/up      169.254.49.125/16  node1         e0b     true
            node2_clus1  up/up      169.254.47.194/16  node2         e0b     false
            node2_clus2  up/up      169.254.19.183/16  node2         e0b     true
- Verify that the cluster is healthy:
cluster show
cluster1::*> cluster show

Node       Health  Eligibility   Epsilon
---------- ------- ------------- -------
node1      true    true          false
node2      true    true          false
- Move the cluster node connection cables from CL1 to the new cs1 switch.
- Confirm the health of the network connections moved to cs1:
network port show -ipspace Cluster
cluster1::*> network port show -ipspace Cluster

Node: node1
                                                                       Ignore
                                                 Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false

Node: node2
                                                                       Ignore
                                                 Speed(Mbps)  Health   Health
Port      IPspace      Broadcast Domain Link MTU Admin/Oper   Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
All cluster ports that were moved should be up.
- Check neighbor information on the cluster ports:
network device-discovery show -protocol cdp
cluster1::*> network device-discovery show -protocol cdp
Node/       Local  Discovered
Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
----------- ------ ------------------------- ----------------  ----------------
node1      /cdp
             e0a    cs1                       0/1               BES-53248
             e0b    cs2                       0/1               BES-53248
node2      /cdp
             e0a    cs1                       0/2               BES-53248
             e0b    cs2                       0/2               BES-53248
- Confirm that the switch port connections are healthy from switch cs1's perspective:
cs1# show port all
cs1# show isdp neighbors
- Verify that the ISL between cs1 and cs2 is still operational:
show port-channel
The following example shows that the ISL ports are up on switch cs1:
(cs1)# show port-channel 1/1
Local Interface................................ 1/1
Channel Name................................... Cluster-ISL
Link State..................................... Up
Admin Mode..................................... Enabled
Type........................................... Dynamic
Port channel Min-links......................... 1
Load Balance Option............................ 7
(Enhanced hashing mode)

Mbr     Device/      Port      Port
Ports   Timeout      Speed     Active
------- ------------ --------- -------
0/55    actor/long   100G Full True
        partner/long
0/56    actor/long   100G Full True
        partner/long
(cs1) #
The following example shows that the ISL ports are up on switch cs2:
(cs2)# show port-channel 1/1
Local Interface................................ 1/1
Channel Name................................... Cluster-ISL
Link State..................................... Up
Admin Mode..................................... Enabled
Type........................................... Dynamic
Port channel Min-links......................... 1
Load Balance Option............................ 7
(Enhanced hashing mode)

Mbr     Device/      Port      Port
Ports   Timeout      Speed     Active
------- ------------ --------- -------
0/55    actor/long   100G Full True
        partner/long
0/56    actor/long   100G Full True
        partner/long
- Delete the replaced CN1610 switches from the cluster's switch table, if they are not automatically removed:
Beginning with ONTAP 9.8, use the command: system switch ethernet delete -device device-name
cluster::*> system switch ethernet delete -device CL1
cluster::*> system switch ethernet delete -device CL2
For ONTAP 9.7 and earlier, use the command: system cluster-switch delete -device device-name
cluster::*> system cluster-switch delete -device CL1
cluster::*> system cluster-switch delete -device CL2
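Optionally, re-run the monitoring command from Step 2 to confirm that only the new BES-53248 switches remain in the cluster's switch table (use the system cluster-switch variant on ONTAP 9.7 and earlier):
cluster1::*> system switch ethernet show -is-monitoring-enabled-operational true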
Step 3: Verify the configuration
- Enable auto-revert on the cluster LIFs:
cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert true
- Verify that the cluster LIFs have reverted to their home ports (this might take a minute):
network interface show -vserver Cluster
If the cluster LIFs have not reverted to their home port, manually revert them:
network interface revert -vserver Cluster -lif *
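You can also confirm at a glance that every LIF is back on its home port by limiting the output to the is-home field, a standard field of this command:
cluster1::*> network interface show -vserver Cluster -fields is-home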
- Verify that the cluster is healthy:
cluster show
- Verify the connectivity of the remote cluster interfaces:
You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
network interface check cluster-connectivity start
network interface check cluster-connectivity show
cluster1::*> network interface check cluster-connectivity start
NOTE: Wait a few seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show Source Destination Packet Node Date LIF LIF Loss ------ -------------------------- --------------- ----------------- ----------- node1 3/5/2022 19:21:18 -06:00 node1_clus2 node2_clus1 none 3/5/2022 19:21:20 -06:00 node1_clus2 node2_clus2 none node2 3/5/2022 19:21:18 -06:00 node2_clus2 node1_clus1 none 3/5/2022 19:21:20 -06:00 node2_clus2 node1_clus2 none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check connectivity:
cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node node2
Host is node2
Getting addresses from network interface table...
Cluster node1_clus1 169.254.209.69 node1     e0a
Cluster node1_clus2 169.254.49.125 node1     e0b
Cluster node2_clus1 169.254.47.194 node2     e0a
Cluster node2_clus2 169.254.19.183 node2     e0b
Local = 169.254.47.194 169.254.19.183
Remote = 169.254.209.69 169.254.49.125
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 4 path(s):
    Local 169.254.19.183 to Remote 169.254.209.69
    Local 169.254.19.183 to Remote 169.254.49.125
    Local 169.254.47.194 to Remote 169.254.209.69
    Local 169.254.47.194 to Remote 169.254.49.125
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
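If you suppressed automatic case creation at the beginning of this procedure, re-enable it and return to the admin privilege level; these are the standard counterparts of the Step 1 preparation commands:
cluster1::*> system node autosupport invoke -node * -type all -message MAINT=END
cluster1::*> set -privilege admin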