Install the Reference Configuration File (RCF)
You can install the Reference Configuration File (RCF) after configuring the BES-53248 cluster switch and after applying the new licenses.
For EFOS 3.12 and later, follow the installation steps in Install the Reference Configuration File (RCF) and license file.
Review requirements
Verify that the following are in place:
- A current backup of the switch configuration.
- A fully functioning cluster (no errors in the logs or similar issues).
- The current RCF file, available from the Broadcom Cluster Switches page.
- A boot configuration in the RCF that reflects the desired boot images, required if you are installing only EFOS and keeping your current RCF version. If you need to change the boot configuration to reflect the current boot images, you must do so before reapplying the RCF so that the correct version is instantiated on future reboots.
- A console connection to the switch, required when installing the RCF from a factory-default state. This requirement is optional if you have used the Knowledge Base article How to clear configuration on a Broadcom interconnect switch while retaining remote connectivity to clear the configuration beforehand.
Consult the switch compatibility table for the supported ONTAP and RCF versions; see the EFOS Software download page. Note that commands in the RCF can depend on the command syntax available in your version of EFOS.
Install the configuration file
The examples in this procedure use the following switch and node nomenclature:
- The names of the two BES-53248 switches are cs1 and cs2.
- The node names are cluster1-01, cluster1-02, cluster1-03, and cluster1-04.
- The cluster LIF names are cluster1-01_clus1, cluster1-01_clus2, cluster1-02_clus1, cluster1-02_clus2, cluster1-03_clus1, cluster1-03_clus2, cluster1-04_clus1, and cluster1-04_clus2.
- The cluster1::*> prompt indicates the name of the cluster.
- The examples in this procedure use four nodes. These nodes use two 10GbE cluster interconnect ports, e0a and e0b. See the Hardware Universe to verify the correct cluster ports on your platforms.
Command outputs might vary depending on your release of ONTAP.
The procedure requires the use of both ONTAP commands and Broadcom switch commands; ONTAP commands are used unless otherwise indicated.
No operational inter-switch link (ISL) is needed during this procedure. This is by design because RCF version changes can affect ISL connectivity temporarily. To ensure non-disruptive cluster operations, the following procedure migrates all the cluster LIFs to the operational partner switch while performing the steps on the target switch.
Before installing a new switch software version and RCFs, use the Knowledge Base article How to clear configuration on a Broadcom interconnect switch while retaining remote connectivity. If you must erase the switch settings completely, you need to perform the basic configuration again. You must be connected to the switch using the serial console, because a complete configuration erasure resets the configuration of the management network.
Step 1: Prepare for the installation
- If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:
system node autosupport invoke -node * -type all -message MAINT=xh
where x is the duration of the maintenance window in hours.
The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window. The following command suppresses automatic case creation for two hours:
cluster1::*> system node autosupport invoke -node \* -type all -message MAINT=2h
- Change the privilege level to advanced, entering y when prompted to continue:
set -privilege advanced
The advanced prompt (*>) appears.
- Display the cluster ports on each node that are connected to the cluster switches:
network device-discovery show
Show example
cluster1::*> network device-discovery show Node/ Local Discovered Protocol Port Device (LLDP: ChassisID) Interface Platform ----------- ------ ------------------------- ---------------- -------- cluster1-01/cdp e0a cs1 0/2 BES-53248 e0b cs2 0/2 BES-53248 cluster1-02/cdp e0a cs1 0/1 BES-53248 e0b cs2 0/1 BES-53248 cluster1-03/cdp e0a cs1 0/4 BES-53248 e0b cs2 0/4 BES-53248 cluster1-04/cdp e0a cs1 0/3 BES-53248 e0b cs2 0/3 BES-53248 cluster1::*>
- Check the administrative and operational status of each cluster port.
  - Verify that all the cluster ports are up with a healthy status:
network port show -role cluster
Show example
cluster1::*> network port show -role cluster Node: cluster1-01 Ignore Speed(Mbps) Health Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status --------- ------------ ---------------- ---- ---- ----------- -------- ------ e0a Cluster Cluster up 9000 auto/100000 healthy false e0b Cluster Cluster up 9000 auto/100000 healthy false Node: cluster1-02 Ignore Speed(Mbps) Health Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status --------- ------------ ---------------- ---- ---- ----------- -------- ------ e0a Cluster Cluster up 9000 auto/100000 healthy false e0b Cluster Cluster up 9000 auto/100000 healthy false 8 entries were displayed. Node: cluster1-03 Ignore Speed(Mbps) Health Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status --------- ------------ ---------------- ---- ---- ----------- -------- ------ e0a Cluster Cluster up 9000 auto/10000 healthy false e0b Cluster Cluster up 9000 auto/10000 healthy false Node: cluster1-04 Ignore Speed(Mbps) Health Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status --------- ------------ ---------------- ---- ---- ----------- -------- ------ e0a Cluster Cluster up 9000 auto/10000 healthy false e0b Cluster Cluster up 9000 auto/10000 healthy false cluster1::*>
  - Verify that all the cluster interfaces (LIFs) are on their home ports:
network interface show -role cluster
Show example
cluster1::*> network interface show -role cluster Logical Status Network Current Current Is Vserver Interface Admin/Oper Address/Mask Node Port Home ----------- ------------------ ---------- ----------------- ------------ ------- ---- Cluster cluster1-01_clus1 up/up 169.254.3.4/23 cluster1-01 e0a true cluster1-01_clus2 up/up 169.254.3.5/23 cluster1-01 e0b true cluster1-02_clus1 up/up 169.254.3.8/23 cluster1-02 e0a true cluster1-02_clus2 up/up 169.254.3.9/23 cluster1-02 e0b true cluster1-03_clus1 up/up 169.254.1.3/23 cluster1-03 e0a true cluster1-03_clus2 up/up 169.254.1.1/23 cluster1-03 e0b true cluster1-04_clus1 up/up 169.254.1.6/23 cluster1-04 e0a true cluster1-04_clus2 up/up 169.254.1.7/23 cluster1-04 e0b true
- Verify that the cluster displays information for both cluster switches.
Beginning with ONTAP 9.8, use the command:
system switch ethernet show -is-monitoring-enabled-operational true
cluster1::*> system switch ethernet show -is-monitoring-enabled-operational true Switch Type Address Model --------------------------- ------------------ ---------------- --------------- cs1 cluster-network 10.228.143.200 BES-53248 Serial Number: QTWCU22510008 Is Monitored: true Reason: None Software Version: 3.10.0.3 Version Source: CDP/ISDP cs2 cluster-network 10.228.143.202 BES-53248 Serial Number: QTWCU22510009 Is Monitored: true Reason: None Software Version: 3.10.0.3 Version Source: CDP/ISDP cluster1::*>
For ONTAP 9.7 and earlier, use the command:
system cluster-switch show -is-monitoring-enabled-operational true
cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true Switch Type Address Model --------------------------- ------------------ ---------------- --------------- cs1 cluster-network 10.228.143.200 BES-53248 Serial Number: QTWCU22510008 Is Monitored: true Reason: None Software Version: 3.10.0.3 Version Source: CDP/ISDP cs2 cluster-network 10.228.143.202 BES-53248 Serial Number: QTWCU22510009 Is Monitored: true Reason: None Software Version: 3.10.0.3 Version Source: CDP/ISDP cluster1::*>
Step 2: Configure ports
- On switch cs2, confirm the list of ports that are connected to the nodes in the cluster:
show isdp neighbor
- On cluster switch cs2, shut down the ports connected to the cluster ports of the nodes. For example, if ports 0/1 to 0/16 are connected to ONTAP nodes:
(cs2)> enable (cs2)# configure (cs2)(Config)# interface 0/1-0/16 (cs2)(Interface 0/1-0/16)# shutdown (cs2)(Interface 0/1-0/16)# exit (cs2)(Config)#
- Verify that the cluster LIFs have migrated to the ports hosted on cluster switch cs1. This might take a few seconds.
network interface show -role cluster
Show example
cluster1::*> network interface show -role cluster Logical Status Network Current Current Is Vserver Interface Admin/Oper Address/Mask Node Port Home ----------- ----------------- ---------- ------------------ ------------- ------- ---- Cluster cluster1-01_clus1 up/up 169.254.3.4/23 cluster1-01 e0a true cluster1-01_clus2 up/up 169.254.3.5/23 cluster1-01 e0a false cluster1-02_clus1 up/up 169.254.3.8/23 cluster1-02 e0a true cluster1-02_clus2 up/up 169.254.3.9/23 cluster1-02 e0a false cluster1-03_clus1 up/up 169.254.1.3/23 cluster1-03 e0a true cluster1-03_clus2 up/up 169.254.1.1/23 cluster1-03 e0a false cluster1-04_clus1 up/up 169.254.1.6/23 cluster1-04 e0a true cluster1-04_clus2 up/up 169.254.1.7/23 cluster1-04 e0a false cluster1::*>
- Verify that the cluster is healthy:
cluster show
Show example
cluster1::*> cluster show Node Health Eligibility Epsilon -------------------- ------- ------------ ------- cluster1-01 true true false cluster1-02 true true false cluster1-03 true true true cluster1-04 true true false
- If you have not already done so, save the current switch configuration by copying the output of the following command to a log file:
show running-config
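If you capture switch configurations regularly, this step can be scripted from an admin host. The following is a minimal sketch, not part of the official procedure; the switch name cs2, the admin user, and the filename pattern are illustrative assumptions:

```shell
# Sketch: archive the switch's running configuration to a timestamped log
# file before making any changes. The switch name "cs2", the "admin" user,
# and the filename pattern are illustrative assumptions.
switch="cs2"
logfile="${switch}-running-config-$(date +%Y%m%d-%H%M%S).log"

echo "Saving configuration of ${switch} to ${logfile}"
# Uncomment once the switch is reachable and SSH credentials are in place:
# ssh "admin@${switch}" 'show running-config' > "${logfile}"
```

Keep the resulting log file with your other maintenance records so you can compare it against the configuration after the new RCF is applied.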
- Clean the configuration on switch cs2 and perform a basic setup.
When updating or applying a new RCF, you must erase the switch settings and perform basic configuration. You must be connected to the switch using the serial console to erase the switch settings. This requirement is optional if you have used the Knowledge Base article How to clear configuration on a Broadcom interconnect switch while retaining remote connectivity to clear the configuration beforehand. Clearing the configuration does not delete licenses.
  - SSH into the switch.
Only proceed when all the cluster LIFs have been removed from the ports on the switch and the switch is prepared to have the configuration cleared.
  - Enter privilege mode:
(cs2)> enable (cs2)#
  - Copy and paste the following commands to remove the previous RCF configuration (depending on the previous RCF version used, some commands might generate an error if a particular setting is not present):
clear config interface 0/1-0/56 y clear config interface lag 1 y configure deleteport 1/1 all no policy-map CLUSTER no policy-map WRED_25G no policy-map WRED_100G no class-map CLUSTER no class-map HA no class-map RDMA no classofservice dot1p-mapping no random-detect queue-parms 0 no random-detect queue-parms 1 no random-detect queue-parms 2 no random-detect queue-parms 3 no random-detect queue-parms 4 no random-detect queue-parms 5 no random-detect queue-parms 6 no random-detect queue-parms 7 no cos-queue min-bandwidth no cos-queue random-detect 0 no cos-queue random-detect 1 no cos-queue random-detect 2 no cos-queue random-detect 3 no cos-queue random-detect 4 no cos-queue random-detect 5 no cos-queue random-detect 6 no cos-queue random-detect 7 exit vlan database no vlan 17 no vlan 18 exit
  - Save the running configuration to the startup configuration:
(cs2)# write memory This operation may take a few minutes. Management interfaces will not be available during this time. Are you sure you want to save? (y/n) y Config file 'startup-config' created successfully. Configuration Saved!
  - Reboot the switch:
(cs2)# reload Are you sure you would like to reset the system? (y/n) y
  - Log in to the switch again using SSH to complete the RCF installation.
- Note the following:
  - If additional port licenses have been installed on the switch, you must modify the RCF to configure the additional licensed ports. See Activate newly licensed ports for details.
  - Record any customizations made in the previous RCF, such as port speeds or hard-coded FEC mode, and apply them to the new RCF.
- Copy the RCF to the bootflash of switch cs2 using one of the following transfer protocols: FTP, TFTP, SFTP, or SCP.
This example shows SFTP being used to copy an RCF to the bootflash on switch cs2:
(cs2)# copy sftp://172.19.2.1/tmp/BES-53248_RCF_v1.9-Cluster-HA.txt nvram:script BES-53248_RCF_v1.9-Cluster-HA.scr Remote Password:** Mode........................................... SFTP Set Server IP.................................. 172.19.2.1 Path........................................... //tmp/ Filename....................................... BES-53248_RCF_v1.9-Cluster-HA.txt Data Type...................................... Config Script Destination Filename........................... BES-53248_RCF_v1.9-Cluster-HA.scr Management access will be blocked for the duration of the transfer Are you sure you want to start? (y/n) y SFTP Code transfer starting... File transfer operation completed successfully.
- Verify that the script was downloaded and saved to the file name you gave it:
script list
(cs2)# script list Configuration Script Name Size(Bytes) Date of Modification ----------------------------------------- ----------- -------------------- BES-53248_RCF_v1.9-Cluster-HA.scr 2241 2020 09 30 05:41:00 1 configuration script(s) found.
- Apply the script to the switch:
script apply
(cs2)# script apply BES-53248_RCF_v1.9-Cluster-HA.scr Are you sure you want to apply the configuration script? (y/n) y The system has unsaved changes. Would you like to save them now? (y/n) y Config file 'startup-config' created successfully. Configuration Saved! Configuration script 'BES-53248_RCF_v1.9-Cluster-HA.scr' applied.
- Examine the banner output from the following command. You must read and follow these instructions to verify the proper configuration and operation of the switch:
show clibanner
Show example
(cs2)# show clibanner Banner Message configured : ========================= BES-53248 Reference Configuration File v1.9 for Cluster/HA/RDMA Switch : BES-53248 Filename : BES-53248-RCF-v1.9-Cluster.txt Date : 10-26-2022 Version : v1.9 Port Usage: Ports 01 - 16: 10/25GbE Cluster Node Ports, base config Ports 17 - 48: 10/25GbE Cluster Node Ports, with licenses Ports 49 - 54: 40/100GbE Cluster Node Ports, with licenses, added right to left Ports 55 - 56: 100GbE Cluster ISL Ports, base config NOTE: - The 48 SFP28/SFP+ ports are organized into 4-port groups in terms of port speed: Ports 1-4, 5-8, 9-12, 13-16, 17-20, 21-24, 25-28, 29-32, 33-36, 37-40, 41-44, 45-48 The port speed should be the same (10GbE or 25GbE) across all ports in a 4-port group - If additional licenses are purchased, follow the 'Additional Node Ports activated with Licenses' section for instructions - If SSH is active, it will have to be re-enabled manually after 'erase startup-config' command has been executed and the switch rebooted
- On the switch, verify that the additional licensed ports appear after the RCF is applied:
show port all | exclude Detach
Show example
(cs2)# show port all | exclude Detach Admin Physical Physical Link Link LACP Actor Intf Type Mode Mode Status Status Trap Mode Timeout --------- ------ --------- ------------ ---------- ------ ------- ------ -------- 0/1 Enable Auto Down Enable Enable long 0/2 Enable Auto Down Enable Enable long 0/3 Enable Auto Down Enable Enable long 0/4 Enable Auto Down Enable Enable long 0/5 Enable Auto Down Enable Enable long 0/6 Enable Auto Down Enable Enable long 0/7 Enable Auto Down Enable Enable long 0/8 Enable Auto Down Enable Enable long 0/9 Enable Auto Down Enable Enable long 0/10 Enable Auto Down Enable Enable long 0/11 Enable Auto Down Enable Enable long 0/12 Enable Auto Down Enable Enable long 0/13 Enable Auto Down Enable Enable long 0/14 Enable Auto Down Enable Enable long 0/15 Enable Auto Down Enable Enable long 0/16 Enable Auto Down Enable Enable long 0/49 Enable 40G Full Down Enable Enable long 0/50 Enable 40G Full Down Enable Enable long 0/51 Enable 100G Full Down Enable Enable long 0/52 Enable 100G Full Down Enable Enable long 0/53 Enable 100G Full Down Enable Enable long 0/54 Enable 100G Full Down Enable Enable long 0/55 Enable 100G Full Down Enable Enable long 0/56 Enable 100G Full Down Enable Enable long
- Verify on the switch that your changes have been made:
show running-config
(cs2)# show running-config
- Save the running configuration so that it becomes the startup configuration when you reboot the switch:
write memory
(cs2)# write memory This operation may take a few minutes. Management interfaces will not be available during this time. Are you sure you want to save? (y/n) y Config file 'startup-config' created successfully. Configuration Saved!
- Reboot the switch and verify that the running configuration is correct:
reload
(cs2)# reload Are you sure you would like to reset the system? (y/n) y System will now restart!
- On cluster switch cs2, bring up the ports connected to the cluster ports of the nodes. For example, if ports 0/1 to 0/16 are connected to ONTAP nodes:
(cs2)> enable (cs2)# configure (cs2)(Config)# interface 0/1-0/16 (cs2)(Interface 0/1-0/16)# no shutdown (cs2)(Interface 0/1-0/16)# exit (cs2)(Config)#
- Verify the ports on switch cs2:
show interfaces status all | exclude Detach
Show example
(cs2)# show interfaces status all | exclude Detach Link Physical Physical Media Flow Port Name State Mode Status Type Control VLAN --------- ------------------- ------ ---------- ---------- ---------- ---------- ------ . . . 0/16 10/25GbE Node Port Down Auto Inactive Trunk 0/17 10/25GbE Node Port Down Auto Inactive Trunk 0/18 10/25GbE Node Port Up 25G Full 25G Full 25GBase-SR Inactive Trunk 0/19 10/25GbE Node Port Up 25G Full 25G Full 25GBase-SR Inactive Trunk . . . 0/50 40/100GbE Node Port Down Auto Inactive Trunk 0/51 40/100GbE Node Port Down Auto Inactive Trunk 0/52 40/100GbE Node Port Down Auto Inactive Trunk 0/53 40/100GbE Node Port Down Auto Inactive Trunk 0/54 40/100GbE Node Port Down Auto Inactive Trunk 0/55 Cluster ISL Port Up Auto 100G Full Copper Inactive Trunk 0/56 Cluster ISL Port Up Auto 100G Full Copper Inactive Trunk
- Verify the health of cluster ports on the cluster.
  - Verify that e0b ports are up and healthy across all nodes in the cluster:
network port show -role cluster
Show example
cluster1::*> network port show -role cluster Node: cluster1-01 Ignore Speed(Mbps) Health Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status --------- ------------ ---------------- ---- ---- ----------- -------- ----- e0a Cluster Cluster up 9000 auto/10000 healthy false e0b Cluster Cluster up 9000 auto/10000 healthy false Node: cluster1-02 Ignore Speed(Mbps) Health Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status --------- ------------ ---------------- ---- ---- ----------- -------- ----- e0a Cluster Cluster up 9000 auto/10000 healthy false e0b Cluster Cluster up 9000 auto/10000 healthy false Node: cluster1-03 Ignore Speed(Mbps) Health Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status --------- ------------ ---------------- ---- ---- ----------- -------- ----- e0a Cluster Cluster up 9000 auto/100000 healthy false e0b Cluster Cluster up 9000 auto/100000 healthy false Node: cluster1-04 Ignore Speed(Mbps) Health Health Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status --------- ------------ ---------------- ---- ---- ----------- -------- ----- e0a Cluster Cluster up 9000 auto/100000 healthy false e0b Cluster Cluster up 9000 auto/100000 healthy false
  - Verify the switch health from the cluster:
network device-discovery show -protocol cdp
Show example
cluster1::*> network device-discovery show -protocol cdp Node/ Local Discovered Protocol Port Device (LLDP: ChassisID) Interface Platform ----------- ------ ------------------------- ----------------- -------- cluster1-01/cdp e0a cs1 0/2 BES-53248 e0b cs2 0/2 BES-53248 cluster1-02/cdp e0a cs1 0/1 BES-53248 e0b cs2 0/1 BES-53248 cluster1-03/cdp e0a cs1 0/4 BES-53248 e0b cs2 0/4 BES-53248 cluster1-04/cdp e0a cs1 0/3 BES-53248 e0b cs2 0/3 BES-53248
- Verify that the cluster displays information for both cluster switches.
Beginning with ONTAP 9.8, use the command:
system switch ethernet show -is-monitoring-enabled-operational true
cluster1::*> system switch ethernet show -is-monitoring-enabled-operational true Switch Type Address Model --------------------------- ------------------ ---------------- --------------- cs1 cluster-network 10.228.143.200 BES-53248 Serial Number: QTWCU22510008 Is Monitored: true Reason: None Software Version: 3.10.0.3 Version Source: CDP/ISDP cs2 cluster-network 10.228.143.202 BES-53248 Serial Number: QTWCU22510009 Is Monitored: true Reason: None Software Version: 3.10.0.3 Version Source: CDP/ISDP cluster1::*>
For ONTAP 9.7 and earlier, use the command:
system cluster-switch show -is-monitoring-enabled-operational true
cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true Switch Type Address Model --------------------------- ------------------ ---------------- --------------- cs1 cluster-network 10.228.143.200 BES-53248 Serial Number: QTWCU22510008 Is Monitored: true Reason: None Software Version: 3.10.0.3 Version Source: CDP/ISDP cs2 cluster-network 10.228.143.202 BES-53248 Serial Number: QTWCU22510009 Is Monitored: true Reason: None Software Version: 3.10.0.3 Version Source: CDP/ISDP cluster1::*>
- On cluster switch cs1, shut down the ports connected to the cluster ports of the nodes. The following example uses ports 0/1 to 0/16, as in the earlier cs2 example:
(cs1)> enable (cs1)# configure (cs1)(Config)# interface 0/1-0/16 (cs1)(Interface 0/1-0/16)# shutdown
- Verify that the cluster LIFs have migrated to the ports hosted on switch cs2. This might take a few seconds.
network interface show -role cluster
Show example
cluster1::*> network interface show -role cluster Logical Status Network Current Current Is Vserver Interface Admin/Oper Address/Mask Node Port Home ----------- ------------------ ---------- ------------------ ------------------ -------- ---- Cluster cluster1-01_clus1 up/up 169.254.3.4/23 cluster1-01 e0a false cluster1-01_clus2 up/up 169.254.3.5/23 cluster1-01 e0b true cluster1-02_clus1 up/up 169.254.3.8/23 cluster1-02 e0a false cluster1-02_clus2 up/up 169.254.3.9/23 cluster1-02 e0b true cluster1-03_clus1 up/up 169.254.1.3/23 cluster1-03 e0a false cluster1-03_clus2 up/up 169.254.1.1/23 cluster1-03 e0b true cluster1-04_clus1 up/up 169.254.1.6/23 cluster1-04 e0a false cluster1-04_clus2 up/up 169.254.1.7/23 cluster1-04 e0b true cluster1::*>
- Verify that the cluster is healthy:
cluster show
Show example
cluster1::*> cluster show Node Health Eligibility Epsilon -------------------- -------- ------------- ------- cluster1-01 true true false cluster1-02 true true false cluster1-03 true true true cluster1-04 true true false
- Repeat steps 4 to 19 on switch cs1.
- Enable auto-revert on the cluster LIFs:
network interface modify -vserver Cluster -lif * -auto-revert true
- Reboot switch cs1. This triggers the cluster LIFs to revert to their home ports. You can ignore the “cluster ports down” events reported on the nodes while the switch reboots.
(cs1)# reload The system has unsaved changes. Would you like to save them now? (y/n) y Config file 'startup-config' created successfully. Configuration Saved! System will now restart!
Step 3: Verify the configuration
- On switch cs1, verify that the switch ports connected to the cluster ports are up:
show interfaces status all | exclude Detach
Show example
(cs1)# show interfaces status all | exclude Detach Link Physical Physical Media Flow Port Name State Mode Status Type Control VLAN --------- ------------------- ------ ---------- ---------- ---------- ---------- ------ . . . 0/16 10/25GbE Node Port Down Auto Inactive Trunk 0/17 10/25GbE Node Port Down Auto Inactive Trunk 0/18 10/25GbE Node Port Up 25G Full 25G Full 25GBase-SR Inactive Trunk 0/19 10/25GbE Node Port Up 25G Full 25G Full 25GBase-SR Inactive Trunk . . . 0/50 40/100GbE Node Port Down Auto Inactive Trunk 0/51 40/100GbE Node Port Down Auto Inactive Trunk 0/52 40/100GbE Node Port Down Auto Inactive Trunk 0/53 40/100GbE Node Port Down Auto Inactive Trunk 0/54 40/100GbE Node Port Down Auto Inactive Trunk 0/55 Cluster ISL Port Up Auto 100G Full Copper Inactive Trunk 0/56 Cluster ISL Port Up Auto 100G Full Copper Inactive Trunk
- Verify that the ISL between switches cs1 and cs2 is functional:
show port-channel 1/1
Show example
(cs1)# show port-channel 1/1 Local Interface................................ 1/1 Channel Name................................... Cluster-ISL Link State..................................... Up Admin Mode..................................... Enabled Type........................................... Dynamic Port-channel Min-links......................... 1 Load Balance Option............................ 7 (Enhanced hashing mode) Mbr Device/ Port Port Ports Timeout Speed Active ------- ------------- --------- ------- 0/55 actor/long Auto True partner/long 0/56 actor/long Auto True partner/long
- Verify that the cluster LIFs have reverted to their home ports:
network interface show -role cluster
Show example
cluster1::*> network interface show -role cluster Logical Status Network Current Current Is Vserver Interface Admin/Oper Address/Mask Node Port Home ----------- ------------------ ---------- ------------------ ------------------- ------- ---- Cluster cluster1-01_clus1 up/up 169.254.3.4/23 cluster1-01 e0a true cluster1-01_clus2 up/up 169.254.3.5/23 cluster1-01 e0b true cluster1-02_clus1 up/up 169.254.3.8/23 cluster1-02 e0a true cluster1-02_clus2 up/up 169.254.3.9/23 cluster1-02 e0b true cluster1-03_clus1 up/up 169.254.1.3/23 cluster1-03 e0a true cluster1-03_clus2 up/up 169.254.1.1/23 cluster1-03 e0b true cluster1-04_clus1 up/up 169.254.1.6/23 cluster1-04 e0a true cluster1-04_clus2 up/up 169.254.1.7/23 cluster1-04 e0b true
- Verify that the cluster is healthy:
cluster show
Show example
cluster1::*> cluster show Node Health Eligibility Epsilon -------------------- ------- ------------- ------- cluster1-01 true true false cluster1-02 true true false cluster1-03 true true true cluster1-04 true true false
- Verify the connectivity of the remote cluster interfaces.
You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
network interface check cluster-connectivity start
network interface check cluster-connectivity show
cluster1::*> network interface check cluster-connectivity start
NOTE: Wait several seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show Source Destination Packet Node Date LIF LIF Loss ------ -------------------------- ------------------- ------------------- ------- cluster1-01 3/5/2022 19:21:18 -06:00 cluster1-01_clus2 cluster1-02_clus1 none 3/5/2022 19:21:20 -06:00 cluster1-01_clus2 cluster1-02_clus2 none cluster1-02 3/5/2022 19:21:18 -06:00 cluster1-02_clus2 cluster1-02_clus1 none 3/5/2022 19:21:20 -06:00 cluster1-02_clus2 cluster1-02_clus2 none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:
cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local Host is cluster1-03 Getting addresses from network interface table... Cluster cluster1-03_clus1 169.254.1.3 cluster1-03 e0a Cluster cluster1-03_clus2 169.254.1.1 cluster1-03 e0b Cluster cluster1-04_clus1 169.254.1.6 cluster1-04 e0a Cluster cluster1-04_clus2 169.254.1.7 cluster1-04 e0b Cluster cluster1-01_clus1 169.254.3.4 cluster1-01 e0a Cluster cluster1-01_clus2 169.254.3.5 cluster1-01 e0b Cluster cluster1-02_clus1 169.254.3.8 cluster1-02 e0a Cluster cluster1-02_clus2 169.254.3.9 cluster1-02 e0b Local = 169.254.1.3 169.254.1.1 Remote = 169.254.1.6 169.254.1.7 169.254.3.4 169.254.3.5 169.254.3.8 169.254.3.9 Cluster Vserver Id = 4294967293 Ping status: ............ Basic connectivity succeeds on 12 path(s) Basic connectivity fails on 0 path(s) ................................................ Detected 9000 byte MTU on 12 path(s): Local 169.254.1.3 to Remote 169.254.1.6 Local 169.254.1.3 to Remote 169.254.1.7 Local 169.254.1.3 to Remote 169.254.3.4 Local 169.254.1.3 to Remote 169.254.3.5 Local 169.254.1.3 to Remote 169.254.3.8 Local 169.254.1.3 to Remote 169.254.3.9 Local 169.254.1.1 to Remote 169.254.1.6 Local 169.254.1.1 to Remote 169.254.1.7 Local 169.254.1.1 to Remote 169.254.3.4 Local 169.254.1.1 to Remote 169.254.3.5 Local 169.254.1.1 to Remote 169.254.3.8 Local 169.254.1.1 to Remote 169.254.3.9 Larger than PMTU communication succeeds on 12 path(s) RPC status: 6 paths up, 0 paths down (tcp check) 6 paths up, 0 paths down (udp check)
- Change the privilege level back to admin:
set -privilege admin
- If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message:
system node autosupport invoke -node * -type all -message MAINT=END