Install the Reference Configuration File (RCF)
You install the Reference Configuration File (RCF) after setting up the Nexus 3132Q-V switches for the first time.
Before you begin, verify that you have the following:

- A current backup of the switch configuration.
- A fully functioning cluster (no errors in the logs or similar issues).
- The current RCF.
- A console connection to the switch, which is required when installing the RCF.
This procedure requires both ONTAP commands and Cisco Nexus 3000 Series switch commands; ONTAP commands are used unless otherwise indicated.
No operational inter-switch link (ISL) is needed during this procedure. This is by design because RCF version changes can affect ISL connectivity temporarily. To enable non-disruptive cluster operations, the following procedure migrates all of the cluster LIFs to the operational partner switch while performing the steps on the target switch.
Step 1: Install the RCF on the switches
1. Display the cluster ports on each node that are connected to the cluster switches:

   network device-discovery show

   cluster1::*> network device-discovery show
   Node/       Local  Discovered
   Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
   ----------- ------ ------------------------- ----------------- ------------
   cluster1-01/cdp
               e0a    cs1                       Ethernet1/7       N3K-C3132Q-V
               e0d    cs2                       Ethernet1/7       N3K-C3132Q-V
   cluster1-02/cdp
               e0a    cs1                       Ethernet1/8       N3K-C3132Q-V
               e0d    cs2                       Ethernet1/8       N3K-C3132Q-V
   cluster1-03/cdp
               e0a    cs1                       Ethernet1/1/1     N3K-C3132Q-V
               e0b    cs2                       Ethernet1/1/1     N3K-C3132Q-V
   cluster1-04/cdp
               e0a    cs1                       Ethernet1/1/2     N3K-C3132Q-V
               e0b    cs2                       Ethernet1/1/2     N3K-C3132Q-V

   cluster1::*>
2. Check the administrative and operational status of each cluster port.
   a. Verify that all the cluster ports are up with a healthy status:

      network port show -ipspace Cluster

      cluster1::*> network port show -ipspace Cluster

      Node: cluster1-01
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/100000 healthy  false
      e0d       Cluster      Cluster          up   9000 auto/100000 healthy  false

      Node: cluster1-02
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/100000 healthy  false
      e0d       Cluster      Cluster          up   9000 auto/100000 healthy  false
      8 entries were displayed.

      Node: cluster1-03
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
      e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false

      Node: cluster1-04
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
      e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false

      cluster1::*>
   b. Verify that all the cluster interfaces (LIFs) are on the home port:

      network interface show -vserver Cluster

      cluster1::*> network interface show -vserver Cluster
                  Logical            Status     Network            Current       Current Is
      Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
      ----------- ------------------ ---------- ------------------ ------------- ------- ----
      Cluster
                  cluster1-01_clus1  up/up      169.254.3.4/23     cluster1-01   e0a     true
                  cluster1-01_clus2  up/up      169.254.3.5/23     cluster1-01   e0d     true
                  cluster1-02_clus1  up/up      169.254.3.8/23     cluster1-02   e0a     true
                  cluster1-02_clus2  up/up      169.254.3.9/23     cluster1-02   e0d     true
                  cluster1-03_clus1  up/up      169.254.1.3/23     cluster1-03   e0a     true
                  cluster1-03_clus2  up/up      169.254.1.1/23     cluster1-03   e0b     true
                  cluster1-04_clus1  up/up      169.254.1.6/23     cluster1-04   e0a     true
                  cluster1-04_clus2  up/up      169.254.1.7/23     cluster1-04   e0b     true

      cluster1::*>
   c. Verify that the cluster displays information for both cluster switches:

      system cluster-switch show -is-monitoring-enabled-operational true

      cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true
      Switch                      Type               Address          Model
      --------------------------- ------------------ ---------------- ---------------
      cs1                         cluster-network    10.0.0.1         NX3132QV
           Serial Number: FOXXXXXXXGS
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 9.3(4)
          Version Source: CDP

      cs2                         cluster-network    10.0.0.2         NX3132QV
           Serial Number: FOXXXXXXXGD
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 9.3(4)
          Version Source: CDP

      2 entries were displayed.

      For ONTAP 9.8 and later, use the command system switch ethernet show -is-monitoring-enabled-operational true.
3. Disable auto-revert on the cluster LIFs:

   cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert false

   Make sure that auto-revert is disabled after running this command.
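   One way to confirm the setting is to list the auto-revert field for the cluster LIFs. This is a suggested check rather than part of the original procedure; the -fields filter is standard ONTAP show syntax:

   cluster1::*> network interface show -vserver Cluster -fields auto-revert

   Every cluster LIF should report auto-revert as false before you continue.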
4. On cluster switch cs2, shut down the ports connected to the cluster ports of the nodes:

   cs2> enable
   cs2# configure
   cs2(config)# interface eth1/1/1-2,eth1/7-8
   cs2(config-if-range)# shutdown
   cs2(config-if-range)# exit
   cs2# exit

   The number of ports displayed varies based on the number of nodes in the cluster.
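   If you want to confirm on cs2 that the node-facing ports are down before checking from ONTAP, you can reuse the show interface brief command that appears later in this procedure. This is a suggested spot check, assuming the port names from the example above:

   cs2# show interface brief

   The ports that you shut down should now be listed as down.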
5. Verify that the cluster ports have failed over to the ports hosted on cluster switch cs1. This might take a few seconds.

   network interface show -vserver Cluster

   cluster1::*> network interface show -vserver Cluster
               Logical            Status     Network            Current       Current Is
   Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
   ----------- ------------------ ---------- ------------------ ------------- ------- ----
   Cluster
               cluster1-01_clus1  up/up      169.254.3.4/23     cluster1-01   e0a     true
               cluster1-01_clus2  up/up      169.254.3.5/23     cluster1-01   e0a     false
               cluster1-02_clus1  up/up      169.254.3.8/23     cluster1-02   e0a     true
               cluster1-02_clus2  up/up      169.254.3.9/23     cluster1-02   e0a     false
               cluster1-03_clus1  up/up      169.254.1.3/23     cluster1-03   e0a     true
               cluster1-03_clus2  up/up      169.254.1.1/23     cluster1-03   e0a     false
               cluster1-04_clus1  up/up      169.254.1.6/23     cluster1-04   e0a     true
               cluster1-04_clus2  up/up      169.254.1.7/23     cluster1-04   e0a     false

   cluster1::*>
6. Verify that the cluster is healthy:

   cluster show

   cluster1::*> cluster show
   Node                 Health  Eligibility   Epsilon
   -------------------- ------- ------------  -------
   cluster1-01          true    true          false
   cluster1-02          true    true          false
   cluster1-03          true    true          true
   cluster1-04          true    true          false

   cluster1::*>
7. If you have not already done so, save a copy of the current switch configuration by copying the output of the following command to a text file:

   show running-config
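   As an alternative to copying console output into a text file by hand, you can also save the running configuration to a file on the switch. This is a suggested option rather than part of the original procedure, and the file name shown is only an example:

   cs2# copy running-config bootflash:cs2_running_backup.cfg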
8. Record any custom additions in the current running-config that are not part of the RCF file in use.

   Make sure that you configure the following:

   * Username and password
   * Management IP address
   * Default gateway
   * Switch name
9. Save basic configuration details to the write_erase.cfg file on the bootflash.

   When upgrading or applying a new RCF, you must erase the switch settings and perform basic configuration. You must be connected to the switch serial console port to set up the switch again.

   cs2# show run | section "switchname" > bootflash:write_erase.cfg
   cs2# show run | section "hostname" >> bootflash:write_erase.cfg
   cs2# show run | i "username admin password" >> bootflash:write_erase.cfg
   cs2# show run | section "vrf context management" >> bootflash:write_erase.cfg
   cs2# show run | section "interface mgmt0" >> bootflash:write_erase.cfg
10. For RCF version 1.12 and later, run the following commands:

    cs2# echo "hardware access-list tcam region vpc-convergence 256" >> bootflash:write_erase.cfg
    cs2# echo "hardware access-list tcam region racl 256" >> bootflash:write_erase.cfg
    cs2# echo "hardware access-list tcam region e-racl 256" >> bootflash:write_erase.cfg
    cs2# echo "hardware access-list tcam region qos 256" >> bootflash:write_erase.cfg

    See the Knowledge Base article How to clear configuration on a Cisco interconnect switch while retaining remote connectivity for further details.
11. Verify that the write_erase.cfg file is populated as expected:

    show file bootflash:write_erase.cfg
12. Issue the write erase command to erase the current saved configuration:

    cs2# write erase
    Warning: This command will erase the startup-configuration.
    Do you wish to proceed anyway? (y/n) [n] y
13. Copy the previously saved basic configuration into the startup configuration:

    cs2# copy bootflash:write_erase.cfg startup-config
14. Reboot the switch:

    cs2# reload
    This command will reboot the system. (y/n)? [n] y
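    After the switch finishes booting, you can spot-check from the serial console that the basic settings saved in write_erase.cfg were restored. This is a suggested check rather than a required step, and it reuses filters that already appear earlier in this procedure:

    cs2# show hostname
    cs2# show run | i "username admin password"
    cs2# show run | section "interface mgmt0"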
15. Repeat Steps 7 to 14 on switch cs1.

16. Connect the cluster ports of all nodes in the ONTAP cluster to switches cs1 and cs2.
Step 2: Verify the switch connections
1. Verify that the switch ports connected to the cluster ports are up:

   show interface brief | grep up

   cs1# show interface brief | grep up
   .
   .
   Eth1/1/1      1       eth  access up      none                    10G(D) --
   Eth1/1/2      1       eth  access up      none                    10G(D) --
   Eth1/7        1       eth  trunk  up      none                   100G(D) --
   Eth1/8        1       eth  trunk  up      none                   100G(D) --
   .
   .
2. Verify that the ISL between cs1 and cs2 is functional:

   show port-channel summary

   cs1# show port-channel summary
   Flags:  D - Down        P - Up in port-channel (members)
           I - Individual  H - Hot-standby (LACP only)
           s - Suspended   r - Module-removed
           b - BFD Session Wait
           S - Switched    R - Routed
           U - Up (port-channel)
           p - Up in delay-lacp mode (member)
           M - Not in use. Min-links not met
   --------------------------------------------------------------------------------
   Group Port-       Type     Protocol  Member Ports
         Channel
   --------------------------------------------------------------------------------
   1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)

   cs1#
3. Verify that the cluster LIFs have reverted to their home ports:

   network interface show -vserver Cluster

   cluster1::*> network interface show -vserver Cluster
               Logical            Status     Network            Current       Current Is
   Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
   ----------- ------------------ ---------- ------------------ ------------- ------- ----
   Cluster
               cluster1-01_clus1  up/up      169.254.3.4/23     cluster1-01   e0a     true
               cluster1-01_clus2  up/up      169.254.3.5/23     cluster1-01   e0d     true
               cluster1-02_clus1  up/up      169.254.3.8/23     cluster1-02   e0a     true
               cluster1-02_clus2  up/up      169.254.3.9/23     cluster1-02   e0d     true
               cluster1-03_clus1  up/up      169.254.1.3/23     cluster1-03   e0a     true
               cluster1-03_clus2  up/up      169.254.1.1/23     cluster1-03   e0b     true
               cluster1-04_clus1  up/up      169.254.1.6/23     cluster1-04   e0a     true
               cluster1-04_clus2  up/up      169.254.1.7/23     cluster1-04   e0b     true

   cluster1::*>
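   If any cluster LIFs are still not on their home ports, recall that auto-revert was disabled earlier in this procedure. The following commands re-enable auto-revert and manually revert the LIFs; they are a suggested recovery step rather than part of the original procedure:

   cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert true
   cluster1::*> network interface revert -vserver Cluster -lif *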
4. Verify that the cluster is healthy:

   cluster show

   cluster1::*> cluster show
   Node                 Health  Eligibility   Epsilon
   -------------------- ------- ------------  -------
   cluster1-01          true    true          false
   cluster1-02          true    true          false
   cluster1-03          true    true          true
   cluster1-04          true    true          false

   cluster1::*>
Step 3: Set up your ONTAP cluster
NetApp recommends that you use System Manager to set up new clusters.
System Manager provides a simple and easy workflow for cluster setup and configuration, including assigning a node management IP address, initializing the cluster, creating a local tier, configuring protocols, and provisioning initial storage.
Refer to Configure ONTAP on a new cluster with System Manager for setup instructions.
After you've installed the RCF, you can verify the SSH configuration.
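As a quick check of SSH access, you can open an SSH session to a switch and confirm that the SSH server is enabled. This is only a suggested verification; the address shown is the example management IP address used earlier in this procedure:

ssh admin@10.0.0.2
cs2# show ssh server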