Install the Reference Configuration File (RCF)
You can install the RCF after setting up the Nexus 92300YC switch for the first time. You can also use this procedure to upgrade your RCF version.
The examples in this procedure use the following switch and node nomenclature:
- The names of the two Cisco switches are cs1 and cs2.
- The node names are node1 and node2.
- The cluster LIF names are node1_clus1, node1_clus2, node2_clus1, and node2_clus2.
- The cluster1::*> prompt indicates the name of the cluster.
1. Display the cluster ports on each node that are connected to the cluster switches:
network device-discovery show
cluster1::*> *network device-discovery show*
Node/       Local  Discovered
Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
----------- ------ ------------------------- ----------------- ------------
node1/cdp   e0a    cs1                       Ethernet1/1/1     N9K-C92300YC
            e0b    cs2                       Ethernet1/1/1     N9K-C92300YC
node2/cdp   e0a    cs1                       Ethernet1/1/2     N9K-C92300YC
            e0b    cs2                       Ethernet1/1/2     N9K-C92300YC
cluster1::*>
2. Check the administrative and operational status of each cluster port.
a. Verify that all the cluster ports are up with a healthy status:
network port show -ipspace Cluster
cluster1::*> *network port show -ipspace Cluster*

Node: node1
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0c       Cluster      Cluster          up   9000 auto/100000 healthy  false
e0d       Cluster      Cluster          up   9000 auto/100000 healthy  false

Node: node2
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0c       Cluster      Cluster          up   9000 auto/100000 healthy  false
e0d       Cluster      Cluster          up   9000 auto/100000 healthy  false
cluster1::*>
b. Verify that all the cluster interfaces (LIFs) are on the home port:
network interface show -vserver Cluster
cluster1::*> *network interface show -vserver Cluster*
            Logical            Status     Network            Current       Current Is
Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
----------- ------------------ ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1        up/up      169.254.3.4/23     node1         e0c     true
            node1_clus2        up/up      169.254.3.5/23     node1         e0d     true
            node2_clus1        up/up      169.254.3.8/23     node2         e0c     true
            node2_clus2        up/up      169.254.3.9/23     node2         e0d     true
cluster1::*>
c. Verify that the cluster displays information for both cluster switches:
system cluster-switch show -is-monitoring-enabled-operational true
cluster1::*> *system cluster-switch show -is-monitoring-enabled-operational true*
Switch                      Type               Address          Model
--------------------------- ------------------ ---------------- ---------------
cs1                         cluster-network    10.233.205.92    N9K-C92300YC
     Serial Number: FOXXXXXXXGS
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                    9.3(4)
    Version Source: CDP

cs2                         cluster-network    10.233.205.93    N9K-C92300YC
     Serial Number: FOXXXXXXXGD
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                    9.3(4)
    Version Source: CDP

2 entries were displayed.
3. Disable auto-revert on the cluster LIFs.
cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert false
4. On cluster switch cs2, shut down the ports connected to the cluster ports of the nodes.
cs2(config)# interface e1/1-64
cs2(config-if-range)# shutdown
5. Verify that the cluster LIFs have migrated to the ports hosted on cluster switch cs1. This might take a few seconds.
network interface show -vserver Cluster
cluster1::*> *network interface show -vserver Cluster*
            Logical            Status     Network            Current       Current Is
Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
----------- ------------------ ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1        up/up      169.254.3.4/23     node1         e0c     true
            node1_clus2        up/up      169.254.3.5/23     node1         e0c     false
            node2_clus1        up/up      169.254.3.8/23     node2         e0c     true
            node2_clus2        up/up      169.254.3.9/23     node2         e0c     false
cluster1::*>
6. Verify that the cluster is healthy:
cluster show
cluster1::*> *cluster show*
Node           Health  Eligibility  Epsilon
-------------- ------- ------------ -------
node1          true    true         false
node2          true    true         false
cluster1::*>
7. If you have not already done so, save a copy of the current switch configuration by copying the output of the following command to a text file:
show running-config
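In addition to the text file, you can keep a backup copy of the configuration on the switch itself. The following is a minimal sketch using the standard NX-OS copy command; the backup filename is illustrative:
cs2# copy running-config bootflash:cs2_config_backup.cfg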
8. Clean the configuration on switch cs2 and perform a basic setup.
When updating or applying a new RCF, you must erase the switch settings and perform basic configuration. You must be connected to the switch serial console port to set up the switch again.
a. Clean the configuration:
(cs2)# write erase
Warning: This command will erase the startup-configuration.
Do you wish to proceed anyway? (y/n)  [n] y
b. Perform a reboot of the switch:
(cs2)# reload
Are you sure you would like to reset the system? (y/n) y
9. Copy the RCF to the bootflash of switch cs2 using one of the following transfer protocols: FTP, TFTP, SFTP, or SCP. For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series Switches guides.
This example shows TFTP being used to copy an RCF to the bootflash on switch cs2:
cs2# copy tftp: bootflash: vrf management
Enter source filename: /code/Nexus_92300YC_RCF_v1.0.2.txt
Enter hostname for the tftp server: 172.19.2.1
Enter username: user1
Outbound-ReKey for 172.19.2.1:22
Inbound-ReKey for 172.19.2.1:22
user1@172.19.2.1's password:
tftp> progress
Progress meter enabled
tftp> get /code/Nexus_92300YC_RCF_v1.0.2.txt /bootflash/nxos.9.2.2.bin
/code/Nexus_92300YC_R                 100% 9687   530.2KB/s   00:00
tftp> exit
Copy complete, now saving to disk (please wait)...
Copy complete.
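Alternatively, if you prefer SCP, a single command of the following form copies the RCF in one step. This is a sketch only; the server address, path, and username simply reuse the values from the TFTP example and are placeholders for your environment:
cs2# copy scp://user1@172.19.2.1/code/Nexus_92300YC_RCF_v1.0.2.txt bootflash: vrf management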
10. Apply the RCF previously downloaded to the bootflash.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series Switches guides.
This example shows the RCF file Nexus_92300YC_RCF_v1.0.2.txt being installed on switch cs2:
cs2# copy Nexus_92300YC_RCF_v1.0.2.txt running-config echo-commands
Disabling ssh: as its enabled right now:
 generating ecdsa key(521 bits)......
generated ecdsa key
Enabling ssh: as it has been disabled
 this command enables edge port type (portfast) by default on all interfaces. You
 should now disable edge port type (portfast) explicitly on switched ports leading to
 hubs, switches and bridges as they may create temporary bridging loops.
 Edge port type (portfast) should only be enabled on ports connected to a single
 host. Connecting hubs, concentrators, switches, bridges, etc...  to this
 interface when edge port type (portfast) is enabled, can cause temporary bridging loops.
 Use with CAUTION
Edge Port Type (Portfast) has been configured on Ethernet1/1 but will only
 have effect when the interface is in a non-trunking mode.
...
Copy complete, now saving to disk (please wait)...
Copy complete.
11. Verify on the switch that the RCF has been merged successfully:
show running-config
cs2# show running-config

!Command: show running-config
!Running configuration last done at: Wed Apr 10 06:32:27 2019
!Time: Wed Apr 10 06:36:00 2019

version 9.2(2) Bios:version 05.33
switchname cs2
vdc cs2 id 1
  limit-resource vlan minimum 16 maximum 4094
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 511
  limit-resource u4route-mem minimum 248 maximum 248
  limit-resource u6route-mem minimum 96 maximum 96
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8

feature lacp

no password strength-check
username admin password 5 $5$HY9Kk3F9$YdCZ8iQJ1RtoiEFa0sKP5IO/LNG1k9C4lSJfi5kesl6  role network-admin
ssh key ecdsa 521

banner motd #
*
*
*   Nexus 92300YC Reference Configuration File (RCF) v1.0.2 (10-19-2018)
*
*
*   Ports 1/1 - 1/48: 10GbE Intra-Cluster Node Ports
*
*   Ports 1/49 - 1/64: 40/100GbE Intra-Cluster Node Ports
*
*   Ports 1/65 - 1/66: 40/100GbE Intra-Cluster ISL Ports
*
*
When applying the RCF for the first time, the ERROR: Failed to write VSH commands message is expected and can be ignored.
12. Verify that the RCF file is the correct newer version:
show running-config
When you check the output to verify you have the correct RCF, make sure that the following information is correct:
- The RCF banner
- The node and port settings
- Customizations
The output varies according to your site configuration. Check the port settings and refer to the release notes for any changes specific to the RCF that you have installed.
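For a quick check of the RCF banner and its version string without paging through the entire configuration, you can also display the configured message-of-the-day banner. This is a standard NX-OS command shown here as a convenience, not part of the documented procedure:
cs2# show banner motd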
13. Reapply any previous customizations to the switch configuration. Refer to Review cabling and configuration considerations for details of any further changes required.
14. After you verify the RCF versions and switch settings are correct, copy the running-config file to the startup-config file.
For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series Switches guides.
cs2# copy running-config startup-config
[] 100% Copy complete
15. Reboot switch cs2. You can ignore the "cluster ports down" events reported on the nodes while the switch reboots.
cs2# reload
This command will reboot the system. (y/n)?  [n] y
16. Verify the health of the cluster ports on the cluster.
a. Verify that e0d ports are up and healthy across all nodes in the cluster:
network port show -ipspace Cluster
cluster1::*> *network port show -ipspace Cluster*

Node: node1
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false

Node: node2
                                                                       Ignore
                                                  Speed(Mbps) Health   Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
--------- ------------ ---------------- ---- ---- ----------- -------- ------
e0a       Cluster      Cluster          up   9000 auto/10000  healthy  false
e0b       Cluster      Cluster          up   9000 auto/10000  healthy  false
b. Verify the switch health from the cluster (this might not show switch cs2, since LIFs are not homed on e0d).
cluster1::*> *network device-discovery show -protocol cdp*
Node/       Local  Discovered
Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
----------- ------ ------------------------- ----------------- ------------
node1/cdp   e0a    cs1                       Ethernet1/1       N9K-C92300YC
            e0b    cs2                       Ethernet1/1       N9K-C92300YC
node2/cdp   e0a    cs1                       Ethernet1/2       N9K-C92300YC
            e0b    cs2                       Ethernet1/2       N9K-C92300YC

cluster1::*> *system cluster-switch show -is-monitoring-enabled-operational true*
Switch                      Type               Address          Model
--------------------------- ------------------ ---------------- ------------
cs1                         cluster-network    10.233.205.90    N9K-C92300YC
     Serial Number: FOXXXXXXXGD
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                    9.3(4)
    Version Source: CDP

cs2                         cluster-network    10.233.205.91    N9K-C92300YC
     Serial Number: FOXXXXXXXGS
      Is Monitored: true
            Reason: None
  Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                    9.3(4)
    Version Source: CDP

2 entries were displayed.
You might observe the following output on the cs1 switch console, depending on the RCF version previously loaded on the switch:
2020 Nov 17 16:07:18 cs1 %$ VDC-1 %$ %STP-2-UNBLOCK_CONSIST_PORT: Unblocking port port-channel1 on VLAN0092. Port consistency restored.
2020 Nov 17 16:07:23 cs1 %$ VDC-1 %$ %STP-2-BLOCK_PVID_PEER: Blocking port-channel1 on VLAN0001. Inconsistent peer vlan.
2020 Nov 17 16:07:23 cs1 %$ VDC-1 %$ %STP-2-BLOCK_PVID_LOCAL: Blocking port-channel1 on VLAN0092. Inconsistent local vlan.
17. On cluster switch cs1, shut down the ports connected to the cluster ports of the nodes.
The following example uses the interface information from the example output in step 1:
cs1(config)# interface e1/1-64
cs1(config-if-range)# shutdown
18. Verify that the cluster LIFs have migrated to the ports hosted on switch cs2. This might take a few seconds.
network interface show -vserver Cluster
cluster1::*> *network interface show -vserver Cluster*
            Logical            Status     Network            Current       Current Is
Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
----------- ------------------ ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1        up/up      169.254.3.4/23     node1         e0d     false
            node1_clus2        up/up      169.254.3.5/23     node1         e0d     true
            node2_clus1        up/up      169.254.3.8/23     node2         e0d     false
            node2_clus2        up/up      169.254.3.9/23     node2         e0d     true
cluster1::*>
19. Verify that the cluster is healthy:
cluster show
cluster1::*> *cluster show*
Node           Health  Eligibility  Epsilon
-------------- ------- ------------ -------
node1          true    true         false
node2          true    true         false
cluster1::*>
20. Repeat Steps 7 to 14 on switch cs1.
21. Enable auto-revert on the cluster LIFs.
cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert True
22. Reboot switch cs1 to trigger the cluster LIFs to revert to their home ports. You can ignore the "cluster ports down" events reported on the nodes while the switch reboots.
cs1# reload
This command will reboot the system. (y/n)?  [n] y
23. Verify that the switch ports connected to the cluster ports are up.
cs1# show interface brief | grep up
.
.
Ethernet1/1         1       eth  access up      none                    10G(D) --
Ethernet1/2         1       eth  access up      none                    10G(D) --
Ethernet1/3         1       eth  trunk  up      none                   100G(D) --
Ethernet1/4         1       eth  trunk  up      none                   100G(D) --
.
.
24. Verify that the ISL between cs1 and cs2 is functional:
show port-channel summary
cs1# *show port-channel summary*
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        b - BFD Session Wait
        S - Switched    R - Routed
        U - Up (port-channel)
        p - Up in delay-lacp mode (member)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/65(P)   Eth1/66(P)
cs1#
25. Verify that the cluster LIFs have reverted to their home port:
network interface show -vserver Cluster
cluster1::*> *network interface show -vserver Cluster*
            Logical            Status     Network            Current       Current Is
Vserver     Interface          Admin/Oper Address/Mask       Node          Port    Home
----------- ------------------ ---------- ------------------ ------------- ------- ----
Cluster
            node1_clus1        up/up      169.254.3.4/23     node1         e0d     true
            node1_clus2        up/up      169.254.3.5/23     node1         e0d     true
            node2_clus1        up/up      169.254.3.8/23     node2         e0d     true
            node2_clus2        up/up      169.254.3.9/23     node2         e0d     true
cluster1::*>
26. Verify that the cluster is healthy:
cluster show
cluster1::*> *cluster show*
Node           Health  Eligibility  Epsilon
-------------- ------- ------------ -------
node1          true    true         false
node2          true    true         false
27. Verify the connectivity of the remote cluster interfaces.
You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:
network interface check cluster-connectivity start and network interface check cluster-connectivity show
cluster1::*> network interface check cluster-connectivity start
NOTE: Wait for a number of seconds before running the show command to display the details.
cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
node1
       3/5/2022 19:21:18 -06:00   node1_clus2      node2_clus1      none
       3/5/2022 19:21:20 -06:00   node1_clus2      node2_clus2      none
node2
       3/5/2022 19:21:18 -06:00   node2_clus2      node1_clus1      none
       3/5/2022 19:21:20 -06:00   node2_clus2      node1_clus2      none
For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:
cluster ping-cluster -node <name>
cluster1::*> cluster ping-cluster -node local
Host is node1
Getting addresses from network interface table...
Cluster node1_clus1 169.254.3.4 node1     e0a
Cluster node1_clus2 169.254.3.5 node1     e0b
Cluster node2_clus1 169.254.3.8 node2     e0a
Cluster node2_clus2 169.254.3.9 node2     e0b
Local = 169.254.1.3 169.254.1.1
Remote = 169.254.1.6 169.254.1.7 169.254.3.4 169.254.3.5 169.254.3.8 169.254.3.9
Cluster Vserver Id = 4294967293
Ping status:
............
Basic connectivity succeeds on 12 path(s)
Basic connectivity fails on 0 path(s)
................................................
Detected 9000 byte MTU on 12 path(s):
    Local 169.254.1.3 to Remote 169.254.1.6
    Local 169.254.1.3 to Remote 169.254.1.7
    Local 169.254.1.3 to Remote 169.254.3.4
    Local 169.254.1.3 to Remote 169.254.3.5
    Local 169.254.1.3 to Remote 169.254.3.8
    Local 169.254.1.3 to Remote 169.254.3.9
    Local 169.254.1.1 to Remote 169.254.1.6
    Local 169.254.1.1 to Remote 169.254.1.7
    Local 169.254.1.1 to Remote 169.254.3.4
    Local 169.254.1.1 to Remote 169.254.3.5
    Local 169.254.1.1 to Remote 169.254.3.8
    Local 169.254.1.1 to Remote 169.254.3.9
Larger than PMTU communication succeeds on 12 path(s)
RPC status:
6 paths up, 0 paths down (tcp check)
6 paths up, 0 paths down (udp check)