
Upgrade your Reference Configuration File (RCF)


Use this procedure to upgrade your RCF version when you already have a version of the RCF file installed on your operational switches.

Before you begin

Make sure you have the following:

  • A current backup of the switch configuration.

  • A fully functioning cluster (no errors in the logs or similar issues).

  • The current RCF.

  • If you are updating your RCF version, you need a boot configuration in the RCF that reflects the desired boot images.

    If you need to change the boot configuration to reflect the current boot images, you must do so before reapplying the RCF so that the correct version is instantiated on future reboots.

Note No operational inter-switch link (ISL) is needed during this procedure. This is by design because RCF version changes can affect ISL connectivity temporarily. To ensure non-disruptive cluster operations, the following procedure migrates all of the cluster LIFs to the operational partner switch while performing the steps on the target switch.
Caution Before installing a new switch software version and RCFs, you must erase the switch settings and perform basic configuration. You must be connected to the switch using the serial console or have preserved basic configuration information before erasing the switch settings.

Step 1: Prepare for the upgrade

  1. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=xh

    Where x is the duration of the maintenance window in hours.
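
    For example, the following command suppresses automatic case creation for a two-hour maintenance window (the duration shown is illustrative):

    cluster1::> system node autosupport invoke -node * -type all -message MAINT=2h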

  2. Change the privilege level to advanced, entering y when prompted to continue:

    set -privilege advanced

    The advanced prompt (*>) appears.
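
    A minimal example of the prompt change (the exact confirmation text can vary by ONTAP release):

    cluster1::> set -privilege advanced
    Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
    Do you want to continue? {y|n}: y
    cluster1::*>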

  3. Display the ports on each node that are connected to the switches:

    network device-discovery show
    cluster1::*> network device-discovery show
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID) Interface       Platform
    ----------- ------ ------------------------ --------------- ---------
    node1-01/cdp
                e3a    cs1                       Ethernet1/7     N9K-C9336C
                e3b    cs2                       Ethernet1/7     N9K-C9336C
    node1-02/cdp
                e3a    cs1                       Ethernet1/8     N9K-C9336C
                e3b    cs2                       Ethernet1/8     N9K-C9336C
    .
    .
    .
  4. Verify that all the storage ports are up with a healthy status:

    storage port show -port-type ENET
    cluster1::*> storage port show -port-type ENET
    
    
                                          Speed
    Node               Port Type  Mode    (Gb/s) State    Status
    ------------------ ---- ----- ------- ------ -------- -----------
    node1-01
                       e3a  ENET  -          100 enabled  online
                       e3b  ENET  -          100 enabled  online
                       e7a  ENET  -          100 enabled  online
                       e7b  ENET  -          100 enabled  online
    node1-02
                       e3a  ENET  -          100 enabled  online
                       e3b  ENET  -          100 enabled  online
                       e7a  ENET  -          100 enabled  online
                       e7b  ENET  -          100 enabled  online
    .
    .
    .
  5. Disable auto-revert on the cluster LIFs.

    network interface modify -vserver Cluster -lif * -auto-revert false
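
    If you want to confirm the change, one option is to display the auto-revert field for the cluster LIFs; the output below is abbreviated and illustrative:

    cluster1::*> network interface show -vserver Cluster -fields auto-revert
    vserver lif             auto-revert
    ------- --------------- -----------
    Cluster node1-01_clus1  false
    Cluster node1-01_clus2  false
    ...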

Step 2: Configure ports

  1. On switch cs1, shut down the switch ports that are connected to all of the node ports.

    cs1> enable
    cs1# configure
    cs1(config)# interface eth1/1/1-2,eth1/7-8
    cs1(config-if-range)# shutdown
    cs1(config-if-range)# exit
    cs1(config)# exit
    Caution Make sure to shut down all connected ports to avoid any network connection issues. See the Knowledge Base article Node out of quorum when migrating cluster LIF during switch OS upgrade for further details.
  2. Verify that the cluster LIFs have failed over to the ports hosted on switch cs2. This might take a few seconds.

    network interface show -role cluster
    cluster1::*> network interface show -role cluster
    
                Logical         Status     Network            Current     Current Is
    Vserver     Interface       Admin/Oper Address/Mask       Node        Port    Home
    ----------- --------------- ---------- ------------------ ----------- ------- ----
    Cluster
                node1-01_clus1  up/up      169.254.36.44/16   node1-01    e7a     true
                node1-01_clus2  up/up      169.254.7.5/16     node1-01    e7b     true
                node1-02_clus1  up/up      169.254.197.206/16 node1-02    e7a     true
                node1-02_clus2  up/up      169.254.195.186/16 node1-02    e7b     true
                node1-03_clus1  up/up      169.254.192.49/16  node1-03    e7a     true
                node1-03_clus2  up/up      169.254.182.76/16  node1-03    e7b     true
                node1-04_clus1  up/up      169.254.59.49/16   node1-04    e7a     true
                node1-04_clus2  up/up      169.254.62.244/16  node1-04    e7b     true
    
    8 entries were displayed.
  3. Verify that the cluster is healthy:

    cluster show

    cluster1::*> cluster show
    Node              Health  Eligibility   Epsilon
    ----------------- ------- ------------  -------
    node1-01          true    true          false
    node1-02          true    true          false
    node1-03          true    true          true
    node1-04          true    true          false
    
    4 entries were displayed.
  4. If you have not already done so, save a copy of the current switch configuration by copying the output of the following command to a text file:

    show running-config

    1. Record any custom additions that are present in the current running-config but not in the RCF file in use (such as an SNMP configuration for your organization).

    2. For NX-OS 10.2 and later, use the show diff running-config command to compare with the saved RCF file in the bootflash. Otherwise, use a third-party diff or compare tool.
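
    As an alternative to pasting the output into a text file, you can also save a copy of the running configuration directly to the bootflash for later comparison; the file name shown here is only an example:

    cs1# copy running-config bootflash:cs1_running_backup.cfg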

  5. Save basic configuration details to the write_erase.cfg file on the bootflash.

    Note Make sure to configure the following:

    • Username and password

    • Management IP address

    • Default gateway

    • Switch name

    cs1# show run | i "username admin password" > bootflash:write_erase.cfg

    cs1# show run | section "vrf context management" >> bootflash:write_erase.cfg

    cs1# show run | section "interface mgmt0" >> bootflash:write_erase.cfg

    cs1# show run | section "switchname" >> bootflash:write_erase.cfg

  6. When upgrading to RCF version 1.12 and later, run the following commands:

    cs1# echo "hardware access-list tcam region ing-racl 1024" >> bootflash:write_erase.cfg

    cs1# echo "hardware access-list tcam region egr-racl 1024" >> bootflash:write_erase.cfg

    cs1# echo "hardware access-list tcam region ing-l2-qos 1280 >> bootflash:write_erase.cfg

  7. Verify that the write_erase.cfg file is populated as expected:

    show file bootflash:write_erase.cfg
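
    The following output is only an illustration of what a populated file might look like; the exact contents depend on your configuration and RCF version:

    cs1# show file bootflash:write_erase.cfg
    username admin password 5 $5$<hash> role network-admin
    vrf context management
      ip route 0.0.0.0/0 10.10.10.1
    interface mgmt0
      vrf member management
      ip address 10.10.10.20/24
    switchname cs1
    hardware access-list tcam region ing-racl 1024
    hardware access-list tcam region egr-racl 1024
    hardware access-list tcam region ing-l2-qos 1280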

  8. Issue the write erase command to erase the current saved configuration:

    cs1# write erase

    Warning: This command will erase the startup-configuration.

    Do you wish to proceed anyway? (y/n) [n] y

  9. Copy the previously saved basic configuration into the startup configuration.

    cs1# copy bootflash:write_erase.cfg startup-config

  10. Reboot the switch:

    cs1# reload

    This command will reboot the system. (y/n)? [n] y

  11. After the management IP address is reachable again, log in to the switch through SSH.

    You might need to update the SSH known_hosts entries on your administration host, because the switch generates new SSH host keys after its configuration is erased.
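
    For example, on a Linux or macOS administration host you can remove the stale host key before reconnecting; the management IP address shown is hypothetical:

    admin-host$ ssh-keygen -R 10.10.10.20
    admin-host$ ssh admin@10.10.10.20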

  12. Copy the RCF to the bootflash of switch cs1 using one of the following transfer protocols: FTP, TFTP, SFTP, or SCP.

    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series NX-OS Command Reference guides.

    This example shows TFTP being used to copy an RCF to the bootflash on switch cs1:

    cs1# copy tftp: bootflash: vrf management
    Enter source filename: Nexus_9336C_RCF_v1.6-Storage.txt
    Enter hostname for the tftp server: 172.22.201.50
    Trying to connect to tftp server......Connection to Server Established.
    TFTP get operation was successful
    Copy complete, now saving to disk (please wait)...
  13. Apply the RCF previously downloaded to the bootflash.

    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series NX-OS Command Reference guides.

    This example shows the RCF file Nexus_9336C_RCF_v1.6-Storage.txt being installed on switch cs1:

    cs1# copy Nexus_9336C_RCF_v1.6-Storage.txt running-config echo-commands
    Caution Make sure to thoroughly read the Installation notes, Important Notes, and banner sections of your RCF. You must read and follow these instructions to ensure the proper configuration and operation of the switch.

  14. Verify that the RCF file is the correct newer version:

    show running-config

    When you check the output to verify you have the correct RCF, make sure that the following information is correct:

    • The RCF banner

    • The node and port settings

    • Customizations

      The output varies according to your site configuration. Check the port settings and refer to the release notes for any changes specific to the RCF that you have installed.
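
    For a quick check of the RCF banner, you can, for example, display only the banner rather than the full running configuration:

    cs1# show banner motd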

  15. Reapply any previous customizations to the switch configuration.

  16. After you verify the RCF versions, custom additions, and switch settings are correct, copy the running-config file to the startup-config file.

    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series NX-OS Command Reference guides.

    cs1# copy running-config startup-config

    [] 100% Copy complete

  17. Reboot switch cs1. You can ignore the “cluster switch health monitor” alerts and “cluster ports down” events reported on the nodes while the switch reboots.

    cs1# reload

    This command will reboot the system. (y/n)? [n] y

  18. Verify that all the storage ports are up with a healthy status:

    storage port show -port-type ENET
    cluster1::*> storage port show -port-type ENET
    
    
                                          Speed
    Node               Port Type  Mode    (Gb/s) State    Status
    ------------------ ---- ----- ------- ------ -------- -----------
    node1-01
                       e3a  ENET  -          100 enabled  online
                       e3b  ENET  -          100 enabled  online
                       e7a  ENET  -          100 enabled  online
                       e7b  ENET  -          100 enabled  online
    node1-02
                       e3a  ENET  -          100 enabled  online
                       e3b  ENET  -          100 enabled  online
                       e7a  ENET  -          100 enabled  online
                       e7b  ENET  -          100 enabled  online
    .
    .
    .
  19. Verify that the cluster is healthy:

    cluster show

    cluster1::*> cluster show
    Node              Health   Eligibility   Epsilon
    ----------------- -------- ------------- -------
    node1-01          true     true          false
    node1-02          true     true          false
    node1-03          true     true          true
    node1-04          true     true          false
    
    4 entries were displayed.
  20. Repeat steps 1 to 19 on switch cs2.

  21. Enable auto-revert on the cluster LIFs.

    network interface modify -vserver Cluster -lif * -auto-revert true

Step 3: Verify the cluster network configuration and cluster health

  1. Verify that the switch ports connected to the cluster ports are up.

    show interface brief
  2. Verify that the expected nodes are still connected:

    show cdp neighbors
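
    Abbreviated, illustrative output; the device IDs, platforms, and port assignments depend on your environment:

    cs1# show cdp neighbors

    Device-ID          Local Intrfce  Hldtme Capability  Platform        Port ID
    node1-01           Eth1/7         124    H           AFF-A400        e3a
    node1-02           Eth1/8         124    H           AFF-A400        e3a
    cs2                Eth1/35        175    R S I s     N9K-C9336C-FX2  Eth1/35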
  3. Verify that the cluster nodes are in their correct cluster VLANs using the following commands:

    show vlan brief
    show interface trunk
  4. Verify that the cluster LIFs have reverted to their home port:

    network interface show -role cluster

    If any cluster LIFs have not returned to their home ports, revert them manually from the local node:

    network interface revert -vserver <vserver_name> -lif <lif_name>
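
    If several LIFs are affected, you can also revert all of the cluster LIFs at once, for example:

    network interface revert -vserver Cluster -lif *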

  5. Verify that the cluster is healthy:

    cluster show
  6. Verify the connectivity of the remote cluster interfaces:

    1. You can use the network interface check cluster-connectivity show command to display the details of an accessibility check for cluster connectivity:

      network interface check cluster-connectivity show
    2. Alternatively, you can use the cluster ping-cluster -node <node-name> command to check the connectivity:

      cluster ping-cluster -node <node-name>
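
      For example, using one of the node names from this procedure (the node name is illustrative), a healthy cluster typically reports that larger-than-PMTU communication succeeds on all paths and that no paths are down:

      cluster1::*> cluster ping-cluster -node node1-01
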
What's next?

After you've upgraded your RCF, you can verify the SSH configuration.