
Install the Reference Configuration File (RCF)


You can install the RCF after setting up the Nexus 92300YC switch for the first time. You can also use this procedure to upgrade your RCF version.

About this task

The examples in this procedure use the following switch and node nomenclature:

  • The names of the two Cisco switches are cs1 and cs2.

  • The node names are node1 and node2.

  • The cluster LIF names are node1_clus1, node1_clus2, node2_clus1, and node2_clus2.

  • The cluster1::*> prompt indicates the name of the cluster.

Note
  • This procedure requires the use of both ONTAP commands and commands for Cisco Nexus 9000 Series switches; ONTAP commands are used unless otherwise indicated.

  • Before you perform this procedure, make sure that you have a current backup of the switch configuration.

  • No operational inter-switch link (ISL) is needed during this procedure. This is by design because RCF version changes can affect ISL connectivity temporarily. To ensure non-disruptive cluster operations, the following procedure migrates all of the cluster LIFs to the operational partner switch while performing the steps on the target switch.

Steps
  1. Display the cluster ports on each node that are connected to the cluster switches: network device-discovery show

    cluster1::*> *network device-discovery show*
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ------------
    node1/cdp
                e0a    cs1                       Ethernet1/1/1     N9K-C92300YC
                e0b    cs2                       Ethernet1/1/1     N9K-C92300YC
    node2/cdp
                e0a    cs1                       Ethernet1/1/2     N9K-C92300YC
                e0b    cs2                       Ethernet1/1/2     N9K-C92300YC
    cluster1::*>
  2. Check the administrative and operational status of each cluster port.

    1. Verify that all the cluster ports are up with a healthy status: network port show -ipspace Cluster

      cluster1::*> *network port show -ipspace Cluster*
      
      Node: node1
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0c       Cluster      Cluster          up   9000  auto/100000 healthy false
      e0d       Cluster      Cluster          up   9000  auto/100000 healthy false
      
      Node: node2
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0c       Cluster      Cluster          up   9000  auto/100000 healthy false
      e0d       Cluster      Cluster          up   9000  auto/100000 healthy false
      cluster1::*>
    2. Verify that all the cluster interfaces (LIFs) are on the home port: network interface show -vserver Cluster

      cluster1::*> *network interface show -vserver Cluster*
                  Logical            Status     Network           Current      Current Is
      Vserver     Interface          Admin/Oper Address/Mask      Node         Port    Home
      ----------- ------------------ ---------- ----------------- ------------ ------- ----
      Cluster
                  node1_clus1        up/up      169.254.3.4/23    node1        e0c     true
                  node1_clus2        up/up      169.254.3.5/23    node1        e0d     true
                  node2_clus1        up/up      169.254.3.8/23    node2        e0c     true
                  node2_clus2        up/up      169.254.3.9/23    node2        e0d     true
      cluster1::*>
    3. Verify that the cluster displays information for both cluster switches: system cluster-switch show -is-monitoring-enabled-operational true

      cluster1::*> *system cluster-switch show -is-monitoring-enabled-operational true*
      Switch                      Type               Address          Model
      --------------------------- ------------------ ---------------- ---------------
      cs1                         cluster-network    10.233.205.92    N9K-C92300YC
           Serial Number: FOXXXXXXXGS
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          9.3(4)
          Version Source: CDP
      
      cs2                         cluster-network    10.233.205.93    N9K-C92300YC
           Serial Number: FOXXXXXXXGD
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          9.3(4)
          Version Source: CDP
      
      2 entries were displayed.
  3. Disable auto-revert on the cluster LIFs.

    cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert false
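
    You can confirm that the change took effect by displaying the auto-revert field for the cluster LIFs (a quick optional check; output omitted here):

    cluster1::*> network interface show -vserver Cluster -fields auto-revert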
  4. On cluster switch cs2, shut down the ports connected to the cluster ports of the nodes.

    cs2(config)# interface e1/1-64
    cs2(config-if-range)# shutdown
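
    Optionally, confirm from the switch that the node-facing ports are now administratively down before proceeding (NX-OS accepts show commands from configuration mode; output omitted here):

    cs2(config-if-range)# show interface brief | grep down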
  5. Verify that the cluster LIFs have migrated to the ports hosted on cluster switch cs1. This might take a few seconds: network interface show -vserver Cluster

    cluster1::*> *network interface show -vserver Cluster*
                Logical           Status     Network            Current       Current Is
    Vserver     Interface         Admin/Oper Address/Mask       Node          Port    Home
    ----------- ----------------- ---------- ------------------ ------------- ------- ----
    Cluster
                node1_clus1       up/up      169.254.3.4/23     node1         e0c     true
                node1_clus2       up/up      169.254.3.5/23     node1         e0c     false
                node2_clus1       up/up      169.254.3.8/23     node2         e0c     true
                node2_clus2       up/up      169.254.3.9/23     node2         e0c     false
    cluster1::*>
  6. Verify that the cluster is healthy: cluster show

    cluster1::*> *cluster show*
    Node           Health  Eligibility   Epsilon
    -------------- ------- ------------  -------
    node1          true    true          false
    node2          true    true          false
    cluster1::*>
  7. If you have not already done so, save a copy of the current switch configuration by copying the output of the following command to a text file:

    show running-config
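
    As an alternative to copying the output into a text file, you can save a backup directly on the switch (the file name shown here is illustrative):

    cs2# copy running-config bootflash:cs2_config_backup.cfg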

  8. Clean the configuration on switch cs2 and perform a basic setup.

    Caution When updating or applying a new RCF, you must erase the switch settings and perform basic configuration. You must be connected to the switch serial console port to set up the switch again.
    1. Clean the configuration:

      (cs2)# write erase
      
      Warning: This command will erase the startup-configuration.
      
      Do you wish to proceed anyway? (y/n)  [n]  y
    2. Perform a reboot of the switch:

      (cs2)# reload
      
      Are you sure you would like to reset the system? (y/n) y
  9. Copy the RCF to the bootflash of switch cs2 using one of the following transfer protocols: FTP, TFTP, SFTP, or SCP. For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series Switches guides.

    This example shows TFTP being used to copy an RCF to the bootflash on switch cs2:

    cs2# copy tftp: bootflash: vrf management
    Enter source filename: /code/Nexus_92300YC_RCF_v1.0.2.txt
    Enter hostname for the tftp server: 172.19.2.1
    Enter username: user1
    
    Outbound-ReKey for 172.19.2.1:22
    Inbound-ReKey for 172.19.2.1:22
    user1@172.19.2.1's password:
    tftp> progress
    Progress meter enabled
    tftp> get /code/Nexus_92300YC_RCF_v1.0.2.txt /bootflash/Nexus_92300YC_RCF_v1.0.2.txt
    /code/Nexus_92300YC_R  100% 9687   530.2KB/s   00:00
    tftp> exit
    Copy complete, now saving to disk (please wait)...
    Copy complete.
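
    If you prefer a secure transfer, the same copy can be performed over SFTP in a single command (a sketch using the same illustrative server, path, and user name as above; the switch prompts for the password):

    cs2# copy sftp://user1@172.19.2.1/code/Nexus_92300YC_RCF_v1.0.2.txt bootflash: vrf management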
  10. Apply the RCF previously downloaded to the bootflash.

    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series Switches guides.

    This example shows the RCF file Nexus_92300YC_RCF_v1.0.2.txt being installed on switch cs2:

    cs2# copy Nexus_92300YC_RCF_v1.0.2.txt running-config echo-commands
    
    Disabling ssh: as its enabled right now:
     generating ecdsa key(521 bits)......
    generated ecdsa key
    
    Enabling ssh: as it has been disabled
     this command enables edge port type (portfast) by default on all interfaces. You
     should now disable edge port type (portfast) explicitly on switched ports leading to hubs,
     switches and bridges as they may create temporary bridging loops.
    
    Edge port type (portfast) should only be enabled on ports connected to a single
     host. Connecting hubs, concentrators, switches, bridges, etc...  to this
     interface when edge port type (portfast) is enabled, can cause temporary bridging loops.
     Use with CAUTION
    
    Edge Port Type (Portfast) has been configured on Ethernet1/1 but will only
     have effect when the interface is in a non-trunking mode.
    
    ...
    
    Copy complete, now saving to disk (please wait)...
    Copy complete.
  11. Verify on the switch that the RCF has been merged successfully:

    show running-config

    cs2# show running-config
    !Command: show running-config
    !Running configuration last done at: Wed Apr 10 06:32:27 2019
    !Time: Wed Apr 10 06:36:00 2019
    
    version 9.2(2) Bios:version 05.33
    switchname cs2
    vdc cs2 id 1
      limit-resource vlan minimum 16 maximum 4094
      limit-resource vrf minimum 2 maximum 4096
      limit-resource port-channel minimum 0 maximum 511
      limit-resource u4route-mem minimum 248 maximum 248
      limit-resource u6route-mem minimum 96 maximum 96
      limit-resource m4route-mem minimum 58 maximum 58
      limit-resource m6route-mem minimum 8 maximum 8
    
    feature lacp
    
    no password strength-check
    username admin password 5 $5$HY9Kk3F9$YdCZ8iQJ1RtoiEFa0sKP5IO/LNG1k9C4lSJfi5kesl6 role network-admin
    ssh key ecdsa 521
    
    banner motd #
    
    *                                                                              *
    *  Nexus 92300YC Reference Configuration File (RCF) v1.0.2 (10-19-2018)        *
    *                                                                              *
    *  Ports 1/1  - 1/48: 10GbE Intra-Cluster Node Ports                           *
    *  Ports 1/49 - 1/64: 40/100GbE Intra-Cluster Node Ports                       *
    *  Ports 1/65 - 1/66: 40/100GbE Intra-Cluster ISL Ports                        *
    *                                                                              *
    
Note When applying the RCF for the first time, the ERROR: Failed to write VSH commands message is expected and can be ignored.
  12. Verify that the RCF file is the correct newer version: show running-config

    When you check the output to verify you have the correct RCF, make sure that the following information is correct:

    • The RCF banner

    • The node and port settings

    • Customizations

      The output varies according to your site configuration. Check the port settings and refer to the release notes for any changes specific to the RCF that you have installed.
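
    For a quick spot-check of the banner version without paging through the entire configuration, you can filter the output for the RCF string (expected line based on the banner shown in step 11):

    cs2# show running-config | include RCF
    *  Nexus 92300YC Reference Configuration File (RCF) v1.0.2 (10-19-2018)        *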

  13. After you verify the RCF versions and switch settings are correct, copy the running-config file to the startup-config file.

    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series Switches guides.

    cs2# copy running-config startup-config
    [] 100% Copy complete
  14. Reboot switch cs2. You can ignore the "cluster ports down" events reported on the nodes while the switch reboots.

    cs2# reload
    This command will reboot the system. (y/n)?  [n] y
  15. Verify the health of the cluster ports on the cluster.

    1. Verify that the cluster ports are up and healthy across all nodes in the cluster: network port show -ipspace Cluster

      cluster1::*> *network port show -ipspace Cluster*
      
      Node: node1
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
      
      Node: node2
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
    2. Verify the switch health from the cluster (the output might not show switch cs2, since the LIFs are not homed on e0d).

      cluster1::*> *network device-discovery show -protocol cdp*
      Node/       Local  Discovered
      Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
      ----------- ------ ------------------------- ----------------- ------------
      node1/cdp
                  e0a    cs1                       Ethernet1/1       N9K-C92300YC
                  e0b    cs2                       Ethernet1/1       N9K-C92300YC
      node2/cdp
                  e0a    cs1                       Ethernet1/2       N9K-C92300YC
                  e0b    cs2                       Ethernet1/2       N9K-C92300YC
      
      cluster1::*> *system cluster-switch show -is-monitoring-enabled-operational true*
      Switch                      Type               Address          Model
      --------------------------- ------------------ ---------------- ------------
      cs1                         cluster-network    10.233.205.90    N9K-C92300YC
           Serial Number: FOXXXXXXXGD
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          9.3(4)
          Version Source: CDP
      
      cs2                         cluster-network    10.233.205.91    N9K-C92300YC
           Serial Number: FOXXXXXXXGS
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          9.3(4)
          Version Source: CDP
      
      2 entries were displayed.
    Note

    You might observe the following output on the cs1 switch console, depending on the RCF version previously loaded on the switch:

    2020 Nov 17 16:07:18 cs1 %$ VDC-1 %$ %STP-2-UNBLOCK_CONSIST_PORT: Unblocking port port-channel1 on VLAN0092. Port consistency restored.
    2020 Nov 17 16:07:23 cs1 %$ VDC-1 %$ %STP-2-BLOCK_PVID_PEER: Blocking port-channel1 on VLAN0001. Inconsistent peer vlan.
    2020 Nov 17 16:07:23 cs1 %$ VDC-1 %$ %STP-2-BLOCK_PVID_LOCAL: Blocking port-channel1 on VLAN0092. Inconsistent local vlan.
  16. On cluster switch cs1, shut down the ports connected to the cluster ports of the nodes.

    The following example uses the interface output from step 1:

    cs1(config)# interface e1/1-64
    cs1(config-if-range)# shutdown
  17. Verify that the cluster LIFs have migrated to the ports hosted on switch cs2. This might take a few seconds: network interface show -vserver Cluster

    cluster1::*> *network interface show -vserver Cluster*
                Logical          Status     Network            Current           Current Is
    Vserver     Interface        Admin/Oper Address/Mask       Node              Port    Home
    ----------- ---------------- ---------- ------------------ ----------------- ------- ----
    Cluster
                node1_clus1      up/up      169.254.3.4/23     node1             e0d     false
                node1_clus2      up/up      169.254.3.5/23     node1             e0d     true
                node2_clus1      up/up      169.254.3.8/23     node2             e0d     false
                node2_clus2      up/up      169.254.3.9/23     node2             e0d     true
    cluster1::*>
  18. Verify that the cluster is healthy: cluster show

    cluster1::*> *cluster show*
    Node           Health   Eligibility   Epsilon
    -------------- -------- ------------- -------
    node1          true     true          false
    node2          true     true          false
    cluster1::*>
  19. Repeat Steps 7 to 14 on switch cs1.

  20. Enable auto-revert on the cluster LIFs.

    cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert true
  21. Reboot switch cs1. You do this to trigger the cluster LIFs to revert to their home ports. You can ignore the "cluster ports down" events reported on the nodes while the switch reboots.

    cs1# reload
    This command will reboot the system. (y/n)?  [n] y
  22. Verify that the switch ports connected to the cluster ports are up.

    cs1# show interface brief | grep up
    .
    .
    Ethernet1/1      1       eth  access up      none                    10G(D) --
    Ethernet1/2      1       eth  access up      none                    10G(D) --
    Ethernet1/3      1       eth  trunk  up      none                   100G(D) --
    Ethernet1/4      1       eth  trunk  up      none                   100G(D) --
    .
    .
  23. Verify that the ISL between cs1 and cs2 is functional: show port-channel summary

    cs1# *show port-channel summary*
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            b - BFD Session Wait
            S - Switched    R - Routed
            U - Up (port-channel)
            p - Up in delay-lacp mode (member)
            M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/65(P)   Eth1/66(P)
    cs1#
  24. Verify that the cluster LIFs have reverted to their home ports: network interface show -vserver Cluster

    cluster1::*> *network interface show -vserver Cluster*
                Logical       Status     Network            Current       Current Is
    Vserver     Interface     Admin/Oper Address/Mask       Node          Port    Home
    ----------- ------------- ---------- ------------------ ------------- ------- ----
    Cluster
                node1_clus1   up/up      169.254.3.4/23     node1         e0d     true
                node1_clus2   up/up      169.254.3.5/23     node1         e0d     true
                node2_clus1   up/up      169.254.3.8/23     node2         e0d     true
                node2_clus2   up/up      169.254.3.9/23     node2         e0d     true
    cluster1::*>
  25. Verify that the cluster is healthy: cluster show

    cluster1::*> *cluster show*
    Node           Health  Eligibility   Epsilon
    -------------- ------- ------------- -------
    node1          true    true          false
    node2          true    true          false
  26. Ping the remote cluster interfaces to verify connectivity: cluster ping-cluster -node local

    cluster1::*> *cluster ping-cluster -node local*
    Host is node1
    Getting addresses from network interface table...
    Cluster node1_clus1 169.254.3.4 node1 e0a
    Cluster node1_clus2 169.254.3.5 node1 e0b
    Cluster node2_clus1 169.254.3.8 node2 e0a
    Cluster node2_clus2 169.254.3.9 node2 e0b
    Local = 169.254.1.3 169.254.1.1
    Remote = 169.254.1.6 169.254.1.7 169.254.3.4 169.254.3.5 169.254.3.8 169.254.3.9
    Cluster Vserver Id = 4294967293
    Ping status:
    ............
    Basic connectivity succeeds on 12 path(s)
    Basic connectivity fails on 0 path(s)
    ................................................
    Detected 9000 byte MTU on 12 path(s):
        Local 169.254.1.3 to Remote 169.254.1.6
        Local 169.254.1.3 to Remote 169.254.1.7
        Local 169.254.1.3 to Remote 169.254.3.4
        Local 169.254.1.3 to Remote 169.254.3.5
        Local 169.254.1.3 to Remote 169.254.3.8
        Local 169.254.1.3 to Remote 169.254.3.9
        Local 169.254.1.1 to Remote 169.254.1.6
        Local 169.254.1.1 to Remote 169.254.1.7
        Local 169.254.1.1 to Remote 169.254.3.4
        Local 169.254.1.1 to Remote 169.254.3.5
        Local 169.254.1.1 to Remote 169.254.3.8
        Local 169.254.1.1 to Remote 169.254.3.9
    Larger than PMTU communication succeeds on 12 path(s)
    RPC status:
    6 paths up, 0 paths down (tcp check)
    6 paths up, 0 paths down (udp check)
For ONTAP 9.8 and later

Enable the cluster switch health monitor log collection feature to collect switch-related log files, using the following commands: system switch ethernet log setup-password and system switch ethernet log enable-collection.

Enter: system switch ethernet log setup-password

cluster1::*> system switch ethernet log setup-password
Enter the switch name: <return>
The switch name entered is not recognized.
Choose from the following list:
cs1
cs2

cluster1::*> system switch ethernet log setup-password

Enter the switch name: cs1
RSA key fingerprint is e5:8b:c6:dc:e2:18:18:09:36:63:d9:63:dd:03:d9:cc
Do you want to continue? {y|n}::[n] y

Enter the password: <enter switch password>
Enter the password again: <enter switch password>

cluster1::*> system switch ethernet log setup-password
Enter the switch name: cs2
RSA key fingerprint is 57:49:86:a1:b9:80:6a:61:9a:86:8e:3c:e3:b7:1f:b1
Do you want to continue? {y|n}:: [n] y

Enter the password: <enter switch password>
Enter the password again: <enter switch password>

Followed by: system switch ethernet log enable-collection

cluster1::*> system switch ethernet log enable-collection

Do you want to enable cluster log collection for all nodes in the cluster?
{y|n}: [n] y

Enabling cluster switch log collection.

cluster1::*>
For ONTAP 9.4 and later

Enable the cluster switch health monitor log collection feature to collect switch-related log files, using the following commands: system cluster-switch log setup-password and system cluster-switch log enable-collection.

Enter: system cluster-switch log setup-password

cluster1::*> system cluster-switch log setup-password
Enter the switch name: <return>
The switch name entered is not recognized.
Choose from the following list:
cs1
cs2

cluster1::*> system cluster-switch log setup-password

Enter the switch name: cs1
RSA key fingerprint is e5:8b:c6:dc:e2:18:18:09:36:63:d9:63:dd:03:d9:cc
Do you want to continue? {y|n}::[n] y

Enter the password: <enter switch password>
Enter the password again: <enter switch password>

cluster1::*> system cluster-switch log setup-password

Enter the switch name: cs2
RSA key fingerprint is 57:49:86:a1:b9:80:6a:61:9a:86:8e:3c:e3:b7:1f:b1
Do you want to continue? {y|n}:: [n] y

Enter the password: <enter switch password>
Enter the password again: <enter switch password>

Followed by: system cluster-switch log enable-collection

cluster1::*> system cluster-switch log enable-collection

Do you want to enable cluster log collection for all nodes in the cluster?
{y|n}: [n] y

Enabling cluster switch log collection.

cluster1::*>
Note If any of these commands return an error, contact NetApp support.