
Install the Reference Configuration File (RCF)


Follow this procedure to install the RCF after setting up the Nexus 3132Q-V switch for the first time. You can also use this procedure to upgrade your RCF version.

Review requirements

What you'll need
  • A current backup of the switch configuration.

  • A fully functioning cluster (no errors in the logs or similar issues).

  • The current Reference Configuration File (RCF).

  • A console connection to the switch, which is required when installing the RCF. This requirement is optional if you have used the Knowledge Base article How to clear the configuration on a Cisco interconnect switch while retaining remote connectivity to clear the configuration beforehand.

  • Cisco Ethernet switch. Consult the switch compatibility table for the supported ONTAP and RCF versions. Note that the command syntax in the RCF can depend on the NX-OS version (see the version check after this list).

  • Cisco Nexus 3000 Series Switches. Consult the appropriate software and upgrade guides available on the Cisco web site for complete documentation on the Cisco switch upgrade and downgrade procedures.
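
To confirm the NX-OS version that the switch is running against the compatibility table, you can use show version. This is a minimal sketch; the exact wording of the version line varies by NX-OS release:

  cs1# show version | include NXOS
    NXOS: version 9.3(4)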

Install the file

About the examples

The examples in this procedure use the following switch and node nomenclature:

  • The names of the two Cisco switches are cs1 and cs2.

  • The node names are cluster1-01, cluster1-02, cluster1-03, and cluster1-04.

  • The cluster LIF names are cluster1-01_clus1, cluster1-01_clus2, cluster1-02_clus1, cluster1-02_clus2, cluster1-03_clus1, cluster1-03_clus2, cluster1-04_clus1, and cluster1-04_clus2.

  • The cluster1::*> prompt indicates the name of the cluster.

About this task

The procedure requires the use of both ONTAP commands and Cisco Nexus 3000 Series switch commands; ONTAP commands are used unless otherwise indicated.

No operational inter-switch link (ISL) is needed during this procedure. This is by design because RCF version changes can affect ISL connectivity temporarily. To ensure non-disruptive cluster operations, the following procedure migrates all of the cluster LIFs to the operational partner switch while performing the steps on the target switch.

Be sure to complete the procedure in Prepare to install NX-OS software and Reference Configuration File, and then follow the steps below.

Step 1: Check port status

  1. Display the cluster ports on each node that are connected to the cluster switches:

    network device-discovery show

    cluster1::*> network device-discovery show
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ------------
    cluster1-01/cdp
                e0a    cs1                       Ethernet1/7       N3K-C3132Q-V
                e0d    cs2                       Ethernet1/7       N3K-C3132Q-V
    cluster1-02/cdp
                e0a    cs1                       Ethernet1/8       N3K-C3132Q-V
                e0d    cs2                       Ethernet1/8       N3K-C3132Q-V
    cluster1-03/cdp
                e0a    cs1                       Ethernet1/1/1     N3K-C3132Q-V
                e0b    cs2                       Ethernet1/1/1     N3K-C3132Q-V
    cluster1-04/cdp
                e0a    cs1                       Ethernet1/1/2     N3K-C3132Q-V
                e0b    cs2                       Ethernet1/1/2     N3K-C3132Q-V
    cluster1::*>
  2. Check the administrative and operational status of each cluster port.

    1. Verify that all the cluster ports are up with a healthy status:

      network port show -ipspace Cluster

      cluster1::*> network port show -ipspace Cluster
      
      Node: cluster1-01
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/100000 healthy false
      e0d       Cluster      Cluster          up   9000  auto/100000 healthy false
      
      Node: cluster1-02
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/100000 healthy false
      e0d       Cluster      Cluster          up   9000  auto/100000 healthy false

      Node: cluster1-03
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false

      Node: cluster1-04
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
      8 entries were displayed.
      cluster1::*>
    2. Verify that all the cluster interfaces (LIFs) are on the home port:

      network interface show -vserver Cluster

      cluster1::*> network interface show -vserver Cluster
                  Logical            Status     Network           Current      Current Is
      Vserver     Interface          Admin/Oper Address/Mask      Node         Port    Home
      ----------- ------------------ ---------- ----------------- ------------ ------- ----
      Cluster
                  cluster1-01_clus1  up/up     169.254.3.4/23     cluster1-01  e0a     true
                  cluster1-01_clus2  up/up     169.254.3.5/23     cluster1-01  e0d     true
                  cluster1-02_clus1  up/up     169.254.3.8/23     cluster1-02  e0a     true
                  cluster1-02_clus2  up/up     169.254.3.9/23     cluster1-02  e0d     true
                  cluster1-03_clus1  up/up     169.254.1.3/23     cluster1-03  e0a     true
                  cluster1-03_clus2  up/up     169.254.1.1/23     cluster1-03  e0b     true
                  cluster1-04_clus1  up/up     169.254.1.6/23     cluster1-04  e0a     true
                  cluster1-04_clus2  up/up     169.254.1.7/23     cluster1-04  e0b     true
      cluster1::*>
    3. Verify that the cluster displays information for both cluster switches:

      system cluster-switch show -is-monitoring-enabled-operational true

      cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true
      Switch                      Type               Address          Model
      --------------------------- ------------------ ---------------- ---------------
      cs1                         cluster-network    10.0.0.1         NX3132QV
           Serial Number: FOXXXXXXXGS
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          9.3(4)
          Version Source: CDP
      
      cs2                         cluster-network    10.0.0.2         NX3132QV
           Serial Number: FOXXXXXXXGD
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          9.3(4)
          Version Source: CDP
      
      2 entries were displayed.
    Note For ONTAP 9.8 and later, use the command system switch ethernet show -is-monitoring-enabled-operational true.
  3. Disable auto-revert on the cluster LIFs.

    cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert false

    Make sure that auto-revert is disabled after running this command.
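
    One way to confirm the setting, shown here as a sketch (output omitted), is to list the auto-revert field for the cluster LIFs:

    cluster1::*> network interface show -vserver Cluster -fields auto-revert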

  4. On cluster switch cs2, shut down the ports connected to the cluster ports of the nodes.

    cs2(config)# interface eth1/1/1-2,eth1/7-8
    cs2(config-if-range)# shutdown
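
    To confirm from the switch side that these ports are administratively down, you can filter the interface summary, similar to the verification used later in this procedure (a sketch; the matched lines will vary with your configuration):

    cs2# show interface brief | grep down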
  5. Verify that the cluster LIFs have migrated to the ports hosted on cluster switch cs1. This might take a few seconds.

    network interface show -vserver Cluster

    cluster1::*> network interface show -vserver Cluster
                Logical           Status     Network            Current       Current Is
    Vserver     Interface         Admin/Oper Address/Mask       Node          Port    Home
    ----------- ----------------- ---------- ------------------ ------------- ------- ----
    Cluster
                cluster1-01_clus1 up/up      169.254.3.4/23     cluster1-01   e0a     true
                cluster1-01_clus2 up/up      169.254.3.5/23     cluster1-01   e0a     false
                cluster1-02_clus1 up/up      169.254.3.8/23     cluster1-02   e0a     true
                cluster1-02_clus2 up/up      169.254.3.9/23     cluster1-02   e0a     false
                cluster1-03_clus1 up/up      169.254.1.3/23     cluster1-03   e0a     true
                cluster1-03_clus2 up/up      169.254.1.1/23     cluster1-03   e0a     false
                cluster1-04_clus1 up/up      169.254.1.6/23     cluster1-04   e0a     true
                cluster1-04_clus2 up/up      169.254.1.7/23     cluster1-04   e0a     false
    cluster1::*>
  6. Verify that the cluster is healthy:

    cluster show

    cluster1::*> cluster show
    Node                 Health  Eligibility   Epsilon
    -------------------- ------- ------------  -------
    cluster1-01          true    true          false
    cluster1-02          true    true          false
    cluster1-03          true    true          true
    cluster1-04          true    true          false
    cluster1::*>

Step 2: Configure and verify the setup

  1. If you have not already done so, save a copy of the current switch configuration by copying the output of the following command to a text file:

    show running-config
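
    Alternatively, you can copy the backup directly from the switch to a remote server. The following is a sketch assuming a TFTP server reachable over the management VRF; the server address and destination filename are placeholders:

    cs2# copy running-config tftp://172.22.201.50/cs2_running_config_backup.txt vrf management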

  2. Clean the configuration on switch cs2 and perform a basic setup.

    Warning When updating or applying a new RCF, you must erase the switch settings and perform basic configuration. You must be connected to the switch serial console port to set up the switch again. However, this requirement is optional if you have used the Knowledge Base article How to clear the configuration on a Cisco interconnect switch while retaining remote connectivity to clear the configuration beforehand.
    1. Clean the configuration:

      (cs2)# write erase
      
      Warning: This command will erase the startup-configuration.
      
      Do you wish to proceed anyway? (y/n)  [n]  y
    2. Perform a reboot of the switch:

      (cs2)# reload
      
      Are you sure you would like to reset the system? (y/n) y
  3. Copy the RCF to the bootflash of switch cs2 using one of the following transfer protocols: FTP, TFTP, SFTP, or SCP. For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command Reference guides.

    cs2# copy tftp: bootflash: vrf management
    Enter source filename: Nexus_3132QV_RCF_v1.6-Cluster-HA-Breakout.txt
    Enter hostname for the tftp server: 172.22.201.50
    Trying to connect to tftp server......Connection to Server Established.
    TFTP get operation was successful
    Copy complete, now saving to disk (please wait)...
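
    If you prefer SCP, an equivalent copy looks like the following sketch; the username, server address, and remote path are placeholders:

    cs2# copy scp://admin@172.22.201.50/Nexus_3132QV_RCF_v1.6-Cluster-HA-Breakout.txt bootflash: vrf management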
  4. Apply the RCF previously downloaded to the bootflash.

    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command Reference guides.

    cs2# copy Nexus_3132QV_RCF_v1.6-Cluster-HA-Breakout.txt running-config echo-commands
  5. Examine the banner output from the show banner motd command. You must read and follow the instructions under Important Notes to ensure the proper configuration and operation of the switch.

    cs2# show banner motd
    
    ******************************************************************************
    * NetApp Reference Configuration File (RCF)
    *
    * Switch   : Cisco Nexus 3132Q-V
    * Filename : Nexus_3132QV_RCF_v1.6-Cluster-HA-Breakout.txt
    * Date     : Nov-02-2020
    * Version  : v1.6
    *
    * Port Usage : Breakout configuration
    * Ports  1- 6: Breakout mode (4x10GbE) Intra-Cluster Ports, int e1/1/1-4,
    * e1/2/1-4, e1/3/1-4, int e1/4/1-4, e1/5/1-4, e1/6/1-4
    * Ports  7-30: 40GbE Intra-Cluster/HA Ports, int e1/7-30
    * Ports 31-32: Intra-Cluster ISL Ports, int e1/31-32
    *
    * IMPORTANT NOTES
    * - Load Nexus_3132QV_RCF_v1.6-Cluster-HA.txt for non breakout config
    *
    * - This RCF utilizes QoS and requires specific TCAM configuration, requiring
    *   cluster switch to be rebooted before the cluster becomes operational.
    *
    * - Perform the following steps to ensure proper RCF installation:
    *
    *   (1) Apply RCF, expect following messages:
    *       - Please save config and reload the system...
    *       - Edge port type (portfast) should only be enabled on ports...
    *       - TCAM region is not configured for feature QoS class IPv4...
    *
    *   (2) Save running-configuration and reboot Cluster Switch
    *
    *   (3) After reboot, apply same RCF second time and expect following messages:
    *       - % Invalid command at '^' marker
    *
    *   (4) Save running-configuration again
    *
    * - If running NX-OS versions 9.3(5), 9.3(6), 9.3(7), or 9.3(8):
    *    - Downgrade the NX-OS firmware to version 9.3(5) or earlier if
    *      NX-OS is using a version later than 9.3(5).
    *    - Do not upgrade NX-OS prior to applying the v1.9 RCF file.
    *    - After the RCF is applied and the switch rebooted, then proceed to upgrade
    *      NX-OS to version 9.3(5) or later.
    *
    * - If running 9.3(9), 10.2(2) or later, the RCF can be applied to the switch
    *      after the upgrade.
    *
    * - Port 1 multiplexed H/W configuration options:
    *     hardware profile front portmode qsfp      (40G H/W port 1/1 is active - default)
    *     hardware profile front portmode sfp-plus  (10G H/W ports 1/1/1 - 1/1/4 are active)
    *     hardware profile front portmode qsfp      (To reset to QSFP)
    *
    ******************************************************************************
  6. Verify that the RCF file is the correct newer version:

    show running-config

    When you check the output to verify you have the correct RCF, make sure that the following information is correct (a quick spot check follows this list):

    • The RCF banner

    • The node and port settings

    • Customizations

      The output varies according to your site configuration. Check the port settings and refer to the release notes for any changes specific to the RCF that you have installed.

      Note For steps on how to bring your 10GbE ports online after an upgrade of the RCF, see the Knowledge Base article 10GbE ports on a Cisco 3132Q cluster switch do not come online.
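
    As the quick spot check mentioned above, you can filter the running configuration for the RCF banner lines before reviewing the full output (a sketch; the pattern matches the banner text shown in the previous step):

    cs2# show running-config | include RCF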
  7. Reapply any previous customizations to the switch configuration. Refer to Review cabling and configuration considerations for details of any further changes required.

  8. After you verify the RCF versions and switch settings are correct, copy the running-config file to the startup-config file.

    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 3000 Series NX-OS Command Reference guides.

    cs2# copy running-config startup-config
    [########################################] 100%
    Copy complete
  9. Reboot switch cs2. You can ignore the "cluster ports down" events reported on the nodes while the switch reboots, as well as the % Invalid command at '^' marker error output.

    cs2# reload
    This command will reboot the system. (y/n)?  [n] y
  10. Apply the same RCF and save the running configuration for a second time. This is necessary as the RCF utilizes QoS and requires TCAM re-configuration that involves loading the RCF twice with the switch rebooted in between.

    cs2# copy Nexus_3132QV_RCF_v1.6-Cluster-HA-Breakout.txt running-config echo-commands
    cs2# copy running-config startup-config
    [########################################] 100%
    Copy complete
  11. Verify the health of cluster ports on the cluster.

    1. Verify that cluster ports are up and healthy across all nodes in the cluster:

      network port show -ipspace Cluster

      cluster1::*> network port show -ipspace Cluster
      
      Node: cluster1-01
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/100000 healthy false
      e0d       Cluster      Cluster          up   9000  auto/100000 healthy false
      
      Node: cluster1-02
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/100000 healthy false
      e0d       Cluster      Cluster          up   9000  auto/100000 healthy false
      
      Node: cluster1-03
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
      
      Node: cluster1-04
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
    2. Verify the switch health from the cluster.

      network device-discovery show -protocol cdp

      cluster1::*> network device-discovery show -protocol cdp
      Node/       Local  Discovered
      Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
      ----------- ------ ------------------------- ----------------- --------
      cluster1-01/cdp
                  e0a    cs1                       Ethernet1/7       N3K-C3132Q-V
                  e0d    cs2                       Ethernet1/7       N3K-C3132Q-V
      cluster1-02/cdp
                  e0a    cs1                       Ethernet1/8       N3K-C3132Q-V
                  e0d    cs2                       Ethernet1/8       N3K-C3132Q-V
      cluster1-03/cdp
                  e0a    cs1                       Ethernet1/1/1     N3K-C3132Q-V
                  e0b    cs2                       Ethernet1/1/1     N3K-C3132Q-V
      cluster1-04/cdp
                  e0a    cs1                       Ethernet1/1/2     N3K-C3132Q-V
                  e0b    cs2                       Ethernet1/1/2     N3K-C3132Q-V
      
      cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true
      Switch                      Type               Address          Model
      --------------------------- ------------------ ---------------- -----
      cs1                         cluster-network    10.233.205.90    N3K-C3132Q-V
           Serial Number: FOXXXXXXXGD
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          9.3(4)
          Version Source: CDP
      
      cs2                         cluster-network    10.233.205.91    N3K-C3132Q-V
           Serial Number: FOXXXXXXXGS
            Is Monitored: true
                  Reason: None
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          9.3(4)
          Version Source: CDP
      
      2 entries were displayed.
      Note For ONTAP 9.8 and later, use the command system switch ethernet show -is-monitoring-enabled-operational true.
      Note You might observe the following output on the cs1 switch console, depending on the RCF version previously loaded on the switch:

      2020 Nov 17 16:07:18 cs1 %$ VDC-1 %$ %STP-2-UNBLOCK_CONSIST_PORT: Unblocking port port-channel1 on VLAN0092. Port consistency restored.
      2020 Nov 17 16:07:23 cs1 %$ VDC-1 %$ %STP-2-BLOCK_PVID_PEER: Blocking port-channel1 on VLAN0001. Inconsistent peer vlan.
      2020 Nov 17 16:07:23 cs1 %$ VDC-1 %$ %STP-2-BLOCK_PVID_LOCAL: Blocking port-channel1 on VLAN0092. Inconsistent local vlan.
      Note It can take up to 5 minutes for the cluster nodes to report as healthy.
  12. On cluster switch cs1, shut down the ports connected to the cluster ports of the nodes.

    cs1(config)# interface eth1/1/1-2,eth1/7-8
    cs1(config-if-range)# shutdown
  13. Verify that the cluster LIFs have migrated to the ports hosted on switch cs2. This might take a few seconds.

    network interface show -vserver Cluster

    cluster1::*> network interface show -vserver Cluster
                Logical            Status     Network            Current             Current Is
    Vserver     Interface          Admin/Oper Address/Mask       Node                Port    Home
    ----------- ------------------ ---------- ------------------ ------------------- ------- ----
    Cluster
                cluster1-01_clus1  up/up      169.254.3.4/23     cluster1-01         e0d     false
                cluster1-01_clus2  up/up      169.254.3.5/23     cluster1-01         e0d     true
                cluster1-02_clus1  up/up      169.254.3.8/23     cluster1-02         e0d     false
                cluster1-02_clus2  up/up      169.254.3.9/23     cluster1-02         e0d     true
                cluster1-03_clus1  up/up      169.254.1.3/23     cluster1-03         e0b     false
                cluster1-03_clus2  up/up      169.254.1.1/23     cluster1-03         e0b     true
                cluster1-04_clus1  up/up      169.254.1.6/23     cluster1-04         e0b     false
                cluster1-04_clus2  up/up      169.254.1.7/23     cluster1-04         e0b     true
    cluster1::*>
  14. Verify that the cluster is healthy:

    cluster show

    cluster1::*> cluster show
    Node                 Health   Eligibility   Epsilon
    -------------------- -------- ------------- -------
    cluster1-01          true     true          false
    cluster1-02          true     true          false
    cluster1-03          true     true          true
    cluster1-04          true     true          false
    4 entries were displayed.
    cluster1::*>
  15. Repeat Steps 1 to 10 on switch cs1.

  16. Enable auto-revert on the cluster LIFs.

    cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert true
  17. Reboot switch cs1. Rebooting triggers the cluster LIFs to revert to their home ports. You can ignore the "cluster ports down" events reported on the nodes while the switch reboots.

    cs1# reload
    This command will reboot the system. (y/n)?  [n] y
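
    If you prefer not to rely on the reboot to trigger the failback, you can also request the revert directly from ONTAP (a sketch using the standard revert command):

    cluster1::*> network interface revert -vserver Cluster -lif *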

Step 3: Verify the configuration

  1. Verify that the switch ports connected to the cluster ports are up.

    show interface brief | grep up

    cs1# show interface brief | grep up
    .
    .
    Eth1/1/1      1       eth  access up      none                    10G(D) --
    Eth1/1/2      1       eth  access up      none                    10G(D) --
    Eth1/7        1       eth  trunk  up      none                   100G(D) --
    Eth1/8        1       eth  trunk  up      none                   100G(D) --
    .
    .
  2. Verify that the ISL between cs1 and cs2 is functional:

    show port-channel summary

    cs1# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            b - BFD Session Wait
            S - Switched    R - Routed
            U - Up (port-channel)
            p - Up in delay-lacp mode (member)
            M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
    cs1#
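
    If the port channel reports members down, two useful follow-up views are the port-channel database and the LACP neighbor table (a sketch; output omitted):

    cs1# show port-channel database
    cs1# show lacp neighbor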
  3. Verify that the cluster LIFs have reverted to their home port:

    network interface show -vserver Cluster

    cluster1::*> network interface show -vserver Cluster
                Logical            Status     Network            Current             Current Is
    Vserver     Interface          Admin/Oper Address/Mask       Node                Port    Home
    ----------- ------------------ ---------- ------------------ ------------------- ------- ----
    Cluster
                cluster1-01_clus1  up/up      169.254.3.4/23     cluster1-01         e0d     true
                cluster1-01_clus2  up/up      169.254.3.5/23     cluster1-01         e0d     true
                cluster1-02_clus1  up/up      169.254.3.8/23     cluster1-02         e0d     true
                cluster1-02_clus2  up/up      169.254.3.9/23     cluster1-02         e0d     true
                cluster1-03_clus1  up/up      169.254.1.3/23     cluster1-03         e0b     true
                cluster1-03_clus2  up/up      169.254.1.1/23     cluster1-03         e0b     true
                cluster1-04_clus1  up/up      169.254.1.6/23     cluster1-04         e0b     true
                cluster1-04_clus2  up/up      169.254.1.7/23     cluster1-04         e0b     true
    cluster1::*>
  4. Verify that the cluster is healthy:

    cluster show

    cluster1::*> cluster show
    Node                 Health  Eligibility   Epsilon
    -------------------- ------- ------------- -------
    cluster1-01          true    true          false
    cluster1-02          true    true          false
    cluster1-03          true    true          true
    cluster1-04          true    true          false
    cluster1::*>
  5. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start
network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source              Destination         Packet
Node   Date                       LIF                 LIF                 Loss
------ -------------------------- ------------------- ------------------- -----------
cluster1-01
       3/5/2022 19:21:18 -06:00   cluster1-01_clus2   cluster1-02_clus1   none
       3/5/2022 19:21:20 -06:00   cluster1-01_clus2   cluster1-02_clus2   none

cluster1-02
       3/5/2022 19:21:18 -06:00   cluster1-02_clus2   cluster1-01_clus1   none
       3/5/2022 19:21:20 -06:00   cluster1-02_clus2   cluster1-01_clus2   none

All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is cluster1-02
Getting addresses from network interface table...
Cluster cluster1-01_clus1 169.254.209.69 cluster1-01     e0a
Cluster cluster1-01_clus2 169.254.49.125 cluster1-01     e0b
Cluster cluster1-02_clus1 169.254.47.194 cluster1-02     e0a
Cluster cluster1-02_clus2 169.254.19.183 cluster1-02     e0b
Local = 169.254.47.194 169.254.19.183
Remote = 169.254.209.69 169.254.49.125
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 4 path(s):
    Local 169.254.19.183 to Remote 169.254.209.69
    Local 169.254.19.183 to Remote 169.254.49.125
    Local 169.254.47.194 to Remote 169.254.209.69
    Local 169.254.47.194 to Remote 169.254.49.125
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
What's next?

Verify SSH configuration.