
Prepare to install NX-OS software and Reference Configuration File (RCF)


Before you install the NX-OS software and the Reference Configuration File (RCF), follow this procedure.

What you'll need
  • A fully functioning cluster (no errors in the logs or similar issues).

  • The appropriate software and upgrade guides, which are available from the Cisco Nexus 9000 Series Switches page.

About the examples

The examples in this procedure use two nodes. These nodes use two 10GbE cluster interconnect ports e0a and e0b. See the Hardware Universe to verify the correct cluster ports on your platforms.

The examples in this procedure use the following switch and node nomenclature:

  • The names of the two Cisco switches are cs1 and cs2.

  • The node names are node1 and node2.

  • The cluster LIF names are node1_clus1 and node1_clus2 for node1, and node2_clus1 and node2_clus2 for node2.

  • The cluster1::*> prompt indicates the name of the cluster.

About this task

This procedure requires the use of both ONTAP commands and Cisco Nexus 9000 Series switch commands; ONTAP commands are used unless otherwise indicated. Command outputs might vary between ONTAP releases.

Steps
  1. Change the privilege level to advanced, entering y when prompted to continue:

    set -privilege advanced

    The advanced prompt (*>) appears.
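
    A minimal example of the prompt change (the warning text can vary by ONTAP release):

    cluster1::> set -privilege advanced

    Warning: These advanced commands are potentially dangerous; use them only when
             directed to do so by NetApp personnel.
    Do you want to continue? {y|n}: y

    cluster1::*>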

  2. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=xh

    where x is the duration of the maintenance window in hours.

    Note The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.

    The following command suppresses automatic case creation for two hours:

    cluster1::*> system node autosupport invoke -node * -type all -message MAINT=2h
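
    When the maintenance work is complete, you can resume automatic case creation by invoking another AutoSupport message, for example:

    cluster1::*> system node autosupport invoke -node * -type all -message MAINT=END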
  3. Display how many cluster interconnect interfaces are configured in each node for each cluster interconnect switch:

    network device-discovery show -protocol cdp

    cluster1::*> network device-discovery show -protocol cdp
    
     Node/      Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ----------------
    node2      /cdp
                e0a    cs1                       Eth1/2            N9K-C92300YC
                e0b    cs2                       Eth1/2            N9K-C92300YC
    node1      /cdp
                e0a    cs1                       Eth1/1            N9K-C92300YC
                e0b    cs2                       Eth1/1            N9K-C92300YC
    
    4 entries were displayed.
  4. Check the administrative or operational status of each cluster interface.

    1. Display the network port attributes:

      network port show -ipspace Cluster

      cluster1::*> network port show -ipspace Cluster
      
      Node: node2
                                                        Speed(Mbps) Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
      --------- ------------ ---------------- ---- ---- ----------- --------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy
      
      Node: node1
                                                        Speed(Mbps) Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
      --------- ------------ ---------------- ---- ---- ----------- --------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy
      
      4 entries were displayed.
    2. Display information about the LIFs:

      network interface show -vserver Cluster

      cluster1::*> network interface show -vserver Cluster
      
                  Logical    Status     Network            Current       Current Is
      Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
      ----------- ---------- ---------- ------------------ ------------- ------- ----
      Cluster
                  node1_clus1  up/up    169.254.209.69/16  node1         e0a     true
                  node1_clus2  up/up    169.254.49.125/16  node1         e0b     true
                  node2_clus1  up/up    169.254.47.194/16  node2         e0a     true
                  node2_clus2  up/up    169.254.19.183/16  node2         e0b     true
      
      4 entries were displayed.
  5. Ping the remote cluster LIFs:

    cluster ping-cluster -node node-name

    cluster1::*> cluster ping-cluster -node node2
    Host is node2
    Getting addresses from network interface table...
    Cluster node1_clus1 169.254.209.69 node1     e0a
    Cluster node1_clus2 169.254.49.125 node1     e0b
    Cluster node2_clus1 169.254.47.194 node2     e0a
    Cluster node2_clus2 169.254.19.183 node2     e0b
    Local = 169.254.47.194 169.254.19.183
    Remote = 169.254.209.69 169.254.49.125
    Cluster Vserver Id = 4294967293
    Ping status:
    
    Basic connectivity succeeds on 4 path(s)
    Basic connectivity fails on 0 path(s)
    
    Detected 9000 byte MTU on 4 path(s):
        Local 169.254.19.183 to Remote 169.254.209.69
        Local 169.254.19.183 to Remote 169.254.49.125
        Local 169.254.47.194 to Remote 169.254.209.69
        Local 169.254.47.194 to Remote 169.254.49.125
    Larger than PMTU communication succeeds on 4 path(s)
    RPC status:
    2 paths up, 0 paths down (tcp check)
    2 paths up, 0 paths down (udp check)
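
    You can run the same check against the other node, for example:

    cluster1::*> cluster ping-cluster -node node1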
  6. Verify that auto-revert is enabled on all cluster LIFs:

    network interface show -vserver Cluster -fields auto-revert

    cluster1::*> network interface show -vserver Cluster -fields auto-revert
    
              Logical
    Vserver   Interface     Auto-revert
    --------- ------------- ------------
    Cluster
              node1_clus1   true
              node1_clus2   true
              node2_clus1   true
              node2_clus2   true
    
    4 entries were displayed.
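
    If auto-revert is shown as false for any cluster LIF, you can enable it with the network interface modify command, for example:

    cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert true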
  7. For ONTAP 9.4 and later, enable the cluster switch health monitor log collection feature to collect switch-related log files by using the following commands:

    system cluster-switch log setup-password
    system cluster-switch log enable-collection

    cluster1::*> system cluster-switch log setup-password
    Enter the switch name: <return>
    The switch name entered is not recognized.
    Choose from the following list:
    cs1
    cs2
    
    cluster1::*> system cluster-switch log setup-password
    
    Enter the switch name: cs1
    RSA key fingerprint is e5:8b:c6:dc:e2:18:18:09:36:63:d9:63:dd:03:d9:cc
    Do you want to continue? {y|n}::[n] y
    
    Enter the password: <enter switch password>
    Enter the password again: <enter switch password>
    
    cluster1::*> system cluster-switch log setup-password
    
    Enter the switch name: cs2
    RSA key fingerprint is 57:49:86:a1:b9:80:6a:61:9a:86:8e:3c:e3:b7:1f:b1
    Do you want to continue? {y|n}:: [n] y
    
    Enter the password: <enter switch password>
    Enter the password again: <enter switch password>
    
    cluster1::*> system cluster-switch log enable-collection
    
    Do you want to enable cluster log collection for all nodes in the cluster?
    {y|n}: [n] y
    
    Enabling cluster switch log collection.
    
    cluster1::*>
    Note If any of these commands return an error, contact NetApp support.