Cluster and storage switches

Replace Cisco Nexus 3132Q-V cluster switches

Contributors: netapp-yvonneo, netapp-jolieg

Follow this procedure to replace a defective Cisco Nexus 3132Q-V switch in a cluster network. The replacement is a nondisruptive operation (NDO).

Review requirements

What you'll need
  • Make sure that the existing cluster and network configuration meets the following requirements:

    • The Nexus 3132Q-V cluster infrastructure is redundant and fully functional on both switches.

      The switches are running the latest RCF and NX-OS versions, as listed on the Cisco Ethernet Switch page.

    • All cluster ports are in the up state.

    • Management connectivity exists on both switches.

    • All cluster logical interfaces (LIFs) are in the up state and are on their home ports.

  • For the Nexus 3132Q-V replacement switch, make sure that:

    • Management network connectivity on the replacement switch is functional.

    • Console access to the replacement switch is in place.

    • The desired RCF and NX-OS operating system image is loaded onto the switch.

    • Initial customization of the switch is complete.

  • Hardware Universe

Enable console logging

NetApp strongly recommends that you enable console logging on the devices that you are using and retain the logs when replacing your switch.

Replace the switch

This procedure replaces the second Nexus 3132Q-V cluster switch, CL2, with a new 3132Q-V switch, C2.

About the examples

The examples in this procedure use the following switch and node nomenclature:

  • n1_clus1 is the first cluster logical interface (LIF) connected to cluster switch CL1 for node n1.

  • n1_clus2 is the first cluster LIF connected to cluster switch CL2 or C2, for node n1.

  • n1_clus3 is the second LIF connected to cluster switch CL2 or C2, for node n1.

  • n1_clus4 is the second LIF connected to cluster switch CL1, for node n1.

  • The number of 10 GbE and 40/100 GbE ports is defined in the reference configuration files (RCFs) available on the Cisco® Cluster Network Switch Reference Configuration File Download page.

  • The nodes are n1, n2, n3, and n4.

  • The examples in this procedure use four nodes: two nodes use four 10 GbE cluster interconnect ports (e0a, e0b, e0c, and e0d), and the other two nodes use two 40 GbE cluster interconnect ports (e4a and e4e). See the Hardware Universe for the actual cluster ports on your platforms.

About this task

This procedure covers the following scenario:

  • The cluster starts with four nodes connected to two Nexus 3132Q-V cluster switches, CL1 and CL2.

  • Cluster switch CL2 is replaced by C2:

    • On each node, the cluster LIFs connected to CL2 are migrated onto cluster ports connected to CL1.

    • Cabling is disconnected from all ports on CL2 and reconnected to the same ports on the replacement switch C2.

    • On each node, the migrated cluster LIFs are reverted.

Step 1: Prepare for replacement

  1. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=xh

    x is the duration of the maintenance window in hours.

    Note

    The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.

  2. Display information about the devices in your configuration:

    network device-discovery show

    Show example
    cluster::> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface         Platform
    ----------- ------ ------------------- ----------------  ----------------
    n1         /cdp
                e0a    CL1                 Ethernet1/1/1    N3K-C3132Q-V
                e0b    CL2                 Ethernet1/1/1    N3K-C3132Q-V
                e0c    CL2                 Ethernet1/1/2    N3K-C3132Q-V
                e0d    CL1                 Ethernet1/1/2    N3K-C3132Q-V
    n2         /cdp
                e0a    CL1                 Ethernet1/1/3    N3K-C3132Q-V
                e0b    CL2                 Ethernet1/1/3    N3K-C3132Q-V
                e0c    CL2                 Ethernet1/1/4    N3K-C3132Q-V
                e0d    CL1                 Ethernet1/1/4    N3K-C3132Q-V
    n3         /cdp
                e4a    CL1                 Ethernet1/7      N3K-C3132Q-V
                e4e    CL2                 Ethernet1/7      N3K-C3132Q-V
    n4         /cdp
                e4a    CL1                 Ethernet1/8      N3K-C3132Q-V
                e4e    CL2                 Ethernet1/8      N3K-C3132Q-V
    
    12 entries were displayed.
  3. Determine the administrative or operational status for each cluster interface:

    1. Display the network port attributes:

      network port show

      Show example
      cluster::*> network port show -role cluster
             (network port show)
      
      Node: n1
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/10000  -        -
      e0b       Cluster      Cluster          up   9000 auto/10000  -        -
      e0c       Cluster      Cluster          up   9000 auto/10000  -        -
      e0d       Cluster      Cluster          up   9000 auto/10000  -        -
      
      Node: n2
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 -        -
      e0b       Cluster      Cluster          up   9000  auto/10000 -        -
      e0c       Cluster      Cluster          up   9000  auto/10000 -        -
      e0d       Cluster      Cluster          up   9000  auto/10000 -        -
      
      Node: n3
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e4a       Cluster      Cluster          up   9000 auto/40000  -        -
      e4e       Cluster      Cluster          up   9000 auto/40000  -        -
      
      Node: n4
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e4a       Cluster      Cluster          up   9000 auto/40000  -        -
      e4e       Cluster      Cluster          up   9000 auto/40000  -        -
      12 entries were displayed.
    2. Display information about the logical interfaces:

      network interface show

      Show example
      cluster::*> network interface show -role cluster
             (network interface show)
      
                   Logical    Status     Network            Current       Current Is
      Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
      ----------- ---------- ---------- ------------------ ------------- ------- ----
      Cluster
                  n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
                  n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
                  n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
                  n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
                  n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
                  n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
                  n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
                  n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
                  n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
                  n3_clus2   up/up      10.10.0.10/24      n3            e4e     true
                  n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
                  n4_clus2   up/up      10.10.0.12/24      n4            e4e     true
      
      12 entries were displayed.
    3. Display the information on the discovered cluster switches:

      system cluster-switch show

      Show example
      cluster::> system cluster-switch show
      
      Switch                      Type               Address          Model
      --------------------------- ------------------ ---------------- ---------------
      CL1                          cluster-network   10.10.1.101      NX3132V
           Serial Number: FOX000001
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.0(3)I4(1)
          Version Source: CDP
      
      CL2                          cluster-network   10.10.1.102      NX3132V
           Serial Number: FOX000002
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.0(3)I4(1)
          Version Source: CDP
      
      2 entries were displayed.
  4. Verify that the appropriate RCF and image are installed on the new Nexus 3132Q-V switch, and make any essential site customizations.

    You must prepare the replacement switch at this time. If you need to upgrade the RCF and image, follow these steps:

    1. On the NetApp Support Site, go to the Cisco Ethernet Switch page.

    2. Note your switch and the required software versions in the table on that page.

    3. Download the appropriate version of the RCF.

    4. Click CONTINUE on the Description page, accept the license agreement, and then follow the instructions on the Download page to download the RCF.

    5. Download the appropriate version of the image software.

  5. Migrate the LIFs associated with the cluster ports connected to switch CL2:

    network interface migrate

    Show example

    This example shows that the LIF migration is done on all the nodes:

    cluster::*> network interface migrate -vserver Cluster -lif n1_clus2 -source-node n1 -destination-node n1 -destination-port e0a
    cluster::*> network interface migrate -vserver Cluster -lif n1_clus3 -source-node n1 -destination-node n1 -destination-port e0d
    cluster::*> network interface migrate -vserver Cluster -lif n2_clus2 -source-node n2 -destination-node n2 -destination-port e0a
    cluster::*> network interface migrate -vserver Cluster -lif n2_clus3 -source-node n2 -destination-node n2 -destination-port e0d
    cluster::*> network interface migrate -vserver Cluster -lif n3_clus2 -source-node n3 -destination-node n3 -destination-port e4a
    cluster::*> network interface migrate -vserver Cluster -lif n4_clus2 -source-node n4 -destination-node n4 -destination-port e4a
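The six migrate commands above follow a single pattern. As a minimal illustrative sketch (not an ONTAP feature), a Python helper can generate them from a table of LIFs and temporary destination ports; the table below mirrors this procedure's examples, and the function name is hypothetical. Adjust the table to match your own nodes and cluster ports.

```python
# Hypothetical helper: build the "network interface migrate" commands
# shown in this step. The MIGRATIONS table mirrors the procedure's
# examples; the destination ports remain cabled to CL1.
MIGRATIONS = [
    # (lif, node, destination_port)
    ("n1_clus2", "n1", "e0a"),
    ("n1_clus3", "n1", "e0d"),
    ("n2_clus2", "n2", "e0a"),
    ("n2_clus3", "n2", "e0d"),
    ("n3_clus2", "n3", "e4a"),
    ("n4_clus2", "n4", "e4a"),
]

def migrate_commands(migrations):
    """Return one ONTAP CLI 'network interface migrate' command per LIF."""
    return [
        "network interface migrate -vserver Cluster "
        f"-lif {lif} -source-node {node} "
        f"-destination-node {node} -destination-port {port}"
        for lif, node, port in migrations
    ]

for cmd in migrate_commands(MIGRATIONS):
    print(cmd)
```

Generating the commands this way keeps the source node and destination node identical for each LIF, as the procedure requires.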
  6. Verify the cluster's health:

    network interface show

    Show example
    cluster::*> network interface show -role cluster
           (network interface show)
    
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e0a     false
                n1_clus3   up/up      10.10.0.3/24       n1            e0d     false
                n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
                n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
                n2_clus2   up/up      10.10.0.6/24       n2            e0a     false
                n2_clus3   up/up      10.10.0.7/24       n2            e0d     false
                n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
                n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
                n3_clus2   up/up      10.10.0.10/24      n3            e4a     false
                n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
                n4_clus2   up/up      10.10.0.12/24      n4            e4a     false
    12 entries were displayed.
  7. Shut down the cluster interconnect ports that are physically connected to switch CL2:

    network port modify

    Show example

    This example shows the specified ports being shut down on all nodes:

    cluster::*> network port modify -node n1 -port e0b -up-admin false
    cluster::*> network port modify -node n1 -port e0c -up-admin false
    cluster::*> network port modify -node n2 -port e0b -up-admin false
    cluster::*> network port modify -node n2 -port e0c -up-admin false
    cluster::*> network port modify -node n3 -port e4e -up-admin false
    cluster::*> network port modify -node n4 -port e4e -up-admin false
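The shutdown commands above, and the matching bring-up commands used later in Step 2, differ only in the `-up-admin` value. A minimal sketch (illustrative only; the helper name is hypothetical, and the port list mirrors this procedure's examples) that emits both forms:

```python
# Node/port pairs cabled to CL2 in this procedure's examples.
CL2_PORTS = [("n1", "e0b"), ("n1", "e0c"),
             ("n2", "e0b"), ("n2", "e0c"),
             ("n3", "e4e"), ("n4", "e4e")]

def port_admin_commands(ports, up):
    """Return 'network port modify' commands setting -up-admin true/false."""
    flag = "true" if up else "false"
    return [f"network port modify -node {node} -port {port} -up-admin {flag}"
            for node, port in ports]

# up=False produces the shutdown commands for this step;
# up=True produces the bring-up commands used after recabling.
for cmd in port_admin_commands(CL2_PORTS, up=False):
    print(cmd)
```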
  8. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source     Destination   Packet
Node   Date                       LIF        LIF           Loss
------ -------------------------- ---------- ------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2   n2_clus1      none
       3/5/2022 19:21:20 -06:00   n1_clus2   n2_clus2      none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2   n1_clus1      none
       3/5/2022 19:21:20 -06:00   n2_clus2   n1_clus2      none
n3
...
...
n4
...
...
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster::*> cluster ping-cluster -node n1
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1		e0a	10.10.0.1
Cluster n1_clus2 n1		e0b	10.10.0.2
Cluster n1_clus3 n1		e0c	10.10.0.3
Cluster n1_clus4 n1		e0d	10.10.0.4
Cluster n2_clus1 n2		e0a	10.10.0.5
Cluster n2_clus2 n2		e0b	10.10.0.6
Cluster n2_clus3 n2		e0c	10.10.0.7
Cluster n2_clus4 n2		e0d	10.10.0.8
Cluster n3_clus1 n3		e4a	10.10.0.9
Cluster n3_clus2 n3		e4e	10.10.0.10
Cluster n4_clus1 n4		e4a	10.10.0.11
Cluster n4_clus2 n4		e4e	10.10.0.12

Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8 10.10.0.9 10.10.0.10 10.10.0.11 10.10.0.12
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 32 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 1500 byte MTU on 32 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.1 to Remote 10.10.0.9
    Local 10.10.0.1 to Remote 10.10.0.10
    Local 10.10.0.1 to Remote 10.10.0.11
    Local 10.10.0.1 to Remote 10.10.0.12
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.9
    Local 10.10.0.2 to Remote 10.10.0.10
    Local 10.10.0.2 to Remote 10.10.0.11
    Local 10.10.0.2 to Remote 10.10.0.12
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.9
    Local 10.10.0.3 to Remote 10.10.0.10
    Local 10.10.0.3 to Remote 10.10.0.11
    Local 10.10.0.3 to Remote 10.10.0.12
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.9
    Local 10.10.0.4 to Remote 10.10.0.10
    Local 10.10.0.4 to Remote 10.10.0.11
    Local 10.10.0.4 to Remote 10.10.0.12

Larger than PMTU communication succeeds on 32 path(s)
RPC status:
8 paths up, 0 paths down (tcp check)
8 paths up, 0 paths down (udp check)
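When collecting `cluster ping-cluster` output mechanically (for example, over SSH), the summary lines are enough to decide whether the check passed. A minimal sketch, assuming only the summary format shown above; the function name is illustrative, not an ONTAP command:

```python
import re

def ping_cluster_ok(output):
    """True if 'cluster ping-cluster' output reports zero failed
    basic-connectivity paths and zero RPC paths down."""
    fails = re.search(r"Basic connectivity fails on (\d+) path", output)
    down = re.findall(r"(\d+) paths down", output)
    if fails is None or not down:
        return False  # summary lines missing: treat as not verified
    return int(fails.group(1)) == 0 and all(int(d) == 0 for d in down)
```

The same check applies to the repeat connectivity verification later in Step 2.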
  1. Shut down ISL ports 1/31 and 1/32 on CL1, the active Nexus 3132Q-V switch:

    shutdown

    Show example

    This example shows the ISL ports 1/31 and 1/32 being shut down on switch CL1:

    (CL1)# configure
    (CL1)(Config)# interface e1/31-32
    (CL1)(config-if-range)# shutdown
    (CL1)(config-if-range)# exit
    (CL1)(Config)# exit
    (CL1)#

Step 2: Configure ports

  1. Remove all the cables attached to the Nexus 3132Q-V switch CL2 and reconnect them to the replacement switch C2 on all nodes.

  2. Remove the ISL cables from ports e1/31 and e1/32 on CL2 and reconnect them to the same ports on the replacement switch C2.

  3. Bring up ISL ports 1/31 and 1/32 on the Nexus 3132Q-V switch CL1:

    (CL1)# configure
    (CL1)(Config)# interface e1/31-32
    (CL1)(config-if-range)# no shutdown
    (CL1)(config-if-range)# exit
    (CL1)(Config)# exit
    (CL1)#
  4. Verify that the ISLs are up on CL1:

    show port-channel summary

    Ports Eth1/31 and Eth1/32 should indicate (P), which means that the ISL ports are up in the port-channel.

    Show example
    CL1# show port-channel summary
    Flags: D - Down         P - Up in port-channel (members)
           I - Individual   H - Hot-standby (LACP only)
           s - Suspended    r - Module-removed
           S - Switched     R - Routed
           U - Up (port-channel)
           M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port-        Type   Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)      Eth    LACP      Eth1/31(P)   Eth1/32(P)
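This (P)-flag check can also be performed on captured `show port-channel summary` text, for example when verifying both switches from a script. A minimal sketch; the function name is illustrative, and the member list should match your ISL ports:

```python
def isl_members_up(summary, members=("Eth1/31", "Eth1/32")):
    """True if every expected ISL member port carries the (P) flag,
    meaning it is up in the port-channel."""
    return all(f"{port}(P)" in summary for port in members)
```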
  5. Verify that the ISLs are up on C2:

    show port-channel summary

    Ports Eth1/31 and Eth1/32 should indicate (P), which means that both ISL ports are up in the port-channel.

    Show example
    C2# show port-channel summary
    Flags: D - Down         P - Up in port-channel (members)
           I - Individual   H - Hot-standby (LACP only)
           s - Suspended    r - Module-removed
           S - Switched     R - Routed
           U - Up (port-channel)
           M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port-        Type   Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)      Eth    LACP      Eth1/31(P)   Eth1/32(P)
  6. On all nodes, bring up all the cluster interconnect ports connected to the Nexus 3132Q-V switch C2:

    network port modify

    Show example
    cluster::*> network port modify -node n1 -port e0b -up-admin true
    cluster::*> network port modify -node n1 -port e0c -up-admin true
    cluster::*> network port modify -node n2 -port e0b -up-admin true
    cluster::*> network port modify -node n2 -port e0c -up-admin true
    cluster::*> network port modify -node n3 -port e4e -up-admin true
    cluster::*> network port modify -node n4 -port e4e -up-admin true
  7. For all nodes, revert all of the migrated cluster interconnect LIFs:

    network interface revert

    Show example
    cluster::*> network interface revert -vserver Cluster -lif n1_clus2
    cluster::*> network interface revert -vserver Cluster -lif n1_clus3
    cluster::*> network interface revert -vserver Cluster -lif n2_clus2
    cluster::*> network interface revert -vserver Cluster -lif n2_clus3
    cluster::*> network interface revert -vserver Cluster -lif n3_clus2
    cluster::*> network interface revert -vserver Cluster -lif n4_clus2
  8. Verify that the cluster interconnect ports are now reverted to their home ports:

    network interface show

    Show example

    This example shows that all the LIFs are successfully reverted because every LIF shows true in the Is Home column. If the Is Home column value is false, the LIF has not been reverted.

    cluster::*> network interface show -role cluster
     (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
                n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
                n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
                n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
                n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
                n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
                n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
                n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
                n3_clus2   up/up      10.10.0.10/24      n3            e4e     true
                n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
                n4_clus2   up/up      10.10.0.12/24      n4            e4e     true
    12 entries were displayed.
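The Is Home check above can be automated against captured `network interface show -role cluster` output. A minimal sketch; the row heuristic assumes the six-column layout shown in this procedure's examples, and the function name is hypothetical:

```python
def all_lifs_home(show_output):
    """True if at least one LIF row is found and every row shows 'true'
    in the Is Home column."""
    homes = []
    for line in show_output.splitlines():
        fields = line.split()
        # LIF rows: <lif> <admin/oper> <addr>/<mask> <node> <port> <is-home>
        if len(fields) == 6 and "/" in fields[1] and fields[1].startswith("up"):
            homes.append(fields[5] == "true")
    return bool(homes) and all(homes)
```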
  9. Verify that the cluster ports are connected:

    network port show

    Show example
    cluster::*> network port show -role cluster
      (network port show)
    Node: n1
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e0a       Cluster      Cluster          up   9000 auto/10000  -        -
    e0b       Cluster      Cluster          up   9000 auto/10000  -        -
    e0c       Cluster      Cluster          up   9000 auto/10000  -        -
    e0d       Cluster      Cluster          up   9000 auto/10000  -        -
    
    Node: n2
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e0a       Cluster      Cluster          up   9000  auto/10000 -        -
    e0b       Cluster      Cluster          up   9000  auto/10000 -        -
    e0c       Cluster      Cluster          up   9000  auto/10000 -        -
    e0d       Cluster      Cluster          up   9000  auto/10000 -        -
    
    Node: n3
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e4a       Cluster      Cluster          up   9000 auto/40000  -        -
    e4e       Cluster      Cluster          up   9000 auto/40000  -        -
    
    Node: n4
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e4a       Cluster      Cluster          up   9000 auto/40000  -        -
    e4e       Cluster      Cluster          up   9000 auto/40000  -        -
    12 entries were displayed.
  10. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source     Destination   Packet
Node   Date                       LIF        LIF           Loss
------ -------------------------- ---------- ------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2   n2_clus1      none
       3/5/2022 19:21:20 -06:00   n1_clus2   n2_clus2      none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2   n1_clus1      none
       3/5/2022 19:21:20 -06:00   n2_clus2   n1_clus2      none
n3
...
...
n4
...
...
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster::*> cluster ping-cluster -node n1
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1		e0a	10.10.0.1
Cluster n1_clus2 n1		e0b	10.10.0.2
Cluster n1_clus3 n1		e0c	10.10.0.3
Cluster n1_clus4 n1		e0d	10.10.0.4
Cluster n2_clus1 n2		e0a	10.10.0.5
Cluster n2_clus2 n2		e0b	10.10.0.6
Cluster n2_clus3 n2		e0c	10.10.0.7
Cluster n2_clus4 n2		e0d	10.10.0.8
Cluster n3_clus1 n3		e4a	10.10.0.9
Cluster n3_clus2 n3		e4e	10.10.0.10
Cluster n4_clus1 n4		e4a	10.10.0.11
Cluster n4_clus2 n4		e4e	10.10.0.12

Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8 10.10.0.9 10.10.0.10 10.10.0.11 10.10.0.12
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 32 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 1500 byte MTU on 32 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.1 to Remote 10.10.0.9
    Local 10.10.0.1 to Remote 10.10.0.10
    Local 10.10.0.1 to Remote 10.10.0.11
    Local 10.10.0.1 to Remote 10.10.0.12
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.9
    Local 10.10.0.2 to Remote 10.10.0.10
    Local 10.10.0.2 to Remote 10.10.0.11
    Local 10.10.0.2 to Remote 10.10.0.12
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.9
    Local 10.10.0.3 to Remote 10.10.0.10
    Local 10.10.0.3 to Remote 10.10.0.11
    Local 10.10.0.3 to Remote 10.10.0.12
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.9
    Local 10.10.0.4 to Remote 10.10.0.10
    Local 10.10.0.4 to Remote 10.10.0.11
    Local 10.10.0.4 to Remote 10.10.0.12

Larger than PMTU communication succeeds on 32 path(s)
RPC status:
8 paths up, 0 paths down (tcp check)
8 paths up, 0 paths down (udp check)

Step 3: Verify the configuration

  1. Display the information about the devices in your configuration:

    • network device-discovery show

    • network port show -role cluster

    • network interface show -role cluster

    • system cluster-switch show

    Show example
    cluster::> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface        Platform
    ----------- ------ ------------------- ---------------- ----------------
    n1         /cdp
                e0a    CL1                Ethernet1/1/1    N3K-C3132Q-V
                e0b    C2                 Ethernet1/1/1    N3K-C3132Q-V
                e0c    C2                 Ethernet1/1/2    N3K-C3132Q-V
                e0d    CL1                Ethernet1/1/2    N3K-C3132Q-V
    n2         /cdp
                e0a    CL1                Ethernet1/1/3    N3K-C3132Q-V
                e0b    C2                 Ethernet1/1/3    N3K-C3132Q-V
                e0c    C2                 Ethernet1/1/4    N3K-C3132Q-V
                e0d    CL1                Ethernet1/1/4    N3K-C3132Q-V
    n3         /cdp
                e4a    CL1                Ethernet1/7      N3K-C3132Q-V
                e4e    C2                 Ethernet1/7      N3K-C3132Q-V
    n4         /cdp
                e4a    CL1                Ethernet1/8      N3K-C3132Q-V
                e4e    C2                 Ethernet1/8      N3K-C3132Q-V
    12 entries were displayed.
    cluster::*> network port show -role cluster
      (network port show)
    Node: n1
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e0a       Cluster      Cluster          up   9000 auto/10000  -        -
    e0b       Cluster      Cluster          up   9000 auto/10000  -        -
    e0c       Cluster      Cluster          up   9000 auto/10000  -        -
    e0d       Cluster      Cluster          up   9000 auto/10000  -        -
    
    Node: n2
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e0a       Cluster      Cluster          up   9000  auto/10000 -        -
    e0b       Cluster      Cluster          up   9000  auto/10000 -        -
    e0c       Cluster      Cluster          up   9000  auto/10000 -        -
    e0d       Cluster      Cluster          up   9000  auto/10000 -        -
    
    Node: n3
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e4a       Cluster      Cluster          up   9000 auto/40000  -        -
    e4e       Cluster      Cluster          up   9000 auto/40000  -        -
    
    Node: n4
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e4a       Cluster      Cluster          up   9000 auto/40000  -        -
    e4e       Cluster      Cluster          up   9000 auto/40000  -        -
    12 entries were displayed.
    cluster::*> network interface show -role cluster
     (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
                n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
                n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
                n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
                n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
                n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
                n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
                n3_clus1   up/up      10.10.0.9/24       n3            e4a     true
                n3_clus2   up/up      10.10.0.10/24      n3            e4e     true
                n4_clus1   up/up      10.10.0.11/24      n4            e4a     true
                n4_clus2   up/up      10.10.0.12/24      n4            e4e     true
    12 entries were displayed.
    cluster::*> system cluster-switch show
    
    Switch                      Type               Address          Model
    --------------------------- ------------------ ---------------- ---------------
    CL1                          cluster-network   10.10.1.101      NX3132V
         Serial Number: FOX000001
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I4(1)
        Version Source: CDP
    
    CL2                          cluster-network   10.10.1.102      NX3132V
         Serial Number: FOX000002
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I4(1)
        Version Source: CDP
    C2                          cluster-network    10.10.1.103      NX3132V
         Serial Number: FOX000003
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I4(1)
        Version Source: CDP
    
    3 entries were displayed.
  2. Remove the replaced Nexus 3132Q-V switch (CL2) if it was not already removed automatically:

    system cluster-switch delete

    cluster::*> system cluster-switch delete -device CL2
  3. Verify that the proper cluster switches are monitored:

    system cluster-switch show

    cluster::> system cluster-switch show
    
    Switch                      Type               Address          Model
    --------------------------- ------------------ ---------------- ---------------
    CL1                          cluster-network    10.10.1.101      NX3132V
         Serial Number: FOX000001
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I4(1)
        Version Source: CDP
    
    C2                          cluster-network     10.10.1.103      NX3132V
         Serial Number: FOX000003
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I4(1)
        Version Source: CDP
    
    2 entries were displayed.
  4. If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=END
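
    For reference, suppression of automatic case creation at the start of a maintenance procedure is typically done with a command of the following form. The 2-hour window shown here is an assumption; use the duration appropriate to your own maintenance plan:

    cluster::*> system node autosupport invoke -node * -type all -message MAINT=2h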