
Migrate a CN1610 cluster switch to a Cisco Nexus 3232C cluster switch


To replace the existing CN1610 cluster switches in a cluster with Cisco Nexus 3232C cluster switches, you must perform a specific sequence of tasks.

Review requirements

Before migration, be sure to review Migration requirements.

Note

The procedure requires the use of both ONTAP commands and Cisco Nexus 3000 Series Switches commands; ONTAP commands are used unless otherwise indicated.

If necessary, refer to the Cisco Ethernet Switch page on the NetApp Support Site and the Hardware Universe for more information.

Migrate the switches

About the examples

The examples in this procedure use four nodes. Two nodes use four 10 GbE cluster interconnect ports: e0a, e0b, e0c, and e0d. The other two nodes use two 40 GbE cluster interconnect ports, e4a and e4e, connected with fiber cables. The Hardware Universe has information about the cluster fiber cables for your platforms.

The examples in this procedure use the following switch and node nomenclature:

  • The nodes are n1, n2, n3, and n4.

  • The command outputs might vary depending on your ONTAP software release.

  • The CN1610 switches to be replaced are CL1 and CL2.

  • The Nexus 3232C switches to replace the CN1610 switches are C1 and C2.

  • n1_clus1 is the first cluster logical interface (LIF) that is connected to cluster switch 1 (CL1 or C1) for node n1.

  • n1_clus2 is the first cluster LIF that is connected to cluster switch 2 (CL2 or C2) for node n1.

  • n1_clus3 is the second LIF that is connected to cluster switch 2 (CL2 or C2) for node n1.

  • n1_clus4 is the second LIF that is connected to cluster switch 1 (CL1 or C1) for node n1.

  • The number of 10 GbE and 40/100 GbE ports is defined in the reference configuration files (RCFs) available on the Cisco® Cluster Network Switch Reference Configuration File Download page.

Step 1: Prepare for migration

  1. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=xh

    x is the duration of the maintenance window in hours.

    Note

    The message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.
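
    For example, the following invocation suppresses case creation for two hours (a sketch; the two-hour window is an assumed value, so substitute the duration of your own maintenance window):

    cluster::*> system node autosupport invoke -node * -type all -message MAINT=2h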

  2. Display information about the devices in your configuration:

    network device-discovery show

    Show example

    The following example displays how many cluster interconnect interfaces have been configured in each node for each cluster interconnect switch:

    cluster::> network device-discovery show
    
           Local  Discovered
    Node   Port   Device       Interface   Platform
    ------ ------ ------------ ----------- ----------
    n1     /cdp
            e0a   CL1          0/1         CN1610
            e0b   CL2          0/1         CN1610
            e0c   CL2          0/2         CN1610
            e0d   CL1          0/2         CN1610
    n2     /cdp
            e0a   CL1          0/3         CN1610
            e0b   CL2          0/3         CN1610
            e0c   CL2          0/4         CN1610
            e0d   CL1          0/4         CN1610
    
    8 entries were displayed.
  3. Determine the administrative or operational status for each cluster interface.

    1. Display the cluster network port attributes:

      network port show -role cluster

      Show example
      cluster::*> network port show -role cluster
             (network port show)
      
      Node: n1
                      Broadcast              Speed (Mbps) Health Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status Health Status
      ----- --------- ---------- ----- ----- ------------ ------ -------------
      e0a   cluster   cluster    up    9000  auto/10000     -
      e0b   cluster   cluster    up    9000  auto/10000     -
      e0c   cluster   cluster    up    9000  auto/10000     -        -
      e0d   cluster   cluster    up    9000  auto/10000     -        -
      Node: n2
                      Broadcast              Speed (Mbps) Health Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status Health Status
      ----- --------- ---------- ----- ----- ------------ ------ -------------
      e0a   cluster   cluster    up    9000  auto/10000     -
      e0b   cluster   cluster    up    9000  auto/10000     -
      e0c   cluster   cluster    up    9000  auto/10000     -
      e0d   cluster   cluster    up    9000  auto/10000     -
      
      8 entries were displayed.
    2. Display information about the logical interfaces:

      network interface show -role cluster

      Show example
      cluster::*> network interface show -role cluster
      (network interface show)
               Logical    Status      Network        Current  Current  Is
      Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
      -------- ---------- ----------- -------------- -------- -------- -----
      Cluster
               n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
               n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
               n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
               n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
               n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
               n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
               n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
               n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
      
       8 entries were displayed.
    3. Display information about the discovered cluster switches:

      system cluster-switch show

      Show example

      The following example displays the cluster switches that are known to the cluster along with their management IP addresses:

      cluster::> system cluster-switch show
      Switch                        Type             Address       Model
      ----------------------------- ---------------- ------------- --------
      CL1                           cluster-network  10.10.1.101   CN1610
           Serial Number: 01234567
            Is Monitored: true
                  Reason:
        Software Version: 1.2.0.7
          Version Source: ISDP
      CL2                           cluster-network  10.10.1.102   CN1610
           Serial Number: 01234568
            Is Monitored: true
                  Reason:
        Software Version: 1.2.0.7
          Version Source: ISDP
      
      2 entries were displayed.
  4. Verify that the appropriate RCF and image are installed on the new 3232C switches as required for your configuration, and make any essential site customizations.

    You should prepare both switches at this time. If you need to upgrade the RCF and image, you must complete the following procedure (a version-check sketch follows the download steps):

    1. See the Cisco Ethernet Switch page on the NetApp Support Site.

    2. Note your switch and the required software versions in the table on that page.

    3. Download the appropriate version of the RCF.

    4. Click CONTINUE on the Description page, accept the license agreement, and then follow the instructions on the Download page to download the RCF.

    5. Download the appropriate version of the image software at Cisco® Cluster and Management Network Switch Reference Configuration File Download.
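
    After the switches boot with the new software, the following NX-OS commands offer one way to confirm what is actually running (a sketch; it assumes the RCF records its version string in the switch banner, as NetApp-supplied RCFs typically do):

    C1# show version | include NXOS
    C1# show banner motd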

  5. Migrate the LIFs associated with the second CN1610 switch that you plan to replace:

    network interface migrate -vserver vserver-name -lif lif-name -source-node source-node-name -destination-node destination-node-name -destination-port destination-port-name

    Show example

    You must migrate each LIF individually as shown in the following example:

    cluster::*> network interface migrate -vserver cluster -lif n1_clus2 -source-node n1
    -destination-node  n1  -destination-port  e0a
    cluster::*> network interface migrate -vserver cluster -lif n1_clus3 -source-node n1
    -destination-node  n1  -destination-port  e0d
    cluster::*> network interface migrate -vserver cluster -lif n2_clus2 -source-node n2
    -destination-node  n2  -destination-port  e0a
    cluster::*> network interface migrate -vserver cluster -lif n2_clus3 -source-node n2
    -destination-node  n2  -destination-port  e0d
  6. Verify the cluster's health:

    network interface show -role cluster

    Show example
    cluster::*> network interface show -role cluster
    (network interface show)
             Logical    Status      Network         Current  Current  Is
    Vserver  Interface  Admin/Oper  Address/Mask    Node     Port     Home
    -------- ---------- ----------- --------------- -------- -------- -----
    Cluster
             n1_clus1   up/up       10.10.0.1/24    n1        e0a     true
             n1_clus2   up/up       10.10.0.2/24    n1        e0a     false
             n1_clus3   up/up       10.10.0.3/24    n1        e0d     false
             n1_clus4   up/up       10.10.0.4/24    n1        e0d     true
             n2_clus1   up/up       10.10.0.5/24    n2        e0a     true
             n2_clus2   up/up       10.10.0.6/24    n2        e0a     false
             n2_clus3   up/up       10.10.0.7/24    n2        e0d     false
             n2_clus4   up/up       10.10.0.8/24    n2        e0d     true
    
    8 entries were displayed.

Step 2: Replace cluster switch CL2 with C2

  1. Shut down the cluster interconnect ports that are physically connected to switch CL2:

    network port modify -node node-name -port port-name -up-admin false

    Show example

    The following example shows the four cluster interconnect ports being shut down for node n1 and node n2:

    cluster::*> network port modify -node n1 -port e0b -up-admin false
    cluster::*> network port modify -node n1 -port e0c -up-admin false
    cluster::*> network port modify -node n2 -port e0b -up-admin false
    cluster::*> network port modify -node n2 -port e0c -up-admin false
  2. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start

network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1       e0a    10.10.0.1
Cluster n1_clus2 n1       e0b    10.10.0.2
Cluster n1_clus3 n1       e0c    10.10.0.3
Cluster n1_clus4 n1       e0d    10.10.0.4
Cluster n2_clus1 n2       e0a    10.10.0.5
Cluster n2_clus2 n2       e0b    10.10.0.6
Cluster n2_clus3 n2       e0c    10.10.0.7
Cluster n2_clus4 n2       e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8

Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
  3. Shut down the ISL ports 13 through 16 on the active CN1610 switch CL1 using the appropriate command.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows ISL ports 13 through 16 being shut down on the CN1610 switch CL1:

    (CL1)# configure
    (CL1)(Config)# interface 0/13-0/16
    (CL1)(Interface 0/13-0/16)# shutdown
    (CL1)(Interface 0/13-0/16)# exit
    (CL1)(Config)# exit
    (CL1)#
  4. Build a temporary ISL between CL1 and C2:

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows a temporary ISL being built between CL1 (ports 13-16) and C2 (ports e1/24/1-4) using the Cisco switchport mode trunk command:

    C2# configure
    C2(config)# interface port-channel 2
    C2(config-if)# switchport mode trunk
    C2(config-if)# spanning-tree port type network
    C2(config-if)# mtu 9216
    C2(config-if)# interface breakout module 1 port 24 map 10g-4x
    C2(config)# interface e1/24/1-4
    C2(config-if-range)# switchport mode trunk
    C2(config-if-range)# mtu 9216
    C2(config-if-range)# channel-group 2 mode active
    C2(config-if-range)# exit
    C2(config-if)# exit
  5. Remove the cables that are attached to the CN1610 switch CL2 on all the nodes.

    Using supported cabling, you must reconnect the disconnected ports on all the nodes to the Nexus 3232C switch C2.

  6. Remove four ISL cables from ports 13 to 16 on the CN1610 switch CL1.

    You must attach the appropriate Cisco QSFP28 to SFP+ breakout cables connecting port 1/24 on the new Cisco 3232C switch C2 to ports 13 to 16 on the existing CN1610 switch CL1.

    Note

    When reconnecting any cables to the new Cisco 3232C switch, the cables used must be either optical fiber or Cisco twinax cables.

  7. Make the ISL dynamic by disabling static mode on the ISL interface 3/1 on the active CN1610 switch CL1.

    This configuration matches the ISL configuration on the 3232C switch C2 when the ISLs are brought up on both switches.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows the ISL interface 3/1 being configured to make the ISL dynamic:

    (CL1)# configure
    (CL1)(Config)# interface 3/1
    (CL1)(Interface 3/1)# no port-channel static
    (CL1)(Interface 3/1)# exit
    (CL1)(Config)# exit
    (CL1)#
  8. Bring up ISL ports 13 through 16 on the active CN1610 switch CL1.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows ISL ports 13 through 16 and port-channel interface 3/1 being brought up:

    (CL1)# configure
    (CL1)(Config)# interface 0/13-0/16,3/1
    (CL1)(Interface 0/13-0/16,3/1)# no shutdown
    (CL1)(Interface 0/13-0/16,3/1)# exit
    (CL1)(Config)# exit
    (CL1)#
  9. Verify that the ISLs are up on the CN1610 switch CL1.

    The "Link State" should be Up, "Type" should be Dynamic, and the "Port Active" column should be True for ports 0/13 to 0/16.

    Show example

    The following example shows the ISLs being verified as up on the CN1610 switch CL1:

    (CL1)# show port-channel 3/1
    Local Interface................................ 3/1
    Channel Name................................... ISL-LAG
    Link State..................................... Up
    Admin Mode..................................... Enabled
    Type........................................... Dynamic
    Load Balance Option............................ 7
    (Enhanced hashing mode)
    
    Mbr    Device/       Port        Port
    Ports  Timeout       Speed       Active
    ------ ------------- ----------  -------
    0/13   actor/long    10 Gb Full  True
           partner/long
    0/14   actor/long    10 Gb Full  True
           partner/long
    0/15   actor/long    10 Gb Full  True
           partner/long
    0/16   actor/long    10 Gb Full  True
           partner/long
  10. Verify that the ISLs are up on the 3232C switch C2:

    show port-channel summary

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Ports Eth1/24/1 through Eth1/24/4 should indicate (P), meaning that all four ISL ports are up in the port channel. Eth1/31 and Eth1/32 should indicate (D) as they are not connected.

    Show example

    The following example shows the ISLs being verified as up on the 3232C switch C2:

    C2# show port-channel summary
    
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    ------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    ------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/31(D)   Eth1/32(D)
    2     Po2(SU)     Eth      LACP      Eth1/24/1(P) Eth1/24/2(P) Eth1/24/3(P)
                                         Eth1/24/4(P)
  11. Bring up all of the cluster interconnect ports that are connected to the 3232C switch C2 on all of the nodes:

    network port modify -node node-name -port port-name -up-admin true

    Show example

    The following example shows how to bring up the cluster interconnect ports connected to the 3232C switch C2:

    cluster::*> network port modify -node n1 -port e0b -up-admin true
    cluster::*> network port modify -node n1 -port e0c -up-admin true
    cluster::*> network port modify -node n2 -port e0b -up-admin true
    cluster::*> network port modify -node n2 -port e0c -up-admin true
  12. Revert all of the migrated cluster interconnect LIFs that are connected to C2 on all of the nodes:

    network interface revert -vserver cluster -lif lif-name

    Show example
    cluster::*> network interface revert -vserver cluster -lif n1_clus2
    cluster::*> network interface revert -vserver cluster -lif n1_clus3
    cluster::*> network interface revert -vserver cluster -lif n2_clus2
    cluster::*> network interface revert -vserver cluster -lif n2_clus3
  13. Verify that all of the cluster interconnect ports are reverted to their home ports:

    network interface show -role cluster

    Show example

    The following example shows the clus2 LIFs reverted to their home ports; a LIF is successfully reverted when the port in the "Current Port" column shows true in the "Is Home" column. If the "Is Home" value is false, the LIF has not been reverted.

    cluster::*> network interface show -role cluster
    (network interface show)
             Logical    Status      Network        Current  Current  Is
    Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
    -------- ---------- ----------- -------------- -------- -------- -----
    Cluster
             n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
             n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
             n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
             n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
             n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
             n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
             n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
             n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
    
    8 entries were displayed.
  14. Verify that all of the cluster ports are connected:

    network port show -role cluster

    Show example

    The following example shows the output verifying all of the cluster interconnects are up:

    cluster::*> network port show -role cluster
           (network port show)
    
    Node: n1
                    Broadcast               Speed (Mbps) Health   Ignore
    Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
    ----- --------- ----------- ----- ----- ------------ -------- -------------
    e0a   cluster   cluster     up    9000  auto/10000     -
    e0b   cluster   cluster     up    9000  auto/10000     -
    e0c   cluster   cluster     up    9000  auto/10000     -        -
    e0d   cluster   cluster     up    9000  auto/10000     -        -
    Node: n2
    
                    Broadcast               Speed (Mbps) Health   Ignore
    Port  IPspace   Domain      Link  MTU   Admin/Oper   Status   Health Status
    ----- --------- ----------- ----- ----- ------------ -------- -------------
    e0a   cluster   cluster     up    9000  auto/10000     -
    e0b   cluster   cluster     up    9000  auto/10000     -
    e0c   cluster   cluster     up    9000  auto/10000     -
    e0d   cluster   cluster     up    9000  auto/10000     -
    
    8 entries were displayed.
  15. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start

network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1       e0a    10.10.0.1
Cluster n1_clus2 n1       e0b    10.10.0.2
Cluster n1_clus3 n1       e0c    10.10.0.3
Cluster n1_clus4 n1       e0d    10.10.0.4
Cluster n2_clus1 n2       e0a    10.10.0.5
Cluster n2_clus2 n2       e0b    10.10.0.6
Cluster n2_clus3 n2       e0c    10.10.0.7
Cluster n2_clus4 n2       e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8

Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
  16. Migrate the LIFs that are associated with the first CN1610 switch CL1:

    network interface migrate -vserver cluster -lif lif-name -source-node node-name -destination-node node-name -destination-port port-name

    Show example

    You must migrate each cluster LIF individually to the appropriate cluster ports hosted on cluster switch C2 as shown in the following example:

    cluster::*> network interface migrate -vserver cluster -lif n1_clus1 -source-node n1
    -destination-node n1 -destination-port e0b
    cluster::*> network interface migrate -vserver cluster -lif n1_clus4 -source-node n1
    -destination-node n1 -destination-port e0c
    cluster::*> network interface migrate -vserver cluster -lif n2_clus1 -source-node n2
    -destination-node n2 -destination-port e0b
    cluster::*> network interface migrate -vserver cluster -lif n2_clus4 -source-node n2
    -destination-node n2 -destination-port e0c

Step 3: Replace cluster switch CL1 with C1

  1. Verify the cluster's status:

    network interface show -role cluster

    Show example

    The following example shows that the required cluster LIFs have been migrated to the appropriate cluster ports hosted on cluster switch C2:

    cluster::*> network interface show -role cluster
    (network interface show)
             Logical    Status      Network        Current  Current  Is
    Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
    -------- ---------- ----------- -------------- -------- -------- -----
    Cluster
             n1_clus1   up/up       10.10.0.1/24   n1       e0b      false
             n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
             n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
             n1_clus4   up/up       10.10.0.4/24   n1       e0c      false
             n2_clus1   up/up       10.10.0.5/24   n2       e0b      false
             n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
             n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
             n2_clus4   up/up       10.10.0.8/24   n2       e0c      false
    
    8 entries were displayed.
  2. Shut down the node ports that are connected to CL1 on all of the nodes:

    network port modify -node node-name -port port-name -up-admin false

    Show example

    The following example shows specific ports being shut down on nodes n1 and n2:

    cluster::*> network port modify -node n1 -port e0a -up-admin false
    cluster::*> network port modify -node n1 -port e0d -up-admin false
    cluster::*> network port modify -node n2 -port e0a -up-admin false
    cluster::*> network port modify -node n2 -port e0d -up-admin false
  3. Shut down the ISL ports 24, 31, and 32 on the active 3232C switch C2.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows ISLs 24, 31, and 32 being shut down on the active 3232C switch C2:

    C2# configure
    C2(config)# interface ethernet 1/24/1-4
    C2(config-if-range)# shutdown
    C2(config-if-range)# exit
    C2(config)# interface ethernet 1/31-32
    C2(config-if-range)# shutdown
    C2(config-if-range)# exit
    C2(config)# exit
    C2#
  4. Remove the cables that are attached to the CN1610 switch CL1 on all of the nodes.

    Using the appropriate cabling, you must reconnect the disconnected ports on all the nodes to the Nexus 3232C switch C1.

  5. Remove the QSFP28 cables from Nexus 3232C C2 port e1/24.

    You must connect ports e1/31 and e1/32 on C1 to ports e1/31 and e1/32 on C2 using supported Cisco QSFP28 optical fiber or direct-attach cables.
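
    After recabling, the following NX-OS commands provide an optional spot check that the new ISL cables are detected before you reconfigure the ports (a sketch; show interface transceiver is a standard NX-OS command, and its output depends on the installed cable type):

    C2# show interface ethernet 1/31 transceiver
    C2# show interface ethernet 1/32 transceiver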

  6. Restore the configuration on port 24 and remove the temporary port-channel 2 on C2:

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows port 1/24 being restored to its default node-port configuration, the temporary port-channel 2 being removed, and the running-configuration file then being copied to the startup-configuration file:

    C2# configure
    C2(config)# no interface breakout module 1 port 24 map 10g-4x
    C2(config)# no interface port-channel 2
    C2(config-if)# interface e1/24
    C2(config-if)# description 100GbE/40GbE Node Port
    C2(config-if)# spanning-tree port type edge
    Edge port type (portfast) should only be enabled on ports connected to a single
    host. Connecting hubs, concentrators, switches, bridges, etc...  to this
    interface when edge port type (portfast) is enabled, can cause temporary bridging loops.
    Use with CAUTION
    
    Edge Port Type (Portfast) has been configured on Ethernet 1/24 but will only
    have effect when the interface is in a non-trunking mode.
    
    C2(config-if)# spanning-tree bpduguard enable
    C2(config-if)# mtu 9216
    C2(config-if)# exit
    C2(config)# exit
    C2# copy running-config startup-config
    [] 100%
    Copy Complete.
  7. Bring up ISL ports 31 and 32 on C2, the active 3232C switch.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows ISLs 31 and 32 being brought up on the 3232C switch C2:

    C2# configure
    C2(config)# interface ethernet 1/31-32
    C2(config-if-range)# no shutdown
    C2(config-if-range)# exit
    C2(config)# exit
    C2# copy running-config startup-config
    [] 100%
    Copy Complete.
  8. Verify that the ISL connections are up on the 3232C switch C2.

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    Show example

    The following example shows the ISL connections being verified. Ports Eth1/31 and Eth1/32 indicate (P), meaning that both the ISL ports are up in the port-channel:

    C1# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    ------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    -----------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
    
    C2# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    ------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    ------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
  9. Bring up all of the cluster interconnect ports connected to the new 3232C switch C1 on all of the nodes:

    network port modify -node node-name -port port-name -up-admin true

    Show example

    The following example shows all of the cluster interconnect ports connected to the new 3232C switch C1 being brought up:

    cluster::*> network port modify -node n1 -port e0a -up-admin true
    cluster::*> network port modify -node n1 -port e0d -up-admin true
    cluster::*> network port modify -node n2 -port e0a -up-admin true
    cluster::*> network port modify -node n2 -port e0d -up-admin true
  10. Verify the status of the cluster node ports:

    network port show -role cluster

    Show example

    The following example shows output that verifies that the cluster interconnect ports on nodes n1 and n2 on the new 3232C switch C1 are up:

    cluster::*> network port show -role cluster
           (network port show)
    
    Node: n1
                    Broadcast              Speed (Mbps) Health   Ignore
    Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
    ----- --------- ---------- ----- ----- ------------ -------- -------------
    e0a   cluster   cluster    up    9000  auto/10000     -
    e0b   cluster   cluster    up    9000  auto/10000     -
    e0c   cluster   cluster    up    9000  auto/10000     -        -
    e0d   cluster   cluster    up    9000  auto/10000     -        -
    
    Node: n2
                    Broadcast              Speed (Mbps) Health   Ignore
    Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
    ----- --------- ---------- ----- ----- ------------ -------- -------------
    e0a   cluster   cluster    up    9000  auto/10000     -
    e0b   cluster   cluster    up    9000  auto/10000     -
    e0c   cluster   cluster    up    9000  auto/10000     -
    e0d   cluster   cluster    up    9000  auto/10000     -
    
    8 entries were displayed.

Step 4: Complete the procedure

  1. Revert all of the migrated cluster interconnect LIFs that were originally connected to C1 on all of the nodes:

    network interface revert -vserver cluster -lif lif-name

    Show example

    You must revert each LIF individually as shown in the following example:

    cluster::*> network interface revert -vserver cluster -lif n1_clus1
    cluster::*> network interface revert -vserver cluster -lif n1_clus4
    cluster::*> network interface revert -vserver cluster -lif n2_clus1
    cluster::*> network interface revert -vserver cluster -lif n2_clus4
  2. Verify that the interfaces are now home:

    network interface show -role cluster

    Show example

    The following example shows that the status of the cluster interconnect interfaces is up and "Is Home" is true for nodes n1 and n2:

    cluster::*> network interface show -role cluster
    (network interface show)
             Logical    Status      Network        Current  Current  Is
    Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
    -------- ---------- ----------- -------------- -------- -------- -----
    Cluster
             n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
             n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
             n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
             n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
             n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
             n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
             n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
             n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
    
    8 entries were displayed.
  3. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start

network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1       e0a    10.10.0.1
Cluster n1_clus2 n1       e0b    10.10.0.2
Cluster n1_clus3 n1       e0c    10.10.0.3
Cluster n1_clus4 n1       e0d    10.10.0.4
Cluster n2_clus1 n2       e0a    10.10.0.5
Cluster n2_clus2 n2       e0b    10.10.0.6
Cluster n2_clus3 n2       e0c    10.10.0.7
Cluster n2_clus4 n2       e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8

Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
  4. Expand the cluster by adding nodes to the Nexus 3232C cluster switches.
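
    After the new nodes have joined, the cluster show command offers a quick health check (a sketch; it simply confirms that each node, including n3 and n4, reports as healthy and eligible):

    cluster::> cluster show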

  5. Display information about the devices in your configuration:

    • network device-discovery show

    • network port show -role cluster

    • network interface show -role cluster

    • system cluster-switch show

      Show example

      The following examples show nodes n3 and n4 with 40 GbE cluster ports connected to ports e1/7 and e1/8, respectively, on both the Nexus 3232C cluster switches. Both nodes are joined to the cluster. The 40 GbE cluster interconnect ports used are e4a and e4e.

      cluster::*> network device-discovery show
      
             Local  Discovered
      Node   Port   Device       Interface       Platform
      ------ ------ ------------ --------------- -------------
      n1     /cdp
              e0a   C1           Ethernet1/1/1   N3K-C3232C
              e0b   C2           Ethernet1/1/1   N3K-C3232C
              e0c   C2           Ethernet1/1/2   N3K-C3232C
              e0d   C1           Ethernet1/1/2   N3K-C3232C
      n2     /cdp
              e0a   C1           Ethernet1/1/3   N3K-C3232C
              e0b   C2           Ethernet1/1/3   N3K-C3232C
              e0c   C2           Ethernet1/1/4   N3K-C3232C
              e0d   C1           Ethernet1/1/4   N3K-C3232C
      
      n3     /cdp
              e4a   C1           Ethernet1/7     N3K-C3232C
              e4e   C2           Ethernet1/7     N3K-C3232C
      
      n4     /cdp
              e4a   C1           Ethernet1/8     N3K-C3232C
              e4e   C2           Ethernet1/8     N3K-C3232C
      
      12 entries were displayed.
      cluster::*> network port show -role cluster
      (network port show)
      
      Node: n1
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e0a   cluster   cluster    up    9000  auto/10000     -
      e0b   cluster   cluster    up    9000  auto/10000     -
      e0c   cluster   cluster    up    9000  auto/10000     -        -
      e0d   cluster   cluster    up    9000  auto/10000     -        -
      
      Node: n2
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e0a   cluster   cluster    up    9000  auto/10000     -
      e0b   cluster   cluster    up    9000  auto/10000     -
      e0c   cluster   cluster    up    9000  auto/10000     -
      e0d   cluster   cluster    up    9000  auto/10000     -        -
      
      Node: n3
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e4a   cluster   cluster    up    9000  auto/40000     -
      e4e   cluster   cluster    up    9000  auto/40000     -        -
      
      Node: n4
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e4a   cluster   cluster    up    9000  auto/40000     -
      e4e   cluster   cluster    up    9000  auto/40000     -
      
      12 entries were displayed.
      
      cluster::*> network interface show -role cluster
      (network interface show)
               Logical    Status      Network        Current  Current  Is
      Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
      -------- ---------- ----------- -------------- -------- -------- -----
      Cluster
               n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
               n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
               n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
               n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
               n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
               n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
               n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
               n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
               n3_clus1   up/up       10.10.0.9/24   n3       e4a      true
               n3_clus2   up/up       10.10.0.10/24  n3       e4e      true
         n4_clus1   up/up       10.10.0.11/24  n4       e4a      true
         n4_clus2   up/up       10.10.0.12/24  n4       e4e      true
      
      12 entries were displayed.
      
      cluster::> system cluster-switch show
      
      Switch                      Type             Address       Model
      --------------------------- ---------------- ------------- ---------
      C1                          cluster-network  10.10.1.103   NX3232C
      
           Serial Number: FOX000001
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.0(3)I6(1)
          Version Source: CDP
      
      C2                          cluster-network  10.10.1.104   NX3232C
      
           Serial Number: FOX000002
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.0(3)I6(1)
          Version Source: CDP
      CL1                         cluster-network  10.10.1.101   CN1610
      
           Serial Number: 01234567
            Is Monitored: true
                  Reason:
        Software Version: 1.2.0.7
          Version Source: ISDP
      CL2                         cluster-network  10.10.1.102    CN1610
      
           Serial Number: 01234568
            Is Monitored: true
                  Reason:
        Software Version: 1.2.0.7
          Version Source: ISDP

      4 entries were displayed.
  6. Remove the replaced CN1610 switches if they are not automatically removed:

    system cluster-switch delete -device switch-name

    Show example

    You must delete both devices individually as shown in the following example:

    cluster::> system cluster-switch delete -device CL1
    cluster::> system cluster-switch delete -device CL2
  7. Verify that the proper cluster switches are monitored:

    system cluster-switch show

    Show example

    The following example shows that cluster switches C1 and C2 are being monitored:

    cluster::> system cluster-switch show
    
    Switch                      Type               Address          Model
    --------------------------- ------------------ ---------------- ---------------
    C1                          cluster-network    10.10.1.103      NX3232C
    
         Serial Number: FOX000001
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I6(1)
        Version Source: CDP
    
    C2                          cluster-network    10.10.1.104      NX3232C
         Serial Number: FOX000002
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I6(1)
        Version Source: CDP
    
    2 entries were displayed.
  8. If you suppressed automatic case creation, reenable it by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=END
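
    You can optionally confirm that AutoSupport is enabled again on all nodes (a sketch; the state field name is an assumption based on the standard AutoSupport show output):

    cluster::> system node autosupport show -fields state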