
Complete your migration from CN1610 switches to Nexus 3132Q-V switches


Complete the following steps to finalize the migration from CN1610 switches to Nexus 3132Q-V switches.

Steps
  1. Verify that the ISL connections are up on the 3132Q-V switch C2:

    show port-channel summary

    Ports Eth1/31 and Eth1/32 should indicate (P), meaning that both the ISL ports are up in the port-channel.

    C2# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    ------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    ------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)
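As a quick sanity check outside the procedure, the member flags in the summary line can be counted programmatically. This is a minimal sketch, assuming the NX-OS `show port-channel summary` layout shown above; it is not an ONTAP or NX-OS command.

```python
# Sketch (assumption: NX-OS "show port-channel summary" layout shown above).
# Both ISL members must carry the (P) flag, meaning up in the port-channel.
summary_line = "1     Po1(SU)     Eth      LACP      Eth1/31(P)   Eth1/32(P)"

members_up = summary_line.count("(P)")
print(members_up)  # 2 when both ISL ports are bundled
```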
  2. Bring up all of the cluster interconnect ports connected to the new 3132Q-V switch C1 on all of the nodes:

    network port modify


    The following example shows how to bring up all of the cluster interconnect ports connected to the new 3132Q-V switch C1:

    cluster::*> network port modify -node n1 -port e0a -up-admin true
    cluster::*> network port modify -node n1 -port e0d -up-admin true
    cluster::*> network port modify -node n2 -port e0a -up-admin true
    cluster::*> network port modify -node n2 -port e0d -up-admin true
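On larger clusters, the per-port commands in step 2 can be generated rather than typed by hand. The following sketch (node and port names are taken from the example above and are assumptions for your environment) only builds the command strings; they would still be run at the ONTAP CLI.

```python
# Sketch: generate the "network port modify" commands from step 2 for a
# list of nodes and the cluster ports cabled to switch C1 (names from the
# example above; adjust for your cluster).
nodes = ["n1", "n2"]
ports_on_c1 = ["e0a", "e0d"]

commands = [
    f"network port modify -node {node} -port {port} -up-admin true"
    for node in nodes
    for port in ports_on_c1
]
for cmd in commands:
    print(cmd)
```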
  3. Verify the status of the cluster node ports:

    network port show


    The following example verifies that all of the cluster interconnect ports on n1 and n2 on the new 3132Q-V switch C1 are up:

    cluster::*> network port show -role Cluster
           (network port show)
    
    Node: n1
                    Broadcast              Speed (Mbps) Health   Ignore
    Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
    ----- --------- ---------- ----- ----- ------------ -------- -------------
    e0a   cluster   cluster    up    9000  auto/10000     -        -
    e0b   cluster   cluster    up    9000  auto/10000     -        -
    e0c   cluster   cluster    up    9000  auto/10000     -        -
    e0d   cluster   cluster    up    9000  auto/10000     -        -
    
    Node: n2
                    Broadcast              Speed (Mbps) Health   Ignore
    Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
    ----- --------- ---------- ----- ----- ------------ -------- -------------
    e0a   cluster   cluster    up    9000  auto/10000     -        -
    e0b   cluster   cluster    up    9000  auto/10000     -        -
    e0c   cluster   cluster    up    9000  auto/10000     -        -
    e0d   cluster   cluster    up    9000  auto/10000     -        -
    
    8 entries were displayed.
  4. Revert all of the migrated cluster interconnect LIFs that were originally connected to C1 on all of the nodes:

    network interface revert


    The following example shows how to revert the migrated cluster LIFs to their home ports:

    cluster::*> network interface revert -vserver Cluster -lif n1_clus1
    cluster::*> network interface revert -vserver Cluster -lif n1_clus4
    cluster::*> network interface revert -vserver Cluster -lif n2_clus1
    cluster::*> network interface revert -vserver Cluster -lif n2_clus4
  5. Verify that the interface is now home:

    network interface show


    The following example shows that the status of the cluster interconnect interfaces is up and "Is Home" is true for n1 and n2:

    cluster::*> network interface show -role Cluster
           (network interface show)
    
             Logical    Status      Network        Current  Current  Is
    Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
    -------- ---------- ----------- -------------- -------- -------- -----
    Cluster
             n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
             n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
             n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
             n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
             n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
             n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
             n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
             n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
    
    8 entries were displayed.
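The "Is Home" column above can also be checked programmatically when there are many LIFs. This is a sketch, assuming the fixed-width row layout of the example output (the rows below are copied from it); it simply confirms the last field of every row is "true".

```python
# Sketch: confirm every cluster LIF row reports "true" in the Is Home
# column. Assumes the whitespace-separated row layout shown above.
rows = """\
n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
""".splitlines()

all_home = all(row.split()[-1] == "true" for row in rows)
print(all_home)  # True when every LIF is on its home port
```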
  6. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source      Destination   Packet
Node   Date                       LIF         LIF           Loss
------ -------------------------- ---------- -------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2   n1_clus1       none
       3/5/2022 19:21:20 -06:00   n1_clus2   n2_clus2       none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2   n1_clus1       none
       3/5/2022 19:21:20 -06:00   n2_clus2   n1_clus2       none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster::*> cluster ping-cluster -node n1
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1       e0a    10.10.0.1
Cluster n1_clus2 n1       e0b    10.10.0.2
Cluster n1_clus3 n1       e0c    10.10.0.3
Cluster n1_clus4 n1       e0d    10.10.0.4
Cluster n2_clus1 n2       e0a    10.10.0.5
Cluster n2_clus2 n2       e0b    10.10.0.6
Cluster n2_clus3 n2       e0c    10.10.0.7
Cluster n2_clus4 n2       e0d    10.10.0.8

Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 1500 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8

Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
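When scripting this verification, the `cluster ping-cluster` summary lines can be scanned for failed paths. The following is a minimal sketch, assuming the summary wording shown in the example output above (the sample text below is abridged from it).

```python
import re

# Sketch: scan "cluster ping-cluster" summary lines for failed paths.
# Assumes the summary wording shown in the example output above.
summary = """\
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
Larger than PMTU communication succeeds on 16 path(s)
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
"""

fails = int(re.search(r"fails on (\d+) path", summary).group(1))
paths_down = [int(n) for n in re.findall(r"(\d+) paths down", summary)]
ok = fails == 0 and all(n == 0 for n in paths_down)
print(ok)  # True when no paths failed
```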
  7. Expand the cluster by adding nodes to the Nexus 3132Q-V cluster switches.

  8. Display the information about the devices in your configuration:

    • network device-discovery show

    • network port show -role cluster

    • network interface show -role cluster

    • system cluster-switch show


      The following examples show nodes n3 and n4 with 40 GbE cluster ports connected to ports e1/7 and e1/8, respectively, on both Nexus 3132Q-V cluster switches. Both nodes have joined the cluster, and the 40 GbE cluster interconnect ports used are e4a and e4e.

      cluster::*> network device-discovery show
      
             Local  Discovered
      Node   Port   Device       Interface       Platform
      ------ ------ ------------ --------------- -------------
      n1     /cdp
              e0a   C1           Ethernet1/1/1   N3K-C3132Q-V
              e0b   C2           Ethernet1/1/1   N3K-C3132Q-V
              e0c   C2           Ethernet1/1/2   N3K-C3132Q-V
              e0d   C1           Ethernet1/1/2   N3K-C3132Q-V
      n2     /cdp
              e0a   C1           Ethernet1/1/3   N3K-C3132Q-V
              e0b   C2           Ethernet1/1/3   N3K-C3132Q-V
              e0c   C2           Ethernet1/1/4   N3K-C3132Q-V
              e0d   C1           Ethernet1/1/4   N3K-C3132Q-V
      n3     /cdp
              e4a   C1           Ethernet1/7     N3K-C3132Q-V
              e4e   C2           Ethernet1/7     N3K-C3132Q-V
      n4     /cdp
              e4a   C1           Ethernet1/8     N3K-C3132Q-V
              e4e   C2           Ethernet1/8     N3K-C3132Q-V
      
      12 entries were displayed.
      cluster::*> network port show -role cluster
             (network port show)
      
      Node: n1
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e0a   cluster   cluster    up    9000  auto/10000     -        -
      e0b   cluster   cluster    up    9000  auto/10000     -        -
      e0c   cluster   cluster    up    9000  auto/10000     -        -
      e0d   cluster   cluster    up    9000  auto/10000     -        -
      
      Node: n2
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e0a   cluster   cluster    up    9000  auto/10000     -        -
      e0b   cluster   cluster    up    9000  auto/10000     -        -
      e0c   cluster   cluster    up    9000  auto/10000     -        -
      e0d   cluster   cluster    up    9000  auto/10000     -        -
      
      Node: n3
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e4a   cluster   cluster    up    9000  auto/40000     -        -
      e4e   cluster   cluster    up    9000  auto/40000     -        -
      
      Node: n4
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e4a   cluster   cluster    up    9000  auto/40000     -        -
      e4e   cluster   cluster    up    9000  auto/40000     -        -
      
      12 entries were displayed.
      cluster::*> network interface show -role Cluster
             (network interface show)
      
               Logical    Status      Network        Current  Current  Is
      Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
      -------- ---------- ----------- -------------- -------- -------- -----
      Cluster
               n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
               n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
               n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
               n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
               n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
               n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
               n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
               n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
               n3_clus1   up/up       10.10.0.9/24   n3       e4a      true
               n3_clus2   up/up       10.10.0.10/24  n3       e4e      true
               n4_clus1   up/up       10.10.0.11/24  n4       e4a      true
               n4_clus2   up/up       10.10.0.12/24  n4       e4e      true
      
      12 entries were displayed.
      cluster::> system cluster-switch show
      
      Switch                      Type             Address       Model
      --------------------------- ---------------- ------------- ---------
      C1                          cluster-network  10.10.1.103   NX3132V
           Serial Number: FOX000001
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.0(3)I4(1)
          Version Source: CDP
      
      C2                          cluster-network  10.10.1.104   NX3132V
           Serial Number: FOX000002
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.0(3)I4(1)
          Version Source: CDP
      
      CL1                         cluster-network  10.10.1.101   CN1610
           Serial Number: 01234567
            Is Monitored: true
                  Reason:
        Software Version: 1.2.0.7
          Version Source: ISDP
      
      CL2                         cluster-network  10.10.1.102    CN1610
           Serial Number: 01234568
            Is Monitored: true
                  Reason:
        Software Version: 1.2.0.7
          Version Source: ISDP
      
      4 entries were displayed.
  9. Remove the replaced CN1610 switches if they are not automatically removed:

    system cluster-switch delete


    The following example shows how to remove the CN1610 switches:

    cluster::> system cluster-switch delete -device CL1
    cluster::> system cluster-switch delete -device CL2
  10. Configure the cluster LIFs clus1 and clus4 to auto-revert on each node and confirm:

    cluster::*> network interface modify -vserver node1 -lif clus1 -auto-revert true
    cluster::*> network interface modify -vserver node1 -lif clus4 -auto-revert true
    cluster::*> network interface modify -vserver node2 -lif clus1 -auto-revert true
    cluster::*> network interface modify -vserver node2 -lif clus4 -auto-revert true
  11. Verify that the proper cluster switches are monitored:

    system cluster-switch show

    cluster::> system cluster-switch show
    
    Switch                      Type               Address          Model
    --------------------------- ------------------ ---------------- ---------------
    C1                          cluster-network    10.10.1.103      NX3132V
         Serial Number: FOX000001
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I4(1)
        Version Source: CDP
    
    C2                          cluster-network    10.10.1.104      NX3132V
         Serial Number: FOX000002
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I4(1)
        Version Source: CDP
    
    2 entries were displayed.
  12. If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=END