Complete your migration from CN1610 switches to 3232C switches

Complete the following steps to finalize the migration from CN1610 switches to Nexus 3232C switches.

Steps
  1. Revert all of the migrated cluster interconnect LIFs that were originally connected to C1 on all of the nodes:

    network interface revert -vserver cluster -lif lif-name

    You must migrate each LIF individually as shown in the following example:

    cluster::*> network interface revert -vserver cluster -lif n1_clus1
    cluster::*> network interface revert -vserver cluster -lif n1_clus4
    cluster::*> network interface revert -vserver cluster -lif n2_clus1
    cluster::*> network interface revert -vserver cluster -lif n2_clus4
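
    Optionally, before the fuller verification in the next step, you can filter on the is-home field to list any cluster LIFs that are still not on their home ports; the command should return no entries once all of the reverts have completed:

    cluster::*> network interface show -role cluster -is-home false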
  2. Verify that the interface is now home:

    network interface show -role cluster

    The following example shows that the cluster interconnect interfaces are up and "Is Home" is true for nodes n1 and n2:

    cluster::*> network interface show -role cluster
    (network interface show)
             Logical    Status      Network        Current  Current  Is
    Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
    -------- ---------- ----------- -------------- -------- -------- -----
    Cluster
             n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
             n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
             n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
             n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
             n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
             n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
             n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
             n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
    
    8 entries were displayed.
  3. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start
network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2         n2_clus1         none
       3/5/2022 19:21:20 -06:00   n1_clus2         n2_clus2         none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2         n1_clus1         none
       3/5/2022 19:21:20 -06:00   n2_clus2         n1_clus2         none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1       e0a    10.10.0.1
Cluster n1_clus2 n1       e0b    10.10.0.2
Cluster n1_clus3 n1       e0c    10.10.0.3
Cluster n1_clus4 n1       e0d    10.10.0.4
Cluster n2_clus1 n2       e0a    10.10.0.5
Cluster n2_clus2 n2       e0b    10.10.0.6
Cluster n2_clus3 n2       e0c    10.10.0.7
Cluster n2_clus4 n2       e0d    10.10.0.8
Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8

Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
3 paths up, 0 paths down (udp check)
  4. Expand the cluster by adding nodes to the Nexus 3232C cluster switches.
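
     You can join each new node from its console by using the cluster setup wizard. The following is a minimal sketch with the prompts abbreviated; the exact prompts and required inputs vary by ONTAP release and platform:

     ::> cluster setup
     ...
     Do you want to create a new cluster or join an existing cluster? {create, join}: join
     ...

     After both nodes have joined, confirm cluster membership with the cluster show command.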

  5. Display the information about the devices in your configuration:

    • network device-discovery show

    • network port show -role cluster

    • network interface show -role cluster

    • system cluster-switch show

      The following examples show nodes n3 and n4 with 40 GbE cluster ports connected to ports e1/7 and e1/8, respectively, on both the Nexus 3232C cluster switches. Both nodes are joined to the cluster. The 40 GbE cluster interconnect ports used are e4a and e4e.

      cluster::*> network device-discovery show
      
             Local  Discovered
      Node   Port   Device       Interface       Platform
      ------ ------ ------------ --------------- -------------
      n1     /cdp
              e0a   C1           Ethernet1/1/1   N3K-C3232C
              e0b   C2           Ethernet1/1/1   N3K-C3232C
              e0c   C2           Ethernet1/1/2   N3K-C3232C
              e0d   C1           Ethernet1/1/2   N3K-C3232C
      n2     /cdp
              e0a   C1           Ethernet1/1/3   N3K-C3232C
              e0b   C2           Ethernet1/1/3   N3K-C3232C
              e0c   C2           Ethernet1/1/4   N3K-C3232C
              e0d   C1           Ethernet1/1/4   N3K-C3232C
      
      n3     /cdp
              e4a   C1           Ethernet1/7     N3K-C3232C
              e4e   C2           Ethernet1/7     N3K-C3232C
      
      n4     /cdp
              e4a   C1           Ethernet1/8     N3K-C3232C
              e4e   C2           Ethernet1/8     N3K-C3232C
      
      12 entries were displayed.
      cluster::*> network port show -role cluster
      (network port show)
      
      Node: n1
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e0a   cluster   cluster    up    9000  auto/10000     -
      e0b   cluster   cluster    up    9000  auto/10000     -
      e0c   cluster   cluster    up    9000  auto/10000     -        -
      e0d   cluster   cluster    up    9000  auto/10000     -        -
      
      Node: n2
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e0a   cluster   cluster    up    9000  auto/10000     -
      e0b   cluster   cluster    up    9000  auto/10000     -
      e0c   cluster   cluster    up    9000  auto/10000     -
      e0d   cluster   cluster    up    9000  auto/10000     -        -
      
      Node: n3
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e4a   cluster   cluster    up    9000  auto/40000     -
      e4e   cluster   cluster    up    9000  auto/40000     -        -
      
      Node: n4
                      Broadcast              Speed (Mbps) Health   Ignore
      Port  IPspace   Domain     Link  MTU   Admin/Oper   Status   Health Status
      ----- --------- ---------- ----- ----- ------------ -------- -------------
      e4a   cluster   cluster    up    9000  auto/40000     -
      e4e   cluster   cluster    up    9000  auto/40000     -
      
      12 entries were displayed.
      
      cluster::*> network interface show -role cluster
      (network interface show)
               Logical    Status      Network        Current  Current  Is
      Vserver  Interface  Admin/Oper  Address/Mask   Node     Port     Home
      -------- ---------- ----------- -------------- -------- -------- -----
      Cluster
               n1_clus1   up/up       10.10.0.1/24   n1       e0a      true
               n1_clus2   up/up       10.10.0.2/24   n1       e0b      true
               n1_clus3   up/up       10.10.0.3/24   n1       e0c      true
               n1_clus4   up/up       10.10.0.4/24   n1       e0d      true
               n2_clus1   up/up       10.10.0.5/24   n2       e0a      true
               n2_clus2   up/up       10.10.0.6/24   n2       e0b      true
               n2_clus3   up/up       10.10.0.7/24   n2       e0c      true
               n2_clus4   up/up       10.10.0.8/24   n2       e0d      true
               n3_clus1   up/up       10.10.0.9/24   n3       e4a      true
               n3_clus2   up/up       10.10.0.10/24  n3       e4e      true
               n4_clus1   up/up       10.10.0.11/24  n4       e4a     true
               n4_clus2   up/up       10.10.0.12/24  n4       e4e     true
      
      12 entries were displayed.
      
      cluster::> system cluster-switch show
      
      Switch                      Type             Address       Model
      --------------------------- ---------------- ------------- ---------
      C1                          cluster-network  10.10.1.103   NX3232C
      
           Serial Number: FOX000001
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.0(3)I6(1)
          Version Source: CDP
      
      C2                          cluster-network  10.10.1.104   NX3232C
      
           Serial Number: FOX000002
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.0(3)I6(1)
          Version Source: CDP
      CL1                         cluster-network  10.10.1.101   CN1610
      
           Serial Number: 01234567
            Is Monitored: true
                  Reason:
        Software Version: 1.2.0.7
          Version Source: ISDP
      CL2                         cluster-network  10.10.1.102    CN1610
      
           Serial Number: 01234568
            Is Monitored: true
                  Reason:
        Software Version: 1.2.0.7
          Version Source: ISDP

      4 entries were displayed.
  6. Remove the replaced CN1610 switches if they are not automatically removed:

    system cluster-switch delete -device switch-name

    You must delete both devices individually as shown in the following example:

    cluster::> system cluster-switch delete -device CL1
    cluster::> system cluster-switch delete -device CL2
  7. Verify that the proper cluster switches are monitored:

    system cluster-switch show

    The following example shows that cluster switches C1 and C2 are being monitored:

    cluster::> system cluster-switch show
    
    Switch                      Type               Address          Model
    --------------------------- ------------------ ---------------- ---------------
    C1                          cluster-network    10.10.1.103      NX3232C
    
         Serial Number: FOX000001
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I6(1)
        Version Source: CDP
    
    C2                          cluster-network    10.10.1.104      NX3232C

         Serial Number: FOX000002
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                        7.0(3)I6(1)
        Version Source: CDP
    
    2 entries were displayed.
  8. If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=END
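
    This ends the maintenance window that was opened when case creation was suppressed, typically with a command of the following form, where x is the duration of the window in hours:

    system node autosupport invoke -node * -type all -message MAINT=xh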