
Migrate from a Cisco switch to a Cisco Nexus 92300YC switch


You can nondisruptively migrate older Cisco cluster switches for an ONTAP cluster to Cisco Nexus 92300YC cluster network switches.

Note After your migration completes, you might need to install the required configuration file to support the Cluster Switch Health Monitor (CSHM) for 92300YC cluster switches. See Install the Cluster Switch Health Monitor (CSHM).

Review requirements

What you'll need
  • A fully functional existing cluster.

  • 10 GbE and 40 GbE connectivity from nodes to Nexus 92300YC cluster switches.

  • All cluster ports are in the up state to ensure nondisruptive operations.

  • The proper version of NX-OS and the reference configuration file (RCF) installed on the Nexus 92300YC cluster switches (a quick check is sketched after this list).

  • A redundant and fully functional NetApp cluster using both older Cisco switches.

  • Management connectivity and console access to both the older Cisco switches and the new switches.

  • All cluster LIFs in the up state and on their home ports.

  • ISL ports enabled and cabled between the older Cisco switches and between the new switches.
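
As a quick spot check of these requirements, you can confirm the NX-OS version on the new switches and the state of the cluster LIFs from the CLI. This is a minimal sketch; the version string shown is illustrative only and output is abbreviated:

    cs1# show version | include NXOS
      NXOS: version 9.2(2)

    cluster1::> network interface show -vserver Cluster -fields is-home,status-oper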

Migrate the switch

About the examples

The examples in this procedure use the following switch and node nomenclature:

  • The existing Cisco Nexus 5596UP cluster switches are c1 and c2.

  • The new Nexus 92300YC cluster switches are cs1 and cs2.

  • The nodes are node1 and node2.

  • The cluster LIFs are node1_clus1 and node1_clus2 on node1, and node2_clus1 and node2_clus2 on node2, respectively.

  • Switch c2 is replaced by switch cs2 first and then switch c1 is replaced by switch cs1.

    • A temporary ISL is built on cs1 connecting c1 to cs1.

    • The cables between the nodes and c2 are then disconnected from c2 and reconnected to cs2.

    • The cables between the nodes and c1 are then disconnected from c1 and reconnected to cs1.

    • The temporary ISL between c1 and cs1 is then removed.

Ports used for connections
  • Some ports on the Nexus 92300YC switches are configured to run at 10 GbE or 40 GbE.

  • The cluster switches use the following ports for connections to nodes:

    • Ports e1/1-48 (10/25 GbE), e1/49-64 (40/100 GbE): Nexus 92300YC

    • Ports e1/1-40 (10 GbE): Nexus 5596UP

    • Ports e1/1-32 (10 GbE): Nexus 5020

    • Ports e1/1-12, e2/1-6 (10 GbE): Nexus 5010 with expansion module

  • The cluster switches use the following Inter-Switch Link (ISL) ports:

    • Ports e1/65-66 (100 GbE): Nexus 92300YC

    • Ports e1/41-48 (10 GbE): Nexus 5596UP

    • Ports e1/33-40 (10 GbE): Nexus 5020

    • Ports e1/13-20 (10 GbE): Nexus 5010

  • Hardware Universe - Switches contains information about supported cabling for all cluster switches.

  • The ONTAP and NX-OS versions supported in this procedure are on the Cisco Ethernet Switches page.

Step 1: Prepare for migration

  1. Change the privilege level to advanced, entering y when prompted to continue:

    set -privilege advanced

    The advanced prompt (*>) appears.
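
    A typical session looks similar to the following (the exact warning text can vary by ONTAP release):

    cluster1::> set -privilege advanced

    Warning: These advanced commands are potentially dangerous; use them only when
             directed to do so by NetApp personnel.
    Do you want to continue? {y|n}: y

    cluster1::*>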

  2. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=xh

    where x is the duration of the maintenance window in hours.

    Note The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.
    Show example

    The following command suppresses automatic case creation for two hours:

    cluster1::*> system node autosupport invoke -node * -type all -message MAINT=2h
  3. Verify that auto-revert is enabled on all cluster LIFs:

    network interface show -vserver Cluster -fields auto-revert

    Show example
    cluster1::*> network interface show -vserver Cluster -fields auto-revert
    
              Logical
    Vserver   Interface     Auto-revert
    --------- ------------- ------------
    Cluster
              node1_clus1   true
              node1_clus2   true
              node2_clus1   true
              node2_clus2   true
    
    4 entries were displayed.
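
    If auto-revert is false for any cluster LIF, enable it before you continue. A minimal sketch (substitute the actual LIF name):

    cluster1::*> network interface modify -vserver Cluster -lif node1_clus1 -auto-revert true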
  4. Determine the administrative or operational status for each cluster interface:

    Each port should display up for Link and healthy for Health Status.

    1. Display the network port attributes:

      network port show -ipspace Cluster

      Show example
      cluster1::*> network port show -ipspace Cluster
      
      Node: node1
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
      
      Node: node2
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
      e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
      
      4 entries were displayed.
    2. Display information about the logical interfaces and their designated home nodes:

      network interface show -vserver Cluster

      Each LIF should display up/up for Status Admin/Oper and true for Is Home.

      Show example
      cluster1::*> network interface show -vserver Cluster
      
                  Logical      Status     Network            Current       Current Is
      Vserver     Interface    Admin/Oper Address/Mask       Node          Port    Home
      ----------- -----------  ---------- ------------------ ------------- ------- ----
      Cluster
                  node1_clus1  up/up      169.254.209.69/16  node1         e0a     true
                  node1_clus2  up/up      169.254.49.125/16  node1         e0b     true
                  node2_clus1  up/up      169.254.47.194/16  node2         e0a     true
                  node2_clus2  up/up      169.254.19.183/16  node2         e0b     true
      
      4 entries were displayed.
  5. Verify that the cluster ports on each node are connected to existing cluster switches in the following way (from the nodes' perspective) using the command:

    network device-discovery show -protocol cdp

    Show example
    cluster1::*> network device-discovery show -protocol cdp
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ----------------
    node2      /cdp
                e0a    c1                        0/2               N5K-C5596UP
                e0b    c2                        0/2               N5K-C5596UP
    node1      /cdp
                e0a    c1                        0/1               N5K-C5596UP
                e0b    c2                        0/1               N5K-C5596UP
    
    4 entries were displayed.
  6. Verify that the cluster ports and switches are connected in the following way (from the switches' perspective) using the command:

    show cdp neighbors

    Show example
    c1# show cdp neighbors
    
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device,
                      s - Supports-STP-Dispute
    
    
    Device-ID             Local Intrfce Hldtme Capability  Platform         Port ID
    node1               Eth1/1         124    H         FAS2750            e0a
    node2               Eth1/2         124    H         FAS2750            e0a
    c2(FOX2025GEFC)     Eth1/41        179    S I s     N5K-C5596UP        Eth1/41
    c2(FOX2025GEFC)     Eth1/42        175    S I s     N5K-C5596UP        Eth1/42
    c2(FOX2025GEFC)     Eth1/43        179    S I s     N5K-C5596UP        Eth1/43
    c2(FOX2025GEFC)     Eth1/44        175    S I s     N5K-C5596UP        Eth1/44
    c2(FOX2025GEFC)     Eth1/45        179    S I s     N5K-C5596UP        Eth1/45
    c2(FOX2025GEFC)     Eth1/46        179    S I s     N5K-C5596UP        Eth1/46
    c2(FOX2025GEFC)     Eth1/47        175    S I s     N5K-C5596UP        Eth1/47
    c2(FOX2025GEFC)     Eth1/48        179    S I s     N5K-C5596UP        Eth1/48
    
    Total entries displayed: 10
    
    
    c2# show cdp neighbors
    
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device,
                      s - Supports-STP-Dispute
    
    
    Device-ID             Local Intrfce Hldtme Capability  Platform         Port ID
    node1               Eth1/1         124    H         FAS2750            e0b
    node2               Eth1/2         124    H         FAS2750            e0b
    c1(FOX2025GEEX)     Eth1/41        175    S I s     N5K-C5596UP        Eth1/41
    c1(FOX2025GEEX)     Eth1/42        175    S I s     N5K-C5596UP        Eth1/42
    c1(FOX2025GEEX)     Eth1/43        175    S I s     N5K-C5596UP        Eth1/43
    c1(FOX2025GEEX)     Eth1/44        175    S I s     N5K-C5596UP        Eth1/44
    c1(FOX2025GEEX)     Eth1/45        175    S I s     N5K-C5596UP        Eth1/45
    c1(FOX2025GEEX)     Eth1/46        175    S I s     N5K-C5596UP        Eth1/46
    c1(FOX2025GEEX)     Eth1/47        176    S I s     N5K-C5596UP        Eth1/47
    c1(FOX2025GEEX)     Eth1/48        176    S I s     N5K-C5596UP        Eth1/48
  7. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
node1
       3/5/2022 19:21:18 -06:00   node1_clus2      node2_clus1      none
       3/5/2022 19:21:20 -06:00   node1_clus2      node2_clus2      none
node2
       3/5/2022 19:21:18 -06:00   node2_clus2      node1_clus1      none
       3/5/2022 19:21:20 -06:00   node2_clus2      node1_clus2      none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is node2
Getting addresses from network interface table...
Cluster node1_clus1 169.254.209.69 node1     e0a
Cluster node1_clus2 169.254.49.125 node1     e0b
Cluster node2_clus1 169.254.47.194 node2     e0a
Cluster node2_clus2 169.254.19.183 node2     e0b
Local = 169.254.47.194 169.254.19.183
Remote = 169.254.209.69 169.254.49.125
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 4 path(s):
    Local 169.254.19.183 to Remote 169.254.209.69
    Local 169.254.19.183 to Remote 169.254.49.125
    Local 169.254.47.194 to Remote 169.254.209.69
    Local 169.254.47.194 to Remote 169.254.49.125
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)

Step 2: Configure cables and ports

  1. Configure a temporary ISL on cs1 ports e1/41-48, between c1 and cs1.

    Show example

    The following example shows how the temporary ISL is configured on cs1 (the existing ISL port-channel configuration on c1 is reused):

    cs1# configure
    Enter configuration commands, one per line. End with CNTL/Z.
    cs1(config)# interface e1/41-48
    cs1(config-if-range)# description temporary ISL between Nexus 5596UP and Nexus 92300YC
    cs1(config-if-range)# no lldp transmit
    cs1(config-if-range)# no lldp receive
    cs1(config-if-range)# switchport mode trunk
    cs1(config-if-range)# no spanning-tree bpduguard enable
    cs1(config-if-range)# channel-group 101 mode active
    cs1(config-if-range)# exit
    cs1(config)# interface port-channel 101
    cs1(config-if)# switchport mode trunk
    cs1(config-if)# spanning-tree port type network
    cs1(config-if)# exit
    cs1(config)# exit
  2. Remove the ISL cables from ports e1/41-48 on c2 and connect them to ports e1/41-48 on cs1.

  3. Verify that the ISL ports and port-channel connecting c1 and cs1 are operational:

    show port-channel summary

    Show example

    The following example shows the Cisco show port-channel summary command being used to verify the ISL ports are operational on c1 and cs1:

    c1# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            b - BFD Session Wait
            S - Switched    R - Routed
            U - Up (port-channel)
            p - Up in delay-lacp mode (member)
            M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/41(P)   Eth1/42(P)   Eth1/43(P)
                                         Eth1/44(P)   Eth1/45(P)   Eth1/46(P)
                                         Eth1/47(P)   Eth1/48(P)
    
    
    cs1# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            b - BFD Session Wait
            S - Switched    R - Routed
            U - Up (port-channel)
            p - Up in delay-lacp mode (member)
            M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/65(P)   Eth1/66(P)
    101   Po101(SU)   Eth      LACP      Eth1/41(P)   Eth1/42(P)   Eth1/43(P)
                                         Eth1/44(P)   Eth1/45(P)   Eth1/46(P)
                                         Eth1/47(P)   Eth1/48(P)
  4. For node1, disconnect the cable from e1/1 on c2, and then connect the cable to e1/1 on cs2, using appropriate cabling supported by Nexus 92300YC.

  5. For node2, disconnect the cable from e1/2 on c2, and then connect the cable to e1/2 on cs2, using appropriate cabling supported by Nexus 92300YC.
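
    Optionally, you can confirm from the switch side that cs2 now sees both nodes before verifying from ONTAP. This is a quick check (output omitted here); you can repeat it on cs1 after you move the c1 cables later in this procedure:

    cs2# show cdp neighbors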

  6. Verify that the cluster ports on each node are now connected to the cluster switches in the following way, from the nodes' perspective:

    network device-discovery show -protocol cdp

    Show example
    cluster1::*> network device-discovery show -protocol cdp
    
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ----------------
    node2      /cdp
                e0a    c1                        0/2               N5K-C5596UP
                e0b    cs2                       0/2               N9K-C92300YC
    node1      /cdp
                e0a    c1                        0/1               N5K-C5596UP
                e0b    cs2                       0/1               N9K-C92300YC
    
    4 entries were displayed.
  7. For node1, disconnect the cable from e1/1 on c1, and then connect the cable to e1/1 on cs1, using appropriate cabling supported by Nexus 92300YC.

  8. For node2, disconnect the cable from e1/2 on c1, and then connect the cable to e1/2 on cs1, using appropriate cabling supported by Nexus 92300YC.

  9. Verify that the cluster ports on each node are now connected to the cluster switches in the following way, from the nodes' perspective:

    network device-discovery show -protocol cdp

    Show example
    cluster1::*> network device-discovery show -protocol cdp
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ----------------
    node2      /cdp
                e0a    cs1                       0/2               N9K-C92300YC
                e0b    cs2                       0/2               N9K-C92300YC
    node1      /cdp
                e0a    cs1                       0/1               N9K-C92300YC
                e0b    cs2                       0/1               N9K-C92300YC
    4 entries were displayed.
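
    Before you remove the temporary ISL, you can confirm that all cluster LIFs have reverted to their home ports (a quick check, assuming auto-revert is enabled, as verified earlier):

    cluster1::*> network interface show -vserver Cluster -fields is-home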
  10. Delete the temporary ISL between cs1 and c1.

    Show example
    cs1(config)# no interface port-channel 101
    cs1(config)# interface e1/41-48
    cs1(config-if-range)# lldp transmit
    cs1(config-if-range)# lldp receive
    cs1(config-if-range)# no switchport mode trunk
    cs1(config-if-range)# no channel-group
    cs1(config-if-range)# description 10GbE Node Port
    cs1(config-if-range)# spanning-tree bpduguard enable
    cs1(config-if-range)# exit
    cs1(config)# exit
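
    You can confirm that the temporary port-channel is gone and that only Po1 (the ISL to cs2) remains on cs1. A minimal check:

    cs1# show port-channel summary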

Step 3: Complete the migration

  1. Verify the final configuration of the cluster:

    network port show -ipspace Cluster

    Each port should display up for Link and healthy for Health Status.

    Show example
    cluster1::*> network port show -ipspace Cluster
    
    Node: node1
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
    e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
    
    Node: node2
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
    e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
    
    4 entries were displayed.
    
    
    cluster1::*> network interface show -vserver Cluster
    
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                node1_clus1  up/up    169.254.209.69/16  node1         e0a     true
                node1_clus2  up/up    169.254.49.125/16  node1         e0b     true
                node2_clus1  up/up    169.254.47.194/16  node2         e0a     true
                node2_clus2  up/up    169.254.19.183/16  node2         e0b     true
    
    4 entries were displayed.
    
    
    cluster1::*> network device-discovery show -protocol cdp
    
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ----------------
    node2      /cdp
                e0a    cs1                       0/2               N9K-C92300YC
                e0b    cs2                       0/2               N9K-C92300YC
    node1      /cdp
                e0a    cs1                       0/1               N9K-C92300YC
                e0b    cs2                       0/1               N9K-C92300YC
    
    4 entries were displayed.
    
    
    cs1# show cdp neighbors
    
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device,
                      s - Supports-STP-Dispute
    
    Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
    node1               Eth1/1         124    H         FAS2750            e0a
    node2               Eth1/2         124    H         FAS2750            e0a
    cs2(FDO220329V5)    Eth1/65        179    R S I s   N9K-C92300YC  Eth1/65
    cs2(FDO220329V5)    Eth1/66        179    R S I s   N9K-C92300YC  Eth1/66
    
    
    cs2# show cdp neighbors
    
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device,
                      s - Supports-STP-Dispute
    
    Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
    node1               Eth1/1         124    H         FAS2750            e0b
    node2               Eth1/2         124    H         FAS2750            e0b
    cs1(FDO220329KU)
                        Eth1/65        179    R S I s   N9K-C92300YC  Eth1/65
    cs1(FDO220329KU)
                        Eth1/66        179    R S I s   N9K-C92300YC  Eth1/66
    
    Total entries displayed: 4
  2. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

NOTE: Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source           Destination      Packet
Node   Date                       LIF              LIF              Loss
------ -------------------------- ---------------- ---------------- -----------
node1
       3/5/2022 19:21:18 -06:00   node1_clus2      node2_clus1      none
       3/5/2022 19:21:20 -06:00   node1_clus2      node2_clus2      none
node2
       3/5/2022 19:21:18 -06:00   node2_clus2      node1_clus1      none
       3/5/2022 19:21:20 -06:00   node2_clus2      node1_clus2      none
All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster1::*> cluster ping-cluster -node local
Host is node2
Getting addresses from network interface table...
Cluster node1_clus1 169.254.209.69 node1     e0a
Cluster node1_clus2 169.254.49.125 node1     e0b
Cluster node2_clus1 169.254.47.194 node2     e0a
Cluster node2_clus2 169.254.19.183 node2     e0b
Local = 169.254.47.194 169.254.19.183
Remote = 169.254.209.69 169.254.49.125
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 4 path(s):
    Local 169.254.19.183 to Remote 169.254.209.69
    Local 169.254.19.183 to Remote 169.254.49.125
    Local 169.254.47.194 to Remote 169.254.209.69
    Local 169.254.47.194 to Remote 169.254.49.125
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
  3. If you suppressed automatic case creation, reenable it by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=END

    Show example
    cluster1::*> system node autosupport invoke -node * -type all -message MAINT=END
  4. Change the privilege level back to admin:

    set -privilege admin
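
    The prompt returns to the admin prompt, for example:

    cluster1::*> set -privilege admin
    cluster1::>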