Prepare for migration from Nexus 5596 switches to Nexus 3132Q-V switches


Follow these steps to prepare your Cisco Nexus 5596 switches for migration to Cisco Nexus 3132Q-V switches.

Steps
  1. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=xh

    x is the duration of the maintenance window in hours.

    Note The message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.
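
    For example, to suppress automatic case creation for a two-hour maintenance window (the two-hour duration is only an illustration; substitute the length of your own window):

    cluster::*> system node autosupport invoke -node * -type all -message MAINT=2h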
  2. Display information about the devices in your configuration:

    network device-discovery show


    The following example shows how many cluster interconnect interfaces have been configured in each node for each cluster interconnect switch:

    cluster::> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface        Platform
    ----------- ------ ------------------- ---------------- ----------------
    n1         /cdp
                e0a    CL1                 Ethernet1/1      N5K-C5596UP
                e0b    CL2                 Ethernet1/1      N5K-C5596UP
                e0c    CL2                 Ethernet1/2      N5K-C5596UP
                e0d    CL1                 Ethernet1/2      N5K-C5596UP
    n2         /cdp
                e0a    CL1                 Ethernet1/3      N5K-C5596UP
                e0b    CL2                 Ethernet1/3      N5K-C5596UP
                e0c    CL2                 Ethernet1/4      N5K-C5596UP
                e0d    CL1                 Ethernet1/4      N5K-C5596UP
    8 entries were displayed.
  3. Determine the administrative or operational status for each cluster interface:

    1. Display the network port attributes:

      network port show


      The following example displays the network port attributes on a system:

      cluster::*> network port show -role cluster
        (network port show)
      Node: n1
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000 auto/10000  -        -
      e0b       Cluster      Cluster          up   9000 auto/10000  -        -
      e0c       Cluster      Cluster          up   9000 auto/10000  -        -
      e0d       Cluster      Cluster          up   9000 auto/10000  -        -
      
      Node: n2
                                                                             Ignore
                                                        Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a       Cluster      Cluster          up   9000  auto/10000 -        -
      e0b       Cluster      Cluster          up   9000  auto/10000 -        -
      e0c       Cluster      Cluster          up   9000  auto/10000 -        -
      e0d       Cluster      Cluster          up   9000  auto/10000 -        -
      8 entries were displayed.
    2. Display information about the logical interfaces:
      network interface show


      The following example displays the general information about all of the LIFs on your system:

      cluster::*> network interface show -role cluster
       (network interface show)
                  Logical    Status     Network            Current       Current Is
      Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
      ----------- ---------- ---------- ------------------ ------------- ------- ----
      Cluster
                  n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
                  n1_clus2   up/up      10.10.0.2/24       n1            e0b     true
                  n1_clus3   up/up      10.10.0.3/24       n1            e0c     true
                  n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
                  n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
                  n2_clus2   up/up      10.10.0.6/24       n2            e0b     true
                  n2_clus3   up/up      10.10.0.7/24       n2            e0c     true
                  n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
      8 entries were displayed.
    3. Display information about the discovered cluster switches:
      system cluster-switch show


      The following example displays the cluster switches that are known to the cluster, along with their management IP addresses:

      cluster::*> system cluster-switch show
      
      Switch                        Type               Address         Model
      ----------------------------- ------------------ --------------- ---------------
      CL1                           cluster-network    10.10.1.101     NX5596
           Serial Number: 01234567
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.1(1)N1(1)
          Version Source: CDP
      CL2                           cluster-network    10.10.1.102     NX5596
           Serial Number: 01234568
            Is Monitored: true
                  Reason:
        Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
                          7.1(1)N1(1)
          Version Source: CDP
      
      2 entries were displayed.
  4. Set the -auto-revert parameter to false on cluster LIFs clus1 and clus2 on both nodes:

    network interface modify

    The following example sets auto-revert to false on the cluster LIFs on both nodes:

    cluster::*> network interface modify -vserver node1 -lif clus1 -auto-revert false
    cluster::*> network interface modify -vserver node1 -lif clus2 -auto-revert false
    cluster::*> network interface modify -vserver node2 -lif clus1 -auto-revert false
    cluster::*> network interface modify -vserver node2 -lif clus2 -auto-revert false
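
    If you want to confirm the change, you can display the auto-revert setting for the cluster LIFs (a hedged check that uses the standard -fields option; the exact output depends on your system):

    cluster::*> network interface show -role cluster -fields auto-revert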
  5. Verify that the appropriate RCF and image are installed on the new 3132Q-V switches as necessary for your requirements, and make the essential site customizations, such as users and passwords, network addresses, and so on.

    You must prepare both switches at this time. If you need to upgrade the RCF and image, follow these steps:

    1. Go to the Cisco Ethernet Switches page on the NetApp Support Site.

    2. Note your switch and the required software versions in the table on that page.

    3. Download the appropriate version of the RCF.

    4. Select CONTINUE on the Description page, accept the license agreement, and then follow the instructions on the Download page to download the RCF.

    5. Download the appropriate version of the image software.

      See the ONTAP 8.x or later Cluster and Management Network Switch Reference Configuration Files Download page, and then select the appropriate version.

      To find the correct version, see the ONTAP 8.x or later Cluster Network Switch Download page.
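
    After the RCF and image are installed, you can spot-check each new switch by displaying the NX-OS version and the login banner (NetApp RCFs typically record the RCF version in the banner). This is a hedged check; the exact output depends on the RCF and image you installed:

    C2# show version
    C2# show banner motd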

  6. Migrate the LIFs associated with the second Nexus 5596 switch, CL2, which is to be replaced:

    network interface migrate


    The following example shows n1 and n2, but LIF migration must be done on all of the nodes:

    cluster::*> network interface migrate -vserver Cluster -lif n1_clus2 -source-node n1 -destination-node n1 -destination-port e0a
    cluster::*> network interface migrate -vserver Cluster -lif n1_clus3 -source-node n1 -destination-node n1 -destination-port e0d
    cluster::*> network interface migrate -vserver Cluster -lif n2_clus2 -source-node n2 -destination-node n2 -destination-port e0a
    cluster::*> network interface migrate -vserver Cluster -lif n2_clus3 -source-node n2 -destination-node n2 -destination-port e0d
  7. Verify the cluster's health:

    network interface show


    The following example shows the result of the previous network interface migrate command:

    cluster::*> network interface show -role cluster
     (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e0a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e0a     false
                n1_clus3   up/up      10.10.0.3/24       n1            e0d     false
                n1_clus4   up/up      10.10.0.4/24       n1            e0d     true
                n2_clus1   up/up      10.10.0.5/24       n2            e0a     true
                n2_clus2   up/up      10.10.0.6/24       n2            e0a     false
                n2_clus3   up/up      10.10.0.7/24       n2            e0d     false
                n2_clus4   up/up      10.10.0.8/24       n2            e0d     true
    8 entries were displayed.
  8. Shut down the cluster interconnect ports that are physically connected to switch CL2:

    network port modify


    The following commands shut down the specified ports on n1 and n2, but the ports must be shut down on all nodes:

    cluster::*> network port modify -node n1 -port e0b -up-admin false
    cluster::*> network port modify -node n1 -port e0c -up-admin false
    cluster::*> network port modify -node n2 -port e0b -up-admin false
    cluster::*> network port modify -node n2 -port e0c -up-admin false
  9. Verify the connectivity of the remote cluster interfaces:

ONTAP 9.9.1 and later

You can use the network interface check cluster-connectivity command to start an accessibility check for cluster connectivity and then display the details:

network interface check cluster-connectivity start and network interface check cluster-connectivity show

cluster1::*> network interface check cluster-connectivity start

Note Wait several seconds before running the show command to display the details.

cluster1::*> network interface check cluster-connectivity show
                                  Source          Destination       Packet
Node   Date                       LIF             LIF               Loss
------ -------------------------- --------------- ----------------- -----------
n1
       3/5/2022 19:21:18 -06:00   n1_clus2        n2_clus1      none
       3/5/2022 19:21:20 -06:00   n1_clus2        n2_clus2      none

n2
       3/5/2022 19:21:18 -06:00   n2_clus2        n1_clus1      none
       3/5/2022 19:21:20 -06:00   n2_clus2        n1_clus2      none

All ONTAP releases

For all ONTAP releases, you can also use the cluster ping-cluster -node <name> command to check the connectivity:

cluster ping-cluster -node <name>

cluster::*> cluster ping-cluster -node n1
Host is n1
Getting addresses from network interface table...
Cluster n1_clus1 n1		e0a	10.10.0.1
Cluster n1_clus2 n1		e0b	10.10.0.2
Cluster n1_clus3 n1		e0c	10.10.0.3
Cluster n1_clus4 n1		e0d	10.10.0.4
Cluster n2_clus1 n2		e0a	10.10.0.5
Cluster n2_clus2 n2		e0b	10.10.0.6
Cluster n2_clus3 n2		e0c	10.10.0.7
Cluster n2_clus4 n2		e0d	10.10.0.8

Local = 10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.4
Remote = 10.10.0.5 10.10.0.6 10.10.0.7 10.10.0.8
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 16 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 1500 byte MTU on 16 path(s):
    Local 10.10.0.1 to Remote 10.10.0.5
    Local 10.10.0.1 to Remote 10.10.0.6
    Local 10.10.0.1 to Remote 10.10.0.7
    Local 10.10.0.1 to Remote 10.10.0.8
    Local 10.10.0.2 to Remote 10.10.0.5
    Local 10.10.0.2 to Remote 10.10.0.6
    Local 10.10.0.2 to Remote 10.10.0.7
    Local 10.10.0.2 to Remote 10.10.0.8
    Local 10.10.0.3 to Remote 10.10.0.5
    Local 10.10.0.3 to Remote 10.10.0.6
    Local 10.10.0.3 to Remote 10.10.0.7
    Local 10.10.0.3 to Remote 10.10.0.8
    Local 10.10.0.4 to Remote 10.10.0.5
    Local 10.10.0.4 to Remote 10.10.0.6
    Local 10.10.0.4 to Remote 10.10.0.7
    Local 10.10.0.4 to Remote 10.10.0.8
Larger than PMTU communication succeeds on 16 path(s)
RPC status:
4 paths up, 0 paths down (tcp check)
4 paths up, 0 paths down (udp check)
  10. Shut down the ISL ports 41 through 48 on the active Nexus 5596 switch CL1:


    The following example shows how to shut down ISL ports 41 through 48 on the Nexus 5596 switch CL1:

    (CL1)# configure
    (CL1)(config)# interface e1/41-48
    (CL1)(config-if-range)# shutdown
    (CL1)(config-if-range)# exit
    (CL1)(config)# exit
    (CL1)#

    If you are replacing a Nexus 5010 or 5020, specify the appropriate port numbers for ISL.

  11. Build a temporary ISL between CL1 and C2.


    The following example shows a temporary ISL being set up between CL1 and C2:

    C2# configure
    C2(config)# interface port-channel 2
    C2(config-if)# switchport mode trunk
    C2(config-if)# spanning-tree port type network
    C2(config-if)# mtu 9216
    C2(config-if)# interface breakout module 1 port 24 map 10g-4x
    C2(config)# interface e1/24/1-4
    C2(config-if-range)# switchport mode trunk
    C2(config-if-range)# mtu 9216
    C2(config-if-range)# channel-group 2 mode active
    C2(config-if-range)# exit
    C2(config)# exit
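
    After the temporary ISL is cabled and configured, you can confirm that the port channel comes up on the new switch (a hedged check that uses standard NX-OS show commands; the port-channel number matches the example above):

    C2# show port-channel summary
    C2# show interface port-channel 2
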
What's next?

Configure your ports.