
Migrate from a NetApp CN1610 cluster switch to a Cisco 9336C-FX2 cluster switch


You can migrate NetApp CN1610 cluster switches for an ONTAP cluster to Cisco 9336C-FX2 cluster switches. This is a nondisruptive procedure.

Review requirements

You must be aware of certain configuration information, port connections, and cabling requirements when you replace NetApp CN1610 cluster switches with Cisco 9336C-FX2 cluster switches.

Supported switches

The following cluster switches are supported:

  • NetApp CN1610

  • Cisco 9336C-FX2

For details of supported ports and their configurations, see the Hardware Universe.

What you'll need

Verify that your configuration meets the following requirements:

  • The existing cluster is correctly set up and functioning.

  • All cluster ports are in the up state to ensure nondisruptive operations.

  • The Cisco 9336C-FX2 cluster switches are configured and running the correct NX-OS version with the reference configuration file (RCF) applied (see the example checks after this list).

  • The existing cluster network configuration has the following:

    • A redundant and fully functional NetApp cluster using NetApp CN1610 switches.

    • Management connectivity and console access to both the NetApp CN1610 switches and the new switches.

    • All cluster LIFs are in the up state and on their home ports.

  • Some ports on the Cisco 9336C-FX2 switches are configured to run at 40GbE or 100GbE.

  • You have planned, migrated, and documented 40GbE and 100GbE connectivity from the nodes to the Cisco 9336C-FX2 cluster switches.
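
For example, you can confirm the NX-OS version and the operating speed of the node-facing ports directly on each new Cisco 9336C-FX2 switch before you begin. These are standard NX-OS commands; output is omitted here because it varies by NX-OS release and RCF version:

    cs1# show version
    cs1# show interface status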

Migrate the switches

About the examples

The examples in this procedure use the following switch and node nomenclature:

  • The existing CN1610 cluster switches are C1 and C2.

  • The new 9336C-FX2 cluster switches are cs1 and cs2.

  • The nodes are node1 and node2.

  • The cluster LIFs are node1_clus1 and node1_clus2 on node1, and node2_clus1 and node2_clus2 on node2.

  • The cluster1::*> prompt indicates the name of the cluster.

  • The cluster ports used in this procedure are e3a and e3b.

About this task

This procedure covers the following scenario:

  • Switch C2 is replaced by switch cs2 first.

    • Shut down the ports to the cluster nodes. All ports must be shut down simultaneously to avoid cluster instability.

    • The cabling between the nodes and C2 is then disconnected from C2 and reconnected to cs2.

  • Switch C1 is replaced by switch cs1.

    • Shut down the ports to the cluster nodes. All ports must be shut down simultaneously to avoid cluster instability.

    • The cabling between the nodes and C1 is then disconnected from C1 and reconnected to cs1.

Note: No operational inter-switch link (ISL) is needed during this procedure. This is by design, because RCF version changes can affect ISL connectivity temporarily. To ensure nondisruptive cluster operations, the following procedure migrates all of the cluster LIFs to the operational partner switch while performing the steps on the target switch.
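
Although an operational ISL is not required during the migration, you can check the ISL state on the new Cisco switches at any point with a port-channel summary, assuming the RCF configures the ISL as a port channel (which is typical for NetApp RCFs):

    cs1# show port-channel summary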

Step 1: Prepare for migration

  1. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=xh

    where x is the duration of the maintenance window in hours.

  2. Change the privilege level to advanced, entering y when prompted to continue:

    set -privilege advanced

    The advanced prompt (*>) appears.

  3. Disable auto-revert on the cluster LIFs:

    network interface modify -vserver Cluster -lif * -auto-revert false
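
You can optionally confirm that auto-revert is now disabled on all cluster LIFs before continuing. The following is a standard ONTAP check; the exact output columns can vary by ONTAP release:

    network interface show -vserver Cluster -fields auto-revert

Each cluster LIF should report false for auto-revert.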

Step 2: Configure ports and cabling

  1. Determine the administrative or operational status for each cluster interface.

    Each port should display up for Link and healthy for Health Status.

    1. Display the network port attributes:

      network port show -ipspace Cluster

      Show example
      cluster1::*> network port show -ipspace Cluster
      
      Node: node1
                                                                             Ignore
                                                       Speed(Mbps)  Health   Health
      Port      IPspace    Broadcast Domain Link MTU   Admin/Oper   Status   Status
      --------- ---------- ---------------- ---- ----- ------------ -------- ------
      e3a       Cluster    Cluster          up   9000  auto/100000  healthy  false
      e3b       Cluster    Cluster          up   9000  auto/100000  healthy  false
      
      Node: node2
                                                                             Ignore
                                                       Speed(Mbps)  Health   Health
      Port      IPspace    Broadcast Domain Link MTU   Admin/Oper   Status   Status
      --------- ---------- ---------------- ---- ----- ------------ -------- ------
      e3a       Cluster    Cluster          up   9000  auto/100000  healthy  false
      e3b       Cluster    Cluster          up   9000  auto/100000  healthy  false
    2. Display information about the LIFs and their designated home nodes:

      network interface show -vserver Cluster

      Each LIF should display up/up for Status Admin/Oper and true for Is Home.

      Show example
      cluster1::*> network interface show -vserver Cluster
      
                  Logical      Status     Network            Current     Current Is
      Vserver     Interface    Admin/Oper Address/Mask       Node        Port    Home
      ----------- -----------  ---------- ------------------ ----------- ------- ----
      Cluster
                  node1_clus1  up/up      169.254.209.69/16  node1       e3a     true
                  node1_clus2  up/up      169.254.49.125/16  node1       e3b     true
                  node2_clus1  up/up      169.254.47.194/16  node2       e3a     true
                  node2_clus2  up/up      169.254.19.183/16  node2       e3b     true
  2. Verify that the cluster ports on each node are connected to the existing cluster switches in the following way (from the nodes' perspective):

    network device-discovery show -protocol cdp

    Show example
    cluster1::*> network device-discovery show -protocol cdp
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ----------------
    node1      /cdp
                e3a    C1 (6a:ad:4f:98:3b:3f)    0/1               -
                e3b    C2 (6a:ad:4f:98:4c:a4)    0/1               -
    node2      /cdp
                e3a    C1 (6a:ad:4f:98:3b:3f)    0/2               -
                e3b    C2 (6a:ad:4f:98:4c:a4)    0/2               -
  3. Verify that the cluster ports and switches are connected in the following way (from the switches' perspective):

    show cdp neighbors

    Show example
    C1# show cdp neighbors
    
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device,
                      s - Supports-STP-Dispute
    
    Device-ID             Local Intrfce Hldtme Capability  Platform         Port ID
    node1                 Eth1/1        124    H           AFF-A400         e3a
    node2                 Eth1/2        124    H           AFF-A400         e3a
    C2                    0/13          179    S I s       CN1610           0/13
    C2                    0/14          175    S I s       CN1610           0/14
    C2                    0/15          179    S I s       CN1610           0/15
    C2                    0/16          175    S I s       CN1610           0/16
    
    C2# show cdp neighbors
    
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device,
                      s - Supports-STP-Dispute
    
    
    Device-ID             Local Intrfce Hldtme Capability  Platform         Port ID
    node1                 Eth1/1        124    H           AFF-A400         e3b
    node2                 Eth1/2        124    H           AFF-A400         e3b
    C1                    0/13          175    S I s       CN1610           0/13
    C1                    0/14          175    S I s       CN1610           0/14
    C1                    0/15          175    S I s       CN1610           0/15
    C1                    0/16          175    S I s       CN1610           0/16
  4. Verify that the cluster network has full connectivity:

    cluster ping-cluster -node node-name

    Show example
    cluster1::*> cluster ping-cluster -node node2
    
    Host is node2
    Getting addresses from network interface table...
    Cluster node1_clus1 169.254.209.69 node1     e3a
    Cluster node1_clus2 169.254.49.125 node1     e3b
    Cluster node2_clus1 169.254.47.194 node2     e3a
    Cluster node2_clus2 169.254.19.183 node2     e3b
    Local = 169.254.47.194 169.254.19.183
    Remote = 169.254.209.69 169.254.49.125
    Cluster Vserver Id = 4294967293
    Ping status:
    ....
    Basic connectivity succeeds on 4 path(s)
    Basic connectivity fails on 0 path(s)
    ................
    Detected 9000 byte MTU on 4 path(s):
        Local 169.254.19.183 to Remote 169.254.209.69
        Local 169.254.19.183 to Remote 169.254.49.125
        Local 169.254.47.194 to Remote 169.254.209.69
        Local 169.254.47.194 to Remote 169.254.49.125
    Larger than PMTU communication succeeds on 4 path(s)
    RPC status:
    2 paths up, 0 paths down (tcp check)
    2 paths up, 0 paths down (udp check)
  5. On switch C2, shut down the ports connected to the cluster ports of the nodes in order to fail over the cluster LIFs.

    (C2)# configure
    (C2)(Config)# interface 0/1-0/12
    (C2)(Interface 0/1-0/12)# shutdown
    (C2)(Interface 0/1-0/12)# exit
    (C2)(Config)# exit
  6. Move the node cluster ports from the old switch C2 to the new switch cs2, using appropriate cabling supported by Cisco 9336C-FX2.

  7. Display the network port attributes:

    network port show -ipspace Cluster

    Show example
    cluster1::*> network port show -ipspace Cluster
    
    Node: node1
                                                                           Ignore
                                                     Speed(Mbps)  Health   Health
    Port      IPspace    Broadcast Domain Link MTU   Admin/Oper   Status   Status
    --------- ---------- ---------------- ---- ----- ------------ -------- ------
    e3a       Cluster    Cluster          up   9000  auto/100000  healthy  false
    e3b       Cluster    Cluster          up   9000  auto/100000  healthy  false
    
    Node: node2
                                                                           Ignore
                                                     Speed(Mbps)  Health   Health
    Port      IPspace    Broadcast Domain Link MTU   Admin/Oper   Status   Status
    --------- ---------- ---------------- ---- ----- ------------ -------- ------
    e3a       Cluster    Cluster          up   9000  auto/100000  healthy  false
    e3b       Cluster    Cluster          up   9000  auto/100000  healthy  false
  8. Verify that the cluster ports on each node are now connected to the cluster switches in the following way (from the nodes' perspective):

    network device-discovery show -protocol cdp

    Show example
    cluster1::*> network device-discovery show -protocol cdp
    
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface         Platform
    ----------- ------ ------------------------- ----------------  ----------------
    node1      /cdp
                e3a    C1  (6a:ad:4f:98:3b:3f)   0/1               CN1610
                e3b    cs2 (b8:ce:f6:19:1a:7e)   Ethernet1/1/1     N9K-C9336C-FX2
    node2      /cdp
                e3a    C1  (6a:ad:4f:98:3b:3f)   0/2               CN1610
                e3b    cs2 (b8:ce:f6:19:1b:96)   Ethernet1/1/2     N9K-C9336C-FX2
  9. Verify that all cluster LIFs are up after moving the connections to switch cs2:

    network interface show -vserver Cluster

    Show example
    cluster1::*> network interface show -vserver Cluster
                Logical      Status     Network            Current     Current Is
    Vserver     Interface    Admin/Oper Address/Mask       Node        Port    Home
    ----------- ------------ ---------- ------------------ ----------- ------- ----
    Cluster
                node1_clus1  up/up      169.254.3.4/16     node1       e0b     false
                node1_clus2  up/up      169.254.3.5/16     node1       e0b     true
                node2_clus1  up/up      169.254.3.8/16     node2       e0b     false
                node2_clus2  up/up      169.254.3.9/16     node2       e0b     true
  10. On switch C1, shut down the ports connected to the cluster ports of the nodes in order to fail over the cluster LIFs.

    (C1)# configure
    (C1)(Config)# interface 0/1-0/12
    (C1)(Interface 0/1-0/12)# shutdown
    (C1)(Interface 0/1-0/12)# exit
    (C1)(Config)# exit
  11. Move the node cluster ports from the old switch C1 to the new switch cs1, using appropriate cabling supported by Cisco 9336C-FX2.

  12. Verify the final configuration of the cluster:

    network port show -ipspace Cluster

    Each port should display up for Link and healthy for Health Status.

    Show example
    cluster1::*> network port show -ipspace Cluster
    
    Node: node1
                                                                           Ignore
                                                     Speed(Mbps)  Health   Health
    Port      IPspace    Broadcast Domain Link MTU   Admin/Oper   Status   Status
    --------- ---------- ---------------- ---- ----- ------------ -------- ------
    e3a       Cluster    Cluster          up   9000  auto/100000  healthy  false
    e3b       Cluster    Cluster          up   9000  auto/100000  healthy  false
    
    Node: node2
                                                                           Ignore
                                                     Speed(Mbps)  Health   Health
    Port      IPspace    Broadcast Domain Link MTU   Admin/Oper   Status   Status
    --------- ---------- ---------------- ---- ----- ------------ -------- ------
    e3a       Cluster    Cluster          up   9000  auto/100000  healthy  false
    e3b       Cluster    Cluster          up   9000  auto/100000  healthy  false
  13. Verify that the cluster ports on each node are now connected to the new cluster switches in the following way (from the nodes' perspective):

    network device-discovery show -protocol cdp

    Show example
    cluster1::*> network device-discovery show -protocol cdp
    
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface       Platform
    ----------- ------ ------------------------- --------------  ----------------
    node1      /cdp
                e3a    cs1 (b8:ce:f6:19:1a:7e)   Ethernet1/1/1   N9K-C9336C-FX2
                e3b    cs2 (b8:ce:f6:19:1b:96)   Ethernet1/1/2   N9K-C9336C-FX2
    node2      /cdp
                e3a    cs1 (b8:ce:f6:19:1a:7e)   Ethernet1/1/1   N9K-C9336C-FX2
                e3b    cs2 (b8:ce:f6:19:1b:96)   Ethernet1/1/2   N9K-C9336C-FX2
  14. Verify that all node cluster ports connected to switches cs1 and cs2 are up:

    network port show -ipspace Cluster

    Show example
    cluster1::*> network port show -ipspace Cluster
    
    Node: node1
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
    e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
    
    Node: node2
                                                                           Ignore
                                                      Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e0a       Cluster      Cluster          up   9000  auto/10000 healthy  false
    e0b       Cluster      Cluster          up   9000  auto/10000 healthy  false
  15. Verify that each node has one connection to each switch:

    network device-discovery show -protocol cdp

    Show example

    The following example shows the appropriate results for both switches:

    cluster1::*> network device-discovery show -protocol cdp
    Node/       Local  Discovered
    Protocol    Port   Device (LLDP: ChassisID)  Interface       Platform
    ----------- ------ ------------------------- --------------  --------------
    node1      /cdp
                e0a    cs1 (b8:ce:f6:19:1b:42)   Ethernet1/1/1   N9K-C9336C-FX2
                e0b    cs2 (b8:ce:f6:19:1b:96)   Ethernet1/1/2   N9K-C9336C-FX2
    
    node2      /cdp
                e0a    cs1 (b8:ce:f6:19:1b:42)   Ethernet1/1/1   N9K-C9336C-FX2
                e0b    cs2 (b8:ce:f6:19:1b:96)   Ethernet1/1/2   N9K-C9336C-FX2
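
As an optional final connectivity check before completing the procedure, you can rerun the cluster ping test used earlier in this procedure; the node name below is only an example:

    cluster ping-cluster -node node1

Basic connectivity should succeed on all paths, with no paths down in the RPC status.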

Step 3: Complete the procedure

  1. Enable auto-revert on the cluster LIFs:

    cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert true

  2. Verify that all cluster network LIFs are back on their home ports:

    network interface show -vserver Cluster

    Show example
    cluster1::*> network interface show -vserver Cluster
    
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                node1_clus1  up/up    169.254.209.69/16  node1         e3a     true
                node1_clus2  up/up    169.254.49.125/16  node1         e3b     true
                node2_clus1  up/up    169.254.47.194/16  node2         e3a     true
                node2_clus2  up/up    169.254.19.183/16  node2         e3b     true
  3. To set up log collection, run the following command for each switch. You are prompted to enter the switch name, username, and password for log collection.

    system switch ethernet log setup-password

    Show example
    cluster1::*> system switch ethernet log setup-password
    Enter the switch name: <return>
    The switch name entered is not recognized.
    Choose from the following list:
    cs1
    cs2
    
    cluster1::*> system switch ethernet log setup-password
    
    Enter the switch name: cs1
    RSA key fingerprint is e5:8b:c6:dc:e2:18:18:09:36:63:d9:63:dd:03:d9:cc
    Do you want to continue? {y|n}::[n] y
    
    Enter the password: <enter switch password>
    Enter the password again: <enter switch password>
    
    cluster1::*> system switch ethernet log setup-password
    
    Enter the switch name: cs2
    RSA key fingerprint is 57:49:86:a1:b9:80:6a:61:9a:86:8e:3c:e3:b7:1f:b1
    Do you want to continue? {y|n}:: [n] y
    
    Enter the password: <enter switch password>
    Enter the password again: <enter switch password>
  4. To start log collection, run the following command for each switch, replacing <switch-name> with the name of the switch used in the previous step. This starts both types of log collection: the detailed Support logs and an hourly collection of Periodic data.

    system switch ethernet log modify -device <switch-name> -log-request true

    Show example
    cluster1::*> system switch ethernet log modify -device cs1 -log-request true
    
    Do you want to modify the cluster switch log collection configuration? {y|n}: [n] y
    
    Enabling cluster switch log collection.
    
    cluster1::*> system switch ethernet log modify -device cs2 -log-request true
    
    Do you want to modify the cluster switch log collection configuration? {y|n}: [n] y
    
    Enabling cluster switch log collection.
    cluster1::*>

    Wait for 10 minutes and then check that the log collection was successful using the command:

    system switch ethernet log show

    Note: If any of these commands return an error, contact NetApp support.
  5. Change the privilege level back to admin:

    set -privilege admin

  6. If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=END
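
If you want to confirm that AutoSupport is enabled on all nodes after the maintenance window, you can check its state; note that this does not display suppression status, and suppression also expires automatically at the end of the window you specified:

    system node autosupport show -fields state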