
Migrate to a two-node switched cluster in FAS22xx systems with a single cluster-network connection


If you have FAS22xx systems in an existing two-node switchless cluster in which each controller module has a single, back-to-back 10 GbE connection for cluster connectivity, you can disable the switchless cluster networking option and replace the direct back-to-back connections with switch connections.

Review requirements

What you'll need
  • Two cluster connections for migrating from a switchless configuration to a switched configuration.

  • The cluster is healthy and consists of two nodes connected with back-to-back connectivity.

  • The nodes are running ONTAP 8.2 or later.

  • The switchless cluster feature cannot be used with more than two nodes.

  • All cluster ports are in the up state.

Migrate the switches

This nondisruptive procedure removes the direct cluster connectivity in a switchless environment and replaces each back-to-back connection to the partner node with a connection to the switch.

Step 1: Prepare for migration

  1. Change the privilege level to advanced, entering y when prompted to continue:

    set -privilege advanced

    The advanced prompt (*>) appears.
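
    The following is a representative session; the exact warning text can vary by ONTAP release:

    cluster::> set -privilege advanced

    Warning: These advanced commands are potentially dangerous; use them
             only when directed to do so by NetApp personnel.
    Do you want to continue? {y|n}: y

    cluster::*>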

  2. Check the cluster status of the nodes at the system console of either node:

    cluster show

    The following example displays information about the health and eligibility of the nodes in the cluster:

    cluster::*> cluster show
    Node                 Health  Eligibility   Epsilon
    -------------------- ------- ------------  ------------
    node1                true    true          false
    node2                true    true          false
    
    2 entries were displayed.
  3. Check the status of the HA pair at the system console of either node:

    storage failover show

    The following example shows the status of node1 and node2:

    cluster::*> storage failover show
                                  Takeover
    Node           Partner        Possible State Description
    -------------- -------------- -------- -------------------------------------
    node1          node2          true     Connected to node2
    node2          node1          true     Connected to node1
    
    2 entries were displayed.
  4. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=xh

    x is the duration of the maintenance window in hours.

    Note The message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.

    The following command suppresses automatic case creation for two hours:

    cluster::*> system node autosupport invoke -node * -type all -message MAINT=2h
  5. Verify that the current state of the switchless cluster is true:

    network options switchless-cluster show

    Then disable the switchless cluster mode:

    network options switchless-cluster modify -enabled false
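
    The following is a representative sequence; the exact output layout can vary by ONTAP release:

    cluster::*> network options switchless-cluster show

    Enable Switchless Cluster: true

    cluster::*> network options switchless-cluster modify -enabled false
    cluster::*> network options switchless-cluster show

    Enable Switchless Cluster: false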

  6. Take over the target node:

    storage failover takeover -ofnode target_node_name

    It does not matter which node is the target node. When it is taken over, the target node automatically reboots and displays the Waiting for giveback... message.

    The active node is now serving data for the partner (target) node that was taken over.
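
    For example, taking over node2 (the choice of target node is illustrative):

    cluster::*> storage failover takeover -ofnode node2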

  7. Wait two minutes after takeover of the target node to confirm that the takeover completed successfully.

  8. With the target node showing the Waiting for giveback... message, shut it down.

    The method you use to shut down the node depends on whether you use remote management through the node Service Processor (SP).

    If the SP...         Then...

    Is configured        Log in to the SP of the target node, and then power off the system: system power off

    Is not configured    At the target node prompt, press Ctrl-C, and then respond y to halt the node.

Step 2: Configure cables and ports

  1. On each controller module, disconnect the cable that connects the 10 GbE cluster port to the partner node (the back-to-back switchless cluster connection).

  2. Connect the 10 GbE cluster port to the switch on both controller modules.

  3. Verify that the switch ports to which the 10 GbE cluster ports are connected are configured as part of the same VLAN.

    If you plan to connect the cluster ports on each controller module to different switches, you must verify that the switch ports to which the cluster ports connect are configured for the same VLAN on both switches and that trunking is properly configured between the switches.
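
    As an illustration only, a hypothetical Cisco NX-OS-style configuration for one cluster-facing switch port might look like the following. The interface name, VLAN ID, and MTU are assumptions; the actual commands depend on your switch model:

    interface Ethernet1/1
      description Cluster port e1a on node1 (hypothetical)
      switchport mode access
      switchport access vlan 100
      mtu 9216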

  4. Give back storage to the target node:

    storage failover giveback -ofnode target_node_name

  5. Monitor the progress of the giveback operation:

    storage failover show-giveback
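
    The following is a representative sequence; the node names and status text are illustrative:

    cluster::*> storage failover giveback -ofnode node2
    cluster::*> storage failover show-giveback
                   Partner
    Node           Aggregate         Giveback Status
    -------------- ----------------- --------------------------------------------
    node1
                   No aggregates to give back
    node2
                   No aggregates to give back
    
    2 entries were displayed.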

  6. After the giveback operation is complete, confirm that the HA pair is healthy and takeover is possible:

    storage failover show

    The output should be similar to the following:

    cluster::*> storage failover show
                                  Takeover
    Node           Partner        Possible State Description
    -------------- -------------- -------- -------------------------------------
    node1          node2          true     Connected to node2
    node2          node1          true     Connected to node1
    
    2 entries were displayed.
  7. Verify that the cluster port LIFs are operating correctly:

    network interface show -role cluster

    The following example shows that the LIFs are up on node1 and node2 and that the "Is Home" column results are true:

    cluster::*> network interface show -role cluster
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    node1
                clus1        up/up    192.168.177.121/24  node1        e1a     true
    node2
                clus1        up/up    192.168.177.123/24  node2        e1a     true
    
    2 entries were displayed.
  8. Check the cluster status of the nodes at the system console of either node:

    cluster show

    The following example displays information about the health and eligibility of the nodes in the cluster:

    cluster::*> cluster show
    Node                 Health  Eligibility   Epsilon
    -------------------- ------- ------------  ------------
    node1                true    true          false
    node2                true    true          false
    
    2 entries were displayed.
  9. Ping the cluster ports to verify the cluster connectivity:

    cluster ping-cluster local

    The command output should show connectivity between all of the cluster ports.
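
    The following output is representative; the exact lines vary by ONTAP release, and the addresses and ports are illustrative:

    cluster::*> cluster ping-cluster local
    Host is node1
    Getting addresses from network interface table...
    Cluster node1 clus1 192.168.177.121 node1 e1a
    Cluster node2 clus1 192.168.177.123 node2 e1a
    Local = 192.168.177.121
    Remote = 192.168.177.123
    Ping status:
    .
    Basic connectivity succeeds on 1 path(s)
    Basic connectivity fails on 0 path(s)
    Detected 9000 byte MTU on 1 path(s):
        Local 192.168.177.121 to Remote 192.168.177.123
    Larger than PMTU communication succeeds on 1 path(s)
    RPC status:
    1 paths up, 0 paths down (tcp check)
    1 paths up, 0 paths down (udp check)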

Step 3: Complete the procedure

  1. If you suppressed automatic case creation, reenable it by invoking an AutoSupport message:

    system node autosupport invoke -node * -type all -message MAINT=END

    For example:

    cluster::*> system node autosupport invoke -node * -type all -message MAINT=END
  2. Change the privilege level back to admin:

    set -privilege admin