ONTAP MetroCluster

Configuring the new nodes and completing transition


With the new nodes added, you must complete the transition steps and configure the MetroCluster IP nodes.

Configuring the MetroCluster IP nodes and disabling transition

You must implement the MetroCluster IP connections, refresh the MetroCluster configuration, and disable transition mode.

  1. Form the new nodes into a DR group by issuing the following commands from controller node_A_1-IP:

    metrocluster configuration-settings dr-group create -partner-cluster peer-cluster-name -local-node local-controller-name -remote-node remote-controller-name

    metrocluster configuration-settings dr-group show
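
    For example, a hypothetical invocation run from node_A_1-IP, assuming the peer cluster is named cluster_B (substitute your actual peer cluster and node names):

    metrocluster configuration-settings dr-group create -partner-cluster cluster_B -local-node node_A_1-IP -remote-node node_B_1-IP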

  2. Create MetroCluster IP interfaces (node_A_1-IP, node_A_2-IP, node_B_1-IP, node_B_2-IP); two interfaces need to be created per controller, eight interfaces in total (a hypothetical example follows the notes below):

    metrocluster configuration-settings interface create -cluster-name cluster-name -home-node controller-name -home-port port -address ip-address -netmask netmask -vlan-id vlan-id

    metrocluster configuration-settings interface show

    The -vlan-id parameter is required only if you are not using the default VLAN IDs. Only certain systems support non-default VLAN IDs.

    Note
    • Certain platforms use a VLAN for the MetroCluster IP interface. By default, each of the two ports uses a different VLAN: 10 and 20. You can also specify a different (non-default) VLAN higher than 100 (between 101 and 4095) using the -vlan-id parameter of the metrocluster configuration-settings interface create command.

    • Beginning with ONTAP 9.9.1, if you are using a layer 3 configuration, you must also specify the -gateway parameter when creating MetroCluster IP interfaces. Refer to Considerations for layer 3 wide-area networks.

    The following platform models can be added to the existing MetroCluster configuration only if the VLANs used are 10/20 or greater than 100. If any other VLAN is used, these platforms cannot be added to the existing configuration because the MetroCluster interface cannot be configured. If you are using any other platform, the VLAN configuration is not relevant, because ONTAP does not require it.

    AFF platforms

    • AFF A220

    • AFF A250

    • AFF A400

    FAS platforms

    • FAS2750

    • FAS500f

    • FAS8300

    • FAS8700
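
    For example, two hypothetical interfaces for node_A_1-IP on ports e1a and e1b with the default VLANs; the ports, addresses, netmask, and VLAN IDs are placeholders and must match your environment:

    metrocluster configuration-settings interface create -cluster-name cluster_A -home-node node_A_1-IP -home-port e1a -address 172.17.26.10 -netmask 255.255.255.0 -vlan-id 10

    metrocluster configuration-settings interface create -cluster-name cluster_A -home-node node_A_1-IP -home-port e1b -address 172.17.27.10 -netmask 255.255.255.0 -vlan-id 20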

  3. Perform the MetroCluster connect operation from controller node_A_1-IP to connect the MetroCluster sites; this operation can take a few minutes to complete:

    metrocluster configuration-settings connection connect

  4. Verify that the remote cluster disks are visible from each controller via the iSCSI connections:

    disk show

    You should see the remote disks belonging to the other nodes in the configuration.

  5. Mirror the root aggregate for node_A_1-IP and node_B_1-IP:

    aggregate mirror -aggregate root-aggr
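
    For example, assuming the root aggregate is named aggr0_node_A_1_IP (a placeholder; use aggr show to confirm the actual root aggregate name):

    aggregate mirror -aggregate aggr0_node_A_1_IP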

  6. Assign disks for node_A_2-IP and node_B_2-IP.

    Pool 1 disk assignments were already made for node_A_1-IP and node_B_1-IP when the boot_after_mcc_transition command was issued at the boot menu. A hypothetical example follows these substeps.

    1. Issue the following commands on node_A_2-IP:

      disk assign disk1 disk2 disk3 … diskn -sysid node_B_2-IP-controller-sysid -pool 1 -force

    2. Issue the following commands on node_B_2-IP:

      disk assign disk1 disk2 disk3 … diskn -sysid node_A_2-IP-controller-sysid -pool 1 -force
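
    For example, a hypothetical assignment issued on node_A_2-IP; the disk names and the node_B_2-IP system ID are placeholders taken from disk show output:

      disk assign 2.10.0 2.10.1 2.10.2 2.10.3 -sysid 0537044444 -pool 1 -force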

  7. Confirm ownership has been updated for the remote disks:

    disk show

  8. If necessary, refresh the ownership information using the following commands:

    1. Go to advanced privilege mode and enter y when prompted to continue:

      set priv advanced

    2. Refresh disk ownership:

      disk refresh-ownership controller-name

    3. Return to admin mode:

      set priv admin

  9. Mirror the root aggregates for node_A_2-IP and node_B_2-IP:

    aggregate mirror -aggregate root-aggr

  10. Verify that the aggregate re-synchronization has completed for root and data aggregates:

    aggr show

    aggr plex show

    The resync can take some time but must complete before proceeding with the following steps.
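
    For example, to monitor the plexes of a single aggregate (aggr0_node_A_2_IP is a placeholder name):

    aggr plex show -aggregate aggr0_node_A_2_IP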

  11. Refresh the MetroCluster configuration to incorporate the new nodes:

    1. Go to advanced privilege mode and enter y when prompted to continue:

      set priv advanced

    2. Refresh the configuration:

      If you have configured a single aggregate in each cluster, issue this command:

      metrocluster configure -refresh true -allow-with-one-aggregate true

      If you have configured more than a single aggregate in each cluster, issue this command:

      metrocluster configure -refresh true

    3. Return to admin mode:

      set priv admin

  12. Disable MetroCluster transition mode:

    1. Enter advanced privilege mode and enter y when prompted to continue:

      set priv advanced

    2. Disable transition mode:

      metrocluster transition disable

    3. Return to admin mode:

      set priv admin
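
    To confirm the result, you can display the transition state from advanced privilege mode; the metrocluster transition show-mode command is assumed here, so verify its availability on your ONTAP release:

      set priv advanced

      metrocluster transition show-mode

      set priv admin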

Setting up data LIFs on the new nodes

You must configure data LIFs on the new nodes, node_A_2-IP and node_B_2-IP.

You must add any new ports available on the new controllers to a broadcast domain if they are not already assigned to one. If required, create VLANs or interface groups on the new ports. See Network management.

  1. Identify the current port usage and broadcast domains:

    network port show

    network port broadcast-domain show

  2. Add ports to broadcast domains and VLANs as necessary.

    1. View the IP spaces:

      network ipspace show

    2. Create IP spaces and assign data ports as needed.

    3. View the broadcast domains:

      network port broadcast-domain show

    4. Add any data ports to a broadcast domain as needed (a hypothetical example follows these substeps).

    5. Recreate VLANs and interface groups as needed.

      VLAN and interface group membership might differ from that of the old node.
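
    For example, a hypothetical command that adds a new data port on node_A_2-IP to an existing broadcast domain; the IPspace, broadcast domain, and port names are placeholders:

      network port broadcast-domain add-ports -ipspace Default -broadcast-domain bcast1 -ports node_A_2-IP:e0d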

  3. Verify that the LIFs are hosted on the appropriate node and ports on the MetroCluster IP nodes (including the SVM with the -mc suffix) as needed.

    See the information gathered in Creating the network configuration.

    1. Check the home port of the LIFs:

      network interface show -fields home-port

    2. If necessary, modify the LIF configuration:

      vserver config override -command "network interface modify -vserver vserver_name -home-port active_port_after_upgrade -lif lif_name -home-node new_node_name"

    3. Revert the LIFs to their home ports:

      network interface revert * -vserver vserver_name
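
    For example, a hypothetical override that moves a LIF named lif1 on SVM vs1 to port e0d on node_A_2-IP, followed by the revert:

      vserver config override -command "network interface modify -vserver vs1 -home-port e0d -lif lif1 -home-node node_A_2-IP"

      network interface revert * -vserver vs1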

Bringing up the SVMs

Because of the changes in LIF configuration, you must restart the SVMs on the new nodes.

Steps
  1. Check the state of the SVMs:

    metrocluster vserver show

  2. Restart the SVMs on cluster_A that do not have an “-mc” suffix:

    vserver start -vserver svm-name -force true
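
    For example, assuming a hypothetical SVM named svm1:

    vserver start -vserver svm1 -force true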

  3. Repeat the previous steps on the partner cluster.

  4. Check that all SVMs are in a healthy state:

    metrocluster vserver show

  5. Verify that all data LIFs are online:

    network interface show
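
    For example, a hypothetical filter that lists only LIFs that are not operationally up; an empty result means all LIFs are online:

    network interface show -status-oper down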

Moving a system volume to the new nodes

To improve resiliency, a system volume should be moved from controller node_A_1-IP to controller node_A_2-IP, and also from node_B_1-IP to node_B_2-IP. You must create a mirrored aggregate on the destination node for the system volume.

About this task

System volumes have the name form “MDV_CRS_*_A” or “MDV_CRS_*_B.” The designations “_A” and “_B” are unrelated to the site_A and site_B references used throughout this section; e.g., MDV_CRS_*_A is not associated with site_A.

Steps
  1. Assign at least three pool 0 and three pool 1 disks each for controllers node_A_2-IP and node_B_2-IP as needed.

  2. Enable disk auto-assignment.

  3. Move the _B system volume from node_A_1-IP to node_A_2-IP using the following steps from site_A.

    1. Create a mirrored aggregate on controller node_A_2-IP to hold the system volume:

      aggr create -aggregate new_node_A_2-IP_aggr -diskcount 10 -mirror true -node nodename_node_A_2-IP

      aggr show

      The mirrored aggregate requires five pool 0 and five pool 1 spare disks owned by controller node_A_2-IP.

      If disks are in short supply, the advanced option -force-small-aggregate true can be used to limit disk use to three pool 0 and three pool 1 disks, as in the hypothetical example below.
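
      For example, a hypothetical minimal-size invocation using this option from advanced privilege mode (six disks in total, three per plex):

      aggr create -aggregate new_node_A_2-IP_aggr -diskcount 6 -mirror true -force-small-aggregate true -node nodename_node_A_2-IP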

    2. List the system volumes associated with the admin SVM:

      vserver show

      volume show -vserver admin-vserver-name

      You should identify volumes contained by aggregates owned by site_A. The site_B system volumes will also be shown.
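
      For example, to list only the system volumes, assuming the admin SVM is named cluster_A (a placeholder):

      volume show -vserver cluster_A -volume MDV_CRS*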

  4. Move the MDV_CRS_*_B system volume for site_A to the mirrored aggregate created on controller node_A_2-IP:

    1. Check for possible destination aggregates:

      volume move target-aggr show -vserver admin-vserver-name -volume system_vol_MDV_B

      The newly created aggregate on node_A_2-IP should be listed.

    2. Move the volume to the newly created aggregate on node_A_2-IP:

      set advanced

      volume move start -vserver admin-vserver -volume system_vol_MDV_B -destination-aggregate new_node_A_2-IP_aggr -cutover-window 40

    3. Check status for the move operation:

      volume move show -vserver admin-vserver-name -volume system_vol_MDV_B

    4. When the move operation completes, verify that the MDV_CRS_*_B system volume is contained by the new aggregate on node_A_2-IP:

      set admin

      volume show -vserver admin-vserver

  5. Repeat the above steps on site_B (node_B_1-IP and node_B_2-IP).