ONTAP MetroCluster

Configuring the new nodes and completing transition


With the new nodes added, you must complete the transition steps and configure the MetroCluster IP nodes.

Configuring the MetroCluster IP nodes and disabling transition

You must implement the MetroCluster IP connections, refresh the MetroCluster configuration, and disable transition mode.

Steps
  1. Form the new nodes into a DR group by issuing the following commands from controller node_A_1-IP:

    metrocluster configuration-settings dr-group create -partner-cluster <peer_cluster_name> -local-node <local_controller_name> -remote-node <remote_controller_name>

    metrocluster configuration-settings dr-group show
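
    For example, with hypothetical names (cluster_B as the partner cluster, node_A_1-IP as the local node, and node_B_1-IP as the remote node), the create command might look like this:

    metrocluster configuration-settings dr-group create -partner-cluster cluster_B -local-node node_A_1-IP -remote-node node_B_1-IP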

  2. Create the MetroCluster IP interfaces on all four controllers (node_A_1-IP, node_A_2-IP, node_B_1-IP, node_B_2-IP). Two interfaces are required per controller, eight in total; a filled-in example follows the VLAN notes below:

    metrocluster configuration-settings interface create -cluster-name <cluster_name> -home-node <controller_name> -home-port <port_name> -address <ip_address> -netmask <netmask_address> -vlan-id <vlan_id>

    metrocluster configuration-settings interface show

    Certain platforms use a VLAN for the MetroCluster IP interface. By default, each of the two ports uses a different VLAN: 10 and 20.

    If supported, you can also specify a non-default VLAN in the range 101 to 4095 by using the -vlan-id parameter in the metrocluster configuration-settings interface create command.

    The following platforms do not support the -vlan-id parameter:

    • FAS8200 and AFF A300

    • AFF A320

    • FAS9000 and AFF A700

    • AFF C800, ASA C800, AFF A800 and ASA A800

      All other platforms support the -vlan-id parameter.

      The default and valid VLAN assignments depend on whether the platform supports the -vlan-id parameter:

      Platforms that support -vlan-id

      Default VLAN:

      • When the -vlan-id parameter is not specified, the interfaces are created with VLAN 10 for the "A" ports and VLAN 20 for the "B" ports.

      • The VLAN specified must match the VLAN selected in the RCF.

      Valid VLAN ranges:

      • The default VLANs 10 and 20

      • VLANs in the range 101 to 4095

      Platforms that do not support -vlan-id

      Default VLAN:

      • Not applicable. A VLAN does not need to be specified on the MetroCluster interface; the switch port defines the VLAN that is used.

      Valid VLAN ranges:

      • All VLANs not explicitly excluded when generating the RCF. The RCF alerts you if the VLAN is invalid.
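
    The following filled-in example is hypothetical; the cluster name, port, IP address, netmask, and VLAN ID are placeholders that must match your own network plan and RCF:

    metrocluster configuration-settings interface create -cluster-name cluster_A -home-node node_A_1-IP -home-port e1a -address 172.17.26.10 -netmask 255.255.255.0 -vlan-id 10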

  3. Perform the MetroCluster connect operation from controller node_A_1-IP to connect the MetroCluster sites. This operation can take a few minutes to complete:

    metrocluster configuration-settings connection connect
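
    If you want to monitor progress, the connections can be listed as they are established:

    metrocluster configuration-settings connection show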

  4. Verify that the remote cluster disks are visible from each controller via the iSCSI connections:

    disk show

    You should see the remote disks belonging to the other nodes in the configuration.

  5. Mirror the root aggregates for node_A_1-IP and node_B_1-IP:

    aggregate mirror -aggregate root-aggr
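
    For example, assuming the root aggregate of node_A_1-IP is named aggr0_node_A_1_IP (actual root aggregate names vary by system):

    aggregate mirror -aggregate aggr0_node_A_1_IP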

  6. Assign disks for node_A_2-IP and node_B_2-IP.

    Pool 1 disk assignments were already made for node_A_1-IP and node_B_1-IP when the boot_after_mcc_transition command was issued at the boot menu.

    1. Issue the following commands on node_A_2-IP:

      disk assign disk1 disk2 disk3 … diskn -sysid node_B_2-IP-controller-sysid -pool 1 -force

    2. Issue the following commands on node_B_2-IP:

      disk assign disk1 disk2 disk3 … diskn -sysid node_A_2-IP-controller-sysid -pool 1 -force
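
      For example, on node_B_2-IP, with hypothetical disk names and a hypothetical system ID for node_A_2-IP:

      disk assign 1.11.0 1.11.1 1.11.2 -sysid 537403322 -pool 1 -force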

  7. Confirm ownership has been updated for the remote disks:

    disk show

  8. If necessary, refresh the ownership information using the following commands:

    1. Go to advanced privilege mode and enter y when prompted to continue:

      set priv advanced

    2. Refresh disk ownership (a filled-in example follows this step):

      disk refresh-ownership controller-name

    3. Return to admin mode:

      set priv admin
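
    For example, the refresh command issued for node_A_2-IP:

      disk refresh-ownership node_A_2-IP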

  9. Mirror the root aggregates for node_A_2-IP and node_B_2-IP:

    aggregate mirror -aggregate root-aggr

  10. Verify that the aggregate re-synchronization has completed for root and data aggregates:

    aggr show

    aggr plex show

    The resync can take some time but must complete before proceeding with the following steps.

  11. Refresh the MetroCluster configuration to incorporate the new nodes:

    1. Go to advanced privilege mode and enter y when prompted to continue:

      set priv advanced

    2. Refresh the configuration, using the command that matches your configuration (a worked example follows this step):

      If you have configured a single aggregate in each cluster:

      metrocluster configure -refresh true -allow-with-one-aggregate true

      If you have configured more than one aggregate in each cluster:

      metrocluster configure -refresh true

    3. Return to admin mode:

      set priv admin
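
    For example, assuming more than one aggregate exists in each cluster, the complete refresh sequence would be:

      set priv advanced

      metrocluster configure -refresh true

      set priv admin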

  12. Disable MetroCluster transition mode:

    1. Go to advanced privilege mode and enter y when prompted to continue:

      set priv advanced

    2. Disable transition mode:

      metrocluster transition disable

    3. Return to admin mode:

      set priv admin

Setting up data LIFs on the new nodes

You must configure data LIFs on the new nodes, node_A_2-IP and node_B_2-IP.

You must add any new ports available on the new controllers to a broadcast domain if they are not already assigned to one. If required, create VLANs or interface groups on the new ports. See Network management.

Steps

  1. Identify the current port usage and broadcast domains:

    network port show

    network port broadcast-domain show

  2. Add ports to broadcast domains and VLANs as necessary.

    1. View the IP spaces:

      network ipspace show

    2. Create IP spaces and assign data ports as needed.

    3. View the broadcast domains:

      network port broadcast-domain show

    4. Add any data ports to a broadcast domain as needed.

    5. Recreate VLANs and interface groups as needed.

      VLAN and interface group membership might differ from that of the old node.

  3. Verify that the LIFs are hosted on the appropriate nodes and ports on the MetroCluster IP nodes, including the SVM with the -mc vserver, as needed.

    See the information gathered in Creating the network configuration.

    1. Check the home port of the LIFs:

      network interface show -field home-port

    2. If necessary, modify the LIF configuration:

      vserver config override -command "network interface modify -vserver <svm_name> -home-port <active_port_after_upgrade> -lif <lif_name> -home-node <new_node_name>"

    3. Revert the LIFs to their home ports:

      network interface revert * -vserver <svm_name>
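
      A hypothetical example, assuming an SVM named svm1 with a LIF named lif1 that should now be homed on port e0d of node_A_2-IP:

      vserver config override -command "network interface modify -vserver svm1 -home-port e0d -lif lif1 -home-node node_A_2-IP"

      network interface revert * -vserver svm1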

Bringing up the SVMs

Due to the changes in LIF configuration, you must restart the SVMs on the new nodes.

Steps
  1. Check the state of the SVMs:

    metrocluster vserver show

  2. Restart the SVMs on cluster_A that do not have an “-mc” suffix:

    vserver start -vserver <svm_name> -force true
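
    For example, assuming a data SVM named svm1:

    vserver start -vserver svm1 -force true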

  3. Repeat the previous steps on the partner cluster.

  4. Check that all SVMs are in a healthy state:

    metrocluster vserver show

  5. Verify that all data LIFs are online:

    network interface show

Moving a system volume to the new nodes

To improve resiliency, a system volume should be moved from controller node_A_1-IP to controller node_A_2-IP, and also from node_B_1-IP to node_B_2-IP. You must create a mirrored aggregate on the destination node for the system volume.

About this task

System volumes have the name form “MDV_CRS_*_A” or “MDV_CRS_*_B.” The designations “_A” and “_B” are unrelated to the site_A and site_B references used throughout this section; e.g., MDV_CRS_*_A is not associated with site_A.

Steps
  1. Assign at least three pool 0 and three pool 1 disks each for controllers node_A_2-IP and node_B_2-IP as needed.

  2. Enable disk auto-assignment.

  3. Move the _B system volume from node_A_1-IP to node_A_2-IP using the following steps from site_A.

    1. Create a mirrored aggregate on controller node_A_2-IP to hold the system volume:

      aggr create -aggregate new_node_A_2-IP_aggr -diskcount 10 -mirror true -node node_A_2-IP

      aggr show

      The mirrored aggregate requires five pool 0 and five pool 1 spare disks owned by controller node_A_2-IP.

      The advanced option -force-small-aggregate true can be used to limit disk use to three pool 0 and three pool 1 disks, if disks are in short supply.

    2. List the system volumes associated with the admin SVM:

      vserver show

      volume show -vserver <admin_svm_name>

      You should identify volumes contained by aggregates owned by site_A. The site_B system volumes will also be shown.

  4. Move the MDV_CRS_*_B system volume for site_A to the mirrored aggregate created on controller node_A_2-IP.

    1. Check for possible destination aggregates:

      volume move target-aggr show -vserver <admin_svm_name> -volume MDV_CRS_*_B

      The newly created aggregate on node_A_2-IP should be listed.

    2. Move the volume to the newly created aggregate on node_A_2-IP:

      set advanced

      volume move start -vserver <admin_svm_name> -volume MDV_CRS_*_B -destination-aggregate new_node_A_2-IP_aggr -cutover-window 40
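
      A hypothetical example with the placeholders filled in; the actual MDV volume name contains a unique identifier and is returned by the volume show command, and the admin SVM typically has the same name as the cluster:

      volume move start -vserver cluster_A -volume MDV_CRS_d6b0b313ff5611e9837100a098544e51_B -destination-aggregate new_node_A_2-IP_aggr -cutover-window 40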

    3. Check status for the move operation:

      volume move show -vserver <admin_svm_name> -volume MDV_CRS_*_B

    4. When the move operation completes, verify that the MDV_CRS_*_B system volume is contained by the new aggregate on node_A_2-IP:

      set admin

      volume show -vserver <admin_svm_name>

  5. Repeat the above steps on site_B (node_B_1-IP and node_B_2-IP).