ONTAP MetroCluster

Disruptively transitioning from MetroCluster FC to MetroCluster IP when retiring storage shelves (ONTAP 9.8 and later)


Beginning with ONTAP 9.8, you can disruptively transition a two-node MetroCluster FC configuration to a four-node MetroCluster IP configuration and retire the existing storage shelves. The procedure includes steps to move data from the existing drive shelves to the new configuration, and then retire the old shelves.

  • This procedure is used when you plan to retire the existing storage shelves and move all data to the new shelves in the MetroCluster IP configuration.

  • The existing storage shelf models must be supported by the new MetroCluster IP nodes.

  • This procedure is supported on systems running ONTAP 9.8 and later.

  • This procedure is disruptive.

  • This procedure applies only to a two-node MetroCluster FC configuration.

    If you have a four-node MetroCluster FC configuration, see Choosing your transition procedure.

  • You must meet all requirements and follow all steps in the procedure.

Requirements for transition when retiring old shelves

Before starting the transition process, you must make sure the existing MetroCluster FC configuration meets the requirements.

The new configuration must also meet the following requirements:

  • The new MetroCluster IP platform models must support the old storage shelf models.

  • Depending on the spare disks available in the existing shelves, additional drives must be added (a sample spare-count check is shown after this list).

    This might require additional drive shelves.

    You need an additional 14 to 18 drives for each controller:

    • Three pool 0 drives

    • Three pool 1 drives

    • Two spare drives

    • Six to ten drives for the system volume

  • You must ensure that the configuration, including the new nodes, does not exceed the platform limits, such as drive count and root aggregate size capacity.

    This information is available for each platform model at NetApp Hardware Universe.
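
To gauge whether you need to add drives, you can compare the current spare inventory against the per-controller drive counts listed above. The following check is a minimal sketch; node_A_1-FC is a placeholder for each of your existing nodes:

    storage disk show -container-type spare -owner node_A_1-FC

Run the command for each node and compare the totals against the 14 to 18 additional drives needed per new controller.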

You must have remote console access for all six nodes from either MetroCluster site or plan for travel between the sites as required by the procedure.

Workflow for disruptive transition when moving data and retiring old storage shelves

You must follow the specific workflow to ensure a successful transition.

As you prepare for the transition, plan for travel between the sites. Note that after the remote nodes are racked and cabled, you need serial terminal access to the nodes. Service Processor access is not available until the nodes are configured.

[Workflow diagram: two-node transition, moving data to new shelves and retiring old shelves]

Transitioning the configuration

You must follow the detailed transition procedure.

About this task

In the following steps you are directed to other procedures. You must perform the steps in each referenced procedure in the order given.

Steps
  1. Plan port mapping using the steps in Mapping ports from the MetroCluster FC nodes to the MetroCluster IP nodes.

  2. Prepare the MetroCluster IP controllers using the steps in Preparing the MetroCluster IP controllers.

  3. Verify the health of the MetroCluster FC configuration (a sample health check is shown after these steps).

  4. Gather information from the MetroCluster FC configuration.

  5. Remove Tiebreaker monitoring, if necessary.

  6. Prepare and remove the existing MetroCluster FC nodes.

  7. Connect the new MetroCluster IP nodes.

  8. Configure the new MetroCluster IP nodes and complete transition.
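
For step 3, a typical health check uses the standard MetroCluster verification commands shown below. This is a minimal sketch; the output should be free of errors before you continue:

    metrocluster check run
    metrocluster check show
    metrocluster show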

Migrating the root aggregates

After the transition is complete, migrate the existing root aggregates left over from the MetroCluster FC configuration to new shelves in the MetroCluster IP configuration.

About this task

This task moves the root aggregates for node_A_1-FC and node_B_1-FC to disk shelves owned by the new MetroCluster IP controllers:

Steps
  1. Assign pool 0 disks on the new local storage shelf to the controller that has the root aggregate being migrated (for example, if the root aggregate of node_A_1-FC is being migrated, assign pool 0 disks on the new shelf to node_A_1-IP). A sample command sequence for this step and steps 5 through 7 is shown after these steps.

    Note that the migration removes and does not re-create the root mirror, so pool 1 disks do not need to be assigned before issuing the migrate command.

  2. Set the privilege mode to advanced:

    set priv advanced

  3. Migrate the root aggregate:

    system node migrate-root -node node-name -disklist disk-id1,disk-id2,disk-idn -raid-type raid-type

    • The node-name is the node to which the root aggregate is being migrated.

    • The disk-id identifies the pool 0 disks on the new shelf.

    • The raid-type is normally the same as the raid-type of the existing root aggregate.

    • You can use the command job show -id job-id -instance to check the migration status, where job-id is the value provided when the migrate-root command is issued.

      For example, if the root aggregate for node_A_1-FC consisted of three disks with raid_dp, the following command would migrate the root aggregate to new shelf 11:

      system node migrate-root -node node_A_1-IP -disklist 3.11.0,3.11.1,3.11.2 -raid-type raid_dp
  4. Wait until the migration operation completes and the node automatically reboots.

  5. Assign pool 1 disks for the root aggregate on a new shelf directly connected to the remote cluster.

  6. Mirror the migrated root aggregate.

  7. Wait for the root aggregate to finish resynchronizing.

    You can use the storage aggregate show command to check the sync status of the aggregates.

  8. Repeat these steps for the other root aggregate.
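
The following sketch illustrates steps 1, 5, 6, and 7 for one node. The disk names, shelf numbers, and the migrated root aggregate name aggr0_node_A_1_IP are placeholders; substitute the values from your configuration and repeat the disk assignment for each disk:

    storage disk assign -disk 3.11.0 -pool 0 -owner node_A_1-IP
    storage disk assign -disk 4.21.0 -pool 1 -owner node_A_1-IP
    storage aggregate mirror -aggregate aggr0_node_A_1_IP
    storage aggregate show -aggregate aggr0_node_A_1_IP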

Migrating the data aggregates

Create data aggregates on the new shelves and use volume move to transfer the data volumes from the old shelves to the aggregates on the new shelves.

  1. Move the data volumes to aggregates on the new controllers, one volume at a time.
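
A minimal sketch of creating a mirrored data aggregate on the new shelves and moving one volume is shown below. The aggregate, SVM, and volume names are placeholders, and the disk count depends on your shelves:

    storage aggregate create -aggregate aggr_data_a1_new -node node_A_1-IP -diskcount 10 -mirror true
    volume move start -vserver svm_A -volume vol_data_1 -destination-aggregate aggr_data_a1_new
    volume move show

Wait for each volume move to complete before starting the next one.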

Retiring shelves moved from node_A_1-FC and node_A_2-FC

You retire the old storage shelves from the original MetroCluster FC configuration. These shelves were originally owned by node_A_1-FC and node_A_2-FC.

  1. Identify the aggregates on the old shelves on cluster_B that need to be deleted.

    In this example, the following data aggregates are hosted by the MetroCluster FC cluster_B and need to be deleted: aggr_data_a1 and aggr_data_a2.

    Note You need to perform the steps to identify, take offline, and delete the data aggregates on the shelves. The example shows one cluster only.
    cluster_B::> aggr show
    
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    aggr0_node_A_1-FC
               349.0GB   16.83GB   95% online       1 node_A_1-IP      raid_dp,
                                                                       mirrored,
                                                                       normal
    aggr0_node_A_2-IP
               349.0GB   16.83GB   95% online       1 node_A_2-IP      raid_dp,
                                                                       mirrored,
                                                                       normal
    ...
    8 entries were displayed.
    
    cluster_B::>
  2. Check whether the data aggregates have any MDV_aud volumes, and delete them before deleting the aggregates (a sample check is shown after these steps).

    You must delete the MDV_aud volumes because they cannot be moved.

  3. Take each of the aggregates offline, and then delete them:

    1. Take the aggregate offline:

      storage aggregate offline -aggregate aggregate-name

      The following example shows the aggregate node_B_1_aggr0 being taken offline:

      cluster_B::> storage aggregate offline -aggregate node_B_1_aggr0
      
      Aggregate offline successful on aggregate: node_B_1_aggr0
    2. Delete the aggregate:

      storage aggregate delete -aggregate aggregate-name

      You can destroy the plex when prompted.

      The following example shows the aggregate node_B_1_aggr0 being deleted.

      cluster_B::> storage aggregate delete -aggregate node_B_1_aggr0
      Warning: Are you sure you want to destroy aggregate "node_B_1_aggr0"? {y|n}: y
      [Job 123] Job succeeded: DONE
      
      cluster_B::>
  4. After deleting all aggregates, power down, disconnect, and remove the shelves.

  5. Repeat the above steps to retire the cluster_A shelves.
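
For step 2, you can list any MDV_aud volumes on an aggregate before deleting it. This minimal sketch uses the example aggregate aggr_data_a1 from step 1:

    volume show -volume MDV_aud* -aggregate aggr_data_a1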

Completing transition

With the old shelves retired and the old controller modules removed, you can complete the transition process.

Step
  1. Complete the transition process.