Aggregate relocation failures

Contributors netapp-pcarriga netapp-aoife

Aggregate relocation (ARL) might fail at different points during the upgrade.

Check for aggregate relocation failure

During the procedure, ARL might fail in Stage 2, Stage 3, or Stage 5.

Steps
  1. Enter the following command and examine the output:

    storage aggregate relocation show

    The storage aggregate relocation show command shows which aggregates were relocated successfully, which were not, and the cause of each failure.

  2. Check the console for any EMS messages.

  3. Take one of the following actions:

    • Take the appropriate corrective action, depending on the output of the storage aggregate relocation show command and any EMS messages.

    • Force relocation of the aggregate or aggregates by using the override-vetoes option or the override-destination-checks option of the storage aggregate relocation start command.

    For detailed information about the storage aggregate relocation start command and its override-vetoes and override-destination-checks options, refer to References to link to the ONTAP 9 Commands: Manual Page Reference.
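The decision in the steps above can be sketched in a few lines. This is an illustrative helper, not a NetApp tool: it builds a forced-relocation command string from a failure reason such as one reported by storage aggregate relocation show. The mapping of veto-style reasons to -override-vetoes (and everything else to -override-destination-checks) is an assumption made for illustration, and the sample reason text is hypothetical.

```python
# Illustrative sketch only (not a NetApp utility). Given a failure reason
# string, pick the override option to retry the relocation with, and build
# the corresponding ONTAP CLI command as a string.

def forced_relocation_command(aggregate, source, destination, reason):
    """Return an ONTAP CLI command string that retries relocation with the
    override option suggested by the reported failure reason.
    The reason-to-option mapping is an assumption for illustration."""
    if "veto" in reason.lower():
        override = "-override-vetoes true"
    else:
        override = "-override-destination-checks true"
    return (f"storage aggregate relocation start -node {source} "
            f"-destination {destination} -aggr {aggregate} {override}")

# Hypothetical failure reason; real output wording may differ.
print(forced_relocation_command("aggr_node_1", "node2", "node1",
                                "Relocation vetoed by a subsystem"))
```

The resulting command string follows the same form as the relocation command shown later in this procedure; verify the cause of the failure before forcing relocation.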

Aggregates originally on node1 are owned by node2 after completion of the upgrade

At the end of the upgrade procedure, node1 should be the home node of the aggregates that originally had node1 as their home node. If any of these aggregates have node2 as their home node instead, you can relocate them after the upgrade.

About this task

Aggregates might fail to relocate correctly (that is, they end up with node2 as their home node instead of node1) under the following circumstances:

  • During Stage 3, when aggregates are relocated from node2 to node1.

    Some of the aggregates being relocated have node1 as their home node. For example, such an aggregate could be called aggr_node_1. If relocation of aggr_node_1 fails during Stage 3, and relocation cannot be forced, then the aggregate is left behind on node2.

  • After Stage 4, when node2 is replaced with the new system modules.

    When node2 is replaced, aggr_node_1 comes online with node2 as its home node instead of node1.

You can fix the incorrect ownership problem after Stage 6, after you have enabled storage failover, by completing the following steps:

Steps
  1. Get a list of aggregates:

    storage aggregate show -nodes node2 -is-home true

    The output lists the aggregates that currently have node2 as their home owner.

  2. Compare the output of Step 1 with the output you captured for node1 in the section Prepare the nodes for upgrade and note any aggregates that were not correctly relocated.
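The comparison in Steps 1 and 2 is a set intersection: any aggregate that appears both in the pre-upgrade list of node1-homed aggregates and in the current list of node2-homed aggregates was left behind. A minimal sketch, using hypothetical aggregate names rather than real command output:

```python
# Hypothetical aggregate names for illustration. In practice, the first set
# comes from the list captured in "Prepare the nodes for upgrade" and the
# second from `storage aggregate show -nodes node2 -is-home true`.

node1_homed_before_upgrade = {"aggr_node_1", "aggr_node_1_data", "aggr1"}
node2_homed_now = {"aggr_node_1", "aggr2_root", "aggr2_data"}

# Aggregates in both sets were not relocated back to node1.
left_behind = sorted(node1_homed_before_upgrade & node2_homed_now)
print(left_behind)
```

Each aggregate in the resulting list is a candidate for the relocation command in Step 3.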

  3. Relocate the aggregates left behind on node2:

    storage aggregate relocation start -node node2 -aggr aggr_node_1 -destination node1

    Do not use the -ndo-controller-upgrade parameter during this relocation.

  4. Verify that node1 is now the home owner of the aggregates:

    storage aggregate show -aggregate aggr1,aggr2,aggr3... -fields home-name

    aggr1,aggr2,aggr3... is the list of aggregates that had node1 as the original home owner.

    Aggregates that do not have node1 as home owner can be relocated to node1 using the same relocation command in Step 3.
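The verification in Step 4 amounts to filtering for aggregates whose home owner is still not node1. A simplified sketch, assuming the aggregate-to-home-owner pairs have already been read from the command output (the names below are placeholders, not real output):

```python
# Placeholder (aggregate, home-name) pairs standing in for the output of
# `storage aggregate show ... -fields home-name`.

home_owners = {
    "aggr1": "node1",
    "aggr2": "node1",
    "aggr3": "node2",  # home owner is still node2; relocation must be repeated
}

# Any aggregate not homed on node1 still needs the Step 3 relocation command.
needs_relocation = [a for a, home in sorted(home_owners.items()) if home != "node1"]
print(needs_relocation)
```

An empty result means all aggregates have node1 as their home owner and the verification passes.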