Relocate non-root aggregates and NAS data LIFs from node2 to node1


Before you can replace node2 with the AFF A900 controller module and the NVRAM module, you must first relocate the non-root aggregates that are owned by node2 to node1.

Before you begin

After the post-checks from the previous stage complete, the resource release for node2 starts automatically. The non-root aggregates and non-SAN data LIFs are migrated from node2 to the new node1.

About this task

After the aggregates and LIFs are migrated, the operation is paused for verification purposes. At this stage, you must verify that all the non-root aggregates and non-SAN data LIFs are migrated to the new node1.

The home owner of the aggregates and LIFs is not modified; only the current owner is modified.

Steps
  1. Verify that all the non-root aggregates are online and check their state on node1:

    storage aggregate show -node node1 -state online -root false

    The following example shows that the non-root aggregates on node1 are online:

    cluster::> storage aggregate show -node node1 -state online -root false

    Aggregate   Size      Available  Used%  State   #Vols  Nodes  RAID     Status
    ----------  --------  ---------  -----  ------  -----  -----  -------  ------
    aggr_1      744.9GB   744.8GB    0%     online  5      node1  raid_dp  normal
    aggr_2      825.0GB   825.0GB    0%     online  1      node1  raid_dp  normal
    2 entries were displayed.

    If the aggregates have gone offline or become foreign on node1, bring them online by using the following command on the new node1, once for each aggregate:

    storage aggregate online -aggregate <aggr_name>

  2. Verify that all the volumes on node1 are online by using the following command and examining its output:

    volume show -node node1 -state offline

    If any volumes are offline on node1, bring them online by using the following command on node1, once for each volume:

    volume online -vserver <vserver-name> -volume <volume-name>

    The <vserver-name> to use with this command is found in the output of the previous volume show command.

  3. Verify that the LIFs have been moved to the correct ports and have a status of up. If any LIFs are down, set the administrative status of the LIFs to up by entering the following command, once for each LIF:

    network interface modify -vserver <vserver_name> -lif <LIF_name> -home-node <node_name> -status-admin up

  4. Verify that there are no data LIFs remaining on node2 by using the following command and examining the output:

    network interface show -curr-node <node2> -role data
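    When many volumes are offline, the check-and-online loop in Step 2 can be tedious to run by hand. The sketch below is a hypothetical helper, not part of the ONTAP CLI: it assumes the standard tabular output of `volume show -node node1 -state offline` and prints one `volume online` command per offline volume so that each can be reviewed before it is run against the cluster. The vserver and volume names in the example (`vs1`, `vol_a`) are placeholders, not values from this procedure.

    ```shell
    #!/bin/sh
    # Hypothetical helper: read "volume show -node node1 -state offline"
    # output on stdin and emit one "volume online" command per row.
    emit_online_commands() {
      awk '
        /^-+ /                           { body = 1; next }  # rows start after the dashed rule
        body && /entries were displayed/ { exit }            # stop at the trailing summary line
        body && NF >= 2 {
          printf "volume online -vserver %s -volume %s\n", $1, $2
        }
      '
    }

    # Example with sample output; vs1 and vol_a are placeholders.
    printf 'Vserver   Volume\n--------- ---------\nvs1       vol_a\n2 entries were displayed.\n' \
      | emit_online_commands
    # prints: volume online -vserver vs1 -volume vol_a
    ```

    The commands are printed rather than executed so that an administrator can inspect them first; adjust the parsing if your cluster's output columns differ.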