Relocate non-root aggregates and NAS data LIFs from node2 to node3

Before replacing node2 with node4, you relocate the non-root aggregates and NAS data LIFs that are owned by node2 to node3.

Before you begin

After the post-checks from the previous stage complete, the resource release for node2 starts automatically. The non-root aggregates and non-SAN data LIFs are migrated from node2 to node3.

About this task

Remote LIFs handle traffic to SAN LUNs during the upgrade procedure. Moving SAN LIFs is not necessary for cluster or service health during the upgrade.

After the aggregates and LIFs are migrated, the operation is paused for verification purposes. At this stage, you must verify that all the non-root aggregates and non-SAN data LIFs have been migrated to node3.

Note The home owner of the aggregates and LIFs is not modified; only the current owner is modified.
Steps
  1. Verify that all the non-root aggregates are online and check their state on node3:

    storage aggregate show -node node3 -state online -root false

    The following example shows that the non-root aggregates that were owned by node2 are online:

    cluster::> storage aggregate show -node node3 -state online -root false
    
    Aggregate      Size         Available   Used%   State   #Vols  Nodes  RAID     Status
    ----------     ---------    ---------   ------  -----   -----  ------ -------  ------
    aggr_1         744.9GB      744.8GB      0%     online  5      node2  raid_dp  normal
    aggr_2         825.0GB      825.0GB      0%     online  1      node2  raid_dp  normal
    2 entries were displayed.

    If the aggregates have gone offline or become foreign on node3, bring them online by using the following command on node3, once for each aggregate:

    storage aggregate online -aggregate aggr_name
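
    For example, assuming the aggregate aggr_1 from the output above had gone offline, you would bring it back online with:

    storage aggregate online -aggregate aggr_1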

  2. Verify that all the volumes are online on node3 by using the following command on node3 and examining the output:

    volume show -node node3 -state offline

    If any volumes are offline on node3, bring them online by using the following command on node3, once for each volume:

    volume online -vserver vserver_name -volume volume_name

    The vserver_name to use with this command is found in the output of the previous volume show command.
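
    For example, assuming a hypothetical SVM named vs1 with a volume named vol1 reported as offline:

    volume online -vserver vs1 -volume vol1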

  3. Verify that the LIFs have been moved to the correct ports and have a status of up. If any LIFs are down, set the administrative status of the LIFs to up by entering the following command, once for each LIF:

    network interface modify -vserver vserver_name -lif LIF_name -home-node node_name -status-admin up
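
    For example, assuming a hypothetical SVM vs1 with a data LIF named lif1 that should be up on node3:

    network interface modify -vserver vs1 -lif lif1 -home-node node3 -status-admin up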

  4. If the ports currently hosting data LIFs will not exist on the new hardware, remove them from the broadcast domain:

    network port broadcast-domain remove-ports
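
    The complete command also requires the broadcast domain and the ports to remove. The following is a sketch that assumes port e0d on node2 belongs to the broadcast domain Default in the IPspace Default:

    network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports node2:e0d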

  5. Verify that there are no data LIFs remaining on node2 by entering the following command and examining the output:

    network interface show -curr-node node2 -role data
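
    If the relocation completed successfully, this command should not list any data LIFs that are still hosted on node2.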