Relocate non-root aggregates and NAS data LIFs owned by node1 to node2

Before you can replace node1 with node3, you must relocate the non-root aggregates and NAS data LIFs from node1 to node2. Later in the procedure, you move these resources from node2 to node3.

Before you begin

The operation must already be paused when you begin this task; you resume it manually in Step 1.

About this task

Remote LIFs handle traffic to SAN LUNs during the upgrade procedure. Moving SAN LIFs is not necessary for cluster or service health during the upgrade. You must verify that the LIFs are healthy and located on appropriate ports after you bring node3 online.

Note The home owner for the aggregates and LIFs is not modified; only the current owner is modified.
Steps
  1. Resume the aggregate relocation and NAS data LIF move operations:

    system controller replace resume

    All the non-root aggregates and NAS data LIFs are migrated from node1 to node2.

    The operation pauses to enable you to verify whether all node1 non-root aggregates and non-SAN data LIFs have been migrated to node2.

  2. Check the status of the aggregate relocation and NAS data LIF move operations:

    system controller replace show-details

  3. With the operation still paused, verify that all the non-root aggregates are online on node2:

    storage aggregate show -node node2 -state online -root false

    The following example shows that the non-root aggregates on node2 are online:

    cluster::> storage aggregate show -node node2 -state online -root false
    
    Aggregate  Size     Available  Used%  State  #Vols  Nodes  RAID Status
    ---------  -------  ---------  -----  ------ -----  ------ --------------
    aggr_1     744.9GB  744.8GB    0%     online     5  node2  raid_dp,normal
    aggr_2     825.0GB  825.0GB    0%     online     1  node2  raid_dp,normal
    2 entries were displayed.

    If the aggregates have gone offline or become foreign on node2, bring them online by using the following command on node2, once for each aggregate:

    storage aggregate online -aggregate aggr_name
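
    For example, to bring online aggr_1 from the sample output shown earlier:

    cluster::> storage aggregate online -aggregate aggr_1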

  4. Verify that all the volumes are online on node2 by using the following command on node2 and examining its output:

    volume show -node node2 -state offline

    If any volumes are offline on node2, bring them online by using the following command on node2, once for each volume:

    volume online -vserver vserver_name -volume volume_name

    The vserver_name to use with this command is found in the output of the previous volume show command.
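
    For example, assuming the volume show output listed a hypothetical volume vol1 on SVM vs1:

    cluster::> volume online -vserver vs1 -volume vol1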

  5. If the ports currently hosting data LIFs will not exist on the new hardware, remove them from the broadcast domain:

    network port broadcast-domain remove-ports
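
    This command takes the broadcast domain and the node:port pairs to remove. A hypothetical example, assuming port e0g on node1 hosts data LIFs and belongs to the Default broadcast domain in the Default IPspace:

    cluster::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports node1:e0g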

  6. If any LIFs are down, set the administrative status of the LIFs to up by entering the following command, once for each LIF:

    network interface modify -vserver vserver_name -lif LIF_name -home-node nodename -status-admin up
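
    For example, assuming a hypothetical SVM vs1 with a data LIF lif1 homed on node1:

    cluster::> network interface modify -vserver vs1 -lif lif1 -home-node node1 -status-admin up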

  7. If you have interface groups or VLANs configured, complete the following substeps:

    1. If you have not already done so, record the VLAN and interface group information so that you can re-create the VLANs and interface groups on node3 after it boots.

    2. Remove the VLANs from the interface groups:

      network port vlan delete -node nodename -port ifgrp -vlan-id VLAN_ID

      Note Follow the corrective actions suggested by the vlan delete command to resolve any errors.
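
      For example, to delete a hypothetical VLAN with ID 10 from interface group a0a on node1:

      cluster::> network port vlan delete -node node1 -port a0a -vlan-id 10
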
    3. Enter the following command and examine its output to see if there are any interface groups configured on the node:

      network port ifgrp show -node nodename -ifgrp ifgrp_name -instance

      The system displays interface group information for the node as shown in the following example:

      cluster::> network port ifgrp show -node node1 -ifgrp a0a -instance
                       Node: node1
       Interface Group Name: a0a
      Distribution Function: ip
              Create Policy: multimode_lacp
                MAC Address: 02:a0:98:17:dc:d4
         Port Participation: partial
              Network Ports: e2c, e2d
                   Up Ports: e2c
                 Down Ports: e2d
    4. If any interface groups are configured on the node, record the names of those groups and the ports assigned to them, and then delete the ports by entering the following command, once for each port:

      network port ifgrp remove-port -node nodename -ifgrp ifgrp_name -port netport
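
      For example, using interface group a0a from the sample output above, remove each member port in turn:

      cluster::> network port ifgrp remove-port -node node1 -ifgrp a0a -port e2c
      cluster::> network port ifgrp remove-port -node node1 -ifgrp a0a -port e2d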