Relocate non-root aggregates and NAS data LIFs owned by node1 to node2
Before you can replace node1 with node3, you must move the non-root aggregates and NAS data LIFs that node1 owns to node2; node1's resources are eventually relocated to node3.
The operation must already be paused when you begin the task; you must manually resume the operation.
Remote LIFs handle traffic to SAN LUNs during the upgrade procedure. Moving SAN LIFs is not necessary for cluster or service health during the upgrade. You must verify that the LIFs are healthy and located on appropriate ports after you bring node3 online.
The home owner for the aggregates and LIFs is not modified; only the current owner is modified.
-
Resume the aggregate relocation and NAS data LIF move operations:
system controller replace resume
All the non-root aggregates and NAS data LIFs are migrated from node1 to node2.
The operation pauses to enable you to verify whether all node1 non-root aggregates and non-SAN data LIFs have been migrated to node2.
-
Check the status of the aggregate relocation and NAS data LIF move operations:
system controller replace show-details
-
With the operation still paused, verify that all the non-root aggregates are online on node2:
storage aggregate show -node node2 -state online -root false
The following example shows that the non-root aggregates on node2 are online:
cluster::> storage aggregate show -node node2 -state online -root false

Aggregate  Size     Available  Used%  State   #Vols  Nodes  RAID Status
---------  -------  ---------  -----  ------  -----  -----  --------------
aggr_1     744.9GB  744.8GB    0%     online  5      node2  raid_dp,normal
aggr_2     825.0GB  825.0GB    0%     online  1      node2  raid_dp,normal
2 entries were displayed.
If the aggregates have gone offline or become foreign on node2, bring them online by using the following command on node2, once for each aggregate:
storage aggregate online -aggregate aggr_name
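For example, to bring a hypothetical aggregate named aggr_1 back online (the aggregate name here is only illustrative), the command would look similar to the following:
cluster::> storage aggregate online -aggregate aggr_1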
-
Verify that all the volumes are online on node2 by using the following command on node2 and examining its output:
volume show -node node2 -state offline
If any volumes are offline on node2, bring them online by using the following command on node2, once for each volume:
volume online -vserver vserver_name -volume volume_name
The vserver_name to use with this command is found in the output of the previous volume show command.
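For example, assuming the volume show output listed an offline volume vol1 in SVM vs1 (both names are hypothetical), the command would look similar to the following:
cluster::> volume online -vserver vs1 -volume vol1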
-
If the ports currently hosting data LIFs will not exist on the new hardware, remove them from the broadcast domain:
network port broadcast-domain remove-ports
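For example, assuming the data LIFs are hosted on port e0c of node1 in the Default broadcast domain and Default IPspace (all values are hypothetical and depend on your environment), the command might look similar to the following:
cluster::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports node1:e0c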
-
If any LIFs are down, set the administrative status of the LIFs to "up" by entering the following command, once for each LIF:
network interface modify -vserver vserver_name -lif LIF_name -home-node nodename -status-admin up
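For example, assuming a down LIF named lif1 in SVM vs1 whose home node should be node2 (all names are hypothetical; substitute the values from your environment):
cluster::> network interface modify -vserver vs1 -lif lif1 -home-node node2 -status-admin up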
-
If you have interface groups or VLANs configured, complete the following substeps:
-
If you have not already saved them, record VLAN and ifgrp information so you can re-create the VLANs and ifgrps on node3 after node3 is booted up.
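If you have not yet captured this information, commands similar to the following can be used to list the VLANs and interface groups currently configured on node1 (the node name is illustrative); record the ifgrp names, distribution functions, member ports, parent ports, and VLAN IDs from the output:
cluster::> network port vlan show -node node1
cluster::> network port ifgrp show -node node1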
-
Remove the VLANs from the interface groups by entering the following command:
network port vlan delete -node nodename -port ifgrp -vlan-id VLAN_ID
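For example, assuming a VLAN with ID 10 configured on interface group a0a of node1 (values are hypothetical):
cluster::> network port vlan delete -node node1 -port a0a -vlan-id 10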
Follow the corrective action to resolve any errors that are suggested by the vlan delete command.
-
Enter the following command and examine its output to see if there are any interface groups configured on the node:
network port ifgrp show -node nodename -ifgrp ifgrp_name -instance
The system displays interface group information for the node as shown in the following example:
cluster::> network port ifgrp show -node node1 -ifgrp a0a -instance

                 Node: node1
 Interface Group Name: a0a
Distribution Function: ip
        Create Policy: multimode_lacp
          MAC Address: 02:a0:98:17:dc:d4
   Port Participation: partial
        Network Ports: e2c, e2d
             Up Ports: e2c
           Down Ports: e2d
-
If any interface groups are configured on the node, record the names of those groups and the ports assigned to them, and then delete the ports by entering the following command, once for each port:
network port ifgrp remove-port -node nodename -ifgrp ifgrp_name -port netport
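For example, using the interface group a0a and member port e2c shown in the earlier output (substitute the group and port names recorded from your system):
cluster::> network port ifgrp remove-port -node node1 -ifgrp a0a -port e2c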
-