Relocate non-root aggregates and NAS data LIFs from node2 to node3
Before replacing node2 with node4, you relocate the non-root aggregates and NAS data LIFs that are owned by node2 to node3.
After the post-checks from the previous stage complete, the resource release for node2 starts automatically. The non-root aggregates and non-SAN data LIFs are migrated from node2 to node3.
Remote LIFs handle traffic to SAN LUNs during the upgrade procedure. Moving SAN LIFs is not necessary for cluster or service health during the upgrade.
After the aggregates and LIFs are migrated, the operation is paused for verification purposes. At this stage, you must verify that all the non-root aggregates and non-SAN data LIFs have been migrated to node3.
The home owner for the aggregates and LIFs is not modified; only the current owner is modified.
-
Verify that all the non-root aggregates are online and check their state on node3 by entering the following command and examining the output:
storage aggregate show -node node3 -state online -root false
The following example shows that the non-root aggregates relocated from node2 are online on node3:
cluster::> storage aggregate show -node node3 -state online -root false

Aggregate  Size     Available Used% State  #Vols Nodes  RAID Status
---------- -------- --------- ----- ------ ----- ------ --------------
aggr_1     744.9GB  744.8GB      0% online     5 node2  raid_dp normal
aggr_2     825.0GB  825.0GB      0% online     1 node2  raid_dp normal
2 entries were displayed.
If the aggregates have gone offline or become foreign on node3, bring them online by using the following command on node3, once for each aggregate:
storage aggregate online -aggregate aggr_name
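For example, if the previous output showed an aggregate named aggr_1 as offline (the aggregate name here is only a placeholder; use the names from your own output):
storage aggregate online -aggregate aggr_1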
-
Verify that all the volumes are online on node3 by using the following command on node3 and examining the output:
volume show -node node3 -state offline
If any volumes are offline on node3, bring them online by using the following command on node3, once for each volume:
volume online -vserver vserver_name -volume volume_name
The vserver_name to use with this command is found in the output of the previous volume show command.
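For example, if the volume show output listed an offline volume named vol_data1 in an SVM named vs1 (both names are placeholders for your own values):
volume online -vserver vs1 -volume vol_data1
-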
Verify that the LIFs have been moved to the correct ports and have a status of up. If any LIFs are down, set the administrative status of the LIFs to up by entering the following command, once for each LIF:
network interface modify -vserver vserver_name -lif LIF_name -home-node node_name -status-admin up
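A worked example, assuming a data LIF named lif_nas1 in an SVM named vs1 that should be brought up with node3 as its home node (all three values are placeholders; substitute the names from your network interface show output):
network interface modify -vserver vs1 -lif lif_nas1 -home-node node3 -status-admin up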
-
If the ports currently hosting data LIFs will not exist on the new hardware, remove them from the broadcast domain:
network port broadcast-domain remove-ports
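The command above is shown without parameters; it also takes the IPspace, the broadcast domain, and the node:port entries to remove. A sketch, assuming port e0e on node2 belongs to the Default broadcast domain in the Default IPspace (the port and domain names are placeholders):
network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports node2:e0e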
-
Verify that there are no data LIFs remaining on node2 by entering the following command and examining the output:
network interface show -curr-node node2 -role data
-
If you have interface groups or VLANs configured, complete the following substeps:
-
Record VLAN and interface group information so you can re-create the VLANs and interface groups on node3 after node3 is booted up.
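For example, you can record this information by listing the VLANs and interface groups configured on node2 (standard ONTAP show commands; adjust the node name to your environment):
network port vlan show -node node2
network port ifgrp show -node node2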
-
Remove the VLANs from the interface groups:
network port vlan delete -node nodename -port ifgrp -vlan-id VLAN_ID
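For example, to remove a VLAN with ID 10 from an interface group named a0a on node2 (the VLAN ID and interface group name are placeholders):
network port vlan delete -node node2 -port a0a -vlan-id 10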
-
Check if there are any interface groups configured on the node by entering the following command and examining its output:
network port ifgrp show -node node2 -ifgrp ifgrp_name -instance
The system displays interface group information for the node as shown in the following example:
cluster::> network port ifgrp show -node node2 -ifgrp a0a -instance
                 Node: node3
 Interface Group Name: a0a
Distribution Function: ip
        Create Policy: multimode_lacp
          MAC Address: 02:a0:98:17:dc:d4
   Port Participation: partial
        Network Ports: e2c, e2d
             Up Ports: e2c
           Down Ports: e2d
-
If any interface groups are configured on the node, record the names of those groups and the ports assigned to them, and then delete the ports by entering the following command, once for each port:
network port ifgrp remove-port -node nodename -ifgrp ifgrp_name -port netport
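For example, to remove port e2c from the interface group a0a on node2, matching the example output shown earlier (substitute your own port and interface group names):
network port ifgrp remove-port -node node2 -ifgrp a0a -port e2c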