
Migrate LIFs and data aggregates on FAS2820 node4 to node3

To complete the upgrade, you connect FAS2820 node3 to node4 and then migrate the data logical interfaces (LIFs) and data aggregates on node4 to node3.

About this task

Perform the following steps on node3.

Steps
  1. At the LOADER prompt for node3, boot the node into the boot menu:

    boot_ontap menu
  2. Select option 6, Update flash from backup config, to restore the /var file system to node3.

    This replaces all flash-based configuration with the last backup to disks.

  3. Enter y to continue.

  4. Allow the node to boot as normal.

    Note

    The node automatically reboots to load the new copy of the /var file system.

    The node reports a warning that there is a system ID mismatch. Enter y to override the system ID.

  5. Verify that the cluster and HA ports are connected between node3 and node4.

  6. Display the cluster and HA ports on node3 and node4:

    set -privilege advanced
    network port show
  7. Modify the cluster broadcast domain to include the desired cluster ports:

    network port broadcast-domain remove-ports -broadcast-domain <broadcast_domain_name> -ports <port_names>
    network port broadcast-domain add-ports -broadcast-domain Cluster -ports <port_names>
    Note Beginning with ONTAP 9.8, new IPspaces and one or more broadcast domains might be assigned to existing physical ports that are intended for cluster connectivity.
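
    For example, if the cluster ports on node3 are e0a and e0b and they currently belong to the Default broadcast domain (the domain and port names here are illustrative; substitute the names from your environment):

    network port broadcast-domain remove-ports -broadcast-domain Default -ports node3:e0a,node3:e0b
    network port broadcast-domain add-ports -broadcast-domain Cluster -ports node3:e0a,node3:e0b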
  8. Modify the cluster IPspace to include the desired cluster ports and set the maximum transmission unit to 9000 if not already set:

    network port modify -node <node_name> -port <port_name> -mtu 9000 -ipspace Cluster
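
    For example, to place an illustrative port e0a on node3 into the Cluster IPspace with an MTU of 9000:

    network port modify -node node3 -port e0a -mtu 9000 -ipspace Cluster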
  9. Display all cluster network LIFs:

    network interface show -role cluster
  10. Migrate all cluster network LIFs on both nodes to their planned home ports:

    network interface migrate -vserver <vserver_name> -lif <lif_name> -destination-node <node_name> -destination-port <port_name>
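
    For example, to migrate an illustrative cluster LIF named node3_clus1 to port e0a on node3 (cluster LIFs reside in the Cluster SVM; repeat for each cluster LIF on both nodes):

    network interface migrate -vserver Cluster -lif node3_clus1 -destination-node node3 -destination-port e0a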
  11. Display all cluster network LIFs:

    network interface show -role cluster
  12. Configure the home ports for the cluster network LIFs:

    network interface modify -vserver <vserver_name> -lif <lif_name> -home-port <port_name>
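
    For example, using the same illustrative LIF and port names as in the earlier migration example:

    network interface modify -vserver Cluster -lif node3_clus1 -home-port e0a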
  13. Migrate all data LIFs meant for node3 back to node3:

    network interface migrate -vserver <vserver_name> -lif <lif_name> -destination-node <node3> -destination-port <port_name>
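
    For example, to migrate an illustrative data LIF named datalif1 in an SVM named vs0 to port e0c on node3:

    network interface migrate -vserver vs0 -lif datalif1 -destination-node node3 -destination-port e0c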
  14. Display all data network LIFs:

    network interface show -role data
  15. Configure the home node and home port for all data LIFs. If any LIFs are down, set their administrative status to up by entering the following command once for each LIF:

    network interface modify -vserver <vserver_name> -lif <lif_name> -home-node <node_name> -home-port <port_name> -status-admin up
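
    For example, using the illustrative data LIF datalif1 in the SVM vs0 from the previous example:

    network interface modify -vserver vs0 -lif datalif1 -home-node node3 -home-port e0c -status-admin up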
  16. Migrate the cluster management LIF:

    network interface migrate -vserver <vserver_name> -lif <cluster_mgmt_lif> -destination-node <node3> -destination-port <port_name>
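
    For example, if the cluster management LIF is named cluster_mgmt in an admin SVM named cluster1 and its new port on node3 is e0M (these names are illustrative; use the names from your environment):

    network interface migrate -vserver cluster1 -lif cluster_mgmt -destination-node node3 -destination-port e0M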
  17. Display the status of the cluster management LIF:

    network interface show -role cluster-mgmt
  18. Display the status of all data aggregates in the cluster:

    storage aggregate show
  19. Enable cluster high availability in the two-node cluster:

    cluster ha modify -configured true
  20. Enable and verify storage failover for node3 and node4:

    storage failover modify -node <node3> -enabled true
    storage failover modify -node <node4> -enabled true
    storage failover show
  21. Migrate data aggregates owned by node4 that should be owned by node3:

    storage aggregate relocation start -aggregate <aggregate_name> -node <node4> -destination <node3>
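
    For example, to relocate an illustrative data aggregate named aggr_data1 from node4 to node3:

    storage aggregate relocation start -aggregate aggr_data1 -node node4 -destination node3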
  22. Display the status of all data aggregates in the cluster:

    storage aggregate show
  23. Enable auto-revert of the network LIFs across the nodes:

    network interface modify -lif * -auto-revert true
  24. Enable storage failover automatic giveback:

    storage failover modify -node * -auto-giveback true
  25. Display the cluster status:

    cluster show
  26. Display failover eligibility:

    storage failover show
    Note In the storage failover show output, a node might incorrectly own aggregates that belong to the other node. If this occurs, perform a takeover and giveback from both sides of the cluster.
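
    For example, one way to correct the ownership is to have node3 take over node4 and then give back (repeat in the other direction if needed):

    storage failover takeover -ofnode node4
    storage failover giveback -ofnode node4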
  27. Display the status of all data aggregates in the cluster:

    storage aggregate show