ONTAP MetroCluster

Booting the new controller modules (MetroCluster FC configurations)

Contributors netapp-folivia netapp-thomi netapp-martyh netapp-pcarriga

After aggregate healing has been completed for both the data and root aggregates, you must boot the node or nodes at the disaster site.

About this task

This task begins with the nodes showing the LOADER prompt.

Steps
  1. Display the boot menu:

    boot_ontap menu
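    The menu text varies by ONTAP release, but it is similar to the following (hypothetical example):

    Please choose one of the following:

    (1)  Normal Boot.
    (2)  Boot without /etc/rc.
    (3)  Change password.
    (4)  Clean configuration and initialize all disks.
    (5)  Maintenance mode boot.
    (6)  Update flash from backup config.
    (7)  Install new software first.
    (8)  Reboot node.
    Selection (1-8)?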

  2. From the boot menu, select option 6, Update flash from backup config.

  3. Respond y to the following prompt:

    This will replace all flash-based configuration with the last backup to disks. Are you sure you want to continue?: y

    The system will boot twice, the second time to load the new configuration.

    Note If you did not clear the NVRAM contents of a used replacement controller, you might see a panic with the following message: PANIC: NVRAM contents are invalid... If this occurs, repeat step 2 (from the boot menu, select option 6, Update flash from backup config) to boot the system to the ONTAP prompt. You then need to reset the bootargs as described in Reset the boot_recovery and rdb_corrupt bootargs.
  4. Mirror the root aggregate on plex 0:

    1. Assign three pool0 disks to the new controller module.

    2. Mirror the root aggregate pool1 plex:

      aggr mirror root-aggr-name

    3. Assign unowned disks to pool0 on the local node.
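    Taken together, these substeps might look like the following; the disk, node, and aggregate names are hypothetical examples:

      cluster_A::> storage disk assign -disk 1.10.0 -pool 0 -owner node_A_1
      cluster_A::> storage disk assign -disk 1.10.1 -pool 0 -owner node_A_1
      cluster_A::> storage disk assign -disk 1.10.2 -pool 0 -owner node_A_1
      cluster_A::> aggr mirror node_A_1_root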

  5. If you have a four-node configuration, repeat the previous steps on the other node at the disaster site.

  6. Refresh the MetroCluster configuration:

    1. Enter advanced privilege mode:

      set -privilege advanced

    2. Refresh the configuration:

      metrocluster configure -refresh true

    3. Return to admin privilege mode:

      set -privilege admin
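    A session for these substeps might look like the following; the cluster name and job number are hypothetical, and the exact messages vary by ONTAP release:

      cluster_A::> set -privilege advanced
      Warning: These advanced commands are potentially dangerous; use them only
               when directed to do so by NetApp personnel.
      Do you want to continue? {y|n}: y
      cluster_A::*> metrocluster configure -refresh true
      [Job 726] Job succeeded: Configure is successful.
      cluster_A::*> set -privilege admin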

  7. Confirm that the replacement nodes at the disaster site are ready for switchback:

    metrocluster node show

    The replacement nodes should be in “waiting for switchback recovery” mode. If they are in “normal” mode instead, reboot the replacement nodes; after the reboot, the nodes should be in “waiting for switchback recovery” mode.

    The following example shows that the replacement nodes are ready for switchback:

    cluster_B::> metrocluster node show
    DR                    Configuration  DR
    Grp Cluster Node      State          Mirroring Mode
    --- ------- --------- -------------- --------- --------------------
    1   cluster_B
                node_B_1  configured     enabled   switchover completed
                node_B_2  configured     enabled   switchover completed
        cluster_A
                node_A_1  configured     enabled   waiting for switchback recovery
                node_A_2  configured     enabled   waiting for switchback recovery
    4 entries were displayed.
    
    cluster_B::>

What to do next

Proceed to Complete the disaster recovery process.

Reset the boot_recovery and rdb_corrupt bootargs

If required, you can reset the boot_recovery and rdb_corrupt bootargs.

Steps
  1. Halt the node to return it to the LOADER prompt:

    node_A_1::*> halt -node node-name

  2. Check if the following bootargs have been set:

    LOADER> printenv bootarg.init.boot_recovery
    LOADER> printenv bootarg.rdb_corrupt
  3. If either bootarg has been set to a value, unset it and boot ONTAP:

    LOADER> unsetenv bootarg.init.boot_recovery
    LOADER> unsetenv bootarg.rdb_corrupt
    LOADER> saveenv
    LOADER> bye
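
If a bootarg is set, printenv reports its value; the exact output format depends on the LOADER version. A hypothetical example with bootarg.init.boot_recovery set:

    LOADER> printenv bootarg.init.boot_recovery
    Variable Name:        bootarg.init.boot_recovery
    Value:                true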