Reassign node1 disks to node3

You need to reassign the disks that belonged to node1 to node3 before verifying the node3 installation.

About this task

You perform the steps in this section on node3.

Steps
  1. Go to the boot menu and use 22/7 to select the hidden option boot_after_controller_replacement. At the prompt, enter node1 to reassign the disks of node1 to node3, as shown in the following example.

    The following example shows the console output:
    LOADER-A> boot_ontap menu
    ...
    *******************************
    *                             *
    * Press Ctrl-C for Boot Menu. *
    *                             *
    *******************************
    .
    .
    Please choose one of the following:
    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.
    (9) Configure Advanced Drive Partitioning.
    Selection (1-9)? 22/7
    .
    .
    (boot_after_controller_replacement)   Boot after controller upgrade
    (9a)                                  Unpartition all disks and remove their ownership information.
    (9b)                                  Clean configuration and initialize node with partitioned disks.
    (9c)                                  Clean configuration and initialize node with whole disks.
    (9d)                                  Reboot the node.
    (9e)                                  Return to main boot menu.
    
    Please choose one of the following:
    
    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.
    (9) Configure Advanced Drive Partitioning.
    Selection (1-9)? boot_after_controller_replacement
    .
    This will replace all flash-based configuration with the last backup to
    disks. Are you sure you want to continue?: yes
    .
    .
    Controller Replacement: Provide name of the node you would like to replace: <name of the node being replaced>
    .
    .
    Changing sysid of node <node being replaced> disks.
    Fetched sanown old_owner_sysid = 536953334 and calculated old sys id = 536953334
    Partner sysid = 4294967295, owner sysid = 536953334
    .
    .
    .
    Terminated
    <node reboots>
    .
    .
    System rebooting...
    .
    Restoring env file from boot media...
    copy_env_file:scenario = head upgrade
    Successfully restored env file from boot media...
    .
    .
    System rebooting...
    .
    .
    .
    WARNING: System ID mismatch. This usually occurs when replacing a boot device or NVRAM cards!
    Override system ID? {y|n} y
    Login:
    ...
  2. If the system enters a reboot loop with the message no disks found, it is because the ports have been reset to target mode and the system cannot see any disks. Continue with Step 3 through Step 8 to resolve this.

  3. Press Ctrl-C during AUTOBOOT to stop the node at the LOADER> prompt.

  4. At the LOADER prompt, enter maintenance mode:

    boot_ontap maint
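
    A minimal sketch of what to expect, assuming the node stops at the maintenance mode prompt (the boot banner text varies by platform and ONTAP release):

    LOADER> boot_ontap maint
    ...
    *>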

  5. In maintenance mode, display all the previously set initiator ports that are now in target mode:

    ucadmin show

    Change the ports back to initiator mode:

    ucadmin modify -m fc -t initiator -f adapter_name

  6. Verify that the ports have been changed to initiator mode:

    ucadmin show
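
    The following is an illustrative sketch of steps 5 and 6 only. The adapter names (0e and 0f) are placeholders for the adapters reported on your system, the column layout of ucadmin show varies by platform and ONTAP release, and depending on the release the new type might appear under the Pending columns until the next boot:

    *> ucadmin show
             Current  Current    Pending  Pending    Admin
    Adapter  Mode     Type       Mode     Type       Status
    -------  -------  ---------  -------  ---------  ------------
    0e       fc       target     -        -          online
    0f       fc       target     -        -          online
    *> ucadmin modify -m fc -t initiator -f 0e
    *> ucadmin modify -m fc -t initiator -f 0f
    *> ucadmin show
             Current  Current    Pending  Pending    Admin
    Adapter  Mode     Type       Mode     Type       Status
    -------  -------  ---------  -------  ---------  ------------
    0e       fc       target     -        initiator  online
    0f       fc       target     -        initiator  online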

  7. Exit maintenance mode:

    halt

    Note

    If you are upgrading from a system that supports external disks to a system that also supports external disks, go to Step 8.

    If you are upgrading from a system that supports external disks to a system that supports both internal and external disks, for example, an AFF A800 system, go to Step 9.

  8. At the LOADER prompt, boot up:

    boot_ontap menu

    On booting, the node can now detect all of the disks that were previously assigned to it and boot up as expected.

    When the cluster nodes you are replacing use root volume encryption, ONTAP is unable to read the volume information from the disks. Restore the keys for the root volume:

    1. Return to the special boot menu:

      LOADER> boot_ontap menu

      Please choose one of the following:
      (1) Normal Boot.
      (2) Boot without /etc/rc.
      (3) Change password.
      (4) Clean configuration and initialize all disks.
      (5) Maintenance mode boot.
      (6) Update flash from backup config.
      (7) Install new software first.
      (8) Reboot node.
      (9) Configure Advanced Drive Partitioning.
      (10) Set Onboard Key Manager recovery secrets.
      (11) Configure node for external key management.
      
      Selection (1-11)? 10
    2. Select (10) Set Onboard Key Manager recovery secrets.

    3. Enter y at the following prompt:

      This option must be used only in disaster recovery procedures. Are you sure? (y or n): y

    4. At the prompt, enter the key-manager passphrase.

    5. Enter the backup data when prompted.

      Note You must have obtained the passphrase and backup data in the Prepare the nodes for upgrade section of this procedure.
    6. After the system boots to the special boot menu again, run option (1) Normal Boot.

      Note You might encounter an error at this stage. If an error occurs, repeat the substeps in Step 8 until the system boots normally.
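
    A representative sketch of the recovery prompts in substeps 3 through 5 follows; the exact prompt text varies by ONTAP release, and the passphrase and backup data values are placeholders for the information you obtained in Prepare the nodes for upgrade:

    This option must be used only in disaster recovery procedures. Are you sure? (y or n): y
    Enter the passphrase for onboard key management: <key-manager passphrase>
    Enter the passphrase again to confirm: <key-manager passphrase>
    Enter the backup data: <paste the backup data that you saved earlier>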
  9. If you are upgrading from a system with external disks to a system that supports internal and external disks (AFF A800 systems, for example), set the node1 aggregate as the root aggregate to ensure that node3 boots from the root aggregate of node1. To set the root aggregate, go to the boot menu and select option 5 to enter maintenance mode.

    Caution You must perform the following substeps in the exact order shown; failure to do so might cause an outage or even data loss.

    The following procedure sets node3 to boot from the root aggregate of node1:

    1. Enter maintenance mode:

      boot_ontap maint

    2. Check the RAID, plex, and checksum information for the node1 aggregate:

      aggr status -r
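
      A truncated, illustrative sketch of the output, using the node1 root aggregate name from the example in substep 7 (your aggregate, plex, and RAID group names will differ, the aggregate might show as offline at this point, and each disk in the RAID group is listed below these lines):

      *> aggr status -r
      Aggregate aggr0_nst_fas8080_15 (online, raid_dp) (block checksums)
        Plex /aggr0_nst_fas8080_15/plex0 (online, normal, active)
          RAID group /aggr0_nst_fas8080_15/plex0/rg0 (normal, block checksums)
      ...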

    3. Check the status of the node1 aggregate:

      aggr status

    4. If necessary, bring the node1 aggregate online:

      aggr online root_aggr_from_node1

    5. Prevent node3 from booting from its original root aggregate:

      aggr offline root_aggr_on_node3

    6. Set the node1 root aggregate as the new root aggregate for node3:

      aggr options aggr_from_node1 root
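
      Using the aggregate names from the example output in substep 7, where aggr0_nst_fas8080_15 is the root aggregate brought over from node1 and aggr0 is the original node3 root aggregate (your aggregate names will differ), substeps 4 through 6 might look like this:

      *> aggr online aggr0_nst_fas8080_15
      *> aggr offline aggr0
      *> aggr options aggr0_nst_fas8080_15 root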

    7. Verify that the root aggregate of node3 is offline and the root aggregate for the disks brought over from node1 is online and set to root:

      aggr status

      Note Failing to perform the previous substep might cause node3 to boot from the internal root aggregate, or it might cause the system to assume a new cluster configuration exists or prompt you to identify one.

      The following shows an example of the command output:

       -----------------------------------------------------------------
       Aggr                 State    Status             Options
      
       aggr0_nst_fas8080_15 online   raid_dp, aggr      root, nosnap=on
                                     fast zeroed
                                     64-bit
      
       aggr0                offline  raid_dp, aggr      diskroot
                                     fast zeroed
                                     64-bit
       -----------------------------------------------------------------