Reassign node2 disks to node4

You need to reassign the disks that belonged to node2 to node4 before verifying the node4 installation.

Steps
  1. Verify that node2 has stopped at the boot menu and reassign the disks of node2 to node4:

    boot_after_controller_replacement

    After a short delay, you are prompted to enter the name of the node that is being replaced. If there are shared disks (also called Advanced Drive Partitioning (ADP) or partitioned disks), you are prompted to enter the node name of the HA partner.

    These prompts might get buried in the console messages. If you do not enter a node name or enter an incorrect name, you are prompted to enter the name again.

    Example console output:
    LOADER-A> boot_ontap menu ...
    *******************************
    *                             *
    * Press Ctrl-C for Boot Menu. *
    *                             *
    *******************************
    .
    .
    Please choose one of the following:
    
    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.
    (9) Configure Advanced Drive Partitioning.
    Selection (1-9)? 22/7
    .
    .
    (boot_after_controller_replacement) Boot after controller upgrade
    (9a)                                Unpartition all disks and remove their ownership information.
    (9b)                                Clean configuration and initialize node with partitioned disks.
    (9c)                                Clean configuration and initialize node with whole disks.
    (9d)                                Reboot the node.
    (9e)                                Return to main boot menu.
    
    Please choose one of the following:
    
    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.
    (9) Configure Advanced Drive Partitioning.
    Selection (1-9)? boot_after_controller_replacement
    .
    This will replace all flash-based configuration with the last backup to disks. Are you sure you want to continue?: yes
    .
    .
    Controller Replacement: Provide name of the node you would like to replace: <name of the node being replaced>
    Controller Replacement: Provide High Availability partner of node2: <nodename of the HA partner of the node being replaced>
    Changing sysid of node <node being replaced> disks.
    Fetched sanown old_owner_sysid = 536953334 and calculated old sys id = 536953334
    Partner sysid = 4294967295, owner sysid = 536953334
    .
    .
    .
    Terminated
    <node reboots>
    .
    .
    System rebooting...
    .
    Restoring env file from boot media...
    copy_env_file:scenario = head upgrade
    Successfully restored env file from boot media...
    .
    .
    System rebooting...
    .
    .
    .
    WARNING: System ID mismatch. This usually occurs when replacing a boot device or NVRAM cards!
    Override system ID? {y|n} y
    Login: ...
  2. If the system goes into a reboot loop with the message no disks found, it is because the ports have been reset to target mode and the system therefore cannot see any disks. To resolve this, perform Step 3 through Step 8 on node4.

  3. Press Ctrl-C during AUTOBOOT to stop the node at the LOADER> prompt.

  4. At the LOADER prompt, enter maintenance mode:

    boot_ontap maint

  5. In maintenance mode, display all the previously set initiator ports that are now in target mode:

    ucadmin show

    Change the ports back to initiator mode:

    ucadmin modify -m fc -t initiator -f adapter_name
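
    For example, to return two converged ports to initiator mode (the adapter names 0e and 0f and the output layout below are illustrative and vary by platform):

    *> ucadmin show
             Current  Current    Pending  Pending    Admin
    Adapter  Mode     Type       Mode     Type       Status
    -------  -------  ---------  -------  ---------  -------
    0e       fc       target     -        -          online
    0f       fc       target     -        -          online
    *> ucadmin modify -m fc -t initiator -f 0e
    *> ucadmin modify -m fc -t initiator -f 0f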

  6. Verify that the ports have been changed to initiator mode:

    ucadmin show
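
    In the illustrative example above, the same command now shows initiator as the pending type for ports 0e and 0f; the new mode takes effect when the node next boots:

    *> ucadmin show
             Current  Current    Pending  Pending    Admin
    Adapter  Mode     Type       Mode     Type       Status
    -------  -------  ---------  -------  ---------  -------
    0e       fc       target     fc       initiator  online
    0f       fc       target     fc       initiator  online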

  7. Exit maintenance mode:

    halt

    Note

    If you are upgrading from a system that supports external disks to a system that also supports external disks, go to Step 8.

    If you are upgrading from a system that uses external disks to a system that supports both internal and external disks, for example, an AFF A800 system, go to Step 9.

  8. At the LOADER prompt, boot up:

    boot_ontap menu

    On booting, the node can now detect all the disks that were previously assigned to it and boots as expected.

    When the cluster nodes you are replacing use root volume encryption, ONTAP is unable to read the volume information from the disks. Restore the keys for the root volume:

    1. Return to the special boot menu:

      LOADER> boot_ontap menu

      Please choose one of the following:
      (1) Normal Boot.
      (2) Boot without /etc/rc.
      (3) Change password.
      (4) Clean configuration and initialize all disks.
      (5) Maintenance mode boot.
      (6) Update flash from backup config.
      (7) Install new software first.
      (8) Reboot node.
      (9) Configure Advanced Drive Partitioning.
      (10) Set Onboard Key Manager recovery secrets.
      (11) Configure node for external key management.
      
      Selection (1-11)? 10
    2. Select (10) Set Onboard Key Manager recovery secrets.

    3. Enter y at the following prompt:

      This option must be used only in disaster recovery procedures. Are you sure? (y or n): y

    4. At the prompt, enter the key-manager passphrase.

    5. Enter the backup data when prompted (see the example exchange after these substeps).

      Note You must have obtained the passphrase and backup data in the Prepare the nodes for upgrade section of this procedure.
    6. After the system boots to the special boot menu again, select option (1) Normal Boot.

      Note You might encounter an error at this stage. If an error occurs, repeat the substeps in Step 8 until the system boots normally.
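
    For reference, the recovery exchange in substeps 3 through 5 looks similar to the following. The exact prompt wording varies by ONTAP release, and the passphrase and backup blob are the values you recorded in the Prepare the nodes for upgrade section:

    Selection (1-11)? 10
    This option must be used only in disaster recovery procedures. Are you sure? (y or n): y
    Enter the passphrase for onboard key management: <passphrase>
    Enter the passphrase again to confirm: <passphrase>
    Enter the backup data: --------------------------BEGIN BACKUP--------------------------
    <backup data>
    ---------------------------END BACKUP---------------------------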
  9. If you are upgrading from a system with external disks to a system that supports internal and external disks (AFF A800 systems, for example), set the node2 aggregate as the root aggregate to confirm that node4 boots from the root aggregate of node2. To set the root aggregate, go to the boot menu on node4 and select option 5 to enter maintenance mode.

    Warning You must perform the following substeps in the exact order shown; failure to do so might cause an outage or even data loss.

    The following procedure sets node4 to boot from the root aggregate of node2:

    1. Enter maintenance mode:

      boot_ontap maint

    2. Check the RAID, plex, and checksum information for the node2 aggregate:

      aggr status -r

    3. Check the status of the node2 aggregate:

      aggr status

    4. If necessary, bring the node2 aggregate online:

      aggr online root_aggr_from_node2

    5. Prevent node4 from booting from its original root aggregate:

      aggr offline root_aggr_on_node4

    6. Set the node2 root aggregate as the new root aggregate for node4:

      aggr options root_aggr_from_node2 root
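
    Taken together, the substeps form the following maintenance-mode sequence, using the placeholder aggregate names from the steps above:

    *> aggr status -r
    *> aggr status
    *> aggr online root_aggr_from_node2
    *> aggr offline root_aggr_on_node4
    *> aggr options root_aggr_from_node2 root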