Recable the system and reassign disks - FAS9000

Continue the replacement procedure by recabling the storage and confirming disk reassignment.

Step 1: Recable the system

Recable the controller module's storage and network connections.

Steps
  1. Recable the system.

  2. Verify that the cabling is correct by using Active IQ Config Advisor.

    1. Download and install Config Advisor.

    2. Enter the information for the target system, and then click Collect Data.

    3. Click the Cabling tab, and then examine the output. Make sure that all disk shelves are displayed and all disks appear in the output, correcting any cabling issues you find.

    4. Check other cabling by clicking the appropriate tab, and then examining the output from Config Advisor.

Step 2: Reassign disks

If the storage system is in an HA pair, the system ID of the new controller module is automatically assigned to the disks when the giveback occurs at the end of the procedure. You must confirm the system ID change when you boot the replacement node and then verify that the change was implemented.

This procedure applies only to systems running ONTAP in an HA pair.

  1. If the replacement node is in Maintenance mode (showing the *> prompt), exit Maintenance mode and go to the LOADER prompt: halt

  2. From the LOADER prompt on the replacement node, boot the node, entering y if you are prompted to override the system ID due to a system ID mismatch: boot_ontap
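
    For example, with the replacement node still at the Maintenance mode prompt from the previous step, the two commands look like this (prompts shown for illustration; the exact wording of any system ID mismatch prompt varies by ONTAP release):

    *> `halt`
    LOADER> `boot_ontap`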

  3. Wait until the Waiting for giveback… message is displayed on the replacement node console and then, from the healthy node, verify that the new partner system ID has been automatically assigned: storage failover show

    In the command output, you should see a message that the system ID has changed on the impaired node, showing the correct old and new IDs. In the following example, node2 has undergone replacement and has a new system ID of 151759706.

    node1> `storage failover show`
                                        Takeover
    Node              Partner           Possible     State Description
    ------------      ------------      --------     -------------------------------------
    node1             node2             false        System ID changed on partner (Old:
                                                      151759755, New: 151759706), In takeover
    node2             node1             -            Waiting for giveback (HA mailboxes)
  4. From the healthy node, verify that any coredumps are saved:

    1. Change to the advanced privilege level: set -privilege advanced

      You can respond Y when prompted to continue into advanced mode. The advanced mode prompt appears (*>).

    2. Save any coredumps: system node run -node local-node-name partner savecore

    3. Wait for the `savecore` command to complete before issuing the giveback.

      You can enter the following command to monitor the progress of the savecore command: system node run -node local-node-name partner savecore -s

    4. Return to the admin privilege level: set -privilege admin
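
    Taken together, and assuming node1 is the healthy node as in the earlier example, the coredump sequence looks like this (prompts abbreviated in the style of the examples above):

    node1> `set -privilege advanced`
    node1*> `system node run -node node1 partner savecore`
    node1*> `system node run -node node1 partner savecore -s`
    node1*> `set -privilege admin`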

  5. If your storage system has Storage or Volume Encryption configured, you must restore Storage or Volume Encryption functionality by using the appropriate procedure for your key management type, onboard or external.

  6. Give back the node:

    1. From the healthy node, give back the replaced node's storage: storage failover giveback -ofnode replacement_node_name
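
      For example, with node2 as the replacement node from the earlier example:

      node1> `storage failover giveback -ofnode node2`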

      The replacement node takes back its storage and completes booting.

      If you are prompted to override the system ID due to a system ID mismatch, you should enter y.

      Note: If the giveback is vetoed, you can consider overriding the vetoes.
    2. After the giveback has been completed, confirm that the HA pair is healthy and that takeover is possible: storage failover show

      The output from the storage failover show command should not include the System ID changed on partner message.
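
      Once the giveback completes, the output should look similar to the following (continuing the node1/node2 example):

      node1> `storage failover show`
                                          Takeover
      Node              Partner           Possible     State Description
      ------------      ------------      --------     -------------------------------------
      node1             node2             true         Connected to node2
      node2             node1             true         Connected to node1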

  7. Verify that the disks were assigned correctly: storage disk show -ownership

    The disks belonging to the replacement node should show the new system ID. In the following example, the disks owned by node1 now show the new system ID, 1873775277:

    node1> `storage disk show -ownership`
    
    Disk   Aggregate Home  Owner  DR Home  Home ID     Owner ID    DR Home ID  Reserver    Pool
    ------ --------- ----- ------ -------- ----------- ----------- ----------- ----------- -----
    1.0.0  aggr0_1   node1 node1  -        1873775277  1873775277  -           1873775277  Pool0
    1.0.1  aggr0_1   node1 node1  -        1873775277  1873775277  -           1873775277  Pool0
    .
    .
    .
  8. If the system is in a MetroCluster configuration, monitor the status of the node: metrocluster node show

    The MetroCluster configuration takes a few minutes after the replacement to return to a normal state, at which time each node will show a configured state, with DR Mirroring enabled and a mode of normal. The metrocluster node show -fields node-systemid command output displays the old system ID until the MetroCluster configuration returns to a normal state.
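
    For example, you can monitor the system ID transition directly with the following command (site prompt as in the example in the next step; output omitted here because the IDs are system-specific):

    node1_siteA::> `metrocluster node show -fields node-systemid`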

  9. If the node is in a MetroCluster configuration, depending on the MetroCluster state, verify that the DR home ID field shows the original owner of the disk if the original owner is a node on the disaster site.

    This is required if both of the following are true:

      - The MetroCluster configuration is in a switchback state.

      - The replacement node is the current owner of the disks on the disaster site.

  10. If your system is in a MetroCluster configuration, verify that each node is configured: metrocluster node show -fields configuration-state

    node1_siteA::> metrocluster node show -fields configuration-state
    
    dr-group-id  cluster      node          configuration-state
    -----------  -----------  ------------  -------------------
    1            node1_siteA  node1mcc-001  configured
    1            node1_siteA  node1mcc-002  configured
    1            node1_siteB  node1mcc-003  configured
    1            node1_siteB  node1mcc-004  configured
    
    4 entries were displayed.
  11. Verify that the expected volumes are present for each node: vol show -node node-name
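
    For example, to list the volumes hosted on node1 (output illustrative; your Vserver and volume names will differ):

    node1> `vol show -node node1`
    Vserver   Volume       Aggregate    State      Type       Size  Available Used%
    --------- ------------ ------------ ---------- ---- ---------- ---------- -----
    node1     vol0         aggr0_1      online     RW         10GB     8.50GB   15%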

  12. If you disabled automatic takeover on reboot, enable it from the healthy node: storage failover modify -node replacement-node-name -onreboot true
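
    For example, with node2 as the replacement node:

    node1> `storage failover modify -node node2 -onreboot true`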