Replace NVRAM - FAS70 and FAS90

The NVRAM module consists of the NVRAM12 hardware and field-replaceable DIMMs. You can replace a failed NVRAM module or the DIMMs inside the NVRAM module. To replace a failed NVRAM module, you must remove the module from the enclosure, move the DIMMs to the replacement module, and install the replacement NVRAM module into the enclosure.

All other components in the system must be functioning properly; if not, you must contact NetApp Support.

You must replace the failed component with a replacement FRU component you received from your provider.

Step 1: Shut down the impaired controller

Shut down or take over the impaired controller using one of the following options.

Option 1: Most systems

To shut down the impaired controller, you must determine the status of the controller and, if necessary, take over the controller so that the healthy controller continues to serve data from the impaired controller storage.

About this task
  • If you have a SAN system, you must have checked event messages (cluster kernel-service show) for the impaired controller SCSI blade. The cluster kernel-service show command (run from the advanced privilege level) displays the node name, quorum status of that node, availability status of that node, and operational status of that node.

    Each SCSI-blade process should be in quorum with the other nodes in the cluster. Any issues must be resolved before you proceed with the replacement; an example of this check is shown after this list.

  • If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy controller shows false for eligibility and health, you must correct the issue before shutting down the impaired controller; see Synchronize a node with the cluster.
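The following is an illustrative session for the SCSI-blade check, assuming a two-node cluster named cluster1 with nodes node1 and node2 (all names are placeholders, and the exact columns can vary by ONTAP release):

    cluster1::> set -privilege advanced
    cluster1::*> cluster kernel-service show
    Master            Cluster           Quorum        Availability  Operational
    Node              Node              Status        Status        Status
    ----------------- ----------------- ------------- ------------- -------------
    node1             node1             in-quorum     true          operational
                      node2             in-quorum     true          operational
    2 entries were displayed.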

Steps
  1. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=<# of hours>h

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:> system node autosupport invoke -node * -type all -message MAINT=2h

  2. Disable automatic giveback from the console of the healthy controller: storage failover modify -node local -auto-giveback false

    Note When you see Do you want to disable auto-giveback?, enter y.
  3. Take the impaired controller to the LOADER prompt:

    • If the impaired controller is displaying the LOADER prompt, go to the next step.

    • If the impaired controller is displaying Waiting for giveback…, press Ctrl-C, and then respond y when prompted.

    • If the impaired controller is displaying the system prompt or a password prompt, take over or halt the impaired controller from the healthy controller: storage failover takeover -ofnode impaired_node_name

      When the impaired controller shows Waiting for giveback…, press Ctrl-C, and then respond y.
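The following is a representative console session for steps 1 through 3, assuming a cluster named cluster1 where node2 is the impaired controller (names are placeholders, and prompt wording can vary by release):

    cluster1::> system node autosupport invoke -node * -type all -message MAINT=2h
    cluster1::> storage failover modify -node local -auto-giveback false

    Do you want to disable auto-giveback? {y|n}: y

    cluster1::> storage failover takeover -ofnode node2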

Option 2: Controller is in a MetroCluster

To shut down the impaired controller, you must determine the status of the controller and, if necessary, take over the controller so that the healthy controller continues to serve data from the impaired controller storage.

  • If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy controller shows false for eligibility and health, you must correct the issue before shutting down the impaired controller; see Synchronize a node with the cluster.

  • You must have confirmed that the MetroCluster Configuration State is configured and that the nodes are in an enabled and normal state (metrocluster node show).
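An illustrative metrocluster node show output for a healthy configuration (cluster and node names are placeholders):

    cluster_A::> metrocluster node show

    DR                          Configuration  DR
    Group Cluster Node          State          Mirroring Mode
    ----- ------- ------------- -------------- --------- --------------------
    1     cluster_A
                  node_A_1      configured     enabled   normal
          cluster_B
                  node_B_1      configured     enabled   normal
    2 entries were displayed.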

Steps
  1. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=<# of hours>h

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h

  2. Disable automatic giveback from the console of the healthy controller: storage failover modify -node local -auto-giveback false

  3. Take the impaired controller to the LOADER prompt:

    • If the impaired controller is displaying the LOADER prompt, go to the next section.

    • If the impaired controller is displaying Waiting for giveback…, press Ctrl-C, and then respond y when prompted.

    • If the impaired controller is displaying the system prompt or a password prompt (enter the system password if prompted), take over or halt the impaired controller from the healthy controller: storage failover takeover -ofnode impaired_node_name

      When the impaired controller shows Waiting for giveback…, press Ctrl-C, and then respond y.

Step 2: Replace the NVRAM module

To replace the NVRAM module, locate it in slot 4/5 in the enclosure and follow the specific sequence of steps.

  1. If you are not already grounded, properly ground yourself.

  2. Unplug the power cord from both PSUs.

  3. Gently pull the pins on the ends of the cable management tray and rotate the tray down.

  4. Remove the impaired NVRAM module from the enclosure:

    1. Depress the locking cam button.

      The cam button moves away from the enclosure.

    2. Rotate the cam latch down as far as it will go.

    3. Remove the impaired NVRAM module from the enclosure by hooking your finger into the cam lever opening and pulling the module out of the enclosure.

      [Figure: Removing the NVRAM12 module and DIMMs. Callout 1: cam locking button. Callout 2: DIMM locking tabs.]

  5. Set the NVRAM module on a stable surface.

  6. Remove the DIMMs, one at a time, from the impaired NVRAM module and install them in the replacement NVRAM module.

  7. Install the replacement NVRAM module into the enclosure:

    1. Align the module with the edges of the enclosure opening in slot 4/5.

    2. Gently slide the module into the slot all the way, and then rotate the cam latch all the way up to lock the module in place.

  8. Recable the PSUs.

  9. Rotate the cable management tray up to the closed position.

Step 3: Replace an NVRAM DIMM

To replace NVRAM DIMMs in the NVRAM module, you must remove the NVRAM module, and then replace the target DIMM.

  1. If you are not already grounded, properly ground yourself.

  2. Unplug the power cord from both PSUs.

  3. Gently pull the pins on the ends of the cable management tray and rotate the tray down.

  4. Remove the target NVRAM module from the enclosure.

    [Figure: Removing the NVRAM12 module and DIMMs. Callout 1: cam locking button. Callout 2: DIMM locking tabs.]

  5. Set the NVRAM module on a stable surface.

  6. Locate the DIMM to be replaced inside the NVRAM module.

    Note Consult the FRU map label on the side of the NVRAM module to determine the locations of DIMM slots 1 and 2.
  7. Remove the DIMM by pressing down on the DIMM locking tabs and lifting the DIMM out of the socket.

  8. Install the replacement DIMM by aligning the DIMM with the socket and gently pushing the DIMM into the socket until the locking tabs lock in place.

  9. Install the NVRAM module into the enclosure:

    1. Gently slide the module into the slot until the cam latch begins to engage with the I/O cam pin, and then rotate the cam latch all the way up to lock the module in place.

  10. Recable the PSUs.

  11. Rotate the cable management tray up to the closed position.

Step 4: Reboot the controller

After you replace the FRU, you must reboot the controller module.

  1. To boot ONTAP from the LOADER prompt, enter bye.

  2. Return the impaired controller to normal operation by giving back its storage: storage failover giveback -ofnode impaired_node_name.

  3. If automatic giveback was disabled, reenable it: storage failover modify -node local -auto-giveback true.

  4. If AutoSupport is enabled, resume automatic case creation: system node autosupport invoke -node * -type all -message MAINT=END.
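The following is a representative session for this step; bye is entered on the impaired controller's console, and the remaining commands run on the healthy controller (cluster1 and node2 are placeholders):

    LOADER> bye
    ...
    cluster1::> storage failover giveback -ofnode node2
    cluster1::> storage failover modify -node local -auto-giveback true
    cluster1::> system node autosupport invoke -node * -type all -message MAINT=END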

Step 5: Reassign disks

You must confirm the system ID change when you boot the controller and then verify that the change was implemented.

Caution Disk reassignment is only needed when replacing the NVRAM module and does not apply to NVRAM DIMM replacement.
Steps
  1. If the controller is in Maintenance mode (showing the *> prompt), exit Maintenance mode and go to the LOADER prompt: halt

  2. From the LOADER prompt on the controller, boot the controller and enter y when prompted to override the system ID due to a system ID mismatch.
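    The exact wording varies by ONTAP release, but the exchange looks similar to the following paraphrased example (boot_ontap is the usual LOADER boot command):

    LOADER> boot_ontap
    ...
    WARNING: System ID mismatch. This usually occurs when replacing controller or NVRAM modules!
    Override system ID? {y|n} [n] y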

  3. Wait until the Waiting for giveback…​ message is displayed on the console of the controller with the replacement module and then, from the healthy controller, verify that the new partner system ID has been automatically assigned: storage failover show

    In the command output, you should see a message that the system ID has changed on the impaired controller, showing the correct old and new IDs. In the following example, node2 has undergone replacement and has a new system ID of 151759706.

    node1:> storage failover show
                                        Takeover
    Node              Partner           Possible     State Description
    ------------      ------------      --------     -------------------------------------
    node1             node2             false        System ID changed on partner (Old:
                                                      151759755, New: 151759706), In takeover
    node2             node1             -            Waiting for giveback (HA mailboxes)
  4. Give back the controller:

    1. From the healthy controller, give back the replaced controller's storage: storage failover giveback -ofnode replacement_node_name

      The controller takes back its storage and completes booting.

      If you are prompted to override the system ID due to a system ID mismatch, you should enter y.

      Note If the giveback is vetoed, you can consider overriding the vetoes; see the Manual giveback commands topic for more information.
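      For example, the giveback command accepts an -override-vetoes parameter; use it only after reviewing why the giveback was vetoed (the node name is a placeholder):

        cluster1::> storage failover giveback -ofnode node2 -override-vetoes true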

    2. After the giveback has been completed, confirm that the HA pair is healthy and that takeover is possible: storage failover show

      The output from the storage failover show command should not include the System ID changed on partner message.

  5. Verify that the disks were assigned correctly: storage disk show -ownership

    The disks belonging to the controller should show the new system ID. In the following example, the disks owned by node1 now show the new system ID, 151759706:

    node1:> storage disk show -ownership
    
    Disk   Aggregate  Home   Owner  DR Home  Home ID    Owner ID   DR Home ID  Reserver   Pool
    -----  ---------  -----  -----  -------  ---------  ---------  ----------  ---------  -----
    1.0.0  aggr0_1    node1  node1  -        151759706  151759706  -           151759706  Pool0
    1.0.1  aggr0_1    node1  node1  -        151759706  151759706  -           151759706  Pool0
    .
    .
    .
  6. If the system is in a MetroCluster configuration, monitor the status of the controller: metrocluster node show

    The MetroCluster configuration takes a few minutes after the replacement to return to a normal state, at which time each controller shows a configured state, with DR Mirroring enabled and a mode of normal. The metrocluster node show -fields node-systemid command output displays the old system ID until the MetroCluster configuration returns to a normal state.
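    An illustrative output for that command (system IDs and names are placeholders):

    cluster_A::> metrocluster node show -fields node-systemid
    dr-group-id cluster   node     node-systemid
    ----------- --------- -------- -------------
    1           cluster_A node_A_1 536872914
    1           cluster_B node_B_1 536872852
    2 entries were displayed.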

  7. If the controller is in a MetroCluster configuration, depending on the MetroCluster state, verify that the DR home ID field shows the original owner of the disk if the original owner is a controller on the disaster site.

    This is required if both of the following are true:

      • The MetroCluster configuration is in a switchover state.

      • The replacement controller is the current owner of the disks on the disaster site.

  8. If your system is in a MetroCluster configuration, verify that each controller is configured: metrocluster node show -fields configuration-state

    node1_siteA::> metrocluster node show -fields configuration-state

    dr-group-id cluster     node         configuration-state
    ----------- ----------- ------------ -------------------
    1           node1_siteA node1mcc-001 configured
    1           node1_siteA node1mcc-002 configured
    1           node1_siteB node1mcc-003 configured
    1           node1_siteB node1mcc-004 configured

    4 entries were displayed.
  9. Verify that the expected volumes are present for each controller: vol show -node node-name
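    An illustrative output (the vserver and volume names are placeholders; your volume list will differ):

    cluster1::> vol show -node node1
    Vserver   Volume       Aggregate    State      Type       Size  Available Used%
    --------- ------------ ------------ ---------- ---- ---------- ---------- -----
    vs0       vs0_vol1     aggr1_node1  online     RW         10GB     9.50GB    5%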

  10. Return the impaired controller to normal operation by giving back its storage: storage failover giveback -ofnode impaired_node_name.

  11. If automatic giveback was disabled, reenable it: storage failover modify -node local -auto-giveback true.

  12. If AutoSupport is enabled, resume automatic case creation: system node autosupport invoke -node * -type all -message MAINT=END.

Step 6: Return the failed part to NetApp

Return the failed part to NetApp, as described in the RMA instructions shipped with the kit. See the Part Return and Replacements page for further information.