Replace a DIMM - AFF A700 and FAS9000


You must replace a DIMM in the controller module when your system registers an increasing number of correctable error correction code (ECC) errors; failure to do so causes a system panic.

All other components in the system must be functioning properly; if not, you must contact technical support.

You must replace the failed component with a replacement FRU component you received from your provider.

Step 1: Shut down the impaired controller

You can shut down or take over the impaired controller using different procedures, depending on the storage system hardware configuration.

Option 1: Most configurations

To shut down the impaired node, you must determine the status of the node and, if necessary, take over the node so that the healthy node continues to serve data from the impaired node's storage.

About this task

If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy node shows false for eligibility and health, you must correct the issue before shutting down the impaired node; see the Administration overview with the CLI.

Steps
  1. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h

  2. Disable automatic giveback from the console of the healthy node: storage failover modify -node local -auto-giveback false

  3. Take the impaired node to the LOADER prompt:

    If the impaired node is displaying…​ Then…​

    The LOADER prompt: Go to the next step.

    Waiting for giveback…​: Press Ctrl-C, and then respond y when prompted.

    System prompt or password prompt (enter system password): Take over or halt the impaired node:

    • For an HA pair, take over the impaired node from the healthy node: storage failover takeover -ofnode impaired_node_name

      When the impaired node shows Waiting for giveback…​, press Ctrl-C, and then respond y (see the takeover example after this list).
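
    For example, a takeover issued from the healthy node might look like the following sketch; the node name node2 and the prompt are hypothetical:

    cluster1::> storage failover takeover -ofnode node2

    After the takeover completes, the console of the impaired node shows Waiting for giveback…​.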

Option 2: Controller is in a MetroCluster

Note Do not use this procedure if your system is in a two-node MetroCluster configuration.

To shut down the impaired node, you must determine the status of the node and, if necessary, take over the node so that the healthy node continues to serve data from the impaired node's storage.

  • If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy node shows false for eligibility and health, you must correct the issue before shutting down the impaired node; see the Administration overview with the CLI.

  • If you have a MetroCluster configuration, you must have confirmed that the MetroCluster Configuration State is configured and that the nodes are in an enabled and normal state (metrocluster node show).
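
    For example, metrocluster node show output similar to the following sketch indicates a healthy configuration; the cluster and node names are illustrative:

    cluster_A::> metrocluster node show
    DR                           Configuration  DR
    Group Cluster Node           State          Mirroring Mode
    ----- ------- -------------- -------------- --------- --------------------
    1     cluster_A
                  controller_A_1 configured     enabled   normal
          cluster_B
                  controller_B_1 configured     enabled   normal
    2 entries were displayed.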

Steps
  1. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h

  2. Disable automatic giveback from the console of the healthy node: storage failover modify -node local -auto-giveback false

  3. Take the impaired node to the LOADER prompt:

    If the impaired node is displaying…​ Then…​

    The LOADER prompt: Go to the next step.

    Waiting for giveback…​: Press Ctrl-C, and then respond y when prompted.

    System prompt or password prompt (enter system password): Take over or halt the impaired node:

    • For an HA pair, take over the impaired node from the healthy node: storage failover takeover -ofnode impaired_node_name

      When the impaired node shows Waiting for giveback…​, press Ctrl-C, and then respond y.

Option 3: Controller is in a two-node MetroCluster

To shut down the impaired node, you must determine the status of the node and, if necessary, switch over the node so that the healthy node continues to serve data from the impaired node's storage.

About this task
  • If you are using NetApp Storage Encryption, you must have reset the MSID using the instructions in the "Returning SEDs to unprotected mode" section of Administration overview with the CLI.

  • You must leave the power supplies turned on at the end of this procedure to provide power to the healthy node.

Steps
  1. Check the MetroCluster status to determine whether the impaired node has automatically switched over to the healthy node: metrocluster show
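
    If a switchover has occurred, the local cluster's Mode column shows switchover, similar to the following sketch; the cluster names are illustrative, and the remote cluster's entry is shown as a dash because its exact display varies while that site is impaired:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- ---------------------- ----------
     Local: cluster_B    configured             switchover
    Remote: cluster_A    configured             -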

  2. Depending on whether an automatic switchover has occurred, proceed according to the following table:

    If the impaired node…​ Then…​

    Has automatically switched over: Proceed to the next step.

    Has not automatically switched over: Perform a planned switchover operation from the healthy node: metrocluster switchover (see the example after this list).

    Has not automatically switched over, you attempted switchover with the metrocluster switchover command, and the switchover was vetoed: Review the veto messages and, if possible, resolve the issue and try again. If you are unable to resolve the issue, contact technical support.
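
    For example, a planned switchover issued from the healthy site might look like the following sketch; the prompt is illustrative and the confirmation warning is abbreviated:

    cluster_B::> metrocluster switchover

    Warning: negotiated switchover is about to start ...
    Do you want to continue? {y|n}: y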

  3. Resynchronize the data aggregates by running the metrocluster heal -phase aggregates command from the surviving cluster.

    controller_A_1::> metrocluster heal -phase aggregates
    [Job 130] Job succeeded: Heal Aggregates is successful.

    If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.
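
    For example, to override soft vetoes during aggregate healing (use this parameter only after reviewing the veto messages; the prompt is illustrative):

    controller_A_1::> metrocluster heal -phase aggregates -override-vetoes true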

  4. Verify that the operation has been completed by using the metrocluster operation show command.

    controller_A_1::> metrocluster operation show
      Operation: heal-aggregates
          State: successful
     Start Time: 7/25/2016 18:45:55
       End Time: 7/25/2016 18:45:56
         Errors: -

  5. Check the state of the aggregates by using the storage aggregate show command.

    controller_A_1::> storage aggregate show
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    ...
    aggr_b2    227.1GB   227.1GB    0% online       0 mcc1-a2          raid_dp, mirrored, normal...
  6. Heal the root aggregates by using the metrocluster heal -phase root-aggregates command.

    mcc1A::> metrocluster heal -phase root-aggregates
    [Job 137] Job succeeded: Heal Root Aggregates is successful

    If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.

  7. Verify that the heal operation is complete by using the metrocluster operation show command on the destination cluster:

    mcc1A::> metrocluster operation show
      Operation: heal-root-aggregates
          State: successful
     Start Time: 7/29/2016 20:54:41
       End Time: 7/29/2016 20:54:42
         Errors: -
  8. On the impaired controller module, disconnect the power supplies.

Step 2: Open the controller module

To access components inside the controller, you must first remove the controller module from the system and then remove the cover on the controller module.

Steps
  1. If you are not already grounded, properly ground yourself.

  2. Unplug the cables from the impaired controller module, and keep track of where the cables were connected.

  3. Slide the orange button on the cam handle downward until it unlocks.

    [Figure: Removing the controller module. Legend: (1) Cam handle release button; (2) Cam handle]

  4. Rotate the cam handle so that it completely disengages the controller module from the chassis, and then slide the controller module out of the chassis.

    Make sure that you support the bottom of the controller module as you slide it out of the chassis.

  5. Place the controller module lid-side up on a stable, flat surface, press the blue button on the cover, slide the cover to the back of the controller module, and then swing the cover up and lift it off of the controller module.

    [Figure: Opening the controller module cover. Legend: (1) Controller module cover locking button]

Step 3: Replace the DIMMs

To replace the DIMMs, locate them inside the controller and follow the specific sequence of steps.

Steps
  1. If you are not already grounded, properly ground yourself.

  2. Locate the DIMMs on your controller module.

    Note Each system memory DIMM has an LED located on the board next to each DIMM slot. The LED for the faulty DIMM blinks every two seconds.

    [Figure: DIMM locations on the controller module]
  3. Eject the DIMM from its slot by slowly pushing apart the two DIMM ejector tabs on either side of the DIMM, and then slide the DIMM out of the slot.

    Note Carefully hold the DIMM by the edges to avoid pressure on the components on the DIMM circuit board.
    [Figure: Replacing a DIMM. Legend: (1) DIMM ejector tabs; (2) DIMM]

  4. Remove the replacement DIMM from the antistatic shipping bag, hold the DIMM by the corners, and align it to the slot.

    The notch among the pins on the DIMM should line up with the tab in the socket.

  5. Make sure that the DIMM ejector tabs on the connector are in the open position, and then insert the DIMM squarely into the slot.

    The DIMM fits tightly in the slot, but should go in easily. If not, realign the DIMM with the slot and reinsert it.

    Note Visually inspect the DIMM to verify that it is evenly aligned and fully inserted into the slot.
  6. Push carefully, but firmly, on the top edge of the DIMM until the ejector tabs snap into place over the notches at the ends of the DIMM.

  7. Close the controller module cover.

Step 4: Install the controller

After you install the components into the controller module, you must install the controller module back into the system chassis and boot the operating system.

For HA pairs with two controller modules in the same chassis, the sequence in which you install the controller module is especially important because it attempts to reboot as soon as you completely seat it in the chassis.

Steps
  1. If you are not already grounded, properly ground yourself.

  2. If you have not already done so, replace the cover on the controller module.

  3. Align the end of the controller module with the opening in the chassis, and then gently push the controller module halfway into the system.

    Note Do not completely insert the controller module in the chassis until instructed to do so.
  4. Cable the management and console ports only, so that you can access the system to perform the tasks in the following sections.

    Note You will connect the rest of the cables to the controller module later in this procedure.
  5. Complete the reinstallation of the controller module:

    1. If you have not already done so, reinstall the cable management device.

    2. Firmly push the controller module into the chassis until it meets the midplane and is fully seated.

      The locking latches rise when the controller module is fully seated.

      Note Do not use excessive force when sliding the controller module into the chassis to avoid damaging the connectors.

      The controller module begins to boot as soon as it is fully seated in the chassis. Be prepared to interrupt the boot process.

    3. Rotate the locking latches upward, tilting them so that they clear the locking pins, and then lower them into the locked position.

    4. Interrupt the boot process by pressing Ctrl-C when you see Press Ctrl-C for Boot Menu.

    5. Select the option to boot to Maintenance mode from the displayed menu.

Step 5: Run system-level diagnostics

After installing a new DIMM, you should run diagnostics.

Your system must be at the LOADER prompt to start system-level diagnostics.

All commands in the diagnostic procedures are issued from the node where the component is being replaced.
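
Assuming the tests complete without failures, the overall flow looks like the following console sketch; the prompts are illustrative, and the commands are the ones used in the steps below:

    LOADER> boot_diags
    ...
    *> sldiag device run -dev mem
    *> sldiag device status -dev mem -long -state failed
    *> sldiag device clearstatus
    *> sldiag device status
    SLDIAG: No log messages are present.
    *> halt
    LOADER>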

Steps
  1. If the node to be serviced is not at the LOADER prompt, perform the following steps:

    1. Select the Maintenance mode option from the displayed menu.

    2. After the node boots to Maintenance mode, halt the node: halt

      After you issue the command, you should wait until the system stops at the LOADER prompt.

      Note During the boot process, you can safely respond y to the following prompt:
      • A prompt warning that, when entering Maintenance mode in an HA configuration, you must ensure that the healthy node remains down.

  2. At the LOADER prompt, access the special drivers specifically designed for system-level diagnostics to function properly: boot_diags

    During the boot process, you can safely respond y to the prompts until the Maintenance mode prompt (*>) appears.

  3. Run diagnostics on the system memory: sldiag device run -dev mem

  4. Verify that no hardware problems resulted from the replacement of the DIMMs: sldiag device status -dev mem -long -state failed

    System-level diagnostics returns you to the prompt if there are no test failures, or lists the full status of failures resulting from testing the component.

  5. Proceed based on the result of the preceding step:

    If the system-level diagnostics tests…​ Then…​

    Were completed without any failures

    1. Clear the status logs: sldiag device clearstatus

    2. Verify that the log was cleared: sldiag device status

      The following default response is displayed:

      SLDIAG: No log messages are present.
    3. Exit Maintenance mode: halt

      The node displays the LOADER prompt.

    4. Boot the node from the LOADER prompt: bye

    5. Return the node to normal operation, depending on your configuration:

      An HA pair: Perform a giveback: storage failover giveback -ofnode replacement_node_name

      Note If you disabled automatic giveback, re-enable it with the storage failover modify command (see the example after this table).

      A two-node MetroCluster configuration: Proceed to the next step. The MetroCluster switchback procedure is done in the next task in the replacement process.

      A stand-alone configuration: No action is required. You have completed system-level diagnostics.

    Resulted in some test failures

    Determine the cause of the problem:

    1. Exit Maintenance mode: halt

      After you issue the command, wait until the system stops at the LOADER prompt.

    2. Turn off or leave on the power supplies, depending on how many controller modules are in the chassis:

      • If you have two controller modules in the chassis, leave the power supplies turned on to provide power to the other controller module.

      • If you have one controller module in the chassis, turn off the power supplies and unplug them from the power sources.

    3. Verify that you have observed all the considerations identified for running system-level diagnostics, that cables are securely connected, and that hardware components are properly installed in the storage system.

    4. Boot the controller module you are servicing, interrupting the boot by pressing Ctrl-C when prompted to get to the Boot menu:

      • If you have two controller modules in the chassis, fully seat the controller module you are servicing in the chassis.

        The controller module boots up when fully seated.

      • If you have one controller module in the chassis, connect the power supplies, and then turn them on.

    5. Select Boot to maintenance mode from the menu.

    6. Exit Maintenance mode by entering the following command: halt

      After you issue the command, wait until the system stops at the LOADER prompt.

    7. Rerun the system-level diagnostic test.
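
    For example, the giveback and the re-enabling of automatic giveback issued from the healthy node might look like the following sketch; the node name node2 and the prompt are hypothetical:

    cluster1::> storage failover giveback -ofnode node2
    cluster1::> storage failover modify -node local -auto-giveback true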

Step 6: Switch back aggregates in a two-node MetroCluster configuration

After you have completed the FRU replacement in a two-node MetroCluster configuration, you can perform the MetroCluster switchback operation. This returns the configuration to its normal operating state, with the sync-source storage virtual machines (SVMs) on the formerly impaired site now active and serving data from the local disk pools.

This task only applies to two-node MetroCluster configurations.

Steps
  1. Verify that all nodes are in the enabled state: metrocluster node show

    cluster_B::>  metrocluster node show
    
    DR                           Configuration  DR
    Group Cluster Node           State          Mirroring Mode
    ----- ------- -------------- -------------- --------- --------------------
    1     cluster_A
                  controller_A_1 configured     enabled   heal roots completed
          cluster_B
                  controller_B_1 configured     enabled   waiting for switchback recovery
    2 entries were displayed.
  2. Verify that resynchronization is complete on all SVMs: metrocluster vserver show

  3. Verify that any automatic LIF migrations being performed by the healing operations were completed successfully: metrocluster check lif show

  4. Perform the switchback by using the metrocluster switchback command from any node in the surviving cluster.

  5. Verify that the switchback operation has completed: metrocluster show

    The switchback operation is still running when a cluster is in the waiting-for-switchback state:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- ---------------------- ----------------------
     Local: cluster_B    configured             switchover
    Remote: cluster_A    configured             waiting-for-switchback

    The switchback operation is complete when the clusters are in the normal state:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- ---------------------- ----------------------
     Local: cluster_B    configured             normal
    Remote: cluster_A    configured             normal

    If a switchback is taking a long time to finish, you can check on the status of in-progress baselines by using the metrocluster config-replication resync-status show command.

  6. Reestablish any SnapMirror or SnapVault configurations.

Step 7: Return the failed part to NetApp

After you replace the part, you can return the failed part to NetApp, as described in the RMA instructions shipped with the kit. Contact technical support at NetApp Support, 888-463-8277 (North America), 00-800-44-638277 (Europe), or +800-800-80-800 (Asia/Pacific) if you need the RMA number or additional help with the replacement procedure.