Replace the caching module or add/replace a core dump module - AFF A700 and FAS9000


You must replace the caching module in the controller module when your system registers a single AutoSupport (ASUP) message that the module has gone offline; failure to do so results in performance degradation. If AutoSupport is not enabled, you can locate the failed caching module by the fault LED on the front of the module. You can also add or replace the 1TB, X9170A core dump module, which is required if you are installing NS224 drive shelves in an AFF A700 system.

Before you begin
  • You must replace the failed component with a replacement FRU component you received from your provider.

  • For instructions about hot swapping the caching module, see Hot-swapping a caching module.

  • When removing, replacing, or adding caching or core dump modules, the target node must be halted to the LOADER prompt.

  • AFF A700 supports the 1TB core dump module, X9170A, which is required if you are adding NS224 drive shelves.

  • The core dump modules can be installed in slots 6-1 and 6-2. The recommended best practice is to install the module in slot 6-1.

  • The X9170A core dump module is not hot-swappable.

Step 1: Shutting down the impaired controller

You can shut down or take over the impaired controller using different procedures, depending on the storage system hardware configuration.

Option 1: Most configurations

To shut down the impaired node, you must determine the status of the node and, if necessary, take over the node so that the healthy node continues to serve data from the impaired node storage.

About this task

If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy node shows false for eligibility and health, you must correct the issue before shutting down the impaired node; see the Administration overview with the CLI.

Steps
  1. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h

  2. Disable automatic giveback from the console of the healthy node: storage failover modify -node local -auto-giveback false

  3. Take the impaired node to the LOADER prompt:

    If the impaired node is displaying the LOADER prompt, go to the next step.

    If it is displaying Waiting for giveback…, press Ctrl-C, and then respond y when prompted.

    If it is displaying the system prompt or a password prompt (enter the system password), take over or halt the impaired node:

    • For an HA pair, take over the impaired node from the healthy node: storage failover takeover -ofnode impaired_node_name

      When the impaired node shows Waiting for giveback…, press Ctrl-C, and then respond y.
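
    Taken together, the console sequence for this option might look like the following sketch. The cluster name cluster1 and node name node2 are placeholders, and actual prompts and output will vary:

    cluster1::> system node autosupport invoke -node * -type all -message MAINT=2h
    cluster1::> storage failover modify -node local -auto-giveback false
    cluster1::> storage failover takeover -ofnode node2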

Option 2: Controller is in a MetroCluster

Note Do not use this procedure if your system is in a two-node MetroCluster configuration.

To shut down the impaired node, you must determine the status of the node and, if necessary, take over the node so that the healthy node continues to serve data from the impaired node storage.

  • If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy node shows false for eligibility and health, you must correct the issue before shutting down the impaired node; see the Administration overview with the CLI.

  • If you have a MetroCluster configuration, you must have confirmed that the MetroCluster Configuration State is configured and that the nodes are in an enabled and normal state (metrocluster node show).

Steps
  1. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h

  2. Disable automatic giveback from the console of the healthy node: storage failover modify -node local -auto-giveback false

  3. Take the impaired node to the LOADER prompt:

    If the impaired node is displaying the LOADER prompt, go to the next step.

    If it is displaying Waiting for giveback…, press Ctrl-C, and then respond y when prompted.

    If it is displaying the system prompt or a password prompt (enter the system password), take over or halt the impaired node:

    • For an HA pair, take over the impaired node from the healthy node: storage failover takeover -ofnode impaired_node_name

      When the impaired node shows Waiting for giveback…, press Ctrl-C, and then respond y.

Option 3: Controller is in a two-node MetroCluster

To shut down the impaired node, you must determine the status of the node and, if necessary, switch over the node so that the healthy node continues to serve data from the impaired node storage.

About this task
  • If you are using NetApp Storage Encryption, you must have reset the MSID using the instructions in the "Returning SEDs to unprotected mode" section of Administration overview with the CLI.

  • You must leave the power supplies turned on at the end of this procedure to provide power to the healthy node.

Steps
  1. Check the MetroCluster status to determine whether the impaired node has automatically switched over to the healthy node: metrocluster show

  2. Depending on whether an automatic switchover has occurred, proceed according to the following table:

    If the impaired node has automatically switched over, proceed to the next step.

    If it has not automatically switched over, perform a planned switchover operation from the healthy node: metrocluster switchover

    If it has not automatically switched over, you attempted switchover with the metrocluster switchover command, and the switchover was vetoed, review the veto messages and, if possible, resolve the issue and try again. If you are unable to resolve the issue, contact technical support.

  3. Resynchronize the data aggregates by running the metrocluster heal -phase aggregates command from the surviving cluster.

    controller_A_1::> metrocluster heal -phase aggregates
    [Job 130] Job succeeded: Heal Aggregates is successful.

    If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.

  4. Verify that the operation has been completed by using the metrocluster operation show command.

    controller_A_1::> metrocluster operation show
        Operation: heal-aggregates
          State: successful
    Start Time: 7/25/2016 18:45:55
       End Time: 7/25/2016 18:45:56
         Errors: -
  5. Check the state of the aggregates by using the storage aggregate show command.

    controller_A_1::> storage aggregate show
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    ...
    aggr_b2    227.1GB   227.1GB    0% online       0 mcc1-a2          raid_dp, mirrored, normal...
  6. Heal the root aggregates by using the metrocluster heal -phase root-aggregates command.

    mcc1A::> metrocluster heal -phase root-aggregates
    [Job 137] Job succeeded: Heal Root Aggregates is successful

    If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.

  7. Verify that the heal operation is complete by using the metrocluster operation show command on the destination cluster:

    mcc1A::> metrocluster operation show
      Operation: heal-root-aggregates
          State: successful
     Start Time: 7/29/2016 20:54:41
       End Time: 7/29/2016 20:54:42
         Errors: -
  8. On the impaired controller module, disconnect the power supplies.
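
If either heal phase in the steps above is vetoed, the heal command can be reissued with the -override-vetoes parameter described in steps 3 and 6. A sketch only; the cluster prompt is a placeholder and output is omitted:

    controller_A_1::> metrocluster heal -phase aggregates -override-vetoes
    controller_A_1::> metrocluster heal -phase root-aggregates -override-vetoes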

Step 2: Replace or add a caching module

The NVMe SSD Flash Cache modules (FlashCache or caching modules) are separate modules located in the front of the NVRAM module. To replace or add a caching module, locate slot 6 on the rear of the system, and then follow the specific sequence of steps to replace it.

Before you begin

Your storage system must meet certain criteria depending on your situation:

  • It must have the appropriate operating system for the caching module you are installing.

  • It must support the caching capacity.

  • The target node must be at the LOADER prompt before adding or replacing the caching module.

  • The replacement caching module must have the same capacity as the failed caching module, but can be from a different supported vendor.

  • All other components in the storage system must be functioning properly; if not, you must contact technical support.

Steps
  1. If you are not already grounded, properly ground yourself.

  2. Locate the failed caching module, in slot 6, by the lit amber Attention LED on the front of the caching module.

  3. Remove the caching module:

    Note If you are adding another caching module to your system, remove the blank module and go to the next step.
    [Figure: removing the caching module. Legend: (1) orange release button; (2) caching module cam handle.]

    1. Press the orange release button on the front of the caching module.

      Note Do not use the numbered and lettered I/O cam latch to eject the caching module. The numbered and lettered I/O cam latch ejects the entire NVRAM10 module and not the caching module.
    2. Rotate the cam handle until the caching module begins to slide out of the NVRAM10 module.

    3. Gently pull the cam handle straight toward you to remove the caching module from the NVRAM10 module.

      Be sure to support the caching module as you remove it from the NVRAM10 module.

  4. Install the caching module:

    1. Align the edges of the caching module with the opening in the NVRAM10 module.

    2. Gently push the caching module into the bay until the cam handle engages.

    3. Rotate the cam handle until it locks into place.

Step 3: Add or replace an X9170A core dump module

The 1TB core dump module, X9170A, is used only in AFF A700 systems. The core dump module cannot be hot-swapped. It is typically installed in the front of the NVRAM module, in slot 6-1 at the rear of the system. To replace or add the core dump module, locate slot 6-1, and then follow the specific sequence of steps to add or replace it.

Before you begin
  • Your system must be running ONTAP 9.8 or later in order to add a core dump module.

  • The X9170A core dump module is not hot-swappable.

  • The target node must be at the LOADER prompt before adding or replacing the core dump module.

  • You must have received two X9170A core dump modules, one for each controller.

  • All other components in the storage system must be functioning properly; if not, you must contact technical support.

Steps
  1. If you are not already grounded, properly ground yourself.

  2. If you are replacing a failed core dump module, locate and remove it:

    [Figure: removing the core dump module. Legend: (1) orange release button; (2) core dump module cam handle.]

    1. Locate the failed module by the amber Attention LED on the front of the module.

    2. Press the orange release button on the front of the core dump module.

      Note Do not use the numbered and lettered I/O cam latch to eject the core dump module. The numbered and lettered I/O cam latch ejects the entire NVRAM10 module and not the core dump module.
    3. Rotate the cam handle until the core dump module begins to slide out of the NVRAM10 module.

    4. Gently pull the cam handle straight toward you to remove the core dump module from the NVRAM10 module and set it aside.

      Be sure to support the core dump module as you remove it from the NVRAM10 module.

  3. Install the core dump module:

    1. If you are installing a new core dump module, remove the blank module from slot 6-1.

    2. Align the edges of the core dump module with the opening in the NVRAM10 module.

    3. Gently push the core dump module into the bay until the cam handle engages.

    4. Rotate the cam handle until it locks into place.

Step 4: Reboot the controller after FRU replacement

After you replace the FRU, you must reboot the controller module.

Step
  1. To boot ONTAP from the LOADER prompt, enter bye.
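
    At the console, this is a single command; a sketch (the exact prompt string may differ on your system):

    LOADER> bye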

Step 5: Switch back aggregates in a two-node MetroCluster configuration

After you have completed the FRU replacement in a two-node MetroCluster configuration, you can perform the MetroCluster switchback operation. This returns the configuration to its normal operating state, with the sync-source storage virtual machines (SVMs) on the formerly impaired site now active and serving data from the local disk pools.

This task only applies to two-node MetroCluster configurations.

Steps
  1. Verify that all nodes are in the enabled state: metrocluster node show

    cluster_B::>  metrocluster node show
    
    DR                           Configuration  DR
    Group Cluster Node           State          Mirroring Mode
    ----- ------- -------------- -------------- --------- --------------------
    1     cluster_A
                  controller_A_1 configured     enabled   heal roots completed
          cluster_B
                  controller_B_1 configured     enabled   waiting for switchback recovery
    2 entries were displayed.
  2. Verify that resynchronization is complete on all SVMs: metrocluster vserver show

  3. Verify that any automatic LIF migrations being performed by the healing operations were completed successfully: metrocluster check lif show

  4. Perform the switchback by using the metrocluster switchback command from any node in the surviving cluster.

  5. Verify that the switchback operation has completed: metrocluster show

    The switchback operation is still running when a cluster is in the waiting-for-switchback state:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- -------------------    ---------
     Local: cluster_B    configured             switchover
    Remote: cluster_A    configured             waiting-for-switchback

    The switchback operation is complete when the clusters are in the normal state:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- -------------------    ---------
     Local: cluster_B    configured             normal
    Remote: cluster_A    configured             normal

    If a switchback is taking a long time to finish, you can check on the status of in-progress baselines by using the metrocluster config-replication resync-status show command.

  6. Reestablish any SnapMirror or SnapVault configurations.
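
The verification and switchback sequence above, run from the surviving cluster, might look like the following sketch. The cluster name cluster_B is a placeholder and command output is omitted:

    cluster_B::> metrocluster node show
    cluster_B::> metrocluster vserver show
    cluster_B::> metrocluster check lif show
    cluster_B::> metrocluster switchback
    cluster_B::> metrocluster show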

Step 6: Return the failed part to NetApp

After you replace the part, you can return the failed part to NetApp, as described in the RMA instructions shipped with the kit. Contact technical support at NetApp Support, 888-463-8277 (North America), 00-800-44-638277 (Europe), or +800-800-80-800 (Asia/Pacific) if you need the RMA number or additional help with the replacement procedure.