Replace the caching module or add/replace a core dump module - FAS9000

You must replace the caching module in the controller module when your system registers a single AutoSupport (ASUP) message that the module has gone offline; failure to do so results in performance degradation. If AutoSupport is not enabled, you can locate the failed caching module by the fault LED on the front of the module. You can also add or replace the 1TB, X9170A core dump module, which is required if you are installing NS224 drive shelves in an AFF A700 system.

Before you begin
  • You must replace the failed component with a replacement FRU component you received from your provider.

  • For instructions about hot swapping the caching module, see Hot-swapping a caching module.

  • When removing, replacing, or adding caching or core dump modules, the target node must be halted to the LOADER prompt.

  • AFF A700 supports the 1TB core dump module, X9170A, which is required if you are adding NS224 drive shelves.

  • The core dump modules can be installed in slots 6-1 and 6-2. The recommended best practice is to install the module in slot 6-1.

  • The X9170A core dump module is not hot-swappable.

Step 1: Shut down the impaired controller

You can shut down or take over the impaired controller using different procedures, depending on the storage system hardware configuration.

Option 1: Most configurations

To shut down the impaired controller, you must determine the status of the controller and, if necessary, take over the controller so that the healthy controller continues to serve data from the impaired controller storage.

About this task
  • If you have a SAN system, you must have checked event messages (cluster kernel-service show) for the impaired controller SCSI blade. The cluster kernel-service show command (from priv advanced mode) displays the node name, quorum status of that node, availability status of that node, and operational status of that node.

    Each SCSI-blade process should be in quorum with the other nodes in the cluster. Any issues must be resolved before you proceed with the replacement.

  • If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy controller shows false for eligibility and health, you must correct the issue before shutting down the impaired controller; see Synchronize a node with the cluster.
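
    To check node health and eligibility, you can run the cluster show command from any node in the cluster (cluster1 is a placeholder cluster name): cluster1::> cluster show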

Steps
  1. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=<# of hours>h

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:> system node autosupport invoke -node * -type all -message MAINT=2h

  2. Disable automatic giveback from the console of the healthy controller: storage failover modify -node local -auto-giveback false
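
    The following example, with cluster1 as a placeholder cluster name, shows the command issued from the healthy controller's console: cluster1::> storage failover modify -node local -auto-giveback false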

    Note When you see Do you want to disable auto-giveback?, enter y.
  3. Take the impaired controller to the LOADER prompt:

    If the impaired controller is displaying…​ Then…​

    The LOADER prompt

    Go to the next step.

    Waiting for giveback…​

    Press Ctrl-C, and then respond y when prompted.

    System prompt or password prompt

    Take over or halt the impaired controller from the healthy controller: storage failover takeover -ofnode impaired_node_name

    When the impaired controller shows Waiting for giveback…​, press Ctrl-C, and then respond y.
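
    For example, if the impaired node were named node2 (a placeholder name), you would issue the following from the healthy controller: cluster1::> storage failover takeover -ofnode node2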

Option 2: Controller is in a two-node MetroCluster

To shut down the impaired controller, you must determine the status of the controller and, if necessary, switch over the controller so that the healthy controller continues to serve data from the impaired controller storage.

About this task
  • You must leave the power supplies turned on at the end of this procedure to provide power to the healthy controller.

Steps
  1. Check the MetroCluster status to determine whether the impaired controller has automatically switched over to the healthy controller: metrocluster show
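
    For example, from the surviving cluster (cluster names follow this procedure's later examples; output abbreviated):

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- -------------------    ---------
     Local: cluster_B    configured             switchover
    ...

    A Mode of switchover for the local (surviving) cluster indicates that the switchover has already occurred.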

  2. Depending on whether an automatic switchover has occurred, proceed according to the following table:

    If the impaired controller…​ Then…​

    Has automatically switched over

    Proceed to the next step.

    Has not automatically switched over

    Perform a planned switchover operation from the healthy controller: metrocluster switchover

    Has not automatically switched over, you attempted switchover with the metrocluster switchover command, and the switchover was vetoed

    Review the veto messages and, if possible, resolve the issue and try again. If you are unable to resolve the issue, contact technical support.

  3. Resynchronize the data aggregates by running the metrocluster heal -phase aggregates command from the surviving cluster.

    controller_A_1::> metrocluster heal -phase aggregates
    [Job 130] Job succeeded: Heal Aggregates is successful.

    If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.

  4. Verify that the operation has been completed by using the metrocluster operation show command.

    controller_A_1::> metrocluster operation show
        Operation: heal-aggregates
          State: successful
    Start Time: 7/25/2016 18:45:55
       End Time: 7/25/2016 18:45:56
         Errors: -
  5. Check the state of the aggregates by using the storage aggregate show command.

    controller_A_1::> storage aggregate show
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    ...
    aggr_b2    227.1GB   227.1GB    0% online       0 mcc1-a2          raid_dp, mirrored, normal...
  6. Heal the root aggregates by using the metrocluster heal -phase root-aggregates command.

    mcc1A::> metrocluster heal -phase root-aggregates
    [Job 137] Job succeeded: Heal Root Aggregates is successful

    If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.

  7. Verify that the heal operation is complete by using the metrocluster operation show command on the destination cluster:

    mcc1A::> metrocluster operation show
      Operation: heal-root-aggregates
          State: successful
     Start Time: 7/29/2016 20:54:41
       End Time: 7/29/2016 20:54:42
         Errors: -
  8. On the impaired controller module, disconnect the power supplies.

Step 2: Replace or add a caching module

The NVMe SSD Flash Cache modules (FlashCache or caching modules) are separate modules located in the front of the NVRAM module. To replace or add a caching module, locate it in slot 6 at the rear of the system, and then follow the specific sequence of steps to replace it.

Before you begin

Your storage system must meet certain criteria depending on your situation:

  • It must have the appropriate operating system for the caching module you are installing.

  • It must support the caching capacity.

  • The target node must be at the LOADER prompt before adding or replacing the caching module.

  • The replacement caching module must have the same capacity as the failed caching module, but can be from a different supported vendor.

  • All other components in the storage system must be functioning properly; if not, you must contact technical support.

Steps
  1. If you are not already grounded, properly ground yourself.

  2. Locate the failed caching module, in slot 6, by the lit amber Attention LED on the front of the caching module.

  3. Remove the caching module:

    Note If you are adding another caching module to your system, remove the blank module and go to the next step.
    Figure: Removing the caching module (callout 1: orange release button; callout 2: caching module cam handle)

    1. Press the orange release button on the front of the caching module.

      Note Do not use the numbered and lettered I/O cam latch to eject the caching module. The numbered and lettered I/O cam latch ejects the entire NVRAM10 module and not the caching module.
    2. Rotate the cam handle until the caching module begins to slide out of the NVRAM10 module.

    3. Gently pull the cam handle straight toward you to remove the caching module from the NVRAM10 module.

      Be sure to support the caching module as you remove it from the NVRAM10 module.

  4. Install the caching module:

    1. Align the edges of the caching module with the opening in the NVRAM10 module.

    2. Gently push the caching module into the bay until the cam handle engages.

    3. Rotate the cam handle until it locks into place.

Step 3: Add or replace an X9170A core dump module

The 1TB core dump module, X9170A, is used only in AFF A700 systems. The core dump module cannot be hot-swapped. It is typically located in the front of the NVRAM module, in slot 6-1, at the rear of the system. To replace or add the core dump module, locate slot 6-1, and then follow the specific sequence of steps to add or replace it.

Before you begin
  • Your system must be running ONTAP 9.8 or later in order to add a core dump module.

  • The X9170A core dump module is not hot-swappable.

  • The target node must be at the LOADER prompt before adding or replacing the core dump module.

  • You must have received two X9170A core dump modules, one for each controller.

  • All other components in the storage system must be functioning properly; if not, you must contact technical support.

Steps
  1. If you are not already grounded, properly ground yourself.

  2. If you are replacing a failed core dump module, locate and remove it:

    Figure: Removing the core dump module (callout 1: orange release button; callout 2: core dump module cam handle)

    1. Locate the failed module by the amber Attention LED on the front of the module.

    2. Press the orange release button on the front of the core dump module.

      Note Do not use the numbered and lettered I/O cam latch to eject the core dump module. The numbered and lettered I/O cam latch ejects the entire NVRAM10 module and not the core dump module.
    3. Rotate the cam handle until the core dump module begins to slide out of the NVRAM10 module.

    4. Gently pull the cam handle straight toward you to remove the core dump module from the NVRAM10 module and set it aside.

      Be sure to support the core dump module as you remove it from the NVRAM10 module.

  3. Install the core dump module:

    1. If you are installing a new core dump module, remove the blank module from slot 6-1.

    2. Align the edges of the core dump module with the opening in the NVRAM10 module.

    3. Gently push the core dump module into the bay until the cam handle engages.

    4. Rotate the cam handle until it locks into place.

Step 4: Reboot the controller after FRU replacement

After you replace the FRU, you must reboot the controller module.

Step
  1. To boot ONTAP from the LOADER prompt, enter bye.
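
    For example, at the LOADER prompt of the target node (the prompt is shown generically here and may include the node letter): LOADER> bye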

Step 5: Switch back aggregates in a two-node MetroCluster configuration

After you have completed the FRU replacement in a two-node MetroCluster configuration, you can perform the MetroCluster switchback operation. This returns the configuration to its normal operating state, with the sync-source storage virtual machines (SVMs) on the formerly impaired site now active and serving data from the local disk pools.

This task only applies to two-node MetroCluster configurations.

Steps
  1. Verify that all nodes are in the enabled state: metrocluster node show

    cluster_B::>  metrocluster node show
    
    DR                           Configuration  DR
    Group Cluster Node           State          Mirroring Mode
    ----- ------- -------------- -------------- --------- --------------------
    1     cluster_A
                  controller_A_1 configured     enabled   heal roots completed
          cluster_B
                  controller_B_1 configured     enabled   waiting for switchback recovery
    2 entries were displayed.
  2. Verify that resynchronization is complete on all SVMs: metrocluster vserver show

  3. Verify that any automatic LIF migrations being performed by the healing operations were completed successfully: metrocluster check lif show

  4. Perform the switchback by using the metrocluster switchback command from any node in the surviving cluster.
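
    For example, from the surviving cluster used in this procedure's examples: cluster_B::> metrocluster switchback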

  5. Verify that the switchback operation has completed: metrocluster show

    The switchback operation is still running when a cluster is in the waiting-for-switchback state:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- -------------------    ---------
     Local: cluster_B    configured             switchover
    Remote: cluster_A    configured             waiting-for-switchback

    The switchback operation is complete when the clusters are in the normal state:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- -------------------    ---------
     Local: cluster_B    configured             normal
    Remote: cluster_A    configured             normal

    If a switchback is taking a long time to finish, you can check on the status of in-progress baselines by using the metrocluster config-replication resync-status show command.

  6. Reestablish any SnapMirror or SnapVault configurations.
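
    For example, you might resynchronize an existing SnapMirror relationship with the snapmirror resync command, substituting your own destination path (the path below is a placeholder): cluster_B::> snapmirror resync -destination-path svm_backup:vol_dst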

Step 6: Return the failed part to NetApp

Return the failed part to NetApp, as described in the RMA instructions shipped with the kit. See the Part Return and Replacements page for further information.