Replace the caching module - FAS8300 and FAS8700


You must replace the caching module in the controller module when your system registers a single AutoSupport (ASUP) message that the module has gone offline; failure to do so results in performance degradation.

  • You must replace the failed component with a replacement FRU component you received from your provider.

Step 1: Shut down the impaired controller

You can shut down or take over the impaired controller using different procedures, depending on the storage system hardware configuration.

Option 1: Most configurations

To shut down the impaired node, you must determine the status of the node and, if necessary, take over the node so that the healthy node continues to serve data from the impaired node storage.

About this task

If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy node shows false for eligibility and health, you must correct the issue before shutting down the impaired node.

You might want to erase the contents of your caching module before replacing it.

Steps
  1. Although data on the caching module is encrypted, you might want to erase any data from the impaired caching module and verify that the caching module has no data:

    1. Erase the data on the caching module: system controller flash-cache secure-erase run -node node_name -device-id device_id

    2. Verify that the data has been erased from the caching module: system controller flash-cache secure-erase show -node node_name

      The output should display the caching module status as erased.
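
    For example, with a hypothetical node named node1 and a device ID of 0 (both placeholder values; if your release supports it, the system controller flash-cache show command lists the caching modules and their device IDs), the sequence looks like this:

    cluster1::> system controller flash-cache secure-erase run -node node1 -device-id 0
    cluster1::> system controller flash-cache secure-erase show -node node1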

  2. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h

  3. Disable automatic giveback from the console of the healthy node: storage failover modify -node local -auto-giveback false
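
    Before continuing, you can confirm that automatic giveback is disabled; for example (the exact fields shown vary by ONTAP release):

    cluster1::> storage failover show -fields auto-giveback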

  4. Take the impaired node to the LOADER prompt:

    If the impaired node is displaying…​ Then…​

    The LOADER prompt

    Go to the next step.

    Waiting for giveback…​

    Press Ctrl-C, and then respond y.

    System prompt or password prompt (enter system password)

    Take over or halt the impaired node: storage failover takeover -ofnode impaired_node_name

    When the impaired node shows Waiting for giveback…​, press Ctrl-C, and then respond y.

Option 2: Controller is in a MetroCluster

Note Do not use this procedure if your system is in a two-node MetroCluster configuration.

To shut down the impaired node, you must determine the status of the node and, if necessary, take over the node so that the healthy node continues to serve data from the impaired node storage.

  • If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy node shows false for eligibility and health, you must correct the issue before shutting down the impaired node; see the Administration overview with the CLI.

  • If you have a MetroCluster configuration, you must have confirmed that the MetroCluster Configuration State is configured and that the nodes are in an enabled and normal state (metrocluster node show).

Steps
  1. If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh

    The following AutoSupport message suppresses automatic case creation for two hours: cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h

  2. Disable automatic giveback from the console of the healthy node: storage failover modify -node local -auto-giveback false

  3. Take the impaired node to the LOADER prompt:

    If the impaired node is displaying…​ Then…​

    The LOADER prompt

    Go to the next step.

    Waiting for giveback…​

    Press Ctrl-C, and then respond y when prompted.

    System prompt or password prompt (enter system password)

    Take over or halt the impaired node:

    • For an HA pair, take over the impaired node from the healthy node: storage failover takeover -ofnode impaired_node_name

      When the impaired node shows Waiting for giveback…​, press Ctrl-C, and then respond y.

Option 3: Controller is in a two-node MetroCluster

To shut down the impaired node, you must determine the status of the node and, if necessary, switch over the node so that the healthy node continues to serve data from the impaired node storage.

About this task
  • If you are using NetApp Storage Encryption, you must have reset the MSID using the instructions in the "Returning SEDs to unprotected mode" section of Administration overview with the CLI.

  • You must leave the power supplies turned on at the end of this procedure to provide power to the healthy node.

Steps
  1. Check the MetroCluster status to determine whether the impaired node has automatically switched over to the healthy node: metrocluster show

  2. Depending on whether an automatic switchover has occurred, proceed according to the following table:

    If the impaired node…​ Then…​

    Has automatically switched over

    Proceed to the next step.

    Has not automatically switched over

    Perform a planned switchover operation from the healthy node: metrocluster switchover

    Has not automatically switched over, you attempted switchover with the metrocluster switchover command, and the switchover was vetoed

    Review the veto messages and, if possible, resolve the issue and try again. If you are unable to resolve the issue, contact technical support.

  3. Resynchronize the data aggregates by running the metrocluster heal -phase aggregates command from the surviving cluster.

    controller_A_1::> metrocluster heal -phase aggregates
    [Job 130] Job succeeded: Heal Aggregates is successful.

    If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.

  4. Verify that the operation has been completed by using the metrocluster operation show command.

    controller_A_1::> metrocluster operation show
      Operation: heal-aggregates
          State: successful
     Start Time: 7/25/2016 18:45:55
       End Time: 7/25/2016 18:45:56
         Errors: -
  5. Check the state of the aggregates by using the storage aggregate show command.

    controller_A_1::> storage aggregate show
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    ...
    aggr_b2    227.1GB   227.1GB    0% online       0 mcc1-a2          raid_dp, mirrored, normal...
  6. Heal the root aggregates by using the metrocluster heal -phase root-aggregates command.

    mcc1A::> metrocluster heal -phase root-aggregates
    [Job 137] Job succeeded: Heal Root Aggregates is successful

    If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.

  7. Verify that the heal operation is complete by using the metrocluster operation show command on the destination cluster:

    mcc1A::> metrocluster operation show
      Operation: heal-root-aggregates
          State: successful
     Start Time: 7/29/2016 20:54:41
       End Time: 7/29/2016 20:54:42
         Errors: -
  8. On the impaired controller module, disconnect the power supplies.

Step 2: Remove the controller module

To access components inside the controller module, you must remove the controller module from the chassis.

You can use the following animation, illustration, or the written steps to remove the controller module from the chassis.

[Animation: Remove the controller module]
Steps
  1. If you are not already grounded, properly ground yourself.

  2. Release the power cable retainers, and then unplug the cables from the power supplies.

  3. Loosen the hook and loop strap binding the cables to the cable management device, and then unplug the system cables and SFPs (if needed) from the controller module, keeping track of where the cables were connected.

    Leave the cables in the cable management device so that when you reinstall the cable management device, the cables are organized.

  4. Remove the cable management device from the controller module and set it aside.

  5. Press down on both of the locking latches, and then rotate both latches downward at the same time.

    The controller module moves slightly out of the chassis.

  6. Slide the controller module out of the chassis.

    Make sure that you support the bottom of the controller module as you slide it out of the chassis.

  7. Place the controller module on a stable, flat surface.

Step 3: Replace a caching module

To replace a caching module, referred to as the Flash Cache on the label on your controller, locate the slot inside the controller and follow the specific sequence of steps. See the FRU map on the controller module for the location of the Flash Cache.

Your storage system must meet certain criteria depending on your situation:

  • It must have the appropriate operating system for the caching module you are installing.

  • It must support the caching capacity.

  • Although the contents of the caching module are encrypted, it is a best practice to erase the contents of the module before replacing it. For more information, see the Statement of Volatility for your system on the NetApp Support Site.

    Note You must log into the NetApp Support Site to display the Statement of Volatility for your system.
  • All other components in the storage system must be functioning properly; if not, you must contact technical support.

You can use the following animation, illustration, or the written steps to replace a caching module.

[Animation: Replace the Flash Cache caching module]
Steps
  1. If you are not already grounded, properly ground yourself.

  2. Open the air duct:

    1. Press the locking tabs on the sides of the air duct in toward the middle of the controller module.

    2. Slide the air duct toward the back of the controller module, and then rotate it upward to its completely open position.

  3. Using the FRU map on the controller module, locate the failed caching module and remove it:

    Depending on your configuration, there may be zero, one, or two caching modules in the controller module. The failed caching module’s LED is lit.

    1. Press the blue release tab.

      The caching module end rises clear of the release tab.

    2. Rotate the caching module up and slide it out of the socket.

  4. Install the replacement caching module:

    1. Align the edges of the replacement caching module with the socket and gently insert it into the socket.

    2. Rotate the caching module downward toward the motherboard.

    3. Placing your finger at the end of the caching module by the blue button, firmly push down on the caching module end, and then lift the locking button to lock the caching module in place.

  5. Close the air duct:

    1. Rotate the air duct down to the controller module.

    2. Slide the air duct toward the risers to lock it in place.

Step 4: Install the controller module

After you have replaced the component in the controller module, you must reinstall the controller module into the chassis, and then boot it to Maintenance mode.

You can use the following animation, illustration, or the written steps to install the controller module in the chassis.

[Animation: Install the controller module]
Steps
  1. If you have not already done so, close the air duct.

  2. Align the end of the controller module with the opening in the chassis, and then gently push the controller module halfway into the system.

    Note Do not completely insert the controller module in the chassis until instructed to do so.
  3. Cable the management and console ports only, so that you can access the system to perform the tasks in the following sections.

    Note You will connect the rest of the cables to the controller module later in this procedure.
  4. Complete the installation of the controller module:

    1. Plug the power cord into the power supply, reinstall the power cable locking collar, and then connect the power supply to the power source.

    2. Using the locking latches, firmly push the controller module into the chassis until the locking latches begin to rise.

      Note Do not use excessive force when sliding the controller module into the chassis to avoid damaging the connectors.
    3. Fully seat the controller module in the chassis by rotating the locking latches upward, tilting them so that they clear the locking pins, gently pushing the controller all the way in, and then lowering the locking latches into the locked position.

      The controller module begins to boot as soon as it is fully seated in the chassis. Be prepared to interrupt the boot process.

    4. If you have not already done so, reinstall the cable management device.

    5. Interrupt the normal boot process and boot to LOADER by pressing Ctrl-C.

      Note If your system stops at the boot menu, select the option to boot to LOADER.
    6. At the LOADER prompt, enter bye to reinitialize the PCIe cards and other components.

    7. Interrupt the boot process and boot to the LOADER prompt by pressing Ctrl-C.

      If your system stops at the boot menu, select the option to boot to LOADER.

Step 5: Run diagnostics

After you have replaced a component in your system, you should run diagnostic tests on that component.

Your system must be at the LOADER prompt to start diagnostics.

All commands in the diagnostic procedures are issued from the node where the component is being replaced.

Steps
  1. If the node to be serviced is not at the LOADER prompt, halt the node: system node halt -node node_name

    After you issue the command, you should wait until the system stops at the LOADER prompt.

  2. At the LOADER prompt, start the special drivers designed for system-level diagnostics: boot_diags

  3. Select Scan System from the displayed menu to run the diagnostics tests.

  4. Select Stress-Test system from the displayed menu.

  5. Select M.2 NVME Drive Stress from the displayed menu.

  6. Proceed based on the result of the preceding step:

    • If the test failed, correct the failure, and then rerun the test.

    • If the test reported no failures, select Reboot from the menu to reboot the system.

Step 6: Restore the controller module to operation after running diagnostics

After completing diagnostics, you must recable the system, give back the controller module, and then reenable automatic giveback.

Steps
  1. Recable the system, as needed.

    If you removed the media converters (QSFPs or SFPs), remember to reinstall them if you are using fiber optic cables.

  2. Return the node to normal operation by giving back its storage: storage failover giveback -ofnode impaired_node_name

  3. If automatic giveback was disabled, reenable it: storage failover modify -node local -auto-giveback true
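
    As a final check, you can verify the failover state and the auto-giveback setting; for example:

    cluster1::> storage failover show
    cluster1::> storage failover show -fields auto-giveback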

Step 7: Switch back aggregates in a two-node MetroCluster configuration

After you have completed the FRU replacement in a two-node MetroCluster configuration, you can perform the MetroCluster switchback operation. This returns the configuration to its normal operating state, with the sync-source storage virtual machines (SVMs) on the formerly impaired site now active and serving data from the local disk pools.

This task only applies to two-node MetroCluster configurations.

Steps
  1. Verify that all nodes are in the enabled state: metrocluster node show

    cluster_B::>  metrocluster node show
    
    DR                           Configuration  DR
    Group Cluster Node           State          Mirroring Mode
    ----- ------- -------------- -------------- --------- --------------------
    1     cluster_A
                  controller_A_1 configured     enabled   heal roots completed
          cluster_B
                  controller_B_1 configured     enabled   waiting for switchback recovery
    2 entries were displayed.
  2. Verify that resynchronization is complete on all SVMs: metrocluster vserver show

  3. Verify that any automatic LIF migrations being performed by the healing operations were completed successfully: metrocluster check lif show

  4. Perform the switchback by using the metrocluster switchback command from any node in the surviving cluster.

  5. Verify that the switchback operation has completed: metrocluster show

    The switchback operation is still running when a cluster is in the waiting-for-switchback state:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- -------------------    ---------
     Local: cluster_B    configured             switchover
    Remote: cluster_A    configured             waiting-for-switchback

    The switchback operation is complete when the clusters are in the normal state:

    cluster_B::> metrocluster show
    Cluster              Configuration State    Mode
    -------------------- -------------------    ---------
     Local: cluster_B    configured             normal
    Remote: cluster_A    configured             normal

    If a switchback is taking a long time to finish, you can check on the status of in-progress baselines by using the metrocluster config-replication resync-status show command.

  6. Reestablish any SnapMirror or SnapVault configurations.
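
    If you need to review which relationships are configured before reestablishing them, you can list them from the destination cluster; for example:

    cluster_B::> snapmirror show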

Step 8: Complete the replacement process

After you replace the part, you can return the failed part to NetApp, as described in the RMA instructions shipped with the kit. Contact technical support at NetApp Support, 888-463-8277 (North America), 00-800-44-638277 (Europe), or +800-800-80-800 (Asia/Pacific) if you need the RMA number or additional help with the replacement procedure.