Replace an NVDIMM - AFF A400
You must replace the NVDIMM in the controller module when your system registers that the flash lifetime is almost at an end or that the identified NVDIMM is not healthy in general; failure to do so causes a system panic.
All other components in the system must be functioning properly; if not, you must contact technical support.
You must replace the failed component with a replacement FRU component you received from your provider.
Step 1: Shut down the impaired controller
Shut down or take over the impaired controller using the appropriate procedure for your configuration.
To shut down the impaired controller, you must determine the status of the controller and, if necessary, take over the controller so that the healthy controller continues to serve data from the impaired controller storage.
If you are using NetApp Storage Encryption, you must have reset the MSID using the instructions in the “Returning SEDs to unprotected mode” section of the ONTAP 9 NetApp Encryption Power Guide.
If you have a SAN system, you must have checked event messages (event log show) for the impaired controller SCSI blade.
Each SCSI-blade process should be in quorum with the other nodes in the cluster. Any issues must be resolved before you proceed with the replacement.
If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy controller shows false for eligibility and health, you must correct the issue before shutting down the impaired controller; see the Administration overview with the CLI.
If you have a MetroCluster configuration, you must have confirmed that the MetroCluster Configuration State is configured and that the nodes are in an enabled and normal state (metrocluster node show).
If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message:
system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh
The following AutoSupport message suppresses automatic case creation for two hours:
cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h
Disable automatic giveback from the console of the healthy controller:
storage failover modify –node local -auto-giveback false
When you see Do you want to disable auto-giveback?, enter y.
Take the impaired controller to the LOADER prompt:
If the impaired controller is displaying… Then…
The LOADER prompt
Go to Remove controller module.
Waiting for giveback…
Press Ctrl-C, and then respond y.
System prompt or password prompt (enter system password)
Take over or halt the impaired controller from the healthy controller:
storage failover takeover -ofnode impaired_node_name
When the impaired controller shows Waiting for giveback…, press Ctrl-C, and then respond y.
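Taken together, the takeover path above can be sketched as the following console sequence. This is an illustrative sketch, not verbatim procedure output: impaired_node_name is a placeholder for your node, and the # lines are annotations rather than ONTAP syntax.

```
# From the healthy controller: suppress case creation, disable auto-giveback,
# then take over the impaired node's storage.
cluster1::> system node autosupport invoke -node * -type all -message MAINT=2h
cluster1::> storage failover modify -node local -auto-giveback false
cluster1::> storage failover takeover -ofnode impaired_node_name
# On the impaired controller's console, when "Waiting for giveback..." appears,
# press Ctrl-C and respond y to reach the LOADER prompt.
```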
To shut down the impaired controller, you must determine the status of the controller and, if necessary, switch over the controller so that the healthy controller continues to serve data from the impaired controller storage.
If you are using NetApp Storage Encryption, you must have reset the MSID using the instructions in the "Return a FIPS drive or SED to unprotected mode" section of NetApp Encryption overview with the CLI.
You must leave the power supplies turned on at the end of this procedure to provide power to the healthy controller.
Check the MetroCluster status to determine whether the impaired controller has automatically switched over to the healthy controller:
Depending on whether an automatic switchover has occurred, proceed according to the following table:
If the impaired controller… Then…
Has automatically switched over
Proceed to the next step.
Has not automatically switched over
Perform a planned switchover operation from the healthy controller:
Has not automatically switched over, you attempted switchover with the metrocluster switchover command, and the switchover was vetoed
Review the veto messages and, if possible, resolve the issue and try again. If you are unable to resolve the issue, contact technical support.
Resynchronize the data aggregates by running the metrocluster heal -phase aggregates command from the surviving cluster.
controller_A_1::> metrocluster heal -phase aggregates
[Job 130] Job succeeded: Heal Aggregates is successful.
If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.
Verify that the operation has been completed by using the metrocluster operation show command.
controller_A_1::> metrocluster operation show
  Operation: heal-aggregates
      State: successful
 Start Time: 7/25/2016 18:45:55
   End Time: 7/25/2016 18:45:56
     Errors: -
Check the state of the aggregates by using the storage aggregate show command.
controller_A_1::> storage aggregate show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
...
aggr_b2    227.1GB   227.1GB    0% online       0 mcc1-a2          raid_dp,
                                                                   mirrored,
                                                                   normal
...
Heal the root aggregates by using the metrocluster heal -phase root-aggregates command.
mcc1A::> metrocluster heal -phase root-aggregates
[Job 137] Job succeeded: Heal Root Aggregates is successful
If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.
Verify that the heal operation is complete by using the metrocluster operation show command on the destination cluster:
mcc1A::> metrocluster operation show
  Operation: heal-root-aggregates
      State: successful
 Start Time: 7/29/2016 20:54:41
   End Time: 7/29/2016 20:54:42
     Errors: -
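The healing sequence above can be summarized as the following command flow, run from the surviving cluster. This is an illustrative sketch: the # lines are annotations rather than ONTAP syntax, and prompts will reflect your own cluster names.

```
# Phase 1: heal data aggregates, then verify.
controller_A_1::> metrocluster heal -phase aggregates
controller_A_1::> metrocluster operation show    # confirm heal-aggregates succeeded
controller_A_1::> storage aggregate show         # aggregates should be online and mirrored
# Phase 2: heal root aggregates, then verify again.
controller_A_1::> metrocluster heal -phase root-aggregates
controller_A_1::> metrocluster operation show    # confirm heal-root-aggregates succeeded
```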
On the impaired controller module, disconnect the power supplies.
Step 2: Remove the controller module
To access components inside the controller module, you must remove the controller module from the chassis.
You can use the following animations, illustration, or the written steps to remove the controller module from the chassis.
If you are not already grounded, properly ground yourself.
Release the power cable retainers, and then unplug the cables from the power supplies.
Loosen the hook and loop strap binding the cables to the cable management device, and then unplug the system cables and SFPs (if needed) from the controller module, keeping track of where the cables were connected.
Leave the cables in the cable management device so that when you reinstall the cable management device, the cables are organized.
Remove the cable management device from the controller module and set it aside.
Press down on both of the locking latches, and then rotate both latches downward at the same time.
The controller module moves slightly out of the chassis.
Slide the controller module out of the chassis.
Make sure that you support the bottom of the controller module as you slide it out of the chassis.
Place the controller module on a stable, flat surface.
Step 3: Replace the NVDIMM
To replace the NVDIMM, you must locate it in the controller module using the FRU map on top of the air duct or the FRU map on top of the slot 1 riser.
The NVDIMM LED blinks while destaging contents when you halt the system. After the destage is complete, the LED turns off.
Although the contents of the NVDIMM are encrypted, it is a best practice to erase the contents of the NVDIMM before replacing it. For more information, see the Statement of Volatility on the NetApp Support Site.
You must log into the NetApp Support Site to display the Statement of Volatility for your system.
You can use the following animation, illustration, or the written steps to replace the NVDIMM.
Note: The animation shows empty slots for sockets without DIMMs. These empty sockets are populated with blanks.
Open the air duct and then locate the NVDIMM in slot 11 on your controller module.
The NVDIMM looks significantly different than system DIMMs.
Eject the NVDIMM from its slot by slowly pushing apart the two NVDIMM ejector tabs on either side of the NVDIMM, and then slide the NVDIMM out of the socket and set it aside.
Carefully hold the NVDIMM by the edges to avoid pressure on the components on the NVDIMM circuit board.
Locate the slot where you are installing the NVDIMM.
Remove the replacement NVDIMM from the antistatic shipping bag, hold the NVDIMM by the corners, and then align it to the slot.
The notch among the pins on the NVDIMM should line up with the tab in the socket.
Insert the NVDIMM squarely into the slot.
The NVDIMM fits tightly in the slot, but should go in easily. If not, realign the NVDIMM with the slot and reinsert it.
Visually inspect the NVDIMM to verify that it is evenly aligned and fully inserted into the slot.
Push carefully, but firmly, on the top edge of the NVDIMM until the ejector tabs snap into place over the notches at the ends of the NVDIMM.
Close the air duct.
Step 4: Install the controller module
After you have replaced the component in the controller module, you must reinstall the controller module into the chassis, and then boot it to Maintenance mode.
You can use the following animation, illustration, or the written steps to install the controller module in the chassis.
If you have not already done so, close the air duct.
Align the end of the controller module with the opening in the chassis, and then gently push the controller module halfway into the system.
Do not completely insert the controller module in the chassis until instructed to do so.
Cable the management and console ports only, so that you can access the system to perform the tasks in the following sections.
You will connect the rest of the cables to the controller module later in this procedure.
Complete the installation of the controller module:
Using the locking latches, firmly push the controller module into the chassis until the locking latches begin to rise.
Do not use excessive force when sliding the controller module into the chassis to avoid damaging the connectors.
Fully seat the controller module in the chassis by rotating the locking latches upward and tilting them so that they clear the locking pins, gently pushing the controller all the way in, and then lowering the locking latches into the locked position.
The controller module begins to boot as soon as it is fully seated in the chassis. Be prepared to interrupt the boot process.
If you have not already done so, reinstall the cable management device.
Interrupt the normal boot process and boot to LOADER by pressing Ctrl-C.
If your system stops at the boot menu, select the option to boot to LOADER.
At the LOADER prompt, enter bye to reinitialize the PCIe cards and other components.
Interrupt the boot process and boot to the LOADER prompt by pressing Ctrl-C.
If your system stops at the boot menu, select the option to boot to LOADER.
Step 5: Run diagnostics
After you have replaced the NVDIMM in your system, you should run diagnostic tests on that component.
Your system must be at the LOADER prompt to start diagnostics.
All commands in the diagnostic procedures are issued from the controller where the component is being replaced.
If the controller to be serviced is not at the LOADER prompt, halt the controller:
system node halt -node node_name
After you issue the command, you should wait until the system stops at the LOADER prompt.
At the LOADER prompt, access the special drivers specifically designed for system-level diagnostics to function properly:
Select Scan System from the displayed menu to enable running the diagnostics tests.
Select Test Memory from the displayed menu.
Select NVDIMM Test from the displayed menu.
Proceed based on the result of the preceding step:
If the test failed, correct the failure, and then rerun the test.
If the test reported no failures, select Reboot from the menu to reboot the system.
Step 6: Restore the controller module to operation after running diagnostics
After completing diagnostics, you must recable the system, give back the controller module, and then reenable automatic giveback.
Recable the system, as needed.
If you removed the media converters (QSFPs or SFPs), remember to reinstall them if you are using fiber optic cables.
Return the controller to normal operation by giving back its storage:
storage failover giveback -ofnode impaired_node_name
If automatic giveback was disabled, reenable it:
storage failover modify -node local -auto-giveback true
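The restore-to-operation steps above can be sketched as the following console sequence from the healthy controller. This is an illustrative sketch: impaired_node_name is a placeholder, and the # lines are annotations rather than ONTAP syntax.

```
# Give back the impaired node's storage, then reenable automatic giveback.
cluster1::> storage failover giveback -ofnode impaired_node_name
cluster1::> storage failover modify -node local -auto-giveback true
# Optionally confirm that both nodes report a normal failover state.
cluster1::> storage failover show
```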
Step 7: Switch back aggregates in a two-node MetroCluster configuration
After you have completed the FRU replacement in a two-node MetroCluster configuration, you can perform the MetroCluster switchback operation. This returns the configuration to its normal operating state, with the sync-source storage virtual machines (SVMs) on the formerly impaired site now active and serving data from the local disk pools.
This task only applies to two-node MetroCluster configurations.
Verify that all nodes are in the enabled state:
metrocluster node show
cluster_B::> metrocluster node show
DR                            Configuration  DR
Group Cluster Node            State          Mirroring Mode
----- ------- --------------- -------------- --------- --------------------
1     cluster_A
              controller_A_1  configured     enabled   heal roots completed
      cluster_B
              controller_B_1  configured     enabled   waiting for switchback recovery
2 entries were displayed.
Verify that resynchronization is complete on all SVMs:
metrocluster vserver show
Verify that any automatic LIF migrations being performed by the healing operations were completed successfully:
metrocluster check lif show
Perform the switchback by using the metrocluster switchback command from any node in the surviving cluster.
Verify that the switchback operation has completed:
The switchback operation is still running when a cluster is in the waiting-for-switchback state:
cluster_B::> metrocluster show
Cluster              Configuration State Mode
-------------------- ------------------- ---------
 Local: cluster_B    configured          switchover
Remote: cluster_A    configured          waiting-for-switchback
The switchback operation is complete when the clusters are in the normal state:
cluster_B::> metrocluster show
Cluster              Configuration State Mode
-------------------- ------------------- ---------
 Local: cluster_B    configured          normal
Remote: cluster_A    configured          normal
If a switchback is taking a long time to finish, you can check on the status of in-progress baselines by using the metrocluster config-replication resync-status show command.
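The switchback verification steps above can be consolidated into the following command flow, run from the surviving cluster. This is an illustrative sketch: the # lines are annotations rather than ONTAP syntax, and prompts will reflect your own cluster names.

```
# Pre-checks before switching back.
cluster_B::> metrocluster node show        # all nodes configured and enabled
cluster_B::> metrocluster vserver show     # SVM resynchronization complete
cluster_B::> metrocluster check lif show   # automatic LIF migrations succeeded
# Perform the switchback, then poll until both clusters report Mode: normal.
cluster_B::> metrocluster switchback
cluster_B::> metrocluster show
```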
Reestablish any SnapMirror or SnapVault configurations.
Step 8: Return the failed part to NetApp
Return the failed part to NetApp, as described in the RMA instructions shipped with the kit. See the Part Return & Replacements page for further information.