Replace an I/O module - AFF A700 and FAS9000
To replace an I/O module, you must perform a specific sequence of tasks.
You can use this procedure with all versions of ONTAP supported by your system.
All other components in the system must be functioning properly; if not, you must contact technical support.
Step 1: Shut down the impaired controller
You can shut down or take over the impaired controller using different procedures, depending on the storage system hardware configuration.
To shut down the impaired controller, you must determine the status of the controller and, if necessary, take over the controller so that the healthy controller continues to serve data from the impaired controller storage.
If you are using NetApp Storage Encryption, you must have reset the MSID using the instructions in the “Returning SEDs to unprotected mode” section of the ONTAP 9 NetApp Encryption Power Guide.
If you have a SAN system, you must have checked event messages (event log show) for the impaired controller SCSI blade. Each SCSI-blade process should be in quorum with the other nodes in the cluster. Any issues must be resolved before you proceed with the replacement.
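As an example, the SCSI-blade event check described above can be narrowed with a message-name filter; the filter pattern shown here is an assumption, so adjust it for your release:

```
cluster1::> event log show -node * -message-name scsiblade.*
```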
If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy controller shows false for eligibility and health, you must correct the issue before shutting down the impaired controller; see the Administration overview with the CLI.
If you have a MetroCluster configuration, you must have confirmed that the MetroCluster Configuration State is configured and that the nodes are in an enabled and normal state (metrocluster node show).
If AutoSupport is enabled, suppress automatic case creation by invoking an AutoSupport message:
system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh
The following AutoSupport message suppresses automatic case creation for two hours:
cluster1:*> system node autosupport invoke -node * -type all -message MAINT=2h
Disable automatic giveback from the console of the healthy controller:
storage failover modify -node local -auto-giveback false
When you see Do you want to disable auto-giveback?, enter y.
Take the impaired controller to the LOADER prompt:

If the impaired controller is displaying…	Then…

The LOADER prompt
Go to Remove controller module.

Waiting for giveback…
Press Ctrl-C, and then respond y.

System prompt or password prompt (enter system password)
Take over or halt the impaired controller from the healthy controller:
storage failover takeover -ofnode impaired_node_name
When the impaired controller shows Waiting for giveback…, press Ctrl-C, and then respond y.
To shut down the impaired controller, you must determine the status of the controller and, if necessary, switch over the controller so that the healthy controller continues to serve data from the impaired controller storage.
If you are using NetApp Storage Encryption, you must have reset the MSID using the instructions in the "Return a FIPS drive or SED to unprotected mode" section of NetApp Encryption overview with the CLI.
You must leave the power supplies turned on at the end of this procedure to provide power to the healthy controller.
Check the MetroCluster status to determine whether the impaired controller has automatically switched over to the healthy controller:
Depending on whether an automatic switchover has occurred, proceed according to the following table:

If the impaired controller…	Then…

Has automatically switched over
Proceed to the next step.

Has not automatically switched over
Perform a planned switchover operation from the healthy controller:
metrocluster switchover

Has not automatically switched over, you attempted switchover with the metrocluster switchover command, and the switchover was vetoed
Review the veto messages and, if possible, resolve the issue and try again. If you are unable to resolve the issue, contact technical support.
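As a sketch, the status check and a planned switchover from the healthy controller might look like the following (the cluster prompt is illustrative, and the confirmation dialog that metrocluster switchover presents varies by release):

```
controller_A_1::> metrocluster show
controller_A_1::> metrocluster switchover
```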
Resynchronize the data aggregates by running the metrocluster heal -phase aggregates command from the surviving cluster.

controller_A_1::> metrocluster heal -phase aggregates
[Job 130] Job succeeded: Heal Aggregates is successful.
If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.
Verify that the operation has been completed by using the metrocluster operation show command.

controller_A_1::> metrocluster operation show
  Operation: heal-aggregates
      State: successful
 Start Time: 7/25/2016 18:45:55
   End Time: 7/25/2016 18:45:56
     Errors: -
Check the state of the aggregates by using the storage aggregate show command.

controller_A_1::> storage aggregate show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
...
aggr_b2    227.1GB   227.1GB    0% online       0 mcc1-a2          raid_dp,
                                                                   mirrored,
                                                                   normal...
Heal the root aggregates by using the metrocluster heal -phase root-aggregates command.

mcc1A::> metrocluster heal -phase root-aggregates
[Job 137] Job succeeded: Heal Root Aggregates is successful
If the healing is vetoed, you have the option of reissuing the metrocluster heal command with the -override-vetoes parameter. If you use this optional parameter, the system overrides any soft vetoes that prevent the healing operation.
Verify that the heal operation is complete by using the metrocluster operation show command on the destination cluster:

mcc1A::> metrocluster operation show
  Operation: heal-root-aggregates
      State: successful
 Start Time: 7/29/2016 20:54:41
   End Time: 7/29/2016 20:54:42
     Errors: -
On the impaired controller module, disconnect the power supplies.
Step 2: Replace I/O modules
To replace an I/O module, locate it within the chassis and follow the specific sequence of steps.
If you are not already grounded, properly ground yourself.
Unplug any cabling associated with the target I/O module.
Make sure that you label the cables so that you know where they came from.
Remove the target I/O module from the chassis:
Depress the lettered and numbered cam button.
The cam button moves away from the chassis.
Rotate the cam latch down until it is in a horizontal position.
The I/O module disengages from the chassis and moves about 1/2 inch out of the I/O slot.
Remove the I/O module from the chassis by pulling on the pull tabs on the sides of the module face.
Make sure that you keep track of which slot the I/O module was in.
Set the I/O module aside.
Install the replacement I/O module into the chassis by gently sliding the I/O module into the slot until the lettered and numbered I/O cam latch begins to engage with the I/O cam pin, and then push the I/O cam latch all the way up to lock the module in place.
Recable the I/O module, as needed.
Step 3: Reboot the controller after I/O module replacement
After you replace an I/O module, you must reboot the controller module.
Note: If the new I/O module is not the same model as the failed module, you must first reboot the BMC.
Reboot the BMC if the replacement module is not the same model as the old module:
From the LOADER prompt, change to advanced privilege mode:
priv set advanced
Reboot the BMC:
From the LOADER prompt, reboot the node:
This reinitializes the PCIe cards and other components and reboots the node.
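As a sketch of the reboot step, assuming the standard LOADER commands (bye reboots the node and reinitializes the PCIe cards; the BMC reboot command itself is platform-specific and not shown here):

```
LOADER> priv set advanced
LOADER*> bye
```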
If your system is configured to support 10 GbE cluster interconnect and data connections on 40 GbE NICs or onboard ports, convert these ports to 10 GbE connections by using the nicadmin convert command from Maintenance mode.
Be sure to exit Maintenance mode after completing the conversion.
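For example, the conversion from Maintenance mode might look like this; the port name e4a and the option syntax are illustrative assumptions, so check the nicadmin usage text on your system before running it:

```
*> nicadmin convert -m 10g e4a
*> halt
```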
Return the node to normal operation:
storage failover giveback -ofnode impaired_node_name
If automatic giveback was disabled, reenable it:
storage failover modify -node local -auto-giveback true
If your system is in a two-node MetroCluster configuration, you must switch back the aggregates as described in the next step.
Step 4: Switch back aggregates in a two-node MetroCluster configuration
After you have completed the FRU replacement in a two-node MetroCluster configuration, you can perform the MetroCluster switchback operation. This returns the configuration to its normal operating state, with the sync-source storage virtual machines (SVMs) on the formerly impaired site now active and serving data from the local disk pools.
This task only applies to two-node MetroCluster configurations.
Verify that all nodes are in the enabled state:

metrocluster node show
cluster_B::> metrocluster node show
DR                               Configuration  DR
Group Cluster Node               State          Mirroring Mode
----- ------- ------------------ -------------- ---------------------
1     cluster_A
              controller_A_1     configured     enabled   heal roots completed
      cluster_B
              controller_B_1     configured     enabled   waiting for switchback recovery
2 entries were displayed.
Verify that resynchronization is complete on all SVMs:
metrocluster vserver show
Verify that any automatic LIF migrations being performed by the healing operations were completed successfully:
metrocluster check lif show
Perform the switchback by using the metrocluster switchback command from any node in the surviving cluster.
Verify that the switchback operation has completed:
The switchback operation is still running when a cluster is in the waiting-for-switchback state:

cluster_B::> metrocluster show
Cluster              Configuration State    Mode
--------------------  -------------------   ---------
 Local: cluster_B    configured             switchover
Remote: cluster_A    configured             waiting-for-switchback
The switchback operation is complete when the clusters are in the normal state:

cluster_B::> metrocluster show
Cluster              Configuration State    Mode
--------------------  -------------------   ---------
 Local: cluster_B    configured             normal
Remote: cluster_A    configured             normal
If a switchback is taking a long time to finish, you can check on the status of in-progress baselines by using the metrocluster config-replication resync-status show command.
Reestablish any SnapMirror or SnapVault configurations.
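For example, a mirror relationship can be resynchronized with the snapmirror resync command; the destination path shown here is illustrative:

```
cluster_B::> snapmirror resync -destination-path svm_B:vol_mirror
```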
Step 5: Return the failed part to NetApp
Return the failed part to NetApp, as described in the RMA instructions shipped with the kit. See the Part Return & Replacements page for further information.