Shut down the controllers - ASA C800
This procedure is for 2-node, non-MetroCluster configurations only. If you have a system with more than two nodes, see How to perform a graceful shutdown and power up of one HA pair in a 4-node cluster.
You need:
- Local administrator credentials for ONTAP.
- NetApp onboard key management (OKM) cluster-wide passphrase if you are using storage encryption or NVE/NAE.
- BMC accessibility for each controller.
- Stop all clients/hosts from accessing data on the NetApp system.
- Suspend external backup jobs.
- Necessary tools and equipment for the replacement.
If the system is a NetApp StorageGRID or ONTAP S3 used as a FabricPool cloud tier, refer to the Gracefully shutdown and power up your storage system Resolution Guide after performing this procedure.
If using SSDs, refer to SU490: (Impact: Critical) SSD Best Practices: Avoid risk of drive failure and data loss if powered off for more than two months.
As a best practice before shutdown, you should:
- Perform additional system health checks.
- Upgrade ONTAP to a recommended release for the system.
- Resolve any Active IQ Wellness Alerts and Risks. Make note of any faults presently on the system, such as LEDs on the system components.
- Log into the cluster through SSH, or log in from any node in the cluster using a local console cable and a laptop/console.
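For example, from an administration host (the account name and address below are placeholders; use your cluster management LIF and an ONTAP administrator account):
ssh admin@<cluster-mgmt-IP>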
- Turn off AutoSupport and indicate how long you expect the system to be offline:
system node autosupport invoke -node * -type all -message "MAINT=8h Power Maintenance"
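If you expect the maintenance to take longer, adjust the window in the message accordingly; for example, a hypothetical 24-hour window:
system node autosupport invoke -node * -type all -message "MAINT=24h Power Maintenance"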
- Identify the SP/BMC address of all nodes:
system service-processor show -node * -fields address
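The command returns one SP/BMC IP address per node. Illustrative output with placeholder node names and addresses (the exact column layout can vary by ONTAP release):
node     address
-------- ------------
node1    10.10.10.11
node2    10.10.10.12
2 entries were displayed.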
- Exit the cluster shell:
exit
- Log into the SP/BMC over SSH using the IP address of any of the nodes listed in the output from the previous step.
If you're using a console/laptop, log into the controller using the same cluster administrator credentials.
Open an SSH session to every SP/BMC connection so that you can monitor progress.
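For example, from an administration host (the account name and address are placeholders; use credentials that have SP/BMC access and the addresses gathered in the previous step):
ssh admin@<SP/BMC-IP>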
- Halt the two nodes located in the impaired chassis:
system node halt -node <node>,<node2> -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true
For clusters using SnapMirror synchronous operating in StrictSync mode:
system node halt -node <node>,<node2> -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true -ignore-strict-sync-warnings true
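For example, assuming the two impaired-chassis nodes are named node1 and node2 (placeholder names):
system node halt -node node1,node2 -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true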
- Enter y for each controller in the cluster when you see:
Warning: Are you sure you want to halt node "cluster <node-name> number"? {y|n}:
- Wait for each controller to halt and display the LOADER prompt.
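Each console session should end at a boot loader prompt similar to the following (the prompt label varies by controller; shown here as an illustration):
LOADER-A>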