Verify cluster and storage health after an ONTAP revert

After you revert an ONTAP cluster, you should verify that the nodes are healthy and eligible to participate in the cluster, and that the cluster is in quorum. You should also verify the status of your disks, aggregates, and volumes.

Verify cluster health

Steps
  1. Verify that the nodes in the cluster are online and are eligible to participate in the cluster:

    cluster show

    In this example, the cluster is healthy and all nodes are eligible to participate in the cluster.

    cluster1::> cluster show
    Node                  Health  Eligibility
    --------------------- ------- ------------
    node0                 true    true
    node1                 true    true

    If any node is unhealthy or ineligible, check EMS logs for errors and take corrective action.
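
    For example, you can review recent error-level EMS events on an affected node as a starting point; the node name here is illustrative:

    cluster1::> event log show -node node1 -severity error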

  2. Set the privilege level to advanced:

    set -privilege advanced

    Enter y to continue.
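
    The system displays a warning similar to the following; the exact wording can vary by ONTAP release:

    Warning: These advanced commands are potentially dangerous; use them only when
             directed to do so by NetApp personnel.
    Do you want to continue? {y|n}: y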

  3. Verify the configuration details for each RDB process.

    • The relational database epoch and database epochs should match for each node.

    • The per-ring quorum master should be the same for all nodes.

      Note that each ring might have a different quorum master.

      To display this RDB process…     Enter this command…
      -------------------------------  ----------------------------------
      Management application           cluster ring show -unitname mgmt
      Volume location database         cluster ring show -unitname vldb
      Virtual-Interface manager        cluster ring show -unitname vifmgr
      SAN management daemon            cluster ring show -unitname bcomd

      This example shows the volume location database process:

      cluster1::*> cluster ring show -unitname vldb
      Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
      --------- -------- -------- -------- -------- --------- ---------
      node0     vldb     154      154      14847    node0     master
      node1     vldb     154      154      14847    node0     secondary
      node2     vldb     154      154      14847    node0     secondary
      node3     vldb     154      154      14847    node0     secondary
      4 entries were displayed.
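
      You can also omit the -unitname parameter to list every RDB ring for each node in a single output; the columns are the same as in the example above:

      cluster1::*> cluster ring show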
  4. Return to the admin privilege level:

    set -privilege admin
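
    When you return to the admin privilege level, the asterisk that marks advanced privilege disappears from the prompt, for example:

    cluster1::*> set -privilege admin
    cluster1::>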
  5. If you are operating in a SAN environment, verify that each node is in a SAN quorum:

    event log show -severity informational -message-name scsiblade.*

    The most recent scsiblade event message for each node should indicate that the scsi-blade is in quorum.

    cluster1::*> event log show  -severity informational -message-name scsiblade.*
    Time             Node       Severity       Event
    ---------------  ---------- -------------- ---------------------------
    MM/DD/YYYY TIME  node0      INFORMATIONAL  scsiblade.in.quorum: The scsi-blade ...
    MM/DD/YYYY TIME  node1      INFORMATIONAL  scsiblade.in.quorum: The scsi-blade ...
Related information

System administration

Verify storage health

After you revert or downgrade a cluster, you should verify the status of your disks, aggregates, and volumes.

Steps
  1. Verify disk status:

    To check for broken disks:

    a. Display any broken disks:

       storage disk show -state broken

    b. Remove or replace any broken disks.

    To check for disks undergoing maintenance or reconstruction:

    a. Display any disks in maintenance, pending, or reconstructing states:

       storage disk show -state maintenance|pending|reconstructing

    b. Wait for the maintenance or reconstruction operation to finish before proceeding.
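
    For example, when no disks are broken, the broken-disk check returns no entries; the following output is illustrative:

    cluster1::> storage disk show -state broken
    There are no entries matching your query.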

  2. Verify that all aggregates are online by displaying any aggregates that are not online:

    storage aggregate show -state !online

    All aggregates must be online before and after performing a major upgrade or reversion.

    cluster1::> storage aggregate show -state !online
    There are no entries matching your query.
  3. Verify that all volumes are online by displaying any volumes that are not online:

    volume show -state !online

    All volumes must be online before and after performing a major upgrade or reversion.

    cluster1::> volume show -state !online
    There are no entries matching your query.
  4. Verify that there are no inconsistent volumes:

    volume show -is-inconsistent true

    If any volumes are inconsistent, see the Knowledge Base article Volume Showing WAFL Inconsistent for guidance on how to address them.
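
    On a healthy cluster, this command should return no entries; the following output is illustrative:

    cluster1::> volume show -is-inconsistent true
    There are no entries matching your query.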

Related information

Disk and aggregate management

Verify client access (SMB and NFS)

For the configured protocols, test access from SMB and NFS clients to verify that the cluster is accessible.
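
As a minimal sketch, assuming a Linux NFS client and a Windows SMB client, you might confirm access as follows; the data LIF address, export path, share name, and drive letter are illustrative:

From a Linux NFS client, mount an export and list its contents:

    mount -t nfs 192.0.2.10:/vol/data /mnt/data
    ls /mnt/data

From a Windows SMB client, map a share and list its contents:

    net use Z: \\192.0.2.10\data
    dir Z:\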