Performing aggregate healing and restoring mirrors (MetroCluster FC configurations)
After replacing hardware and assigning disks, you can perform the MetroCluster healing operations. You must then confirm that aggregates are mirrored and, if necessary, restart mirroring.
- Perform the two phases of healing (aggregate healing and root healing) on the disaster site:

  ```
  cluster_B::> metrocluster heal -phase aggregates

  cluster_B::> metrocluster heal -phase root-aggregates
  ```
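  The aggregates phase must finish before you run the root-aggregates phase. One convenient way to confirm that each phase has completed is to check the most recent MetroCluster operation from the surviving cluster. This is an optional check, not part of the required steps; the exact output fields can vary by ONTAP release:

  ```
  cluster_B::> metrocluster operation show
  ```

  The State field should report successful, with no errors, for the heal-aggregates operation before you start the root-aggregates phase, and again for heal-root-aggregates before you continue with this procedure.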
- Monitor the healing and verify that the aggregates are in either the resyncing or mirrored state:

  ```
  storage aggregate show -node local
  ```
  | If the aggregate shows this state… | Then… |
  | --- | --- |
  | resyncing | No action is required. Let the aggregate complete resyncing. |
  | mirror degraded | Proceed to the mirror rebuild steps that follow this table. |
  | mirrored, normal | No action is required. |
  | unknown, offline | The root aggregate shows this state if all of the disks on the disaster site were replaced. |
  In the following example, the three aggregates are each in a different state:

  | Aggregate | State |
  | --- | --- |
  | node_B_1_aggr1 | resyncing |
  | node_B_1_aggr2 | mirror degraded |
  | node_B_1_aggr3 | mirrored, normal |

  ```
  cluster_B::> storage aggregate show -node local

  Aggregate      Size     Available Used% State   #Vols Nodes    RAID Status
  -------------- -------- --------- ----- ------- ----- -------- -------------------------
  node_B_1_aggr1 227.1GB  11.00GB   95%   online      1 node_B_1 raid_dp, resyncing
  node_B_1_aggr2 430.3GB  28.02GB   93%   online      2 node_B_1 raid_dp, mirror degraded
  node_B_1_aggr3 812.8GB  85.37GB   89%   online      5 node_B_1 raid_dp, mirrored, normal
  3 entries were displayed.

  cluster_B::>
  ```
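  If you want to watch the resynchronization of a specific aggregate at the plex level, you can also query its plexes directly. This is an optional, illustrative check; the aggregate name is taken from the example above, and the columns shown depend on the ONTAP release:

  ```
  cluster_B::> storage aggregate plex show -aggregate node_B_1_aggr1
  ```

  Each plex should eventually come online with a normal, active status once resynchronization completes.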
- If one or more plexes remain offline, additional steps are required to rebuild the mirror. In the preceding table, the mirror for node_B_1_aggr2 must be rebuilt.
- View details of the aggregate to identify any failed plexes:

  ```
  storage aggregate show -r -aggregate node_B_1_aggr2
  ```

  In the following example, plex /node_B_1_aggr2/plex0 is in a failed state:

  ```
  cluster_B::> storage aggregate show -r -aggregate node_B_1_aggr2

  Owner Node: node_B_1
   Aggregate: node_B_1_aggr2 (online, raid_dp, mirror degraded) (block checksums)
    Plex: /node_B_1_aggr2/plex0 (offline, failed, inactive, pool0)
     RAID Group /node_B_1_aggr2/plex0/rg0 (partial)
                                                               Usable  Physical
       Position Disk                     Pool Type   RPM         Size      Size Status
       -------- ------------------------ ---- ----- ------ -------- -------- ----------

    Plex: /node_B_1_aggr2/plex1 (online, normal, active, pool1)
     RAID Group /node_B_1_aggr2/plex1/rg0 (normal, block checksums)
                                                               Usable  Physical
       Position Disk                     Pool Type   RPM         Size      Size Status
       -------- ------------------------ ---- ----- ------ -------- -------- ----------
       dparity  1.44.8                   1    SAS   15000    265.6GB  273.5GB (normal)
       parity   1.41.11                  1    SAS   15000    265.6GB  273.5GB (normal)
       data     1.42.8                   1    SAS   15000    265.6GB  273.5GB (normal)
       data     1.43.11                  1    SAS   15000    265.6GB  273.5GB (normal)
       data     1.44.9                   1    SAS   15000    265.6GB  273.5GB (normal)
       data     1.43.18                  1    SAS   15000    265.6GB  273.5GB (normal)

  6 entries were displayed.

  cluster_B::>
  ```
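  Before deleting anything, it can help to double-check which plex has failed, because the short plex name (plex0 in this example) is the value you pass to the delete command in the next step. This is an optional, illustrative check using the aggregate from the example above; output columns vary by ONTAP release:

  ```
  cluster_B::> storage aggregate plex show -aggregate node_B_1_aggr2
  ```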
- Delete the failed plex:

  ```
  storage aggregate plex delete -aggregate aggregate-name -plex plex
  ```
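  For example, using the aggregate and short plex name from the failed-plex output above (your names may differ):

  ```
  cluster_B::> storage aggregate plex delete -aggregate node_B_1_aggr2 -plex plex0
  ```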
- Reestablish the mirror:

  ```
  storage aggregate mirror -aggregate aggregate-name
  ```
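  Continuing the same example, with the aggregate name taken from the output above (ONTAP typically selects suitable spare disks for the new plex automatically):

  ```
  cluster_B::> storage aggregate mirror -aggregate node_B_1_aggr2
  ```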
- Monitor the resynchronization and mirroring status of the plex until all mirrors are reestablished and all aggregates show mirrored, normal status:

  ```
  storage aggregate show
  ```
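  To follow a single rebuilt aggregate rather than the full list, you can restrict the output to that aggregate (the name below is from the example above); its RAID Status should move from resyncing to mirrored, normal as the new plex catches up:

  ```
  cluster_B::> storage aggregate show -aggregate node_B_1_aggr2
  ```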