
Deleting failed plexes owned by the surviving site (MetroCluster IP configurations)


After replacing hardware and assigning disks, you must delete failed remote plexes that are owned by the surviving site nodes but located at the disaster site.

About this task

These steps are performed on the surviving cluster.

Steps
  1. Identify the local aggregates:

    storage aggregate show -is-home true

    cluster_B::> storage aggregate show -is-home true
    
    cluster_B Aggregates:
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    node_B_1_aggr0 1.49TB  74.12GB 95% online       1 node_B_1         raid4,
                                                                       mirror
                                                                       degraded
    node_B_2_aggr0 1.49TB  74.12GB 95% online       1 node_B_2         raid4,
                                                                       mirror
                                                                       degraded
    node_B_1_aggr1 2.99TB  2.88TB   3% online      15 node_B_1         raid_dp,
                                                                       mirror
                                                                       degraded
    node_B_1_aggr2 2.99TB  2.91TB   3% online      14 node_B_1         raid_tec,
                                                                       mirror
                                                                       degraded
    node_B_2_aggr1 2.95TB  2.80TB   5% online      37 node_B_2         raid_dp,
                                                                       mirror
                                                                       degraded
    node_B_2_aggr2 2.99TB  2.87TB   4% online      35 node_B_2         raid_tec,
                                                                       mirror
                                                                       degraded
    6 entries were displayed.
    
    cluster_B::>
  2. Identify the failed remote plexes:

    storage aggregate plex show

    The following example calls out the plexes that are remote (not plex0) and have a status of "failed":

    cluster_B::> storage aggregate plex show -fields aggregate,status,is-online,Plex,pool
    aggregate    plex  status        is-online pool
    ------------ ----- ------------- --------- ----
    node_B_1_aggr0 plex0 normal,active true     0
    node_B_1_aggr0 plex4 failed,inactive false  - <<<<---Plex at remote site
    node_B_2_aggr0 plex0 normal,active true     0
    node_B_2_aggr0 plex4 failed,inactive false  - <<<<---Plex at remote site
    node_B_1_aggr1 plex0 normal,active true     0
    node_B_1_aggr1 plex4 failed,inactive false  - <<<<---Plex at remote site
    node_B_1_aggr2 plex0 normal,active true     0
    node_B_1_aggr2 plex1 failed,inactive false  - <<<<---Plex at remote site
    node_B_2_aggr1 plex0 normal,active true     0
    node_B_2_aggr1 plex4 failed,inactive false  - <<<<---Plex at remote site
    node_B_2_aggr2 plex0 normal,active true     0
    node_B_2_aggr2 plex1 failed,inactive false  - <<<<---Plex at remote site
    node_A_1_aggr1 plex0 failed,inactive false  -
    node_A_1_aggr1 plex4 normal,active true     1
    node_A_1_aggr2 plex0 failed,inactive false  -
    node_A_1_aggr2 plex1 normal,active true     1
    node_A_2_aggr1 plex0 failed,inactive false  -
    node_A_2_aggr1 plex4 normal,active true     1
    node_A_2_aggr2 plex0 failed,inactive false  -
    node_A_2_aggr2 plex1 normal,active true     1
    20 entries were displayed.
    
    cluster_B::>
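
    If the configuration has many aggregates, this check can optionally be scripted. The following is a minimal sketch, not part of the documented procedure: it assumes SSH access to the surviving cluster's management LIF using the Python paramiko library, and it parses the column layout shown in the example output above. The host name and credentials are placeholders.

    # Sketch: flag failed remote plexes (not plex0, status contains "failed")
    # by parsing the output of the documented CLI command. Host and credentials
    # are placeholders; the parsing assumes the column layout shown above.
    import paramiko

    HOST = "cluster_B_mgmt"      # placeholder management LIF
    USER = "admin"               # placeholder credentials
    PASSWORD = "password"

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    cmd = "storage aggregate plex show -fields aggregate,status,is-online,plex,pool"
    _, stdout, _ = client.exec_command(cmd)
    output = stdout.read().decode()
    client.close()

    failed_remote_plexes = []
    for line in output.splitlines():
        fields = line.split()
        # Data rows look like: "node_B_1_aggr0 plex4 failed,inactive false -"
        if len(fields) >= 3 and fields[1].startswith("plex"):
            aggregate, plex, status = fields[0], fields[1], fields[2]
            if plex != "plex0" and "failed" in status:
                failed_remote_plexes.append((aggregate, plex))

    for aggregate, plex in failed_remote_plexes:
        print(f"Failed remote plex: {aggregate}/{plex}")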
  3. Take each of the failed plexes offline, and then delete them:

    1. Take the failed plexes offline:

      storage aggregate plex offline -aggregate aggregate-name -plex plex-id

      The following example shows the plex "node_B_1_aggr0/plex4" being taken offline:

      cluster_B::> storage aggregate plex offline -aggregate node_B_1_aggr0 -plex plex4
      
      Plex offline successful on plex: node_B_1_aggr0/plex4
    2. Delete the failed plex:

      storage aggregate plex delete -aggregate aggregate-name -plex plex-id

      When prompted, enter y to confirm that you want to destroy the plex.

      The following example shows the plex node_B_1_aggr0/plex4 being deleted:

      cluster_B::> storage aggregate plex delete -aggregate node_B_1_aggr0 -plex plex4
      
      Warning: Aggregate "node_B_1_aggr0" is being used for the local management root
               volume or HA partner management root volume, or has been marked as
               the aggregate to be used for the management root volume after a
               reboot operation. Deleting plex "plex4" for this aggregate could lead
               to unavailability of the root volume after a disaster recovery
               procedure. Use the "storage aggregate show -fields
               has-mroot,has-partner-mroot,root" command to view such aggregates.
      
      Warning: Deleting plex "plex4" of mirrored aggregate "node_B_1_aggr0" on node
               "node_B_1" in a MetroCluster configuration will disable its
               synchronous disaster recovery protection. Are you sure you want to
               destroy this plex? {y|n}: y
      [Job 633] Job succeeded: DONE
      
      cluster_B::>

    You must repeat these steps for each of the failed plexes.
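
    If many plexes failed, the offline and delete commands can optionally be scripted. The sketch below is illustrative only: it reuses the SSH approach shown in step 2 and answers the deletion prompt with "y". Whether the confirmation prompt accepts piped input in your environment is an assumption, and the host, credentials, and plex list are placeholders; you can always answer the prompts manually instead.

    # Sketch: take each failed remote plex offline and delete it, confirming the
    # deletion prompt. Host, credentials, and the plex list are placeholders;
    # substitute the (aggregate, plex) pairs identified in step 2.
    import paramiko

    HOST = "cluster_B_mgmt"
    USER = "admin"
    PASSWORD = "password"

    failed_remote_plexes = [
        ("node_B_1_aggr0", "plex4"),   # example values from the output above
        ("node_B_2_aggr0", "plex4"),
    ]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    for aggregate, plex in failed_remote_plexes:
        offline = f"storage aggregate plex offline -aggregate {aggregate} -plex {plex}"
        _, stdout, _ = client.exec_command(offline)
        print(stdout.read().decode())

        delete = f"storage aggregate plex delete -aggregate {aggregate} -plex {plex}"
        stdin, stdout, _ = client.exec_command(delete)
        stdin.write("y\n")        # confirm the "destroy this plex?" prompt (assumption)
        stdin.flush()
        print(stdout.read().decode())

    client.close()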

  4. Confirm that the plexes have been removed:

    storage aggregate plex show -fields aggregate,status,is-online,plex,pool

    cluster_B::> storage aggregate plex show -fields aggregate,status,is-online,Plex,pool
    aggregate    plex  status        is-online pool
    ------------ ----- ------------- --------- ----
    node_B_1_aggr0 plex0 normal,active true     0
    node_B_2_aggr0 plex0 normal,active true     0
    node_B_1_aggr1 plex0 normal,active true     0
    node_B_1_aggr2 plex0 normal,active true     0
    node_B_2_aggr1 plex0 normal,active true     0
    node_B_2_aggr2 plex0 normal,active true     0
    node_A_1_aggr1 plex0 failed,inactive false  -
    node_A_1_aggr1 plex4 normal,active true     1
    node_A_1_aggr2 plex0 failed,inactive false  -
    node_A_1_aggr2 plex1 normal,active true     1
    node_A_2_aggr1 plex0 failed,inactive false  -
    node_A_2_aggr1 plex4 normal,active true     1
    node_A_2_aggr2 plex0 failed,inactive false  -
    node_A_2_aggr2 plex1 normal,active true     1
    14 entries were displayed.
    
    cluster_B::>
  5. Identify the switched-over aggregates:

    storage aggregate show -is-home false

    You can also use the storage aggregate plex show -fields aggregate,status,is-online,plex,pool command to identify the switched-over aggregates; their plex0 will have a status of "failed,inactive".

    The following command output shows four switched-over aggregates:

    • node_A_1_aggr1

    • node_A_1_aggr2

    • node_A_2_aggr1

    • node_A_2_aggr2

    cluster_B::> storage aggregate show -is-home false
    
    cluster_A Switched Over Aggregates:
    Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
    --------- -------- --------- ----- ------- ------ ---------------- ------------
    node_A_1_aggr1 2.12TB  1.88TB   11% online      91 node_B_1        raid_dp,
                                                                       mirror
                                                                       degraded
    node_A_1_aggr2 2.89TB  2.64TB    9% online      90 node_B_1        raid_tec,
                                                                       mirror
                                                                       degraded
    node_A_2_aggr1 2.12TB  1.86TB   12% online      91 node_B_2        raid_dp,
                                                                       mirror
                                                                       degraded
    node_A_2_aggr2 2.89TB  2.64TB    9% online      90 node_B_2        raid_tec,
                                                                       mirror
                                                                       degraded
    4 entries were displayed.
    
    cluster_B::>
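
    The aggregate names can optionally be collected with a short script for use in the later steps. The sketch below is an illustration under stated assumptions: it uses the -fields aggregate form of the command over SSH with paramiko, and the filter relies on the aggregate naming convention used in this example. The host and credentials are placeholders.

    # Sketch: collect the names of switched-over aggregates. Assumes the
    # "-fields aggregate" form of the command prints one aggregate name per data
    # row and that aggregate names follow this example's "node_*" convention.
    import paramiko

    HOST = "cluster_B_mgmt"
    USER = "admin"
    PASSWORD = "password"

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    _, stdout, _ = client.exec_command("storage aggregate show -is-home false -fields aggregate")
    output = stdout.read().decode()
    client.close()

    switched_over = []
    for line in output.splitlines():
        name = line.strip()
        # Keep data rows only; skip the header, separator, and summary lines.
        if name.startswith("node_") and "aggr" in name:
            switched_over.append(name)

    print(switched_over)   # expected: the four node_A_* aggregates listed above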
  6. Identify switched-over plexes:

    storage aggregate plex show -fields aggregate,status,is-online,plex,pool

    You want to identify the plexes with a status of "failed,inactive".

    The following command output shows the failed plex0 of each of the four switched-over aggregates:

    cluster_B::> storage aggregate plex show -fields aggregate,status,is-online,Plex,pool
    aggregate    plex  status        is-online pool
    ------------ ----- ------------- --------- ----
    node_B_1_aggr0 plex0 normal,active true     0
    node_B_2_aggr0 plex0 normal,active true     0
    node_B_1_aggr1 plex0 normal,active true     0
    node_B_1_aggr2 plex0 normal,active true     0
    node_B_2_aggr1 plex0 normal,active true     0
    node_B_2_aggr2 plex0 normal,active true     0
    node_A_1_aggr1 plex0 failed,inactive false  -  <<<<-- Switched over aggr/Plex0
    node_A_1_aggr1 plex4 normal,active true     1
    node_A_1_aggr2 plex0 failed,inactive false  -  <<<<-- Switched over aggr/Plex0
    node_A_1_aggr2 plex1 normal,active true     1
    node_A_2_aggr1 plex0 failed,inactive false  -  <<<<-- Switched over aggr/Plex0
    node_A_2_aggr1 plex4 normal,active true     1
    node_A_2_aggr2 plex0 failed,inactive false  -  <<<<-- Switched over aggr/Plex0
    node_A_2_aggr2 plex1 normal,active true     1
    14 entries were displayed.
    
    cluster_B::>
  7. Delete the failed plex:

    storage aggregate plex delete -aggregate node_A_1_aggr1 -plex plex0

    When prompted, enter y to confirm that you want to destroy the plex.

    The following example shows the plex node_A_1_aggr1/plex0 being deleted:

    cluster_B::> storage aggregate plex delete -aggregate node_A_1_aggr1 -plex plex0
    
    Warning: Aggregate "node_A_1_aggr1" hosts MetroCluster metadata volume
             "MDV_CRS_e8457659b8a711e78b3b00a0988fe74b_A". Deleting plex "plex0"
             for this aggregate can lead to the failure of configuration
             replication across the two DR sites. Use the "volume show -vserver
             <admin-vserver> -volume MDV_CRS*" command to verify the location of
             such volumes.
    
    Warning: Deleting plex "plex0" of mirrored aggregate "node_A_1_aggr1" on node
             "node_A_1" in a MetroCluster configuration will disable its
             synchronous disaster recovery protection. Are you sure you want to
             destroy this plex? {y|n}: y
    [Job 639] Job succeeded: DONE
    
    cluster_B::>

    You must repeat these steps for each of the remaining switched-over aggregates.
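
    A scripted variant of this loop is sketched below. It is illustrative only, not part of the documented procedure: it deletes plex0 of each switched-over aggregate collected in step 5 and answers the confirmation prompt with "y". The host, credentials, aggregate list, and prompt handling are assumptions.

    # Sketch: delete the failed plex0 of each switched-over aggregate identified
    # in steps 5 and 6. Host, credentials, and the aggregate list are placeholders.
    import paramiko

    HOST = "cluster_B_mgmt"
    USER = "admin"
    PASSWORD = "password"

    switched_over = [
        "node_A_1_aggr1",   # example values from the output above
        "node_A_1_aggr2",
        "node_A_2_aggr1",
        "node_A_2_aggr2",
    ]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    for aggregate in switched_over:
        cmd = f"storage aggregate plex delete -aggregate {aggregate} -plex plex0"
        stdin, stdout, _ = client.exec_command(cmd)
        stdin.write("y\n")        # confirm the "destroy this plex?" prompt (assumption)
        stdin.flush()
        print(stdout.read().decode())

    client.close()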

  8. Verify that there are no failed plexes remaining on the surviving site.

    The following output shows that all plexes are normal, active, and online.

    cluster_B::> storage aggregate plex show -fields aggregate,status,is-online,Plex,pool
    aggregate    plex  status        is-online pool
    ------------ ----- ------------- --------- ----
    node_B_1_aggr0 plex0 normal,active true     0
    node_B_2_aggr0 plex0 normal,active true     0
    node_B_1_aggr1 plex0 normal,active true     0
    node_B_1_aggr2 plex0 normal,active true     0
    node_B_2_aggr1 plex0 normal,active true     0
    node_B_2_aggr2 plex0 normal,active true     0
    node_A_1_aggr1 plex4 normal,active true     1
    node_A_1_aggr2 plex1 normal,active true     1
    node_A_2_aggr1 plex4 normal,active true     1
    node_A_2_aggr2 plex1 normal,active true     1
    10 entries were displayed.
    
    cluster_B::>
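
    This final check can also be scripted. The following minimal sketch, with placeholder host and credentials, flags any remaining output line that reports a failed status; it is an optional illustration rather than part of the procedure.

    # Sketch: verify that no failed plexes remain on the surviving site.
    # Host and credentials are placeholders.
    import paramiko

    HOST = "cluster_B_mgmt"
    USER = "admin"
    PASSWORD = "password"

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    _, stdout, _ = client.exec_command(
        "storage aggregate plex show -fields aggregate,status,is-online,plex,pool")
    output = stdout.read().decode()
    client.close()

    failed = [line for line in output.splitlines() if "failed" in line]
    if failed:
        print("Failed plexes still present:")
        for line in failed:
            print(" ", line)
    else:
        print("No failed plexes remain.")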