Set up an active-passive configuration on nodes using root-data-data partitioning

Contributors: netapp-thomi, netapp-forry, netapp-ahibbard

When an HA pair is configured to use root-data-data partitioning by the factory, ownership of the data partitions is split between both nodes in the pair for use in an active-active configuration. If you want to use the HA pair in an active-passive configuration, you must update partition ownership before creating your data local tier (aggregate).

What you'll need
  • You should have decided which node will be the active node and which node will be the passive node.

  • Storage failover must be configured on the HA pair.

About this task

This task is performed on both nodes of the HA pair: the node that will become the active node and the node that will become the passive node.

This procedure is designed for nodes for which no data local tier (aggregate) has been created from the partitioned disks.

Steps

All commands are entered at the cluster shell.

  1. View the current ownership of the data partitions:

    storage aggregate show-spare-disks -original-owner passive_node_name -fields local-usable-data1-size,local-usable-data2-size

    The output shows that half of the data partitions are owned by one node and half are owned by the other node. All of the data partitions should be spare.
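    For example, if the passive node is named cluster1-02 (a hypothetical name), the command would be:

    storage aggregate show-spare-disks -original-owner cluster1-02 -fields local-usable-data1-size,local-usable-data2-size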

  2. Enter the advanced privilege level:

    set advanced

  3. For each data1 partition owned by the node that will be the passive node, assign it to the active node:

    storage disk assign -force -data1 -owner active_node_name -disk disk_name

    You do not need to include the partition as part of the disk name.

  4. For each data2 partition owned by the node that will be the passive node, assign it to the active node:

    storage disk assign -force -data2 -owner active_node_name -disk disk_name

    You do not need to include the partition as part of the disk name.
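    For example, assuming the active node is cluster1-01 and the passive node's spare data partitions sit on disks 1.0.1, 1.0.3, and 1.0.5 (hypothetical names), steps 3 and 4 would expand to:

    storage disk assign -force -data1 -owner cluster1-01 -disk 1.0.1
    storage disk assign -force -data1 -owner cluster1-01 -disk 1.0.3
    storage disk assign -force -data1 -owner cluster1-01 -disk 1.0.5
    storage disk assign -force -data2 -owner cluster1-01 -disk 1.0.1
    storage disk assign -force -data2 -owner cluster1-01 -disk 1.0.3
    storage disk assign -force -data2 -owner cluster1-01 -disk 1.0.5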

  5. Confirm that all of the data partitions are assigned to the active node:

    storage aggregate show-spare-disks

    cluster1::*> storage aggregate show-spare-disks
    
    Original Owner: cluster1-01
     Pool0
      Partitioned Spares
                                                                Local    Local
                                                                 Data     Root Physical
     Disk                        Type     RPM Checksum         Usable   Usable     Size
     --------------------------- ----- ------ -------------- -------- -------- --------
     1.0.0                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.1                       BSAS    7200 block           753.8GB  73.89GB  828.0GB
     1.0.2                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.3                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.4                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.5                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.6                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.7                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.8                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.9                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.10                      BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.11                      BSAS    7200 block           753.8GB       0B  828.0GB
    
    Original Owner: cluster1-02
     Pool0
      Partitioned Spares
                                                                Local    Local
                                                                 Data     Root Physical
     Disk                        Type     RPM Checksum         Usable   Usable     Size
     --------------------------- ----- ------ -------------- -------- -------- --------
     1.0.8                       BSAS    7200 block                0B  73.89GB  828.0GB
    13 entries were displayed.

    Note that cluster1-02 still owns a spare root partition.
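    As a sanity check before creating the aggregate, you can estimate the raw spare capacity the active node now holds from the sample output above. A minimal sketch, assuming the hypothetical figures shown (12 disks, each reporting 753.8 GB in the Local Data Usable column, which combines the data1 and data2 partitions):

```python
# Hypothetical figures taken from the sample show-spare-disks output above.
disks = 12
usable_per_disk_gb = 753.8  # "Local Data Usable" column (data1 + data2 combined)

# Raw spare data capacity available to the active node, before RAID
# parity and spare-partition overhead reduce the usable size:
raw_spare_gb = disks * usable_per_disk_gb
print(f"{raw_spare_gb:.1f} GB")  # prints 9045.6 GB
```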

  6. Return to administrative privilege:

    set admin

  7. Create your data aggregate, leaving at least one data partition as spare:

    storage aggregate create new_aggr_name -diskcount number_of_partitions -node active_node_name

    The data aggregate is created and is owned by the active node.
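    For example, with the 12 disks shown in step 5 the active node owns 24 spare data partitions (one data1 and one data2 partition per disk). Leaving one partition as spare gives a disk count of 23 (data_aggr1 and cluster1-01 are hypothetical names):

    storage aggregate create data_aggr1 -diskcount 23 -node cluster1-01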

  8. Alternatively, you can use ONTAP's recommended aggregate layout, which includes best practices for RAID group layout and spare counts:

    storage aggregate auto-provision