
Set up an active-passive configuration on nodes using root-data partitioning

Contributors: netapp-thomi, netapp-forry

When an HA pair is configured to use root-data partitioning by the factory, ownership of the data partitions is split between both nodes in the pair for use in an active-active configuration. If you want to use the HA pair in an active-passive configuration, you must update partition ownership before creating your data local tier (aggregate).

What you'll need
  • You should have decided which node will be the active node and which node will be the passive node.

  • Storage failover must be configured on the HA pair.

About this task

This task is performed on two nodes: Node A and Node B.

This procedure is designed for nodes for which no data local tier (aggregate) has been created from the partitioned disks.

Steps

All commands are entered at the cluster shell.

  1. View the current ownership of the data partitions:

    storage aggregate show-spare-disks

    The output shows that half of the data partitions are owned by one node and half are owned by the other node. All of the data partitions should be spare.

    cluster1::> storage aggregate show-spare-disks
    
    Original Owner: cluster1-01
     Pool0
      Partitioned Spares
                                                                Local    Local
                                                                 Data     Root Physical
     Disk                        Type     RPM Checksum         Usable   Usable     Size
     --------------------------- ----- ------ -------------- -------- -------- --------
     1.0.0                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.1                       BSAS    7200 block           753.8GB  73.89GB  828.0GB
     1.0.5                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.6                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.10                      BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.11                      BSAS    7200 block           753.8GB       0B  828.0GB
    
    Original Owner: cluster1-02
     Pool0
      Partitioned Spares
                                                                Local    Local
                                                                 Data     Root Physical
     Disk                        Type     RPM Checksum         Usable   Usable     Size
     --------------------------- ----- ------ -------------- -------- -------- --------
     1.0.2                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.3                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.4                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.7                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.8                       BSAS    7200 block           753.8GB  73.89GB  828.0GB
     1.0.9                       BSAS    7200 block           753.8GB       0B  828.0GB
    12 entries were displayed.
  2. Enter the advanced privilege level:

    set advanced

  3. For each data partition owned by the node that will be the passive node, assign it to the active node:

    storage disk assign -force -data true -owner active_node_name -disk disk_name

    You do not need to include the partition as part of the disk name.

    For each data partition you need to reassign, enter a command similar to the following:

    storage disk assign -force -data true -owner cluster1-01 -disk 1.0.3
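    If many partitions must be reassigned, you can generate the per-disk commands with a short script and paste them into the cluster shell. This is a minimal sketch using the disk names shown for cluster1-02 in the example output above; it only prints the commands locally and does not contact the cluster:

    ```shell
    # Data partitions owned by the passive node (cluster1-02) in the
    # example output above; substitute your own disk names.
    passive_disks="1.0.2 1.0.3 1.0.4 1.0.7 1.0.8 1.0.9"

    # Print one reassignment command per partition, ready to paste into
    # the cluster shell (this script only prints; it runs nothing remotely).
    for disk in $passive_disks; do
        printf 'storage disk assign -force -data true -owner cluster1-01 -disk %s\n' "$disk"
    done
    ```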

  4. Confirm that all of the data partitions are assigned to the active node:

    cluster1::*> storage aggregate show-spare-disks
    
    Original Owner: cluster1-01
     Pool0
      Partitioned Spares
                                                                Local    Local
                                                                 Data     Root Physical
     Disk                        Type     RPM Checksum         Usable   Usable     Size
     --------------------------- ----- ------ -------------- -------- -------- --------
     1.0.0                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.1                       BSAS    7200 block           753.8GB  73.89GB  828.0GB
     1.0.2                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.3                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.4                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.5                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.6                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.7                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.8                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.9                       BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.10                      BSAS    7200 block           753.8GB       0B  828.0GB
     1.0.11                      BSAS    7200 block           753.8GB       0B  828.0GB
    
    Original Owner: cluster1-02
     Pool0
      Partitioned Spares
                                                                Local    Local
                                                                 Data     Root Physical
     Disk                        Type     RPM Checksum         Usable   Usable     Size
     --------------------------- ----- ------ -------------- -------- -------- --------
     1.0.8                       BSAS    7200 block                0B  73.89GB  828.0GB
    13 entries were displayed.

    Note that cluster1-02 still owns a spare root partition.

  5. Return to the administrative privilege level:

    set admin

  6. Create your data aggregate, leaving at least one data partition as spare:

    storage aggregate create new_aggr_name -diskcount number_of_partitions -node active_node_name

    The data aggregate is created and is owned by the active node.
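    Using the example values from this procedure (12 spare data partitions on the active node, as shown in the step 4 output), the create command could look like the following sketch. The aggregate name `data_aggr1` is a hypothetical choice for illustration; the script only prints the command:

    ```shell
    # Assumed example: 12 spare data partitions on the active node
    # (see the step 4 output); aggregate name data_aggr1 is hypothetical.
    total_partitions=12
    diskcount=$((total_partitions - 1))   # leave one data partition as spare

    printf 'storage aggregate create data_aggr1 -diskcount %s -node cluster1-01\n' "$diskcount"
    ```

    Leaving one partition out of the count keeps a spare available for automatic replacement if a partitioned disk fails.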