
Remove the original nodes from the cluster

Contributors: netapp-pcarriga

After moving the volumes from the original nodes to the new nodes and moving or deleting the data LIFs, you can remove the original nodes from the cluster. When you remove a node, the node's configuration is erased and all of its disks are initialized.

Steps
  1. Disable high-availability (HA) configuration on the original nodes:

    storage failover modify -node <original_node_name> -enabled false

  2. Access the advanced privilege level:

    set -privilege advanced

  3. Identify the node that has epsilon:

    cluster show

    In the following example, "node0" currently holds epsilon:

    cluster::*>
    Node                 Health  Eligibility  Epsilon
    -------------------- ------- ------------ ------------
    node0                true    true         true
    node1                true    true         false
    node2                true    true         false
    node3                true    true         false
  4. If one of the original nodes holds epsilon, move epsilon to a node that you are not removing from the cluster:

    1. Remove epsilon from the original node:

      cluster modify -node <original_node_name> -epsilon false

    2. Assign epsilon to a different node:

      cluster modify -node <new_node_name> -epsilon true
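
    For example, with the hypothetical node names "node0" (an original node that holds epsilon) and "node4" (a new node that is remaining in the cluster), the epsilon move might look like the following; you can run cluster show afterward to confirm that the Epsilon column now shows "true" for the new node:

      cluster::*> cluster modify -node node0 -epsilon false
      cluster::*> cluster modify -node node4 -epsilon true
      cluster::*> cluster show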

  5. Identify the current master node:

    cluster ring show

    The master node is the node that holds processes such as mgmt, vldb, vifmgr, bcomd, and crs.
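
    The following abbreviated output is illustrative only; the exact columns can vary by ONTAP release. In this example, "node0" is the master for each replication ring, as shown in the Master column:

    cluster::*> cluster ring show
    Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
    --------- -------- -------- -------- -------- --------- ---------
    node0     mgmt     1        1        399      node0     master
    node0     vldb     1        1        356      node0     master
    node0     vifmgr   1        1        422      node0     master
    node0     bcomd    1        1        397      node0     master
    node0     crs      1        1        395      node0     master
    node1     mgmt     1        1        399      node0     secondary
    ...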

  6. If a node that you are removing is the current master node, enable one of the nodes that are remaining in the cluster to be elected as the new master node:

    1. Make the current master node ineligible to participate in the cluster:

      cluster modify -node <original_node_name> -eligibility false

      The node is marked unhealthy until eligibility is restored. When the master node becomes ineligible, another node in the cluster is elected by the cluster quorum as the new master.

      Note

      If you are performing this step on the first node in an HA pair, you should mark only that node as ineligible. Do not modify the status of the HA partner.

      If the partner node is elected as the new master, verify whether it holds epsilon. If it does, move epsilon to a different node that is remaining in the cluster before you make the partner node ineligible. You do this when you repeat these steps to remove the partner node.

    2. Make the previous master node eligible to participate in the cluster again:

      cluster modify -node <node_name> -eligibility true
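
      For example, if the hypothetical original node "node0" is the current master, the handoff sequence looks like the following; after the eligibility change, cluster ring show should report one of the remaining nodes as master:

        cluster::*> cluster modify -node node0 -eligibility false
        cluster::*> cluster ring show
        cluster::*> cluster modify -node node0 -eligibility true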

  7. Log into the remote node management LIF or the cluster-management LIF on a node that is remaining in the cluster and remove each original node from the cluster (advanced privilege level):

    With node name:

    cluster remove-node -node <original_node_name>

    With node IP:

    cluster remove-node -cluster-ip <original_node_ip>

    If you have a mixed version cluster and you are removing the last lower version node, use the -skip-last-low-version-node-check parameter with these commands.
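
    For example, to remove the hypothetical node "node0" when it is the last lower-version node in a mixed version cluster:

      cluster::*> cluster remove-node -node node0 -skip-last-low-version-node-check true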

    The system informs you of the following:

    • You must also remove the node's failover partner from the cluster.

    • After the node is removed and before it can rejoin a cluster, you must use boot menu option (4) Clean configuration and initialize all disks or option (9) Configure Advanced Drive Partitioning to erase the node's configuration and initialize all disks.

      A failure message is generated if you have conditions that you must address before removing the node. For example, the message might indicate that the node has shared resources that you must remove or that the node is in a cluster HA configuration or storage failover configuration that you must disable.

      If the node is the quorum master, the cluster briefly loses and then returns to quorum. This quorum loss is temporary and does not affect any data operations.

  8. If a failure message indicates error conditions, address those conditions and rerun the cluster remove-node or cluster unjoin command.

    After a node is successfully removed from the cluster, it automatically reboots.

  9. If you are repurposing the node, erase the node configuration and initialize all disks:

    1. During the boot process, press Ctrl-C to display the boot menu when prompted to do so.

    2. Select the boot menu option (4) Clean configuration and initialize all disks.
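
      The boot menu is similar to the following; the exact text and option list can vary by ONTAP release and platform:

        Please choose one of the following:

        (1) Normal Boot.
        (2) Boot without /etc/rc.
        (3) Change password.
        (4) Clean configuration and initialize all disks.
        (5) Maintenance mode boot.
        (6) Update flash from backup config.
        (7) Install new software first.
        (8) Reboot node.
        (9) Configure Advanced Drive Partitioning.
        Selection (1-9)?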

  10. Return to admin privilege level on one of the new nodes in the cluster:

    set -privilege admin

  11. Repeat Steps 1 to 10 to remove the failover partner from the cluster.

  12. If the cluster has only two nodes remaining, configure high availability for the two-node cluster:

    cluster ha modify -configured true
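
    You can then confirm the configuration with the cluster ha show command; the output shown here is illustrative:

    cluster::> cluster ha show
    High Availability Configured: true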