
Remove the original nodes from the cluster


After moving the volumes from the original to the new nodes and moving or deleting data LIFs, you remove the original nodes from the cluster. When you remove a node, the node's configuration is erased and all disks are initialized.

Steps
  1. Disable high-availability (HA) configuration on the original nodes:

    storage failover modify -node <original_node_name> -enabled false
  2. Change the privilege level to advanced:

    set -privilege advanced
  3. Identify the node in the cluster that has epsilon:

    cluster show

    In the following example, "node0" currently holds epsilon:

    cluster::*>
    Node                 Health  Eligibility  Epsilon
    -------------------- ------- ------------ ------------
    node0                true    true         true
    node1                true    true         false
    node2                true    true         false
    node3                true    true         false
  4. If the node that you are removing holds epsilon:

    1. Remove epsilon from the node that you are removing:

      cluster modify -node <name_of_node_to_be_removed> -epsilon false
    2. Move epsilon to a node that you are not removing:

      cluster modify -node <node_name> -epsilon true
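
      For example, if a hypothetical "node3" is being removed and currently holds epsilon, you might move epsilon to "node0" (node names are illustrative):

      cluster::*> cluster modify -node node3 -epsilon false
      cluster::*> cluster modify -node node0 -epsilon true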
  5. Identify the current master node:

    cluster ring show

    The master node is the node that holds processes such as mgmt, vldb, vifmgr, bcomd, and crs.
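    The following session is an illustrative, abbreviated example; rows repeat for each replication unit on each node, and the exact columns can vary by ONTAP release. In this hypothetical output, "node0" is the master:

    cluster::*> cluster ring show
    Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
    --------- -------- -------- -------- -------- --------- ---------
    node0     mgmt     2        2        4        node0     master
    node0     vldb     2        2        1        node0     master
    node1     mgmt     2        2        4        node0     secondary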

  6. If the node you are removing is the current master node, enable another node in the cluster to be elected as the master node:

    1. Make the current master node ineligible to participate in the cluster:

      cluster modify -node <node_name> -eligibility false

      The node is marked unhealthy until eligibility is restored. When the master node becomes ineligible, one of the remaining nodes is elected by the cluster quorum as the new master.

      Note

      If you are performing this step on the first node in an HA pair, you should mark only that node as ineligible. Do not modify the status of the HA partner.

      If the partner node is selected as the new master, verify whether it holds epsilon before making it ineligible. If the partner holds epsilon, move epsilon to a node that will remain in the cluster before marking the partner ineligible. You do this when you repeat these steps to remove the partner node.

    2. Make the previous master node eligible to participate in the cluster again:

      cluster modify -node <node_name> -eligibility true
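
      For example, if a hypothetical "node0" is the current master, the sequence might look like this; you can run cluster ring show between the two commands to confirm that a new master has been elected before restoring eligibility:

      cluster::*> cluster modify -node node0 -eligibility false
      cluster::*> cluster ring show
      cluster::*> cluster modify -node node0 -eligibility true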
  7. Log in to the remote node management LIF or the cluster-management LIF on a node that you are not removing from the cluster.

  8. Remove the nodes from the cluster:

    For this ONTAP version…     Use this command…
    ONTAP 9.3                   cluster unjoin
    ONTAP 9.4 and later         With the node name:
                                cluster remove-node -node <node_name>
                                With the node IP address:
                                cluster remove-node -cluster-ip <node_ip>

    If you have a mixed-version cluster and are removing the last lower-version node, use the -skip-last-low-version-node-check parameter with these commands.

    The system informs you of the following:

    • You must also remove the node's failover partner from the cluster.

    • After the node is removed and before it can rejoin a cluster, you must use boot menu option (4) Clean configuration and initialize all disks or option (9) Configure Advanced Drive Partitioning to erase the node's configuration and initialize all disks.

      A failure message is generated if you have conditions that you must address before removing the node. For example, the message might indicate that the node has shared resources that you must remove or that the node is in a cluster HA configuration or storage failover configuration that you must disable.

      If the node is the quorum master, the cluster briefly loses quorum and then returns to quorum. This quorum loss is temporary and does not affect data operations.
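
    For example, on ONTAP 9.4 or later, removing a hypothetical "node3" might look like this; the command prompts for confirmation, and the exact warning text varies by release:

    cluster::*> cluster remove-node -node node3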

  9. If a failure message indicates error conditions, address those conditions and rerun the cluster remove-node or cluster unjoin command.

    The node automatically reboots after it is successfully removed from the cluster.

  10. If you are repurposing the node, erase the node configuration and initialize all disks:

    1. During the boot process, press Ctrl-C to display the boot menu when prompted to do so.

    2. Select the boot menu option (4) Clean configuration and initialize all disks.

  11. Return to the admin privilege level:

    set -privilege admin
  12. Repeat the preceding steps to remove the failover partner from the cluster.

  13. If the cluster has only two nodes remaining, configure high availability for the two-node cluster:

    cluster ha modify -configured true
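
    After re-enabling HA, you can confirm the configuration; the output below is abbreviated and illustrative:

    cluster::> cluster ha show
    High Availability Configured: true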