
Replace H410S nodes


You should replace a storage node in the event of dual inline memory module (DIMM) failure, CPU failure, Radian card problems, other motherboard issues, or if it does not power on. Alarms in the VMware vSphere Web Client alert you when a storage node is faulty. You should use the NetApp Element software UI to get the serial number (service tag) of the failed node. You need this information to locate the failed node in the chassis.

What you'll need
  • You have determined that the storage node needs to be replaced.

  • You have a replacement storage node.

  • You have an electrostatic discharge (ESD) wristband, or you have taken other antistatic precautions.

  • You have labeled each cable that is connected to the storage node.

About this task

The replacement procedure applies to H410S storage nodes in a two rack unit (2U), four-node NetApp HCI chassis.

Here is the rear view of a four-node chassis with H410S nodes:

[Figure: rear view of a four-node chassis with H410S nodes]

Here is the front view of a four-node chassis with H410S nodes, showing the bays that correspond to each node:

[Figure: front view of a four-node chassis with H410S nodes, showing the bays associated with each node]
Steps overview

Here is a high-level overview of the steps in this procedure:
  • Prepare to replace the storage node

  • Replace the storage node in the chassis

  • Add the storage node to the cluster

Prepare to replace the storage node

You should remove the faulty storage node correctly from the cluster before you install the replacement node. You can do this without causing any service interruption. You should obtain the serial number of the failed storage node from the Element UI and match it with the serial number on the sticker at the back of the node.

Note In the case of component failures where the node is still online and functioning, for example, a dual inline memory module (DIMM) failure, you should remove the drives from the cluster before you remove the failed node.
Steps
  1. If you have a DIMM failure, remove the drives associated with the node that you are going to replace from the cluster before you remove the node. You can remove the drives by using either the NetApp Element software UI or the NetApp Element Management extension point in the NetApp Element Plug-in for vCenter Server.

  2. Remove the node by using either the NetApp Element software UI or the NetApp Element Management extension point in the NetApp Element Plug-in for vCenter Server (if you prefer to script this step, see the API sketch after the following options):

    Option: Using the Element UI

    1. From the Element UI, select Cluster > Nodes.

    2. Note the serial number (service tag) of the faulty node.
      You need this information to match it with the serial number on the sticker at the back of the node.

    3. After you note the serial number, remove the node from the cluster as follows:

      a. Select Actions for the node that you want to remove.

      b. Select Remove.

    You can now physically remove the node from the chassis.

    Option: Using the Element Plug-in for vCenter Server UI

    1. From the NetApp Element Management extension point of the vSphere Web Client, select NetApp Element Management > Cluster.

    2. Select the Nodes sub-tab.

    3. From the Active view, select the check box for each node that you want to remove, and then select Actions > Remove.

    4. Confirm the action.
      Any nodes removed from a cluster appear in the list of Pending nodes.
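
If you prefer to script this removal rather than use either UI, the Element storage cluster also exposes a JSON-RPC API. The following is a minimal sketch, not an official procedure: it assumes the documented Element API methods ListActiveNodes, ListDrives, RemoveDrives, and RemoveNodes, and it uses placeholder values for the cluster MVIP, credentials, API version, and node name. Verify the method and field names against the Element API reference for your release.

    # Minimal sketch: remove a failed node's drives and then the node itself through the
    # Element JSON-RPC API. MVIP, credentials, API version, and node name are placeholders.
    import requests

    ENDPOINT = "https://<cluster_MVIP>/json-rpc/10.0"   # API version shown is only an example
    AUTH = ("admin", "<password>")                       # cluster admin credentials
    FAILED_NODE_NAME = "<failed_node_hostname>"

    def rpc(method, params=None):
        """Send one JSON-RPC request to the Element API and return its result payload."""
        resp = requests.post(ENDPOINT, json={"method": method, "params": params or {}, "id": 1},
                             auth=AUTH, verify=False)    # verify=False only for self-signed certs
        resp.raise_for_status()
        return resp.json()["result"]

    # Look up the failed node's ID by its hostname.
    nodes = rpc("ListActiveNodes")["nodes"]
    node_id = next(n["nodeID"] for n in nodes if n["name"] == FAILED_NODE_NAME)

    # If the node is still online (for example, after a DIMM failure), remove its drives first.
    drives = rpc("ListDrives")["drives"]
    drive_ids = [d["driveID"] for d in drives if d["nodeID"] == node_id and d["status"] == "active"]
    if drive_ids:
        rpc("RemoveDrives", {"drives": drive_ids})
        # RemoveDrives starts a data sync; wait for it to complete before removing the node.

    # Remove the node from the cluster; you can then physically remove it from the chassis.
    rpc("RemoveNodes", {"nodes": [node_id]})

In practice, the drive removal triggers a data sync that can take some time; confirm that the drives no longer appear as active before you remove the node.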

Replace the storage node in the chassis

You should install the replacement node in the same slot in the chassis from which you removed the faulty node. Match the serial number that you noted from the UI with the serial number on the sticker at the back of the node.

Note Ensure that you have antistatic protection before you perform the steps here.
Steps
  1. Unpack the new storage node, and set it on a level surface near the chassis.
    Keep the packaging material for when you return the failed node to NetApp.

  2. Label each cable that is inserted at the back of the storage node that you want to remove.
    After you install the new storage node, you must insert the cables into the original ports.

  3. Disconnect all the cables from the storage node.

  4. Pull down the cam handle on the right side of the node, and pull the node out using both the cam handles.
    The cam handle that you should pull down has an arrow on it to indicate the direction in which it moves. The other cam handle does not move and is there to help you pull the node out.

    Note Support the node with both your hands when you pull it out of the chassis.
    [Figure: storage node with the cam handles called out]
  5. Place the node on a level surface.

  6. Install the replacement node.

  7. Push the node in until you hear a click.

    Caution Ensure that you do not use excessive force when sliding the node into the chassis.
  8. Reconnect the cables to the ports from which you originally disconnected them.
    The labels you had attached to the cables when you disconnected them help guide you.

    Caution If the airflow vents at the rear of the chassis are blocked by cables or labels, it can lead to premature component failures due to overheating.
    Do not force the cables into the ports; you might damage the cables, ports, or both.
    Tip Ensure that the replacement node is cabled in the same way as the other nodes in the chassis.
  9. Press the button at the front of the node to power it on.

Add the storage node to the cluster

You should add the storage node back to the cluster. The steps vary depending on the version of NetApp HCI you are running.

What you'll need
  • You have free and unused IPv4 addresses on the same network segment as existing nodes (each new node must be installed on the same network as existing nodes of its type). A sketch for pre-checking candidate addresses follows this list.

  • You have one of the following types of SolidFire storage cluster accounts:

    • The native Administrator account that was created during initial deployment

    • A custom user account with Cluster Admin, Drives, Volumes, and Nodes permissions

  • You have cabled and powered on the new node.

  • You have the management IPv4 address of an already installed storage node. You can find the IP address in the NetApp Element Management > Cluster > Nodes tab of the NetApp Element Plug-in for vCenter Server.

  • You have ensured that the new node uses the same network topology and cabling as the existing storage clusters.

    Tip Ensure that storage capacity is split evenly across all chassis for the best reliability.
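
If you want to sanity-check candidate IP addresses before you start, the following is a minimal sketch: the subnets and addresses shown are placeholders for your management and storage (iSCSI) networks, and the ping options assume a Linux workstation.

    # Minimal sketch: confirm that candidate IPv4 addresses for the new node fall inside
    # the expected subnets and do not already answer a ping. All subnet and address values
    # are placeholders -- substitute the networks used by your installation.
    import ipaddress
    import subprocess

    MGMT_SUBNET = ipaddress.ip_network("192.168.100.0/24")   # example management network
    ISCSI_SUBNET = ipaddress.ip_network("10.10.10.0/24")     # example storage (iSCSI) network

    candidates = {
        "management": ipaddress.ip_address("192.168.100.212"),
        "storage (iSCSI)": ipaddress.ip_address("10.10.10.212"),
    }

    def answers_ping(ip):
        """Return True if the address answers one ping (a crude 'already in use' check)."""
        cmd = ["ping", "-c", "1", "-W", "1", str(ip)]        # Linux ping options
        return subprocess.run(cmd, stdout=subprocess.DEVNULL).returncode == 0

    for role, ip in candidates.items():
        subnet = MGMT_SUBNET if role == "management" else ISCSI_SUBNET
        print(f"{role}: {ip} in {subnet}: {ip in subnet}; already in use: {answers_ping(ip)}")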

NetApp HCI 1.6P1 and later

You can use NetApp Hybrid Cloud Control only if your NetApp HCI installation runs on version 1.6P1 or later.

Steps
  1. Open the IP address of the management node in a web browser. For example:

    https://<ManagementNodeIP>/manager/login
  2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials.

  3. In the Expand Installation pane, select Expand.

  4. Log in to the NetApp Deployment Engine by providing the local NetApp HCI storage cluster administrator credentials.

    Note You cannot log in using Lightweight Directory Access Protocol credentials.
  5. On the Welcome page, select No.

  6. Select Continue.

  7. On the Available Inventory page, select the storage node you want to add to the existing NetApp HCI installation.

  8. Select Continue.

  9. On the Network Settings page, some of the network information has been detected from the initial deployment. Each new storage node is listed by serial number, and you should assign new network information to it. Perform the following steps:

    1. If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.

    2. In the Management IP Address field, enter a management IP address for the new storage node that is within the management network subnet.

    3. In the Storage (iSCSI) IP Address field, enter an iSCSI IP address for the new storage node that is within the iSCSI network subnet.

    4. Select Continue.

      Note NetApp HCI might take some time to validate the IP addresses you enter. The Continue button becomes available when IP address validation is complete.
  10. On the Review page in the Network Settings section, new nodes are shown in bold text. If you need to make changes to information in any section, perform the following steps:

    1. Select Edit for that section.

    2. When finished making changes, select Continue on any subsequent pages to return to the Review page.

  11. Optional: If you do not want to send cluster statistics and support information to NetApp-hosted Active IQ servers, clear the final checkbox.
    This disables real-time health and diagnostic monitoring for NetApp HCI. Disabling this feature removes the ability for NetApp to proactively support and monitor NetApp HCI to detect and resolve problems before production is affected.

  12. Select Add Nodes.
    You can monitor the progress while NetApp HCI adds and configures the resources.

  13. Optional: Verify that any new storage nodes are visible in the VMware vSphere Web Client.
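
As an alternative to checking the vSphere Web Client in the last step, you can confirm directly against the storage cluster that the new node reports as active. This is a minimal sketch that assumes the documented Element API method ListActiveNodes; the endpoint, credentials, API version, and hostname are placeholders.

    # Minimal sketch: confirm the new storage node reports as active through the Element API.
    # Endpoint, credentials, API version, and hostname are placeholders.
    import requests

    ENDPOINT = "https://<cluster_MVIP>/json-rpc/10.0"
    AUTH = ("admin", "<password>")
    NEW_NODE_NAME = "<new_node_hostname>"

    resp = requests.post(ENDPOINT, json={"method": "ListActiveNodes", "params": {}, "id": 1},
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    active_names = [n["name"] for n in resp.json()["result"]["nodes"]]
    print(f"{NEW_NODE_NAME} is active: {NEW_NODE_NAME in active_names}")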

NetApp HCI 1.4P2, 1.4, and 1.3

If your NetApp HCI installation runs version 1.4P2, 1.4, or 1.3, you can use the NetApp Deployment Engine to add the node to the cluster.

Steps
  1. Browse to the management IP address of one of the existing storage nodes:
    http://<storage_node_management_IP_address>/

  2. Log in to the NetApp Deployment Engine by providing the local NetApp HCI storage cluster administrator credentials.

    Note You cannot log in using Lightweight Directory Access Protocol credentials.
  3. Select Expand Your Installation.

  4. On the Welcome page, select No.

  5. Select Continue.

  6. On the Available Inventory page, select the storage node to add to the NetApp HCI installation.

  7. Select Continue.

  8. On the Network Settings page, perform the following steps:

    1. Verify the information detected from the initial deployment.
      Each new storage node is listed by serial number, and you should assign new network information to it. For each new storage node, perform the following steps:

      1. If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.

      2. In the Management IP Address field, enter a management IP address for the new storage node that is within the management network subnet.

      3. In the Storage (iSCSI) IP Address field, enter an iSCSI IP address for the new storage node that is within the iSCSI network subnet.

    2. Select Continue.

    3. On the Review page in the Network Settings section, the new node is shown in bold text. If you want to make changes to information in any section, perform the following steps:

      1. Select Edit for that section.

      2. When finished making changes, select Continue on any subsequent pages to return to the Review page.

  9. Optional: If you do not want to send cluster statistics and support information to NetApp-hosted Active IQ servers, clear the final checkbox.
    This disables real-time health and diagnostic monitoring for NetApp HCI. Disabling this feature removes the ability for NetApp to proactively support and monitor NetApp HCI to detect and resolve problems before production is affected.

  10. Select Add Nodes.
    You can monitor the progress while NetApp HCI adds and configures the resources.

  11. Optional: Verify that any new storage nodes are visible in the VMware vSphere Web Client.

NetApp HCI 1.2, 1.1, and 1.0

When you install the node, the terminal user interface (TUI) displays the fields necessary to configure the node. You must enter the necessary configuration information for the node before you proceed with adding the node to the cluster.

Note You must use the TUI to configure static network information as well as cluster information. If you were using out-of-band management, you must configure it on the new node.

You need a console or a keyboard, video, and mouse (KVM) connection to perform these steps, as well as the network and cluster information necessary to configure the node.

Steps
  1. Attach a keyboard and monitor to the node.
    The TUI appears on the tty1 terminal with the Network Settings tab.

  2. Use the on-screen navigation to configure the Bond1G and Bond10G network settings for the node. You should enter the following information for Bond1G:

    • IP address. You can reuse the Management IP address from the failed node.

    • Subnet mask. If you do not know it, your network administrator can provide this information.

    • Gateway address. If you do not know it, your network administrator can provide this information.

    You should enter the following information for Bond10G:

    • IP address. You can reuse the Storage IP address from the failed node.

    • Subnet mask. If you do not know it, your network administrator can provide this information.

  3. Enter s to save the settings, and then enter y to accept the changes.

  4. Enter c to navigate to the Cluster tab.

  5. Use the on-screen navigation to set the hostname and cluster for the node.

    Note If you want to change the default hostname to the name of the node you removed, you should do it now.
    Tip It is best to use the same name for the new node as the node you replaced to avoid confusion in the future.
  6. Enter s to save the settings.
    The cluster membership changes from Available to Pending.

  7. In NetApp Element Plug-in for vCenter Server, select NetApp Element Management > Cluster > Nodes.

  8. Select Pending from the drop-down list to view the list of available nodes.

  9. Select the node you want to add, and select Add.

    Note It might take up to 2 minutes for the node to be added to the cluster and displayed under Nodes > Active.
    Important Adding the drives all at once can lead to disruptions. For best practices related to adding and removing drives, see this KB article (login required).
  10. Select Drives.

  11. Select Available from the drop-down list to view the available drives.

  12. Select the drives you want to add, and select Add.
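
If you prefer to add the pending node and its drives through the Element API instead of the vCenter plug-in, the following minimal sketch shows one way to do it. It assumes the documented Element API methods ListPendingNodes, AddNodes, ListDrives, and AddDrives, uses placeholder values for the endpoint, credentials, hostname, wait time, and batch size, and adds the drives in small batches in line with the caution above about adding all drives at once. Verify the method and field names against the Element API reference for your release.

    # Minimal sketch: add a pending storage node to the cluster, then add its drives in
    # small batches through the Element JSON-RPC API. Endpoint, credentials, hostname,
    # wait time, and batch size are placeholders -- adjust them for your environment.
    import time
    import requests

    ENDPOINT = "https://<cluster_MVIP>/json-rpc/10.0"
    AUTH = ("admin", "<password>")
    NEW_NODE_NAME = "<new_node_hostname>"
    BATCH_SIZE = 2   # add a few drives at a time rather than all at once

    def rpc(method, params=None):
        """Send one JSON-RPC request to the Element API and return its result payload."""
        resp = requests.post(ENDPOINT, json={"method": method, "params": params or {}, "id": 1},
                             auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()["result"]

    # Find the pending node by hostname and add it to the cluster.
    pending = rpc("ListPendingNodes")["pendingNodes"]
    pending_id = next(n["pendingNodeID"] for n in pending if n["name"] == NEW_NODE_NAME)
    rpc("AddNodes", {"pendingNodes": [pending_id]})

    # The node can take a couple of minutes to appear as active; wait, then add the
    # available drives a few at a time (typically only the new node's drives are available).
    time.sleep(120)
    available = [d for d in rpc("ListDrives")["drives"] if d["status"] == "available"]
    for i in range(0, len(available), BATCH_SIZE):
        batch = [{"driveID": d["driveID"]} for d in available[i:i + BATCH_SIZE]]
        rpc("AddDrives", {"drives": batch})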