
Expand NetApp HCI storage resources

Contributors: netapp-mwallis, amgrissino

After you finish NetApp HCI deployment, you can expand and configure NetApp HCI storage resources by using NetApp Hybrid Cloud Control.

Before you begin
  • Ensure that you have free and unused IPv4 addresses on the same network segment as existing nodes (each new node must be installed on the same network as existing nodes of its type).

  • Ensure that you have one of the following types of SolidFire storage cluster accounts:

    • The native administrator account that was created during initial deployment

    • A custom user account with Cluster Admin, Drives, Volumes, and Nodes permissions

  • Ensure that you have performed the following actions with each new node:

    • Installed the new node in the NetApp HCI chassis by following the installation instructions available in the NetApp HCI Documentation Center.

    • Cabled and powered on the new node.

  • Ensure that you have the management IPv4 address of an already installed storage node. You can find the IP address in the NetApp Element Management > Cluster > Nodes tab of the NetApp Element Plug-in for vCenter Server.

  • Ensure that each new node uses the same network topology and cabling as the existing storage or compute clusters.
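The same-subnet requirement above can be checked before you start. A minimal sketch using Python's standard `ipaddress` module; the subnet values shown are placeholders for your own management and iSCSI networks:

```python
import ipaddress

# Example subnets; replace these with your actual management and iSCSI networks.
MGMT_SUBNET = ipaddress.ip_network("192.168.100.0/24")
ISCSI_SUBNET = ipaddress.ip_network("192.168.200.0/24")

def in_subnet(ip: str, subnet) -> bool:
    """Return True if the candidate IPv4 address falls inside the given subnet."""
    return ipaddress.ip_address(ip) in subnet

# Candidate addresses for a new storage node.
new_mgmt_ip = "192.168.100.45"
new_iscsi_ip = "192.168.200.45"

print(in_subnet(new_mgmt_ip, MGMT_SUBNET))   # True: valid management address
print(in_subnet(new_iscsi_ip, MGMT_SUBNET))  # False: belongs on the iSCSI network
```

Running this for each planned address catches subnet mismatches before the NetApp Deployment Engine's own IP validation does.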

When you are expanding storage resources, storage capacity should be split evenly across all chassis for the best reliability.
Steps
  1. Open a web browser and browse to the IP address of the management node. For example:

    https://[management node IP address]
  2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials.

  3. Click Expand at the top right corner of the interface.

    The browser opens the NetApp Deployment Engine.

  4. Log in to the NetApp Deployment Engine by providing the NetApp HCI storage cluster administrator credentials.

  5. On the Welcome page, click No and click Continue.

  6. On the Available Inventory page, select the storage nodes you want to add and click Continue.

  7. On the Network Settings page, some of the network information has been detected from the initial deployment. Each new storage node is listed by serial number, and you need to assign the new network information to it. For each new storage node, complete the following steps:

    1. Hostname: If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.

    2. Management Address: Enter a management IP address for the new storage node that is within the management network subnet.

    3. Storage (iSCSI) IP Address: Enter an iSCSI IP address for the new storage node that is within the iSCSI network subnet.

    4. Click Continue.

      NetApp HCI might take some time to validate the IP addresses you enter. The Continue button becomes available when IP address validation completes.
  8. On the Review page in the Network Settings section, new nodes are shown in bold text. To make changes in any section, do the following:

    1. Click Edit for that section.

    2. After you finish, click Continue on any subsequent pages to return to the Review page.

  9. Optional: If you do not want to send cluster statistics and support information to NetApp-hosted Active IQ servers, clear the final checkbox.

    This disables real-time health and diagnostic monitoring for NetApp HCI. Disabling this feature removes the ability for NetApp to proactively support and monitor NetApp HCI to detect and resolve issues before production is impacted.

  10. Click Add Nodes.

    You can monitor the progress while NetApp HCI adds and configures the resources.

  11. Optional: Verify that any new storage nodes are visible in the Element Plug-in for vCenter Server.

    If you expanded a two-node storage cluster to four nodes or more, the two Witness Nodes previously used by the storage cluster are still visible as standby virtual machines in vSphere. The newly expanded storage cluster does not use them; if you want to reclaim VM resources, you can manually remove the Witness Node virtual machines.
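As an alternative to checking the Element Plug-in for vCenter Server, the Element storage cluster exposes a JSON-RPC API, and a `ListActiveNodes` call against the cluster MVIP should include the newly added nodes. The sketch below only builds the request payload and parses a trimmed example response; the MVIP, API version, and node names are assumptions to replace with your own values:

```python
import json

# Hypothetical cluster management virtual IP and API version; substitute your own.
MVIP = "10.0.0.50"
API_VERSION = "12.3"
ENDPOINT = f"https://{MVIP}/json-rpc/{API_VERSION}"

def build_list_active_nodes_request() -> str:
    """Build the JSON-RPC payload for the Element ListActiveNodes method."""
    return json.dumps({"method": "ListActiveNodes", "params": {}, "id": 1})

def node_names(response_body: str) -> list:
    """Extract node names from a ListActiveNodes response body."""
    result = json.loads(response_body)["result"]
    return [node["name"] for node in result.get("nodes", [])]

# Example: a trimmed response showing two newly added nodes after expansion.
sample = json.dumps({"result": {"nodes": [{"name": "storage-node-03"},
                                          {"name": "storage-node-04"}]}})
print(node_names(sample))  # ['storage-node-03', 'storage-node-04']
```

In practice you would POST the payload to `ENDPOINT` with the cluster administrator credentials and confirm that every node you added appears in the returned list.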

Find more information