Configuring your expanded StorageGRID system

After completing an expansion, you must perform additional integration and configuration steps.

About this task

You must complete the configuration tasks listed below for the grid nodes you are adding in your expansion. Some tasks are optional, depending on the options you selected when installing and administering your system and on how you want to configure the grid nodes added during the expansion.

Procedure

  1. Complete the following configuration tasks for each type of grid node you added during the expansion:
    Storage Nodes
    1. Verify the information lifecycle management (ILM) storage pool configuration.

      You must verify that the expansion Storage Nodes are included in a storage pool that is used by a rule in the active ILM policy. Otherwise, the new storage will not be used by the StorageGRID system. See the instructions for administering StorageGRID. A scripted spot check of storage pool membership is sketched after this list.

    2. Verify that the Storage Node is ingesting objects.

      See the instructions for verifying that the Storage Node is active.

    Gateway Nodes
    1. If High Availability (HA) Groups are used for client connections, add the Gateway Nodes to an HA group. Go to Configuration > High Availability Groups to review the list of existing HA groups and to add the new nodes. A scripted review of existing HA groups is sketched after this list.

      For more information, see the instructions for administering StorageGRID.

    Admin Nodes
    1. If single sign-on is enabled for your StorageGRID system, you must create a relying party trust in Active Directory Federation Services (AD FS) for the new Admin Node. You cannot sign in to the node until you create this relying party trust. To learn how to create relying party trusts for Admin Nodes, see Configuring single sign-on in the instructions for administering StorageGRID.
    2. If you plan to use the Load Balancer service on Admin Nodes, you might need to add the Admin Nodes to High Availability Groups. Go to Configuration > High Availability Groups to review the list of existing HA groups and to add the new nodes.

      For more information, see the instructions for administering StorageGRID.

    3. Copy the Admin Node database.

      Optionally, copy the Admin Node database from the primary Admin Node to the expansion Admin Node if you want to keep the attribute and audit information consistent on each Admin Node. For more information, see Copying the Admin Node database.

    4. Copy the Prometheus metrics.

      Optionally, copy the Prometheus database from the primary Admin Node to the expansion Admin Node if you want to keep the historical metrics consistent on each Admin Node. For more information, see Copying Prometheus metrics.

    5. Copy the audit logs.

      Optionally, copy the existing audit logs from the primary Admin Node to the expansion Admin Node if you want to keep the historical log information consistent on each Admin Node. See Copying the audit logs.

    6. Configure access to audit shares.

      Optionally, you can configure access to the system for auditing purposes through an NFS or a CIFS file share. See the instructions for administering StorageGRID.

      Note: Audit export through CIFS/Samba has been deprecated and will be removed in a future StorageGRID release.
    7. Change the preferred sender for email notifications.

      Optionally, you can update your configuration to make the expansion Admin Node the preferred sender. Otherwise, an existing Admin Node configured as the preferred sender continues to send notifications and AutoSupport messages. See the instructions for administering StorageGRID.

    Archive Nodes
    1. Configure the Archive Node's connection to the targeted external archival storage system.

      When you complete the expansion, Archive Nodes are in an alarm state until you configure connection information through the ARC > Target component.

    2. Update the ILM policy.

      You must update your ILM policy to archive object data through the new Archive Node.

    3. Configure custom alarms.

      You should establish custom alarms for the attributes that are used to monitor the speed and efficiency of object data retrieval from Archive Nodes.

    For more information, see the instructions for administering StorageGRID.
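
    The following sketch shows one way to script the storage pool spot check for the expansion Storage Nodes by using the Grid Management API rather than the Grid Manager. It is illustrative only: the Admin Node address, credentials, and node names are placeholders, and the storage pool endpoint path and response fields are assumptions that you should confirm in the Grid Management API documentation for your release.

      # Illustrative sketch only. The storage pool endpoint path and response
      # fields are assumptions; confirm them against the Grid Management API
      # documentation for your StorageGRID release.
      import requests

      GMI = "https://admin.example.com/api/v3"     # placeholder Admin Node address
      CA_CERT = "/path/to/grid-ca.pem"             # placeholder CA certificate bundle
      NEW_NODES = {"DC2-SN1", "DC2-SN2"}           # placeholder expansion Storage Nodes

      # Obtain a bearer token from the Grid Management API.
      auth = requests.post(
          f"{GMI}/authorize",
          json={"username": "root", "password": "example-password",
                "cookie": False, "csrfToken": False},
          verify=CA_CERT,
      )
      auth.raise_for_status()
      headers = {"Authorization": f"Bearer {auth.json()['data']}"}

      # Assumed endpoint: list ILM storage pools and their member nodes.
      pools = requests.get(f"{GMI}/grid/ilm-storage-pools",
                           headers=headers, verify=CA_CERT)
      pools.raise_for_status()

      seen = set()
      for pool in pools.json().get("data", []):
          members = {m.get("nodeName", "") for m in pool.get("members", [])}
          seen |= members
          hits = sorted(NEW_NODES & members)
          if hits:
              print(f"Storage pool {pool.get('name')}: contains {hits}")

      # Expansion nodes that belong to no storage pool cannot receive object data.
      for node in sorted(NEW_NODES - seen):
          print(f"WARNING: {node} is not a member of any storage pool")

    Even if the new nodes appear in a storage pool, you must still confirm in the Grid Manager that the pool is used by a rule in the active ILM policy.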
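
    Similarly, you can review HA group membership for the new Gateway Nodes and Admin Nodes from a script instead of from the Configuration > High Availability Groups page. In the following sketch, the HA groups endpoint path and its response fields are assumptions that might be private or might differ by release; confirm them in the Grid Management API documentation before relying on this approach.

      # Illustrative sketch only. The HA groups endpoint path and response fields
      # are assumptions; confirm them against the Grid Management API documentation.
      import requests

      GMI = "https://admin.example.com/api/v3"     # placeholder Admin Node address
      CA_CERT = "/path/to/grid-ca.pem"             # placeholder CA certificate bundle

      # Obtain a bearer token from the Grid Management API.
      auth = requests.post(
          f"{GMI}/authorize",
          json={"username": "root", "password": "example-password",
                "cookie": False, "csrfToken": False},
          verify=CA_CERT,
      )
      auth.raise_for_status()
      headers = {"Authorization": f"Bearer {auth.json()['data']}"}

      # Assumed endpoint: list HA groups and the node interfaces in each group.
      groups = requests.get(f"{GMI}/grid/ha-groups", headers=headers, verify=CA_CERT)
      groups.raise_for_status()

      for group in groups.json().get("data", []):
          print(f"HA group {group.get('name')} (virtual IPs: {group.get('virtualIps')})")
          for iface in group.get("interfaces", []):
              print(f"  member: {iface.get('nodeName')} / {iface.get('interface')}")

    If a new Gateway Node or Admin Node does not appear in the expected group, add it from the Configuration > High Availability Groups page.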

  2. To check whether expansion nodes were added with an untrusted Client Network, or to change whether a node's Client Network is untrusted or trusted, go to Configuration > Untrusted Client Network.

    If the Client Network on the expansion node is untrusted, then connections to the node on the Client Network must be made using a load balancer endpoint. See the instructions for administering StorageGRID for more information.
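
    For example, when a load balancer endpoint is required, S3 client applications must target the endpoint's hostname and port rather than the node's Client Network IP address. The following sketch uses boto3 with a hypothetical endpoint hostname, port, certificate path, and tenant credentials; substitute the values configured for your load balancer endpoints.

      # Minimal sketch: connect through a load balancer endpoint (hypothetical
      # hostname, port, and tenant credentials) instead of a node IP address.
      import boto3

      s3 = boto3.client(
          "s3",
          endpoint_url="https://grid-gw.example.com:10443",   # load balancer endpoint
          aws_access_key_id="EXAMPLE_ACCESS_KEY",             # tenant S3 access key
          aws_secret_access_key="EXAMPLE_SECRET_KEY",         # tenant S3 secret key
          verify="/path/to/grid-ca.pem",                      # endpoint certificate CA
      )

      # If the endpoint is reachable and the credentials are valid, this lists
      # the tenant's buckets. A connection failure here can indicate that traffic
      # is targeting an untrusted Client Network instead of a load balancer endpoint.
      for bucket in s3.list_buckets().get("Buckets", []):
          print(bucket["Name"])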

  3. Configure the Domain Name System (DNS).
    If you have been specifying DNS settings separately for each grid node, you must add custom per-node DNS settings for the new nodes. See information about modifying the DNS configuration for a single grid node in the recovery and maintenance instructions.

    The best practice is for the grid-wide DNS server list to contain some DNS servers that are accessible locally from each site. If you just added a new site, add new DNS servers for the site to the grid-wide DNS configuration.

    Attention: Provide two to six IP addresses for DNS servers. You should select DNS servers that each site can access locally in the event of network islanding. This ensures that an islanded site continues to have access to the DNS service. After configuring the grid-wide DNS server list, you can further customize the DNS server list for each node. For details, see information about modifying the DNS configuration in the recovery and maintenance instructions.
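
    Before adding DNS servers to the grid-wide list, you might want to confirm that each candidate server is reachable from a host at the new site. The following sketch uses the third-party dnspython package with placeholder server addresses and a placeholder test name; it is a simple reachability check, not a substitute for the DNS configuration procedure.

      # Minimal reachability check for candidate DNS servers, run from a host at
      # the new site. Server IPs and the test hostname are placeholders.
      # Requires the dnspython package (pip install dnspython).
      import dns.resolver

      CANDIDATE_SERVERS = ["10.224.0.10", "10.224.0.11"]   # placeholder site-local DNS servers
      TEST_NAME = "example.com"                            # placeholder name to resolve

      for server in CANDIDATE_SERVERS:
          resolver = dns.resolver.Resolver(configure=False)
          resolver.nameservers = [server]
          resolver.lifetime = 3                            # seconds to wait for an answer
          try:
              answer = resolver.resolve(TEST_NAME, "A")
              print(f"{server}: OK ({answer[0].address})")
          except Exception as exc:                         # timeouts, refusals, and so on
              print(f"{server}: FAILED ({exc})")
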
  4. If you added a new site, confirm that Network Time Protocol (NTP) servers are accessible from that site.
    Attention: Make sure that at least two nodes at each site can access at least four external NTP sources. If only one node at a site can reach the NTP sources, timing issues will occur if that node goes down. In addition, designating two nodes per site as primary NTP sources ensures accurate timing if a site is isolated from the rest of the grid.

    For more information, see the recovery and maintenance instructions.
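
    The following sketch is one way to confirm that NTP sources are reachable from the new site. It uses the third-party ntplib package with placeholder server names; run it from a host with the same network access as the nodes at the new site, and keep in mind that the requirement above applies to the grid nodes themselves.

      # Minimal NTP reachability check. Server names are placeholders.
      # Requires the ntplib package (pip install ntplib).
      import ntplib

      NTP_SOURCES = ["time1.example.com", "time2.example.com",
                     "time3.example.com", "time4.example.com"]   # placeholder external sources

      client = ntplib.NTPClient()
      reachable = 0
      for server in NTP_SOURCES:
          try:
              response = client.request(server, version=3, timeout=5)
              reachable += 1
              print(f"{server}: stratum {response.stratum}, offset {response.offset:+.3f}s")
          except Exception as exc:
              print(f"{server}: FAILED ({exc})")

      print(f"{reachable} of {len(NTP_SOURCES)} NTP sources are reachable")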