Linux: Adding direct-attached or SAN volumes to a Storage Node

If a Storage Node includes fewer than 16 storage volumes, you can increase its capacity by adding new block storage devices, making them visible to the Linux hosts, and adding the new block device mappings to the StorageGRID configuration file used for the Storage Node.

What you'll need
  • You must have access to the instructions for installing StorageGRID for your Linux platform.

  • You must have the Passwords.txt file.

  • You must have specific access permissions.

Important: Do not attempt to add storage volumes to a Storage Node while a software upgrade, recovery procedure, or another expansion procedure is active.
About this task

The Storage Node is unavailable for a brief time when you add storage volumes. You should perform this procedure on one Storage Node at a time to avoid impacting client-facing grid services.

Steps
  1. Install the new storage hardware.

    For more information, see the documentation provided by your hardware vendor.

  2. Create new block storage volumes of the desired sizes.

    • Attach the new disk drives and update the RAID controller configuration as needed, or allocate the new SAN LUNs on the shared storage arrays and allow the Linux host to access them.

    • Use the same persistent naming scheme that you used for the storage volumes on the existing Storage Node. (One way to obtain persistent /dev/mapper names is sketched after this list.)

    • If you use the StorageGRID node migration feature, make the new volumes visible to other Linux hosts that are migration targets for this Storage Node. For more information, see the instructions for installing StorageGRID for your Linux platform.
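
    One way to obtain persistent /dev/mapper names for new SAN LUNs is to define device-mapper multipath aliases. The following is a minimal sketch, not part of the documented procedure: it assumes you use dm-multipath, the WWID shown is a placeholder for your new LUN's WWID, and the alias matches the naming scheme used in the examples below.

    # /etc/multipath.conf (excerpt): assign the new LUN a persistent alias
    multipaths {
        multipath {
            wwid  36001405aabbccddeeff00112233445566    # placeholder WWID
            alias sgws-sn1-rangedb-4
        }
    }

    sudo multipath -r    # reload the maps so /dev/mapper/sgws-sn1-rangedb-4 appears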

  3. Log in to the Linux host that supports the Storage Node, either as root or using an account with sudo permission.

  4. Confirm that the new storage volumes are visible on the Linux host.

    You might have to rescan for devices.
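
    How you rescan depends on the transport; a minimal sketch for common cases follows (host adapter paths and session details vary by system):

    # Direct-attached or FC SCSI: rescan every SCSI host adapter
    sudo bash -c 'for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done'

    # iSCSI SAN LUNs: rescan all active sessions
    sudo iscsiadm -m session --rescan

    # Confirm that the new block devices are now listed
    lsblk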

  5. Run the following command to stop the Storage Node temporarily:

    sudo storagegrid node stop <node-name>

  6. Using a text editor such as vim or pico, edit the node configuration file for the Storage Node, which can be found at /etc/storagegrid/nodes/<node-name>.conf.

  7. Locate the section of the node configuration file that contains the existing object storage block device mappings.

    In the example, BLOCK_DEVICE_RANGEDB_00 to BLOCK_DEVICE_RANGEDB_03 are the existing object storage block device mappings.

    NODE_TYPE = VM_Storage_Node
    ADMIN_IP = 10.1.0.2
    BLOCK_DEVICE_VAR_LOCAL = /dev/mapper/sgws-sn1-var-local
    BLOCK_DEVICE_RANGEDB_00 = /dev/mapper/sgws-sn1-rangedb-0
    BLOCK_DEVICE_RANGEDB_01 = /dev/mapper/sgws-sn1-rangedb-1
    BLOCK_DEVICE_RANGEDB_02 = /dev/mapper/sgws-sn1-rangedb-2
    BLOCK_DEVICE_RANGEDB_03 = /dev/mapper/sgws-sn1-rangedb-3
    GRID_NETWORK_TARGET = bond0.1001
    ADMIN_NETWORK_TARGET = bond0.1002
    CLIENT_NETWORK_TARGET = bond0.1003
    GRID_NETWORK_IP = 10.1.0.3
    GRID_NETWORK_MASK = 255.255.255.0
    GRID_NETWORK_GATEWAY = 10.1.0.1
  8. Add new object storage block device mappings corresponding to the block storage volumes you added for this Storage Node.

    Make sure to start at the next unused BLOCK_DEVICE_RANGEDB_nn index; do not leave a gap. (A one-liner for finding the highest index already in use appears after the example configuration below.)

    • Based on the example above, start at BLOCK_DEVICE_RANGEDB_04.

    • In the example below, four new block storage volumes have been added to the node: BLOCK_DEVICE_RANGEDB_04 to BLOCK_DEVICE_RANGEDB_07.

    NODE_TYPE = VM_Storage_Node
    ADMIN_IP = 10.1.0.2
    BLOCK_DEVICE_VAR_LOCAL = /dev/mapper/sgws-sn1-var-local
    BLOCK_DEVICE_RANGEDB_00 = /dev/mapper/sgws-sn1-rangedb-0
    BLOCK_DEVICE_RANGEDB_01 = /dev/mapper/sgws-sn1-rangedb-1
    BLOCK_DEVICE_RANGEDB_02 = /dev/mapper/sgws-sn1-rangedb-2
    BLOCK_DEVICE_RANGEDB_03 = /dev/mapper/sgws-sn1-rangedb-3
    BLOCK_DEVICE_RANGEDB_04 = /dev/mapper/sgws-sn1-rangedb-4
    BLOCK_DEVICE_RANGEDB_05 = /dev/mapper/sgws-sn1-rangedb-5
    BLOCK_DEVICE_RANGEDB_06 = /dev/mapper/sgws-sn1-rangedb-6
    BLOCK_DEVICE_RANGEDB_07 = /dev/mapper/sgws-sn1-rangedb-7
    GRID_NETWORK_TARGET = bond0.1001
    ADMIN_NETWORK_TARGET = bond0.1002
    CLIENT_NETWORK_TARGET = bond0.1003
    GRID_NETWORK_IP = 10.1.0.3
    GRID_NETWORK_MASK = 255.255.255.0
    GRID_NETWORK_GATEWAY = 10.1.0.1
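
    Because the mappings must be contiguous, it can help to confirm the highest index already in use before adding entries. This convenience one-liner is not part of the documented procedure; it assumes GNU sort for the -V (version sort) option:

    grep -o 'BLOCK_DEVICE_RANGEDB_[0-9]*' /etc/storagegrid/nodes/<node-name>.conf | sort -V | tail -1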
  9. Run the following command to validate your changes to the node configuration file for the Storage Node:

    sudo storagegrid node validate <node-name>

    Address any errors or warnings before proceeding to the next step.

    Note

    If you observe an error similar to the following, it means that the node configuration file maps the block device used by <node-name> for <PURPOSE> to the given <path-name> in the Linux file system, but no valid block device special file (or symlink to a block device special file) exists at that location.

    Checking configuration file for node <node-name>…
    ERROR: BLOCK_DEVICE_<PURPOSE> = <path-name>
    <path-name> is not a valid block device

    Verify that you entered the correct <path-name>.
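
    A quick way to confirm whether a path resolves to a block device special file, using the path from the example configuration above:

    ls -lL /dev/mapper/sgws-sn1-rangedb-4    # -L follows symlinks; the mode string should start with "b"
    test -b /dev/mapper/sgws-sn1-rangedb-4 && echo "block device" || echo "NOT a block device"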

  10. Run the following command to restart the node with the new block device mappings in place:

    sudo storagegrid node start <node-name>

  11. Log in to the Storage Node as admin using the password listed in the Passwords.txt file.
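
    A typical login sequence is sketched below; the IP address is taken from the example configuration above, and whether you need to become root for the remaining steps depends on your deployment:

    ssh admin@10.1.0.3
    su -    # if root access is needed; use the password from Passwords.txt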

  12. Check that the services start correctly:

    1. View the status of all services on the server:
      sudo storagegrid-status

      The status is updated automatically.

    2. Wait until all services are Running or Verified.

    3. Exit the status screen:

      Ctrl+C

  13. Configure the new storage for use by the Storage Node:

    1. Configure the new storage volumes:

      sudo add_rangedbs.rb

      This script finds any new storage volumes and prompts you to format them.

    2. Enter y to format the storage volumes.

    3. If any of the volumes have previously been formatted, decide whether you want to reformat them.

      • Enter y to reformat.

      • Enter n to skip reformatting.

      The selected storage volumes are formatted.

    4. When asked, enter y to stop storage services.

      The storage services are stopped, and the setup_rangedbs.sh script runs automatically. After the volumes are ready for use as rangedbs, the services start again.

  14. Check that the services start correctly:

    1. View the status of all services on the server:

      sudo storagegrid-status

      The status is updated automatically.

    2. Wait until all services are Running or Verified.

    3. Exit the status screen:

      Ctrl+C

  15. Verify that the Storage Node is online:

    1. Sign in to the Grid Manager using a supported browser.

    2. Select Support > Tools > Grid Topology.

    3. Select site > Storage Node > LDR > Storage.

    4. Select the Configuration tab and then the Main tab.

    5. If the Storage State - Desired drop-down list is set to Read-only or Offline, select Online.

    6. Click Apply Changes.

  16. To see the new object stores:

    1. Select Nodes > site > Storage Node > Storage.

    2. View the details in the Object Stores table.

Result

You can now use the expanded capacity of the Storage Node to save object data.