After recovering storage volumes for the appliance Storage Node, you restore the object data that was lost when the Storage Node failed. Object data is restored from other Storage Nodes and Archive Nodes, assuming that the grid's ILM rules were configured such that object copies are available.
If you have the volume ID of each storage volume that failed and was restored, you can use those IDs to restore object data to those volumes. If you know only the device names of the storage volumes, you can use the device names to find the volume IDs, which correspond to the volumes' /var/local/rangedb numbers.
At installation, each storage device is assigned a file system universally unique identifier (UUID) and is mounted to a rangedb directory on the Storage Node using that UUID. The file system UUID and the rangedb directory are listed in the /etc/fstab file. The device name, rangedb directory, and size of the mounted volume are displayed in the StorageGRID Webscale system at .
In the following example, device /dev/sdb has a volume size of 4 TB and is mounted to /var/local/rangedb/0, using the device name /dev/disk/by-uuid/822b0547-3b2b-472e-ad5e-e1cf1809faba in the /etc/fstab file:
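A sketch of what that /etc/fstab entry could look like, and how the volume ID falls out of it. The file system type (xfs) and mount options are assumptions; the UUID and mount point are from the example above:

```shell
# Hypothetical reconstruction of the /etc/fstab entry described above.
# Fields: device, mount point, fs type (assumed xfs), options, dump, pass.
fstab_entry='/dev/disk/by-uuid/822b0547-3b2b-472e-ad5e-e1cf1809faba /var/local/rangedb/0 xfs defaults 0 0'

# The mount point is the second field; the volume ID is its trailing
# rangedb number (here, 0).
mount_point=$(echo "$fstab_entry" | awk '{print $2}')
volume_id=${mount_point##*/}
echo "$volume_id"   # prints 0
```

On a live node, the same mapping can be read by searching /etc/fstab for the rangedb mount entries.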
Using the repair-data script
Replicated data
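Repairs of replicated data follow the same repair-data command convention. A hedged sketch, assuming the start-replicated-node-repair and start-replicated-volume-repair subcommands found in recent StorageGRID releases; the node name and volume ID are hypothetical, so verify the exact syntax against the documentation for your release:

```shell
# Repair replicated data across an entire Storage Node
# (node name SG-DC-SN3 is hypothetical)
repair-data start-replicated-node-repair --nodes SG-DC-SN3

# Repair replicated data on a single storage volume of that node
# (volume ID 0003 is hypothetical)
repair-data start-replicated-volume-repair --nodes SG-DC-SN3 --volumes 0003
```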
Erasure coded (EC) data
Use the repair-data start-ec-node-repair and repair-data start-ec-volume-repair commands to restore erasure coded data. Repairs of erasure coded data can begin while some Storage Nodes are offline and will complete after all nodes become available. You can track repairs of erasure coded data by using the repair-data show-ec-repair-status command.
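An invocation sketch based on the commands named above; the node name, volume ID, and repair ID are hypothetical, and the repair ID would come from the output of the start command:

```shell
# Repair all erasure coded data on one Storage Node
# (node name SG-DC-SN3 is hypothetical)
repair-data start-ec-node-repair --nodes SG-DC-SN3

# Repair erasure coded data on a single storage volume of that node
# (volume ID 0003 is hypothetical)
repair-data start-ec-volume-repair --nodes SG-DC-SN3 --volumes 0003

# Track a running repair using the repair ID returned by the start command
# (repair ID 949292 is hypothetical)
repair-data show-ec-repair-status --repair-id 949292
```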
Notes on data recovery
If the only remaining copy of object data is located on an Archive Node, object data is retrieved from the Archive Node. Due to the latency associated with retrievals from external archival storage systems, restoring object data to a Storage Node from an Archive Node takes longer than restoring copies from other Storage Nodes.