Return a FIPS drive or SED to service when authentication keys are lost


If you permanently lose the authentication keys for a FIPS drive or SED and cannot retrieve them from the KMIP server, the system treats the drive as broken. Although you cannot access or recover the data on the disk, you can take steps to make the SED's unused space available again for data.

What you’ll need

You must be a cluster administrator to perform this task.

About this task

You should use this process only if you are certain that the authentication keys for the FIPS drive or SED are permanently lost and that you cannot recover them.

Steps
  1. Return a FIPS drive or SED to service:

    If the SEDs are not in FIPS-compliance mode, or are in FIPS-compliance mode and the FIPS key is available, use these steps:

    1. Sanitize the broken disk:
      storage encryption disk sanitize -disk disk_id

    2. Set the privilege level to advanced:
      set -privilege advanced

    3. Unfail the sanitized disk:
      storage disk unfail -spare true -disk disk_id

    4. Check whether the disk has an owner:
      storage disk show -disk disk_id

    5. If the disk does not have an owner, assign one, then unfail the disk again:
      storage disk assign -owner node -disk disk_id

      storage disk unfail -spare true -disk disk_id

    6. Verify that the disk is now a spare and ready to be reused in an aggregate:
      storage disk show -disk disk_id

    If the SEDs are in FIPS-compliance mode, the FIPS key is not available, and the SEDs have a PSID printed on the label, use these steps:

    1. Obtain the PSID of the disk from the disk label.

    2. Set the privilege level to advanced:
      set -privilege advanced

    3. Reset the disk to its factory-configured settings:
      storage encryption disk revert-to-original-state -disk disk_id -psid disk_physical_secure_id

    4. Unfail the reset disk:
      storage disk unfail -spare true -disk disk_id

    5. Check whether the disk has an owner:
      storage disk show -disk disk_id

    6. If the disk does not have an owner, assign one, then unfail the disk again:
      storage disk assign -owner node -disk disk_id

      storage disk unfail -spare true -disk disk_id

    7. Verify that the disk is now a spare and ready to be reused in an aggregate:
      storage disk show -disk disk_id

    For complete command syntax, see the man pages.
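
Example
  As an illustration, the first procedure (sanitize path) might look like the following session. The cluster name cluster1, node name node1, and disk ID 1.10.16 are hypothetical placeholders; substitute your own values. The prompt changes from ::> to ::*> after entering the advanced privilege level.

    cluster1::> storage encryption disk sanitize -disk 1.10.16
    cluster1::> set -privilege advanced
    cluster1::*> storage disk unfail -spare true -disk 1.10.16
    cluster1::*> storage disk show -disk 1.10.16
    cluster1::*> storage disk assign -owner node1 -disk 1.10.16
    cluster1::*> storage disk unfail -spare true -disk 1.10.16
    cluster1::*> storage disk show -disk 1.10.16

  In this sketch, the disk had no owner after the first unfail, so it was assigned to node1 and unfailed a second time before verifying that it appears as a spare.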
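
  The second procedure (PSID revert path) might look like the following session, again with hypothetical cluster, node, and disk values. The PSID is not guessable; copy it exactly as printed on the disk label in place of the placeholder shown here.

    cluster1::> set -privilege advanced
    cluster1::*> storage encryption disk revert-to-original-state -disk 1.10.16 -psid <PSID_from_disk_label>
    cluster1::*> storage disk unfail -spare true -disk 1.10.16
    cluster1::*> storage disk show -disk 1.10.16
    cluster1::*> storage disk assign -owner node1 -disk 1.10.16
    cluster1::*> storage disk unfail -spare true -disk 1.10.16
    cluster1::*> storage disk show -disk 1.10.16

  Reverting to the factory-configured state destroys all data on the disk along with its encryption configuration, which is why this path is reserved for disks whose FIPS key is unrecoverable.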