Storage raid-options Commands

The raid-options directory

The subcommands storage raid-options modify and storage raid-options show are used to change and view the configurable node RAID options. The following RAID options can be configured:

raid.background_disk_fw_update.enable

This option determines the behavior of automatic disk firmware updates. Valid values are on or off. The default value is on. If the option is set to on, firmware updates to spares and file-system disks are performed non-disruptively in the background. If the option is set to off, automatic firmware updates occur only when the system is booted or a disk is inserted.

raid.disk.copy.auto.enable

This option determines the action taken when a disk reports a predictive failure. Valid values for this option are on or off. The default value for this option is on.

Sometimes, it is possible to predict that a disk will fail soon based on a pattern of recovered errors that have occurred on the disk. In such cases, the disk reports a predictive failure to Data ONTAP. If this option is set to on, Data ONTAP initiates Rapid RAID Recovery to copy data from the failing disk to a spare disk. After the data is copied, the disk is marked failed and placed in the pool of broken disks. If a spare is not available, the node continues to use the prefailed disk until the disk fails.

If the option is set to off, the disk is immediately marked as failed and placed in the pool of broken disks. A spare disk is selected and data from the missing disk is reconstructed from other disks in the RAID group. The disk does not fail if the RAID group is already degraded or is being reconstructed. This ensures that a disk failure does not lead to the failure of the entire RAID group.

raid.disktype.enable

This option is obsolete. Use options raid.mix.hdd.disktype.capacity and raid.mix.hdd.disktype.performance instead.

raid.media_scrub.rate

This option sets the rate of media scrub on an aggregate. Valid values for this option range from 300 to 3000, where a rate of 300 represents a media scrub of approximately 512 MB per hour and 3000 represents a media scrub of approximately 5 GB per hour. The default value for this option is 600, which is a rate of approximately 1 GB per hour.
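The three documented data points (300 → ~512 MB/h, 600 → ~1 GB/h, 3000 → ~5 GB/h) are consistent with a linear mapping from the rate value to throughput. A minimal sketch of that conversion, inferred from those data points rather than taken from Data ONTAP internals:

```python
def media_scrub_mb_per_hour(rate: int) -> float:
    """Approximate media-scrub throughput in MB/hour for a given
    raid.media_scrub.rate value (valid range 300-3000).

    Inferred from the documented data points: 300 -> ~512 MB/h,
    600 -> ~1 GB/h, 3000 -> ~5 GB/h, i.e. a linear mapping.
    """
    if not 300 <= rate <= 3000:
        raise ValueError("raid.media_scrub.rate must be between 300 and 3000")
    return rate * 512 / 300

# The documented values line up with this mapping:
# 300 -> 512.0 MB/h, 600 -> 1024.0 MB/h (~1 GB/h), 3000 -> 5120.0 MB/h (~5 GB/h)
```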

raid.min_spare_count

This option specifies the minimum number of spare drives required to avoid warnings about low spares. If every file-system drive has at least the number of appropriate replacement spares specified by raid.min_spare_count, no low-spares warning is displayed. This option can be set from 0 to 4. The default setting is 1. Setting this option to 0 suppresses low-spares warnings even if no spares are available. This option can be set to 0 only on systems that have 16 or fewer attached drives and that are running with RAID-DP aggregates. A setting of 0 is not allowed on systems with RAID4 aggregates.

raid.mirror_read_plex_pref

This option specifies the plex preference when reading from a mirrored aggregate on a MetroCluster-configured system. There are three possible values:
  • local indicates that all reads are handled by the local plex (plex consisting of disks from Pool0).
  • remote indicates that all reads are handled by the remote plex (plex consisting of disks from Pool1).
  • alternate indicates that the handling of read requests is shared between the two plexes.

This option is ignored if the system is not in a MetroCluster configuration. The option setting applies to all aggregates on the node.

raid.mix.hdd.disktype.capacity

Controls mixing of SATA, BSAS, FSAS and ATA disk types. The default value is on, which allows mixing.

When this option is set to on, SATA, BSAS, FSAS and ATA disk types are considered interchangeable for all aggregate operations, including aggregate creation, adding disks to an aggregate, and replacing disks within an existing aggregate, whether this is done by the administrator or automatically by Data ONTAP.

If you set this option to off, SATA, BSAS, FSAS and ATA disks cannot be combined within the same aggregate. If you have existing aggregates that combine those disk types, those aggregates will continue to function normally and accept any of those disk types.

Note: This option is ignored in the storage aggregate create and storage aggregate add-disks commands when either the -disktype or the -diskclass parameter is used. It is better to use the -disktype or -diskclass parameter than to rely on this option.

raid.mix.hdd.disktype.performance

Controls mixing of FCAL and SAS disk types. The default value is off, which prevents mixing.

If you set this option to on, FCAL and SAS disk types are considered interchangeable for all aggregate operations, including aggregate creation, adding disks to an aggregate, and replacing disks within an existing aggregate, whether this is done by the administrator or automatically by Data ONTAP.

When this option is set to off, FCAL and SAS disks cannot be combined within the same aggregate. If you have existing aggregates that combine those disk types, those aggregates will continue to function normally and accept either disk type.

Note: This option is ignored in the storage aggregate create and storage aggregate add-disks commands when either the -disktype or the -diskclass parameter is used. It is better to use the -disktype or -diskclass parameter than to rely on this option.

raid.mix.disktype.solid_state

Controls mixing of SSD and SSD-NVM disk types. The default value is on, which allows mixing.

If you set this option to on, SSD and SSD-NVM disk types are considered interchangeable for all aggregate operations, including aggregate creation, adding disks to an aggregate, and replacing disks within an existing aggregate, whether this is done by the administrator or automatically by Data ONTAP.

When this option is set to off, SSD and SSD-NVM disks cannot be combined within the same aggregate. If you have existing aggregates that combine those disk types, those aggregates will continue to function normally and accept either disk type.

Note: This option is ignored in the storage aggregate create and storage aggregate add-disks commands when either the -disktype or the -diskclass parameter is used. It is better to use the -disktype or -diskclass parameter than to rely on this option.

raid.mix.hdd.rpm.capacity

This option controls separation of capacity-based hard disk drives (ATA, SATA, BSAS, FSAS, MSATA) by uniform rotational speed (RPM). If you set this option to off, Data ONTAP always selects disks with the same RPM when creating new aggregates or when adding disks to existing aggregates using these disk types. If you set this option to on, Data ONTAP does not differentiate between these disk types based on rotational speed. For example, Data ONTAP might use both 5400 RPM and 7200 RPM disks in the same aggregate. The default value is on.

raid.mix.hdd.rpm.performance

This option controls separation of performance-based hard disk drives (SAS, FCAL) by uniform rotational speed (RPM). If you set this option to off, Data ONTAP always selects disks with the same RPM when creating new aggregates or when adding disks to existing aggregates using these disk types. If you set this option to on, Data ONTAP does not differentiate between these disk types based on rotational speed. For example, Data ONTAP might use both 10K RPM and 15K RPM disks in the same aggregate. The default value is off.

raid.reconstruct.perf_impact

This option sets the overall performance impact of RAID reconstruction. When the CPU and disk bandwidth are not consumed by serving clients, RAID reconstruction consumes as much bandwidth as it needs. If the serving of clients is already consuming most or all of the CPU and disk bandwidth, this option allows control over the CPU and disk bandwidth that can be taken away for reconstruction, and thereby enables control over the negative performance impact on the serving of clients. As the value of this option is increased, the speed of reconstruction also increases. The possible values are low, medium, and high. The default value is medium. When mirror resync and reconstruction are running at the same time, the system does not distinguish between their separate resource consumption on shared resources (like CPU or a shared disk). In this case, the combined resource utilization of these operations is limited to the maximum resource entitlement for individual operations.
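The shared-resource rule in the paragraph above (the combined utilization of concurrent operations is capped at the maximum individual entitlement, not the sum) can be sketched as follows. The percentage values are invented for illustration; the real entitlements are Data ONTAP internals and are not documented here:

```python
# Hypothetical entitlement per impact level; the actual percentages
# used by Data ONTAP are internal and not documented in this page.
ENTITLEMENT = {"low": 10, "medium": 30, "high": 60}

def combined_cap(reconstruct_impact: str, resync_impact: str) -> int:
    """When reconstruction and mirror resync run concurrently on shared
    resources, their combined utilization is limited to the maximum
    individual entitlement, rather than the sum of both entitlements."""
    return max(ENTITLEMENT[reconstruct_impact], ENTITLEMENT[resync_impact])
```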

raid.resync.num_concurrent_ios_per_rg

This option changes the duration of a resync by modifying the number of concurrent resync I/Os in progress for each RAID group being resynced. The legacy value, which is also the default, is 1. As the value of this option is increased, the speed of resync increases, which has a negative performance impact on the serving of clients.

raid.resync.perf_impact

This option sets the overall performance impact of RAID mirror resync (whether started automatically by the system or explicitly by an operator-issued command). When the CPU and disk bandwidth are not consumed by serving clients, a resync operation consumes as much bandwidth as it needs. If the serving of clients is already consuming most or all of the CPU and disk bandwidth, this option allows control over the CPU and disk bandwidth that can be taken away for resync operations, and thereby enables control over the negative performance impact on the serving of clients. As the value of this option is increased, the speed of resync also increases. The possible values are low, medium, and high. The default value is medium. When RAID mirror resync and reconstruction are running at the same time, the system does not distinguish between their separate resource consumption on shared resources (like CPU or a shared disk). In this case, the combined resource utilization of these operations is limited to the maximum resource entitlement for individual operations.

raid.rpm.ata.enable

This option is obsolete. Use option raid.mix.hdd.rpm.capacity instead.

raid.rpm.fcal.enable

This option is obsolete. Use option raid.mix.hdd.rpm.performance instead.

raid.scrub.perf_impact

This option sets the overall performance impact of RAID scrubbing (whether started automatically or manually). When the CPU and disk bandwidth are not consumed by serving clients, scrubbing consumes as much bandwidth as it needs. If the serving of clients is already consuming most or all of the CPU and disk bandwidth, this option allows control over the CPU and disk bandwidth that can be taken away for scrubbing, and thereby enables control over the negative performance impact on the serving of clients. As the value of this option is increased, the speed of scrubbing also increases. The possible values for this option are low, medium, and high. The default value is low. When scrub and mirror verify are running at the same time, the system does not distinguish between their separate resource consumption on shared resources (like CPU or a shared disk). In this case, the combined resource utilization of these operations is limited to the maximum resource entitlement for individual operations.

raid.scrub.schedule

This option specifies the weekly schedule (day, time, and duration) for scrubs started automatically. On a non-AFF system, the default schedule is daily at 1 a.m. for a duration of 4 hours, except on Sunday when it is 12 hours. On an AFF system, the default schedule is weekly at 1 a.m. on Sunday for a duration of 6 hours. If an empty string ("") is specified as an argument, the previous scrub schedule is deleted and the default schedule is restored. One or more schedules can be specified using this option. The syntax is duration[h|m]@weekday@start_time,[duration[h|m]@weekday@start_time,...] where duration is the time period for which the scrub operation is allowed to run, in hours or minutes ('h' or 'm' respectively).

weekday is the day on which the scrub is scheduled to start. The valid values are sun, mon, tue, wed, thu, fri, and sat.

start_time is the time at which the scrub is scheduled to start, specified in 24-hour format. Only the hour (0-23) needs to be specified.

For example, options raid.scrub.schedule 240m@tue@2,8h@sat@22 causes scrub to start every Tuesday at 2 a.m. for 240 minutes, and every Saturday at 10 p.m. for 480 minutes.
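The schedule syntax above can be checked mechanically. Below is a hypothetical parser for the duration[h|m]@weekday@start_time format, written in Python purely for illustration; it is not part of Data ONTAP, and names such as parse_scrub_schedule are invented:

```python
import re

# One schedule entry: digits + 'h' or 'm', '@', a weekday, '@', an hour.
_ENTRY = re.compile(r"^(\d+)([hm])@(sun|mon|tue|wed|thu|fri|sat)@(\d{1,2})$")

def parse_scrub_schedule(schedule: str):
    """Parse a raid.scrub.schedule string such as '240m@tue@2,8h@sat@22'.

    Returns a list of (duration_minutes, weekday, start_hour) tuples.
    Raises ValueError on malformed entries. Illustrative only; this is
    not the actual Data ONTAP implementation.
    """
    entries = []
    for part in schedule.split(","):
        m = _ENTRY.match(part.strip())
        if m is None:
            raise ValueError(f"bad schedule entry: {part!r}")
        value, unit, weekday, hour = m.groups()
        if int(hour) > 23:
            raise ValueError(f"start_time must be 0-23, got {hour}")
        minutes = int(value) * (60 if unit == "h" else 1)
        entries.append((minutes, weekday, int(hour)))
    return entries

# '240m@tue@2,8h@sat@22' -> [(240, 'tue', 2), (480, 'sat', 22)]
```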

raid.timeout

This option sets the time, in hours, that the system will run after a single disk failure in a RAID4 group or a two-disk failure in a RAID-DP group has caused the system to go into degraded mode or double degraded mode, respectively, or after an NVRAM battery failure has occurred. The default is 24, the minimum acceptable value is 0, and the largest acceptable value is 4,294,967,295. If the raid.timeout option is specified when the system is in degraded mode or in double degraded mode, the timeout is set to the value specified and the timeout is restarted. If the value specified is 0, automatic system shutdown is disabled.

raid.verify.perf_impact

This option sets the overall performance impact of RAID mirror verify. When the CPU and disk bandwidth are not consumed by serving clients, a verify operation consumes as much bandwidth as it needs. If the serving of clients is already consuming most or all of the CPU and disk bandwidth, this option allows control over the CPU and disk bandwidth that can be taken away for verify, and thereby enables control over the negative performance impact on the serving of clients. As you increase the value of this option, the verify speed also increases. The possible values are low, medium, and high. The default value is low. When scrub and mirror verify are running at the same time, the system does not distinguish between their separate resource consumption on shared resources (like CPU or a shared disk). In this case, the combined resource utilization of these operations is limited to the maximum resource entitlement for individual operations.