volume modify
Modify volume attributes
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The volume modify command can be used to modify the following attributes of a volume:
- Size
- State (online, offline, restricted, force-online or force-offline)
- Export policy
- User ID
- Group ID
- Security style (all volume types: UNIX mode bits, CIFS ACLs, or mixed NFS and CIFS permissions)
- Default UNIX permissions for files on the volume
- Whether the junction path is active
- Comment
- Volume nearly full threshold percent
- Volume full threshold percent
- Maximum size for autosizing
- Minimum size for autosize
- Grow used space threshold percentage for autosize
- Shrink used space threshold percentage for autosize
- Whether autosizing is enabled
- Current mode of operation of volume autosize
- Reset the autosize values to their defaults
- Total number of files for user-visible data permitted on the volume
- Space guarantee style (none or volume)
- Space SLO type (none, thick or semi-thick)
- Snapshot policy
- Use logical space reporting
- Use logical space enforcement
- Tiering object tags
- Convert ucode
- Whether the volume's total number of files will be set to the highest possible value
- Caching policy
- Cache retention priority
- Tiering minimum cooling days
- Cloud retrieval policy
- Preserve unlink
You can use the volume move command to change a volume's aggregate or node. You can use the volume rename command to change a volume's name. You can use the volume make-vsroot command to make a volume the root volume of its Vserver.
You can change additional volume attributes by using this command at the advanced privilege level or higher.
Parameters
-vserver <vserver name>
- Vserver Name-
This specifies the Vserver on which the volume is located. If only one data Vserver exists, you do not need to specify this parameter. Although node Vservers are not displayed when using <Tab> completion, this parameter supports node Vservers for modifying the root volume of the specified node Vserver.
-volume <volume name>
- Volume Name-
This specifies the volume that is to be modified.
[-size {<integer>[KB|MB|GB|TB|PB]}]
- Volume Size-
This optionally specifies the new size of the volume. The size is specified as a number followed by a unit designation: k (kilobytes), m (megabytes), g (gigabytes), or t (terabytes). If the unit designation is not specified, bytes are used as the unit, and the specified number is rounded up to the nearest 4 KB. A relative rather than absolute size change can be specified by adding + or - before the given size: for example, specifying +30m adds 30 megabytes to the volume's current size. The minimum size for a volume is 20 MB (the default setting). The volume's maximum size is limited by the platform maximum. If the volume's guarantee is set to volume, the volume's maximum size can also be limited by the available space in the hosting aggregate. If the volume's guarantee is currently disabled, its size cannot be increased.
[-state {online|restricted|offline|force-online|force-offline|mixed}]
- Volume State-
This optionally specifies the volume's state. A restricted volume does not provide client access to data but is available for administrative operations.
The mixed state applies to FlexGroups only and cannot be specified as a target state.
[-policy <text>]
- Export Policy-
This optionally specifies the name of the export policy associated with the volume. For information on export policy, see the documentation for the vserver export-policy create command. FlexGroups do not support export policies that allow NFSv4 protocol access.
[-user <user name>]
- User ID-
This optionally specifies the name or ID of the user that is set as the owner of the volume's root.
[-group <group name>]
- Group ID-
This optionally specifies the name or ID of the group that is set as the owner of the volume's root.
[-security-style <security style>]
- Security Style-
This optionally specifies the security style for the volume. Possible values include unix (for UNIX mode bits), ntfs (for CIFS ACLs), mixed (for mixed NFS and CIFS permissions) and unified (for mixed NFS and CIFS permissions with unified ACLs). Regardless of the security style, both NFS and CIFS clients can read from and write to the volume.
[-unix-permissions <unix perm>]
- UNIX Permissions-
This optionally specifies the default UNIX permissions for files on the volume. Specify UNIX permissions either as a four-digit octal value (for example, 0700) or in the style of the UNIX ls command (for example, -rwxr-x---). For information on UNIX permissions, see the UNIX or Linux documentation. The default setting is 0755 or -rwxr-xr-x.
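For example, a command along the following lines (the Vserver, volume, and owner IDs shown are purely illustrative) changes the owner of the volume's root and its default UNIX permissions in one step:
cluster1::> volume modify -vserver vs0 -volume vol1 -user 0 -group 1 -unix-permissions 0770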
[-junction-active {true|false}]
- Junction Active (privilege: advanced)-
This optionally specifies whether the volume's junction path is active. The default setting is true. If the junction is inactive, the volume does not appear in the Vserver's namespace.
[-comment <text>]
- Comment-
This optionally specifies a comment for the volume.
[-space-nearly-full-threshold-percent <percent>]
- Volume Nearly Full Threshold Percent-
This optionally specifies the percentage at which the volume is considered nearly full, and above which an EMS warning will be generated. The default value is 95%. The maximum value for this option is 99%. Setting this threshold to 0 disables the volume nearly full space alerts.
[-space-full-threshold-percent <percent>]
- Volume Full Threshold Percent-
This optionally specifies the percentage at which the volume is considered full, and above which a critical EMS error will be generated. The default value is 98%. The maximum value for this option is 100%. Setting this threshold to 0 disables the volume full space alerts.
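As an illustration (the Vserver and volume names are hypothetical), both alert thresholds could be lowered with a single command:
cluster1::> volume modify -vserver vs0 -volume vol1 -space-nearly-full-threshold-percent 85 -space-full-threshold-percent 90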
- {
[-max-autosize {<integer>[KB|MB|GB|TB|PB]}]
- Maximum Autosize -
This parameter allows the user to specify the maximum size to which a volume can grow. The default for volumes is 120% of the volume size. If the value of this parameter is invalidated by manually resizing the volume, the maximum size is reset to 120% of the volume size. The value for -max-autosize cannot be set larger than the platform-dependent maximum volume size. If you specify a larger value, the value of -max-autosize is automatically reset to the supported maximum without returning an error.
[-min-autosize {<integer>[KB|MB|GB|TB|PB]}]
- Minimum Autosize-
This parameter specifies the minimum size to which the volume can automatically shrink. If the volume was created with the grow_shrink autosize mode enabled, then the default minimum size is equal to the initial volume size. If the value of the -min-autosize parameter is invalidated by a manual volume resize, the minimum size is reset to the volume size.
[-autosize-grow-threshold-percent <percent>]
- Autosize Grow Threshold Percentage-
This parameter specifies the used space threshold for the automatic growth of the volume. When the volume’s used space becomes greater than this threshold, the volume will automatically grow unless it has reached the maximum autosize.
[-autosize-shrink-threshold-percent <percent>]
- Autosize Shrink Threshold Percentage-
This parameter specifies the used space threshold for the automatic shrinking of the volume. When the amount of used space in the volume drops below this threshold, the volume will shrink unless it has reached the specified minimum size.
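For example, assuming a hypothetical volume vol1 on Vserver vs0 that already uses an autosize mode which honors these thresholds, both values could be adjusted together:
cluster1::> volume modify -vserver vs0 -volume vol1 -autosize-grow-threshold-percent 90 -autosize-shrink-threshold-percent 60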
[-autosize-mode {off|grow|grow_shrink}]
- Autosize Mode-
This parameter specifies the autosize mode for the volume. The supported autosize modes are:
- off - The volume will not grow or shrink in size in response to the amount of used space.
- grow - The volume will automatically grow when used space in the volume is above the grow threshold.
- grow_shrink - The volume will grow or shrink in size in response to the amount of used space.
By default, -autosize-mode is off for new volumes, except for DP mirrors, for which the default value is grow_shrink. The grow and grow_shrink modes work together with Snapshot autodelete to automatically reclaim space when a volume is about to become full. The volume parameter -space-mgmt-try-first controls the order in which these two space reclamation policies are attempted.
- |
[-autosize-reset <true>]
- Autosize Reset } -
This allows the user to reset the values of autosize, max-autosize, min-autosize, autosize-grow-threshold-percent, autosize-shrink-threshold-percent and autosize-mode to their default values. For example, the max-autosize value will be set to 120% of the current size of the volume.
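For instance (the Vserver and volume names are illustrative), the autosize settings could be returned to their defaults as follows:
cluster1::> volume modify -vserver vs0 -volume vol1 -autosize-reset true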
[-files <integer>]
- Total Files (for user-visible data)-
This optionally specifies the total number of files for user-visible data permitted on the volume. This value can be raised or lowered. Raising the total number of files does not immediately cause additional disk space to be used to track files. Instead, as more files are created on the volume, the system dynamically increases the number of disk blocks that are used to track files. The space assigned to track files is never freed, and the files value cannot be decreased below the current number of files that can be tracked within the assigned space for the volume.
[-files-set-maximum {true|false}]
- Set Total Files (for user-visible data) to the Highest Value that the Volume can Hold (privilege: advanced)-
This optionally specifies whether the volume's total number of files will be set to the highest possible value. If true, the volume's total number of files is set to the highest value that the volume can hold. Only true is a valid input; false is not permitted. To modify the total number of files to a specific value, use the -files parameter.
[-maxdir-size {<integer>[KB|MB|GB|TB|PB]}]
- Maximum Directory Size (privilege: advanced)-
This optionally specifies the maximum directory size. The default maximum directory size is model-dependent, and optimized for the size of system memory. You can increase it for a specific volume by using this option, but doing so could impact system performance. If you need to increase the maximum directory size, work with customer support.
- {
[-space-slo {none|thick|semi-thick}]
- Space SLO -
This optionally specifies the Service Level Objective for space management (the space SLO setting) for the volume. The space SLO value is used to enforce volume settings so that sufficient space is set aside to meet the space SLO. The default setting is none. There are three supported values: none, thick and semi-thick.
- none: The value of none does not provide any guarantee for overwrites or enforce any restrictions. It should be used if the admin plans to manually manage space consumption in the volume and aggregate, and out of space errors.
- thick: The value of thick guarantees that hole fills and overwrites to space-reserved files in this volume will always succeed by reserving space. To meet this space SLO, the following volume-level settings are automatically set and cannot be modified:
  - Space Guarantee: volume - The entire size of the volume is preallocated in the aggregate. Changing the volume's space-guarantee type is not supported.
  - Fractional Reserve: 100 - 100% of the space required for overwrites is reserved. Changing the volume's fractional-reserve setting is not supported.
- semi-thick: The value of semi-thick is a best-effort attempt to ensure that overwrites succeed by restricting the use of features that share blocks and auto-deleting backups and Snapshot copies in the volume. To meet this space SLO, the following volume-level settings are automatically set and cannot be modified:
  - Space Guarantee: volume - The entire size of the volume is preallocated in the aggregate. Changing the volume's space-guarantee type is not supported.
  - Fractional Reserve: 0 - No space will be reserved for overwrites by default. However, changing the volume's fractional-reserve setting is supported. Changing the setting to 100 means that 100% of the space required for overwrites is reserved.
  - Snapshot Autodelete: enabled - Automatic deletion of Snapshot copies is enabled to reclaim space. To ensure that the overwrites can be accommodated when the volume reaches threshold capacity, the following volume Snapshot autodelete parameters are set automatically to the specified values and cannot be modified:
    - enabled: true
    - commitment: destroy
    - trigger: volume
    - defer-delete: none
    - destroy-list: vol_clone, lun_clone, file_clone, cifs_share
In addition, with a value of semi-thick, the following technologies are not supported for the volume:
- File Clones with autodelete disabled: Only full file clones of files, LUNs or NVMe namespaces that can be autodeleted can be created in the volume. The use of autodelete for file clone create is required.
- Partial File Clones: Only full file clones of files or LUNs that can be autodeleted can be created in the volume. The use of range for file clone create is not supported.
- Volume Efficiency: Enabling volume efficiency is not supported to allow autodeletion of Snapshot copies.
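As a sketch (the Vserver and volume names are hypothetical), the space SLO could be set to semi-thick as follows:
cluster1::> volume modify -vserver vs0 -volume vol1 -space-slo semi-thick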
- |
[-s, -space-guarantee {none|volume}]
- Space Guarantee Style -
This option controls whether the volume is guaranteed some amount of space in the aggregate. The default setting for the volumes on All Flash FAS systems is none, otherwise the default setting is volume. The file setting is no longer supported. Volume guaranteed means that the entire size of the volume is preallocated. The none value means that no space is preallocated, even if the volume contains space-reserved files or LUNs; if the aggregate is full, space is not available even for space-reserved files and LUNs within the volume. Setting this parameter to none enables you to provision more storage than is physically present in the aggregate (thin provisioning). When you use thin provisioning for a volume, it can run out of space even if it has not yet consumed its nominal size and you should carefully monitor space utilization to avoid unexpected errors due to the volume running out of space. For flexible root volumes, to ensure that system files, log files, and cores can be saved, the space-guarantee must be volume. This is to ensure support of the appliance by customer support, if a problem occurs. Disk space is preallocated when the volume is brought online and, if not used, returned to the aggregate when the volume is brought offline. It is possible to bring a volume online even when the aggregate has insufficient free space to preallocate to the volume. In this case, no space is preallocated, just as if the none option had been selected. In this situation, the vol options and vol status commands display the actual value of the space-guarantee option, but indicate that it is disabled.
[-fractional-reserve <percent>]
- Fractional Reserve }-
This option changes the amount of space reserved for overwrites of reserved objects (LUNs, files) in a volume. The option is set to 100 by default with guarantee set to volume. A setting of 100 means that 100% of the required reserved space is actually reserved so the objects are fully protected for overwrites. The value is set to 0 by default with guarantee set to none. The value can be either 0 or 100 when guarantee is set to volume or none. Using a value of 0 indicates that no space will be reserved for overwrites. This returns the extra space to the available space for the volume, decreasing the total amount of space used. However, this does leave the protected objects in the volume vulnerable to out of space errors. If the percentage is set to 0%, the administrator must monitor the space usage on the volume and take corrective action.
[-min-readahead {true|false}]
- Minimum Read Ahead (privilege: advanced)-
This optionally specifies whether minimum readahead is used on the volume. The default setting is false.
[-atime-update {true|false}]
- Access Time Update Enabled (privilege: advanced)-
This optionally specifies whether the access time on inodes is updated when a file is read. The default setting is true.
[-snapdir-access {true|false}]
- Snapshot Directory Access Enabled-
This optionally specifies whether clients have access to .snapshot directories. The default setting is true.
[-percent-snapshot-space <percent>]
- Space Reserved for Snapshot Copies-
This optionally specifies the amount of space that is reserved on the volume for Snapshot copies. The default setting is 5 percent.
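For example (the Vserver and volume names are illustrative), the Snapshot reserve could be raised to 10 percent:
cluster1::> volume modify -vserver vs0 -volume vol1 -percent-snapshot-space 10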
[-snapshot-policy <snapshot policy>]
- Snapshot Policy-
This optionally specifies the Snapshot policy for the volume. The default is the Snapshot policy for all volumes on the SVM, as specified by the -snapshot-policy parameter of the vserver create and vserver modify commands. When replacing a Snapshot policy on a volume, any existing Snapshot copies on the volume that do not match any of the prefixes of the new Snapshot policy will not be deleted. This is because the Snapshot scheduler will not clean up older Snapshot copies if the prefixes do not match. After the new Snapshot policy takes effect, depending on the new retention count, any existing Snapshot copies that continue to use the same prefixes might be deleted. For example, your existing Snapshot policy is set up to retain 150 weekly Snapshot copies and you create a new Snapshot policy that uses the same prefixes but changes the retention count to 50 Snapshot copies. After the new Snapshot policy takes effect, it will start deleting older Snapshot copies until there are only 50 remaining.
[-language <Language code>]
- Language-
Use this parameter to change the volume language from *.UTF-8 to utf8mb4. To change the language of a volume, contact technical support.
[-foreground {true|false}]
- Foreground Process-
This specifies whether the operation runs in the foreground. The default setting is true (the operation runs in the foreground). When set to true, the command will not return until the operation completes. This parameter applies only to FlexGroups. For FlexVol volumes, the command always runs in the foreground.
[-nvfail {on|off}]
- NVFAIL Option-
Setting this optional parameter to true causes the volume to set the in-nvfailed-state flag to true, if committed writes to the volume are lost due to a failure. The in-nvfailed-state flag fences the volume from further data access and prevents possible corruption of the application data. Without specifying a value, this parameter is automatically set to false.
[-in-nvfailed-state {true|false}]
- Volume's NVFAIL State (privilege: advanced)-
This field is automatically set to true on a volume when committed writes to the volume are possibly lost due to a failure, and the volume has the nvfail option enabled. With this field set, the client access to the volume is fenced to protect against possible corruptions that result from accessing stale data. The administrator needs to take appropriate recovery actions to recover the volume from the possible data loss. After the recovery is completed, the administrator can clear this field and restore the client access to the volume. This field can be cleared using the CLI but it cannot be set.
[-dr-force-nvfail {on|off}]
- Force NVFAIL on MetroCluster Switchover-
Setting this optional parameter to true on a volume causes the MetroCluster switchover operation to set the in-nvfailed-state flag to true on that volume. The in-nvfailed-state flag prevents further data access to the volume. The default value is false. This parameter has no effect on a negotiated or an automatic switchover.
[-filesys-size-fixed {true|false}]
- Is File System Size Fixed-
This parameter is only applicable for a DP relationship. This option causes the file system to remain the same size and not grow or shrink when a SnapMirrored volume relationship is broken, or when a volume add is performed on it. It is automatically set to true when a volume becomes a SnapMirrored volume. It stays set to true after the snapmirror break command is issued for the volume. This allows a volume to be SnapMirrored back to the source without needing to add disks to the source volume. If the volume is a flexible volume and the volume size is larger than the file system size, setting this option to false forces the volume size to equal the file system size. The default setting is false.
[-extent-enabled {off|on|space-optimized}]
- (DEPRECATED)-Extent Option-
This parameter has been deprecated and may be removed in a future release of Data ONTAP. Setting this option to `on` or `space-optimized` enables extents in the volume. This causes application writes to be written in the volume as a write of a larger group of related data blocks called an extent. Using extents may help workloads that perform many small random writes followed by large sequential reads. However, using extents may increase the amount of disk operations performed on the controller, so this option should only be used where this trade-off is desired. If the option is set to `space-optimized` then the reallocation update will not duplicate blocks from Snapshot copies into the active file system, and will result in conservative space utilization. Using `space-optimized` may be useful when the volume has Snapshot copies or is a SnapMirror source, when it can reduce the storage used in the volume and the amount of data that SnapMirror needs to move on the next update. The `space-optimized` value can result in degraded read performance of Snapshot copies. The default value is `off` ; extents are not used.
[-space-mgmt-try-first {volume_grow|snap_delete}]
- Primary Space Management Strategy-
A flexible volume can be configured to automatically reclaim space in case the volume is about to run out of space, by either increasing the size of the volume using autogrow or deleting Snapshot copies in the volume using Snapshot autodelete. If this option is set to volume_grow the system will try to first increase the size of volume before deleting Snapshot copies to reclaim space. If the option is set to snap_delete the system will first automatically delete Snapshot copies and in case of failure to reclaim space will try to grow the volume.
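For example (the Vserver and volume names are hypothetical), a volume could be told to try Snapshot autodelete before growing:
cluster1::> volume modify -vserver vs0 -volume vol1 -space-mgmt-try-first snap_delete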
[-read-realloc {off|on|space-optimized}]
- Read Reallocation Option-
Setting this option to on or space-optimized enables read reallocation in the volume. This results in the optimization of file layout by writing some blocks to a new location on disk. The layout is updated only after the blocks have been read because of a user read operation, and only when updating their layout will provide better read performance in the future. Using read reallocation may help workloads that perform a mixture of random writes and large sequential reads. If the option is set to space-optimized then the reallocation update will not duplicate blocks from Snapshot copies into the active file system, and will result in conservative space utilization. Using space-optimized may be useful when the volume has Snapshot copies or is a SnapMirror source, when it can reduce the storage used in the volume and the amount of data that SnapMirror needs to move on the next update. The space-optimized value can result in degraded read performance of Snapshot copies. The default value is off.
[-sched-snap-name {create-time|ordinal}]
- Naming Scheme for Automatic Snapshot Copies-
This option specifies the naming convention for automatic Snapshot copies. If set to create-time, automatic Snapshot copies are named using the format <schedule_name>.yyyy-mm-dd_hhmm. Example: "hourly.2010-04-01_0831". If set to ordinal, only the latest automatic Snapshot copy is named using the format <schedule_name>.<n>. Example: "hourly.0". Older automatic Snapshot copies are named using the format <schedule_name>.yyyy-mm-dd_hhmm. Example: "hourly.2010-04-01_0831".
- {
[-qos-policy-group <text>]
- QoS Policy Group Name -
This optional parameter specifies which QoS policy group to apply to the volume. This policy group defines measurable service level objectives (SLOs) that apply to the storage objects with which the policy group is associated. If you do not assign a policy group to a volume, the system will not monitor and control the traffic to it. To remove this volume from a policy group, enter the reserved keyword "none".
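For example (the Vserver and volume names are illustrative), a volume could be removed from its current QoS policy group by using the reserved keyword none:
cluster1::> volume modify -vserver vs0 -volume vol1 -qos-policy-group none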
- |
[-qos-adaptive-policy-group <text>]
- QoS Adaptive Policy Group Name } -
This optional parameter specifies which QoS adaptive policy group to apply to the volume. This policy group defines measurable service level objectives (SLOs) and Service Level Agreements (SLAs) that adjust based on the volume allocated space or used space. To remove this volume from an adaptive policy group, enter the reserved keyword "none".
[-caching-policy <text>]
- Caching Policy Name-
This parameter specifies the caching policy to apply to the volume. A caching policy defines how the system caches this volume's data in a Flash Pool aggregate or Flash Cache modules.
Both metadata and user data are eligible for caching. Metadata consists of directories, indirect blocks and system metafiles. They are eligible for read caching only. When a random write pattern is detected on user data, the first such write is eligible for read caching while all subsequent overwrites are eligible for write caching. The available caching policies are:
- none - Does not cache any user data or metadata blocks.
- auto - Read caches all metadata and randomly read user data blocks, and write caches all randomly overwritten user data blocks.
- meta - Read caches only metadata blocks.
- random_read - Read caches all metadata and randomly read user data blocks.
- random_read_write - Read caches all metadata, randomly read and randomly written user data blocks.
- all_read - Read caches all metadata, randomly read and sequentially read user data blocks.
- all_read_random_write - Read caches all metadata, randomly read, sequentially read and randomly written user data.
- all - Read caches all data blocks read and written. It does not do any write caching.
- noread-random_write - Write caches all randomly overwritten user data blocks. It does not do any read caching.
- meta-random_write - Read caches all metadata and write caches randomly overwritten user data blocks.
- random_read_write-random_write - Read caches all metadata, randomly read and randomly written user data blocks. It also write caches randomly overwritten user data blocks.
- all_read-random_write - Read caches all metadata, randomly read and sequentially read user data blocks. It also write caches randomly overwritten user data blocks.
- all_read_random_write-random_write - Read caches all metadata, randomly read, sequentially read and randomly written user data. It also write caches randomly overwritten user data blocks.
- all-random_write - Read caches all data blocks read and written. It also write caches randomly overwritten user data blocks.
Note that in a caching-policy name, a hyphen (-) separates read and write policies. Default caching-policy is auto.
[-is-autobalance-eligible {true|false}]
- Is Eligible for Auto Balance Aggregate (privilege: advanced)-
If the Auto Balance feature is enabled, this parameter specifies whether the volume might be considered for system workload balancing. When set to true, the Auto Balance Aggregate feature might recommend moving this volume to another aggregate. The default value is true.
[-max-constituent-size {<integer>[KB|MB|GB|TB|PB]}]
- Maximum size of a FlexGroup Constituent (privilege: advanced)-
This optionally specifies the maximum size of a FlexGroup constituent. The default value is determined by checking the maximum FlexVol size setting on all nodes used by the FlexGroup. The smallest value found is selected as the default for the -max-constituent-size for the FlexGroup. This parameter applies to FlexGroups only.
[-vserver-dr-protection {protected|unprotected}]
- Vserver DR Protection-
This optionally specifies whether the volume should be protected by Vserver level SnapMirror. This parameter is applicable only if the Vserver is the source of a Vserver level SnapMirror relationship.
[-is-space-reporting-logical {true|false}]
- Logical Space Reporting-
This optionally specifies whether to report space logically on the volume. When space is reported logically, ONTAP reports the volume space such that all the physical space saved by the storage efficiency features is also reported as used. The default setting is false.
[-is-space-enforcement-logical {true|false}]
- Logical Space Enforcement-
This optionally specifies whether to perform logical space accounting on the volume. When space is enforced logically, ONTAP enforces volume settings such that all the physical space saved by the storage efficiency features will be calculated as used. The default setting is false.
[-tiering-policy <Tiering Policy>]
- Volume Tiering Policy-
This optional parameter specifies the tiering policy to apply to the volume. This policy determines whether the user data blocks of a volume in a FabricPool will be tiered to the cloud tier when they become cold. FabricPool combines Flash (performance tier) with an object store (cloud tier) into a single aggregate. The temperature of a volume block increases if it is accessed frequently, and it decreases when it is not.
The available tiering policies are:
- snapshot-only - This policy allows tiering of only the volume Snapshot copies not associated with the active file system. The default cooling period is 2 days. The -tiering-minimum-cooling-days parameter can be used to override the default.
- auto - This policy allows tiering of both Snapshot copy data and active file system user data to the cloud tier. The default cooling period is 31 days. The -tiering-minimum-cooling-days parameter can be used to override the default.
- none - Volume blocks will not be tiered to the cloud tier.
- all - This policy allows tiering of both Snapshot copy data and active file system user data to the cloud tier as soon as possible without waiting for a cooling period. On DP volumes, this policy allows all transferred user data blocks to start in the cloud tier.
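For example, assuming a hypothetical volume vol1 on Vserver vs0 that resides on a FabricPool, the tiering policy could be changed to auto:
cluster1::> volume modify -vserver vs0 -volume vol1 -tiering-policy auto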
[-cloud-retrieval-policy {default|on-read|never|promote}]
- Volume Cloud Retrieval Policy (privilege: advanced)-
This optional parameter specifies the cloud retrieval policy for the volume. This policy determines which tiered out blocks to retrieve from the capacity tier to the performance tier.
The available cloud retrieval policies are:
- default - This policy retrieves tiered data based on the underlying tiering policy. If the tiering policy is 'auto', tiered data is retrieved only for random client driven data reads. If the tiering policy is 'none' or 'snapshot-only', tiered data is retrieved for random and sequential client driven data reads. If the tiering policy is 'all', tiered data is not retrieved.
- on-read - This policy retrieves tiered data for all client driven data reads.
- never - This policy never retrieves tiered data.
- promote - This policy retrieves all eligible tiered data automatically during the next scheduled scan. It is only supported when the tiering policy is 'none' or 'snapshot-only'. If the tiering policy is 'snapshot-only', the only data brought back is the data in the AFS. Data that is only in a Snapshot copy stays in the cloud.
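As an illustration (the Vserver and volume names are hypothetical), at the advanced privilege level the cloud retrieval policy could be set to on-read:
cluster1::*> volume modify -vserver vs0 -volume vol1 -cloud-retrieval-policy on-read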
[-tiering-minimum-cooling-days <integer>]
- Volume Tiering Minimum Cooling Days (privilege: advanced)-
This parameter specifies the minimum number of days that user data blocks of the volume must be cooled before they can be considered cold and tiered out to the cloud tier. For volumes hosted on FabricPools, this parameter is used for tiering purposes and does not affect the reporting of inactive data. For volumes hosted on non-FabricPools, this parameter affects the cooling window used for reporting inactive data. The value specified should be greater than the frequency with which applications in the volume shift between different sets of data. Valid values are between 2 and 183. This parameter cannot be set when volume tiering policy is either "none" or "all".
[-tiering-object-tags <text>,…]
- Tags to be Associated with Objects Stored on a FabricPool-
This optional parameter specifies tiering object tags to be associated with objects stored on a FabricPool.
Object tags should follow these rules:
- Each object tag should be a key-value pair separated by '='.
- Multiple tags should be separated by ','. Overall, tags should be in the format key1=value1[,key2=value2,…].
- All tags of a volume must have a unique key.
- Each tag key should start with either a letter or an underscore and should contain only alphanumeric characters and underscores. The maximum allowed key length is 127 characters.
- Each tag value should be a maximum of 127 characters consisting of only alphanumeric characters and underscores.
- A maximum of 4 object tags is allowed per volume.
- To remove the existing tags of a volume, specify an empty list as the parameter value.
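For example (the Vserver and volume names are hypothetical, as are the tag keys and values), two object tags could be assigned as follows:
cluster1::> volume modify -vserver vs0 -volume vol1 -tiering-object-tags project=alpha,team=backup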
[-anti-ransomware-state {disabled|enabled|dry-run|paused|dry-run-paused|enable-paused|disable-in-progress}]
- Anti-ransomware State-
Use this parameter to modify the anti-ransomware-state of the volume. The following values are supported:
- enabled - the feature is enabled.
- disabled - the feature is disabled.
- dry-run - the feature is in evaluation mode.
- paused - the feature is in pause mode.
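For example (the Vserver and volume names are illustrative), anti-ransomware could be placed in evaluation mode on a volume:
cluster1::> volume modify -vserver vs0 -volume vol1 -anti-ransomware-state dry-run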
[-granular-data {disabled|enabled}]
- Granular data-
This specifies whether data storage on a volume is granular or not. Once this property is enabled on a volume, it cannot be disabled on that volume using this parameter. It can only be disabled on that volume by restoring a Snapshot copy. This setting may only be enabled on FlexGroups.
[-atime-update-period <integer>]
- Access Time Update Period (Seconds) (privilege: advanced)-
This parameter specifies the time period that must elapse between consecutive atime update events. Note: The value of this parameter is only used if -atime-update is enabled on the volume.
[-snapshot-locking-enabled {true|false}]
- Enable Snapshot Copy Locking-
This parameter specifies whether Snapshot copy locking is enabled, based on an independent compliance-clock time. Once this property is enabled, it cannot be disabled and the volume cannot be deleted until all locked Snapshot copies are past their expiry time.
[-is-large-size-enabled {true|false}]
- Are Large Size Volumes and Files Enabled-
This parameter specifies whether support for large Flexible volumes and large files is enabled. If this value is true, the maximum volume size for a Flexible volume will be 300TB and the maximum size of a single file will be 128TB. If this value is false, the maximum volume size for a Flexible volume will be 100TB and the maximum size of a single file will be 16TB. The default value is false.
[-is-preserve-unlink-enabled {true|false}]
- Is Preserve Unlink Enabled (privilege: advanced)-
This parameter specifies whether preserve unlink is enabled for NFSv4.1 shares on the volume. Once this feature is enabled, a file removal operation on a file with existing share locks will result in the file being moved to the trash directory. Clients will continue to be able to access the file using its file handle. The last close for that file will result in deletion of the file. The default value is false.
[-is-cloud-write-enabled {true|false}]
- Is Cloud Write Enabled (privilege: advanced)-
This parameter specifies whether cloud write is enabled on the volume. This feature is only available on volumes in FabricPools on FSx or Cloud Volumes ONTAP. The default value is false.
[-aggressive-readahead-mode {none|file_prefetch}]
- Aggressive readahead mode (privilege: advanced)-
This parameter specifies the aggressive readahead mode of the volume. When set to file_prefetch, on a file read, the system aggressively issues readaheads for all of the blocks in the file and retains those blocks in a cache for a finite period of time. This feature is only available on FabricPool volumes on FSx for ONTAP and Cloud Volumes ONTAP. The default value is none.
Examples
The following example modifies a volume named vol4 on a Vserver named vs0. The volume's export policy is changed to default_expolicy and its size is changed to 500 GB.
cluster1::> volume modify -vserver vs0 -volume vol4 -policy default_expolicy -size 500g
The following example modifies a volume named vol2. It enables autogrow and sets the maximum autosize to 500g.
cluster1::> volume modify -volume vol2 -autosize-mode grow -max-autosize 500g
The following example modifies a volume named vol2 to have a space guarantee of none.
cluster1::> volume modify -space-guarantee none -volume vol2
The following example modifies all volumes in Vserver vs0 to have a fractional reserve of 30%.
cluster1::> volume modify -fractional-reserve 30 -vserver vs0 *
The following example modifies a volume named vol2 to grow in size by 5 gigabytes.
cluster1::> volume modify -volume vol2 -size +5g
The following example modifies a volume named vol2 to have a different caching policy. The volume must be on a Flash Pool aggregate.
cluster1::> volume modify -volume vol2 -caching-policy none