Recommended ESXi host and other ONTAP settings

NetApp has developed a set of optimal ESXi host settings for both NFS and block protocols. Specific guidance is also provided for multipathing and HBA timeout settings to ensure proper behavior with ONTAP, based on NetApp and VMware internal testing.

These values are easily set using ONTAP tools for VMware vSphere: from the Summary dashboard, click Edit Settings in the Host Systems portlet, or right-click the host in vCenter and navigate to ONTAP tools > Set Recommended Values.

Here are the currently recommended host settings for the 9.8 through 9.13 releases.

ESXi Advanced Configuration

| Host Setting | NetApp Recommended Value | Reboot Required |
| --- | --- | --- |
| VMFS3.HardwareAcceleratedLocking | Keep default (1) | No |
| VMFS3.EnableBlockDelete | Keep default (0), but can be changed if needed. For more information, see VMware KB 2007427. | No |
| VMFS3.EnableVMFS6Unmap | Keep default (1). For more information, see VMware vSphere APIs: Array Integration (VAAI). | No |

NFS Settings

| Host Setting | NetApp Recommended Value | Reboot Required |
| --- | --- | --- |
| Net.TcpipHeapSize | vSphere 6.0 or later, set to 32. All other NFS configurations, set to 30. | Yes |
| Net.TcpipHeapMax | Set to 512MB for most vSphere 6.x releases. Set to default (1024MB) for 6.5U3, 6.7U3, and 7.0 or later. | Yes |
| NFS.MaxVolumes | vSphere 6.0 or later, set to 256. All other NFS configurations, set to 64. | No |
| NFS41.MaxVolumes | vSphere 6.0 or later, set to 256. | No |
| NFS.MaxQueueDepth (see note 1 below) | vSphere 6.0 or later, set to 128. | Yes |
| NFS.HeartbeatMaxFailures | Set to 10 for all NFS configurations. | No |
| NFS.HeartbeatFrequency | Set to 12 for all NFS configurations. | No |
| NFS.HeartbeatTimeout | Set to 5 for all NFS configurations. | No |
| SunRPC.MaxConnPerIP | vSphere 7.0 or later, set to 128. | No |

FC/FCoE Settings

| Host Setting | NetApp Recommended Value | Reboot Required |
| --- | --- | --- |
| Path selection policy | Set to RR (round robin) when FC paths with ALUA are used. Set to FIXED for all other configurations. Setting this value to RR helps provide load balancing across all active/optimized paths. The value FIXED is for older, non-ALUA configurations and helps prevent proxy I/O; in other words, it helps keep I/O from going to the other node of a high-availability (HA) pair in an environment that has Data ONTAP operating in 7-Mode. | No |
| Disk.QFullSampleSize | Set to 32 for all configurations. Setting this value helps prevent I/O errors. | No |
| Disk.QFullThreshold | Set to 8 for all configurations. Setting this value helps prevent I/O errors. | No |
| Emulex FC HBA timeouts | Use the default value. | No |
| QLogic FC HBA timeouts | Use the default value. | No |

iSCSI Settings

| Host Setting | NetApp Recommended Value | Reboot Required |
| --- | --- | --- |
| Path selection policy | Set to RR (round robin) for all iSCSI paths. Setting this value to RR helps provide load balancing across all active/optimized paths. | No |
| Disk.QFullSampleSize | Set to 32 for all configurations. Setting this value helps prevent I/O errors. | No |
| Disk.QFullThreshold | Set to 8 for all configurations. Setting this value helps prevent I/O errors. | No |

Note 1: The NFS advanced configuration option MaxQueueDepth may not work as intended when using VMware vSphere ESXi 7.0.1 or 7.0.2. See VMware KB 86331 for more information.
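
ONTAP tools applies these values for you, but if you want to audit or script a subset of them yourself, the following is a minimal sketch using pyVmomi (the VMware vSphere Python SDK). The vCenter address, credentials, and the particular options chosen are placeholders rather than part of the ONTAP tools implementation; confirm the values against the table above for your vSphere release.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details; replace with your own.
VCENTER = "vcenter.example.com"
USER = "administrator@vsphere.local"
PASSWORD = "***"

# A subset of the NFS rows from the table above (vSphere 6.0-and-later values).
DESIRED = {
    "NFS.MaxVolumes": 256,
    "NFS.HeartbeatMaxFailures": 10,
    "NFS.HeartbeatFrequency": 12,
    "NFS.HeartbeatTimeout": 5,
}

context = ssl._create_unverified_context()  # lab convenience only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opt_mgr = host.configManager.advancedOption
        # Current advanced settings for this host, keyed by option name.
        current = {opt.key: opt.value for opt in opt_mgr.setting}
        changes = [vim.option.OptionValue(key=key, value=value)
                   for key, value in DESIRED.items()
                   if current.get(key) != value]
        if changes:
            # Note: some advanced options are typed as 64-bit integers on the host;
            # if UpdateOptions rejects a value, reuse the existing OptionValue
            # object from opt_mgr.setting and change only its value.
            opt_mgr.UpdateOptions(changedValue=changes)
            print(f"{host.name}: updated {[c.key for c in changes]}")
        else:
            print(f"{host.name}: already compliant")
    view.Destroy()
finally:
    Disconnect(si)
```

Keep in mind that options marked "Reboot Required: Yes" in the table (for example Net.TcpipHeapSize and Net.TcpipHeapMax) only take full effect after a host reboot, so a script like this should be paired with a maintenance-mode workflow.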

ONTAP tools also specify certain default settings when creating ONTAP FlexVol volumes and LUNs:

| Setting | Default Value |
| --- | --- |
| Snapshot reserve (-percent-snapshot-space) | 0 |
| Fractional reserve (-fractional-reserve) | 0 |
| Access time update (-atime-update) | False |
| Minimum readahead (-min-readahead) | False |
| Scheduled snapshots | None |
| Storage efficiency | Enabled |
| Volume guarantee | None (thin provisioned) |
| Volume autosize | grow_shrink |
| LUN space reservation | Disabled |
| LUN space allocation | Enabled |
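
If you want to confirm that an existing datastore volume matches these defaults, the following is a minimal sketch that queries the ONTAP REST API with the Python requests library. The cluster address, credentials, and volume name are placeholders, and only a subset of the table is checked (snapshot reserve, volume guarantee, autosize mode, and snapshot policy); treat the exact field paths as assumptions to verify against the /api/storage/volumes reference for your ONTAP release.

```python
import requests

# Hypothetical cluster details; replace with your own.
CLUSTER = "https://cluster.example.com"
AUTH = ("admin", "***")
VOLUME = "vmware_ds01"

# Fields assumed to exist on the /api/storage/volumes object.
FIELDS = "name,guarantee.type,autosize.mode,space.snapshot.reserve_percent,snapshot_policy.name"

response = requests.get(
    f"{CLUSTER}/api/storage/volumes",
    params={"name": VOLUME, "fields": FIELDS},
    auth=AUTH,
    verify=False,  # lab convenience only; supply the cluster CA bundle in production
)
response.raise_for_status()

for vol in response.json()["records"]:
    print(f"{vol['name']}:")
    print("  volume guarantee:", vol["guarantee"]["type"])                     # expect "none"
    print("  autosize mode:   ", vol["autosize"]["mode"])                      # expect "grow_shrink"
    print("  snapshot reserve:", vol["space"]["snapshot"]["reserve_percent"])  # expect 0
    print("  snapshot policy: ", vol["snapshot_policy"]["name"])               # expect "none"
```

The LUN-level defaults (space reservation and space allocation) live on the LUN objects rather than the volume and can be checked the same way through /api/storage/luns.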

Multipath settings for performance

While not currently configured by ONTAP tools, NetApp suggests these configuration options:

  • In high-performance environments or when testing performance with a single LUN datastore, consider changing the load balance setting of the round-robin (VMW_PSP_RR) path selection policy (PSP) from the default IOPS setting of 1000 to a value of 1. See VMware KB 2069356 for more information; a scripted example follows this list.

  • In vSphere 6.7 Update 1, VMware introduced a new latency load balance mechanism for the Round Robin PSP. The new option considers I/O bandwidth and path latency when selecting the optimal path for I/O. You might benefit from using it in environments with non-equivalent path connectivity, such as cases with more network hops on one path than another, or when using a NetApp All SAN Array system. See Path Selection Plug-Ins and Policies for more information.
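
Neither tuning is applied by ONTAP tools, and both are normally made with esxcli. The sketch below shows one way to script the per-device IOPS=1 change from the first bullet (per VMware KB 2069356) by running esxcli over SSH with paramiko; the host address, credentials, and device identifier are placeholders, and it assumes SSH is enabled on the host. A commented line shows the equivalent latency-based setting from the second bullet. In production, an SATP claim rule is often preferable so that newly presented devices inherit the policy automatically.

```python
import paramiko

# Hypothetical host details; replace with your own.
HOST = "esxi01.example.com"
USER = "root"
PASSWORD = "***"
DEVICE = "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # NetApp LUN device identifier (placeholder)

commands = [
    # Ensure the device uses the round-robin path selection policy.
    f"esxcli storage nmp device set --device {DEVICE} --psp VMW_PSP_RR",
    # Change the round-robin policy from the default of 1000 IOPS per path to 1.
    f"esxcli storage nmp psp roundrobin deviceconfig set --device {DEVICE} --type iops --iops 1",
    # Alternative for vSphere 6.7 Update 1 and later (second bullet above):
    # latency-based load balancing instead of the IOPS limit.
    # f"esxcli storage nmp psp roundrobin deviceconfig set --device {DEVICE} --type latency",
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab convenience only
client.connect(HOST, username=USER, password=PASSWORD)
try:
    for command in commands:
        stdin, stdout, stderr = client.exec_command(command)
        if stdout.channel.recv_exit_status() != 0:
            raise RuntimeError(f"{command!r} failed: {stderr.read().decode().strip()}")
        print(f"ok: {command}")
finally:
    client.close()
```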

Additional documentation

  • For FCP and iSCSI with vSphere 7, more details can be found in Use VMware vSphere 7.x with ONTAP.

  • For FCP and iSCSI with vSphere 8, more details can be found in Use VMware vSphere 8.x with ONTAP.

  • For NVMe-oF with vSphere 7, more details can be found in NVMe-oF Host Configuration for ESXi 7.x with ONTAP.

  • For NVMe-oF with vSphere 8, more details can be found in NVMe-oF Host Configuration for ESXi 8.x with ONTAP.