
Deploying Microsoft Hyper-V on NetApp Storage: Prerequisites


This topic provides the steps to configure and deploy a two-node failover cluster and clustered Hyper-V virtual machines leveraging an ONTAP storage system.

Prerequisites for the Deployment Procedure

  • All hardware must be certified for the version of Windows Server that you are running, and the complete failover cluster solution must pass all tests in the Validate a Configuration Wizard.

  • Hyper-V nodes should be joined to a domain (recommended) and have appropriate network connectivity to each other.

  • Every Hyper-V node should be configured identically.

  • Network adapters and designated virtual switches should be configured on each Hyper-V server to segregate management, iSCSI, SMB, and live migration traffic.

  • The failover cluster feature is enabled on each Hyper-V server.

  • SMB shares or CSVs are used as shared storage to store VMs and their disks for Hyper-V clustering.

  • Storage should not be shared between different clusters. Plan for one or more CSVs/CIFS shares per cluster.

  • If the SMB share is used as shared storage, then permissions on the SMB share must be configured to grant access to the computer accounts of all the Hyper-V nodes in the cluster.


Installing Windows Features

The following steps describe how to install the required Windows Server 2022 features.

All Hosts

  1. Prepare the Windows Server 2022 OS with the necessary updates and device drivers on all designated nodes.

  2. Log into each Hyper-V node using the administrator password entered during installation.

  3. Launch a PowerShell prompt by right-clicking the PowerShell icon in the taskbar and selecting Run as Administrator.

  4. Add the Hyper-V, MPIO, and clustering features.

    Add-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart
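
After the servers restart, you can optionally verify that the features were installed on each node. A minimal check using the built-in Get-WindowsFeature cmdlet (a sketch, not part of the original procedure):

    Get-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO | Select-Object Name, InstallState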

Configuring Networks

Proper network planning is key to achieving a fault-tolerant deployment. Setting up distinct physical network adapters for each type of traffic was the standard recommendation for a failover cluster. With the introduction of virtual network adapters, Switch Embedded Teaming (SET), and features such as Hyper-V QoS, network traffic can now be condensed onto fewer physical adapters. Design the network configuration with quality of service, redundancy, and traffic isolation in mind. Combining network isolation techniques such as VLANs with traffic isolation techniques provides redundancy and quality of service, which improves and adds consistency to storage traffic performance.

It is advised to separate and isolate specific workloads using multiple logical and/or physical networks. Typical examples of network traffic that is divided into segments are as follows:

  • iSCSI storage network

  • CSV (Cluster Shared Volume) or heartbeat network

  • Live migration network

  • VM network

  • Management network

Note: When iSCSI is used with dedicated NICs, using any teaming solution is not recommended; MPIO/DSM should be used instead.

Note: Hyper-V networking best practices also recommend against using NIC teaming for SMB 3.0 storage networks in a Hyper-V environment.

For additional information, refer to Plan for Hyper-V networking in Windows Server.
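
As an illustration of consolidating traffic onto fewer physical adapters, the following is a minimal sketch that creates a Switch Embedded Teaming (SET) virtual switch across two physical NICs and adds host virtual adapters with VLAN isolation for live migration and CSV traffic. The adapter names, virtual adapter names, and VLAN IDs are assumptions; adjust them for your environment.

    # Create a SET team from two physical NICs (adapter names are examples)
    New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
        -EnableEmbeddedTeaming $true -AllowManagementOS $true

    # Add host virtual adapters for segregated traffic
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "SETswitch"
    Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "SETswitch"

    # Isolate each traffic type on its own VLAN (VLAN IDs are examples)
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 30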

Deciding on Storage Design for Hyper-V

Hyper-V supports NAS (SMB 3.0) and block storage (iSCSI/FC) as the backing storage for virtual machines. NetApp supports the SMB 3.0, iSCSI, and FC protocols, which can be used as native storage for VMs: Cluster Shared Volumes (CSVs) using iSCSI/FC, or SMB 3.0 shares. Customers can also use SMB 3.0 and iSCSI as guest-connected storage options for workloads that require direct access to the storage. ONTAP provides flexible options: unified storage (All Flash Array) for workloads that require mixed protocol access, and SAN-optimized storage (All SAN Array) for SAN-only configurations.

The decision to use SMB 3.0 versus iSCSI/FC is driven by the infrastructure in place today. SMB 3.0 and iSCSI allow customers to use their existing network infrastructure, while customers with existing FC infrastructure can leverage it and present storage as FC-based Cluster Shared Volumes.

Note: A NetApp storage controller running ONTAP software can support the following workloads in a Hyper-V environment:

  • VMs hosted on continuously available SMB 3.0 shares

  • VMs hosted on Cluster Shared Volume (CSV) LUNs running on iSCSI or FC

  • In-guest storage and pass-through disks to guest virtual machines

Note: Core ONTAP features such as thin provisioning, deduplication, compression, data compaction, FlexClone, Snapshot copies, and replication work seamlessly in the background regardless of the platform or operating system and provide significant value for Hyper-V workloads. The default settings for these features are optimal for Windows Server and Hyper-V.

Note: MPIO is supported on the guest VM using in-guest initiators if multiple paths are available to the VM, and the multipath I/O feature is installed and configured.

Note: ONTAP supports all major industry-standard client protocols: NFS, SMB, FC, FCoE, iSCSI, NVMe/FC, and S3. However, NVMe/FC and NVMe/TCP are not supported by Microsoft.

Installing NetApp Windows iSCSI Host Utilities

The following section describes how to perform an unattended installation of the NetApp Windows iSCSI Host Utilities. For detailed installation information, see Install Windows Unified Host Utilities 7.2 (or the latest supported version).

All Hosts

  1. Download Windows iSCSI Host Utilities

  2. Unblock the downloaded file.

    Unblock-File ~\Downloads\netapp_windows_host_utilities_7.2_x64.msi
  3. Install the Host Utilities.

    ~\Downloads\netapp_windows_host_utilities_7.2_x64.msi /qn "MULTIPATHING=1"

Note: The system will reboot during this process.
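
After the reboot, you can optionally confirm that the NetApp device types were registered with the Microsoft DSM. A minimal check (a sketch, assuming the MPIO PowerShell module installed with the Multipath-IO feature):

    # List hardware IDs claimed by the Microsoft DSM; NetApp LUN entries are
    # expected to appear here after the Host Utilities installation.
    Get-MSDSMSupportedHW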

Configuring Windows Host iSCSI initiator

The following steps describe how to configure the built-in Microsoft iSCSI initiator.

All Hosts

  1. Launch a PowerShell prompt by right-clicking the PowerShell icon in the taskbar and selecting Run as Administrator.

  2. Configure the iSCSI service to start automatically.

    Set-Service -Name MSiSCSI -StartupType Automatic
  3. Start the iSCSI service.

    Start-Service -Name MSiSCSI
  4. Configure MPIO to claim any iSCSI device.

    Enable-MSDSMAutomaticClaim -BusType iSCSI
  5. Set the default load balance policy of all newly claimed devices to round robin.

    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR 
  6. Configure an iSCSI target for each controller.

    New-IscsiTargetPortal -TargetPortalAddress <<iscsia_lif01_ip>> -InitiatorPortalAddress <iscsia_ipaddress>
    
    New-IscsiTargetPortal -TargetPortalAddress <<iscsib_lif01_ip>> -InitiatorPortalAddress <iscsib_ipaddress>
    
    New-IscsiTargetPortal -TargetPortalAddress <<iscsia_lif02_ip>> -InitiatorPortalAddress <iscsia_ipaddress>
    
    New-IscsiTargetPortal -TargetPortalAddress <<iscsib_lif02_ip>> -InitiatorPortalAddress <iscsib_ipaddress>
  7. Connect a session for each iSCSI network to each target.

    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress <iscsia_ipaddress>
    
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress <iscsib_ipaddress>

Note: Add multiple sessions (a minimum of 5-8) for increased performance and better bandwidth utilization.
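
The following is a minimal sketch of how additional sessions could be scripted; the session count, loop, and placeholder addresses are illustrative assumptions rather than part of the original procedure.

    # Add three more sessions per initiator portal to each discovered target.
    foreach ($target in Get-IscsiTarget) {
        1..3 | ForEach-Object {
            Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true `
                -IsMultipathEnabled $true -InitiatorPortalAddress <iscsia_ipaddress>
        }
    }

    # Verify the resulting session count.
    (Get-IscsiSession | Measure-Object).Count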

Creating a Cluster

One Server Only

  1. Launch a PowerShell prompt with administrative permissions by right-clicking the PowerShell icon and selecting Run as Administrator.

  2. Create a new cluster.

    New-Cluster -Name <cluster_name> -Node <hostnames> -NoStorage -StaticAddress <cluster_ip_address>

    Image showing cluster management interface

  3. Select the appropriate cluster network for Live migration.

  4. Designate the CSV network.

    (Get-ClusterNetwork -Name Cluster).Metric = 900
  5. Change the cluster to use a quorum disk.

    1. Launch a PowerShell prompt with administrative permissions by right-clicking the PowerShell icon and selecting Run as Administrator.

      Start-ClusterGroup "Available Storage" | Move-ClusterGroup -Node $env:COMPUTERNAME
    2. In Failover Cluster Manager, select Configure Cluster Quorum Settings.

      Image of the Configure Cluster Quorum settings

    3. Click Next through the Welcome page.

    4. Select the quorum witness and click Next.

    5. Select Configure a disk witness and click Next.

    6. Select Disk W: from the available storage and click Next.

    7. Click Next through the confirmation page and Finish on the summary page.

      For more detailed information about quorum and witness configuration, see Configure and manage quorum.

  6. Run the Cluster Validation wizard from Failover Cluster Manager to validate deployment.

  7. Create a CSV LUN to store virtual machine data and create highly available virtual machines via Roles within Failover Cluster Manager.
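
As a minimal sketch of the last step, the CSV and a highly available virtual machine can also be created with PowerShell. The cluster disk name, VM name, sizes, and paths below are assumptions to adapt to your environment.

    # Convert an available cluster disk into a Cluster Shared Volume.
    Add-ClusterSharedVolume -Name "Cluster Disk 2"

    # Create a VM whose configuration and virtual disk reside on the CSV.
    New-VM -Name "VM01" -MemoryStartupBytes 4GB -Generation 2 `
        -Path "C:\ClusterStorage\Volume1\VM01" `
        -NewVHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" -NewVHDSizeBytes 100GB

    # Make the VM highly available as a clustered role.
    Add-ClusterVirtualMachineRole -VMName "VM01"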