
Supplemental NFS Datastore Option in Azure


NFS datastore support was introduced with ESXi version 3 in on-premises deployments, which greatly extended vSphere’s storage capabilities.

Running vSphere on NFS is a widely adopted option for on-premises virtualization deployments because it offers strong performance and stability. If you have significant network-attached storage (NAS) in an on-premises datacenter, consider deploying an Azure VMware Solution SDDC in Azure with Azure NetApp Files datastores to overcome capacity and performance challenges.

Azure NetApp Files is built on industry-leading, highly available NetApp ONTAP data management software. Microsoft Azure services are grouped into three categories: foundational, mainstream, and specialized. Azure NetApp Files is in the specialized category and is backed by hardware already deployed in many regions. With built-in high availability (HA), Azure NetApp Files protects your data from most outages and offers an industry-leading SLA of 99.99% uptime.

Before the introduction of the Azure NetApp Files datastore capability, customers planning to host performance- and storage-intensive workloads had to scale out by expanding both compute and storage.

Keep in mind the following issues:

  • Unbalanced cluster configurations are not recommended in an SDDC cluster, so expanding storage means adding more hosts, which increases TCO.

  • Only one vSAN environment is possible. Therefore, all storage traffic competes directly with production workloads.

  • There is no option for providing multiple performance tiers to align application requirements, performance, and cost.

  • It is easy to reach the limits of storage capacity for vSAN built on top of the cluster hosts.

By integrating an Azure-native platform-as-a-service (PaaS) offering like Azure NetApp Files as a datastore, customers can scale their storage independently and add compute nodes to the SDDC cluster only as needed. This capability overcomes the challenges mentioned above.

Azure NetApp Files also enables you to deploy multiple datastores, which helps mimic an on-premises deployment model: virtual machines are placed in the appropriate datastore and assigned the required service level to meet workload performance requirements. Thanks to its unique multi-protocol support, in-guest storage is an additional option for database workloads such as SQL Server and Oracle, while the supplemental NFS datastore capability houses the remaining VMDKs. In addition, native snapshot capability enables quick backups and granular restores.

Note Contact Azure and NetApp Solution Architects for planning and sizing of storage and determining the required number of hosts. NetApp recommends identifying storage performance requirements before finalizing the datastore layout for test, POC, and production deployments.

Detailed architecture

From a high-level perspective, this architecture describes how to achieve hybrid-cloud connectivity and app portability across on-premises environments and Azure. It also describes using Azure NetApp Files as a supplemental NFS datastore and as an in-guest storage option for guest virtual machines hosted on Azure VMware Solution.


Sizing

The most important aspect of a migration or disaster recovery project is determining the correct size of the target environment, in particular understanding how many nodes are required to accommodate a lift-and-shift exercise from on-premises to Azure VMware Solution.

For sizing, use historical data from the on-premises environment collected with RVTools (preferred) or other tools such as Live Optics or Azure Migrate. RVTools is an ideal tool for capturing vCPU, vMem, vDisk, and all other required information, including powered-on and powered-off VMs, to characterize the target environment.

To run RVTools, complete the following steps:

  1. Download and install RVTools.

  2. Run RVTools, enter the required info to connect to your on-premises vCenter Server, and press Login.

  3. Export the inventory to an Excel spreadsheet.

  4. Edit the spreadsheet and remove any VMs that are not ideal candidates from the vInfo tab.

This approach provides a clear picture of the storage requirements, which can be used to right-size the Azure VMware Solution SDDC cluster with the required number of hosts.

Note Guest VMs used with in-guest storage must be calculated separately; however, Azure NetApp Files can easily cover the additional storage capacity, thus keeping the overall TCO low.

Deploying and configuring Azure VMware Solution

As with on-premises deployments, planning Azure VMware Solution is critical for a successful, production-ready environment in which to create virtual machines and perform migrations.

This section describes how to set up and manage AVS and use it in combination with Azure NetApp Files, both as a datastore and for in-guest storage.

The setup process can be broken down into three parts:

  • Register the resource provider and create a private cloud (see the CLI sketch after this list).

  • Connect to a new or existing ExpressRoute virtual network gateway.

  • Validate network connectivity and access the private cloud.

Refer to this link for a step-by-step walkthrough of the Azure VMware Solution SDDC provisioning process.
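As an illustrative sketch only (not the official procedure), the first part could be performed from a Bash shell with the Azure CLI; the SKU, cluster size, and network block below are assumptions to replace with values from your own sizing exercise.

# Register the AVS resource provider, then create the private cloud.
# The SKU, cluster size, and network block are placeholder assumptions.
az provider register --namespace Microsoft.AVS
az vmware private-cloud create --resource-group anfavsval2 --name ANFDataClus --sku AV36 --cluster-size 3 --network-block 10.175.0.0/22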

Configure Azure NetApp Files with Azure VMware Solution

The new integration between Azure NetApp Files and Azure VMware Solution enables you to create NFS datastores via the Azure VMware Solution resource provider APIs/CLI with Azure NetApp Files volumes and mount the datastores on the clusters of your choice in a private cloud. Apart from housing the VM and application VMDKs, Azure NetApp Files volumes can also be mounted from VMs that are created in the Azure VMware Solution SDDC environment. The volumes can be mounted on Linux clients and mapped on Windows clients, because Azure NetApp Files supports both Server Message Block (SMB) and Network File System (NFS) protocols.
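As a minimal sketch of in-guest use, an NFS volume could be mounted from a Linux guest VM as follows; the mount target IP address and export path are hypothetical placeholders.

# The mount target IP (10.0.1.4) and export path (/guestvol01) are placeholders.
sudo mkdir -p /mnt/guestvol01
sudo mount -t nfs -o rw,hard,vers=3,tcp,rsize=262144,wsize=262144 10.0.1.4:/guestvol01 /mnt/guestvol01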

Note For optimal performance, deploy Azure NetApp Files in the same availability zone as the private cloud. Colocation combined with ExpressRoute FastPath provides the best performance, with minimal network latency.

To attach an Azure NetApp Files volume as a VMware datastore of an Azure VMware Solution private cloud, make sure the following prerequisites are met.

Prerequisites
  1. Use az login and validate that the subscription is registered to the CloudSanExperience feature in the Microsoft.AVS namespace.

az login --tenant xcvxcvxc-vxcv-xcvx-cvxc-vxcvxcvxcv
az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS"
  2. If it is not registered, register it.

az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"
Note Registration can take approximately 15 minutes to complete.
  3. To check the status of the registration, run the following command.

az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" --query properties.state
  4. If the registration is stuck in an intermediate state for longer than 15 minutes, unregister and then re-register the flag.

az feature unregister --name "CloudSanExperience" --namespace "Microsoft.AVS"
az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"
  5. Verify that the subscription is registered to the AnfDatastoreExperience feature in the Microsoft.AVS namespace.

az feature show --name "AnfDatastoreExperience" --namespace "Microsoft.AVS" --query properties.state
  6. Verify that the vmware extension is installed.

az extension show --name vmware
  7. If the extension is already installed, verify that the version is 3.0.0. If an older version is installed, update the extension.

az extension update --name vmware
  8. If the extension is not already installed, install it.

az extension add --name vmware
Create and mount Azure NetApp Files volumes
  1. Log in to the Azure Portal and access Azure NetApp Files. Verify access to the Azure NetApp Files service and register the Azure NetApp Files resource provider by using the az provider register --namespace Microsoft.NetApp --wait command. After registration, create a NetApp account. Refer to this link for detailed steps. A CLI sketch follows.
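A minimal CLI sketch of these two actions; the region shown is an assumption, and the resource group and account names simply reuse the examples from later in this document.

# Register the provider, then create a NetApp account (region is a placeholder).
az provider register --namespace Microsoft.NetApp --wait
az netappfiles account create --resource-group anfavsval2 --name anfdatastoreacct --location eastus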


  2. After the NetApp account is created, set up capacity pools with the required service level and size. For detailed information, refer to this link. A CLI sketch follows.
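For example, a capacity pool could be created from the CLI as sketched here; the pool size (in TiB) and service level are assumptions to adapt to your workload requirements.

# Pool size (TiB) and service level are placeholder assumptions.
az netappfiles pool create --resource-group anfavsval2 --account-name anfdatastoreacct --name anfrecods --location eastus --size 4 --service-level Premium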


Points to Remember
  • NFSv3 is supported for datastores on Azure NetApp Files.

  • Use the Premium or Standard tier for capacity-bound workloads and the Ultra tier for performance-bound workloads where necessary, while complementing the default vSAN storage.

  3. Configure a delegated subnet for Azure NetApp Files and specify this subnet when creating volumes. For detailed steps to create a delegated subnet, refer to this link. A CLI sketch follows.
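A hedged CLI sketch of creating such a delegated subnet; the virtual network name and address prefix are hypothetical placeholders.

# VNet name and address prefix are placeholders; the delegation is what matters.
az network vnet subnet create --resource-group anfavsval2 --vnet-name anfavsvnet --name anfdelegatedsubnet --address-prefixes 10.175.4.0/24 --delegations "Microsoft.Netapp/volumes"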

  4. Add an NFS volume for the datastore by using the Volumes blade under the Capacity Pools blade. An equivalent CLI sketch follows.
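The same volume could alternatively be created from the CLI, as in this sketch; the quota, file path, and network names are placeholder assumptions, and the volume name reuses an example from later in this document.

# Quota (GiB), file path, VNet, and subnet names are placeholder assumptions.
az netappfiles volume create --resource-group anfavsval2 --account-name anfdatastoreacct --pool-name anfrecods --name ANFRecoDS001 --location eastus --service-level Premium --usage-threshold 4096 --file-path "anfrecods001" --vnet anfavsvnet --subnet anfdelegatedsubnet --protocol-types NFSv3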


To learn about Azure NetApp Files volume performance by size or quota, see Performance considerations for Azure NetApp Files.

Add Azure NetApp Files datastore to the private cloud
Note An Azure NetApp Files volume can be attached to your private cloud by using the Azure Portal. Follow this link from Microsoft for a step-by-step approach to mounting an Azure NetApp Files datastore from the Azure Portal.

To add an Azure NetApp Files datastore to a private cloud, complete the following steps:

  1. After the required features are registered, attach an NFS datastore to the Azure VMware Solution private cloud cluster by running the appropriate command.

  2. Create a datastore using an existing ANF volume in the Azure VMware Solution private cloud cluster.

C:\Users\niyaz>az vmware datastore netapp-volume create --name ANFRecoDSU002 --resource-group anfavsval2 --cluster Cluster-1 --private-cloud ANFDataClus --volume-id /subscriptions/0efa2dfb-917c-4497-b56a-b3f4eadb8111/resourceGroups/anfavsval2/providers/Microsoft.NetApp/netAppAccounts/anfdatastoreacct/capacityPools/anfrecodsu/volumes/anfrecodsU002
{
  "diskPoolVolume": null,
  "id": "/subscriptions/0efa2dfb-917c-4497-b56a-b3f4eadb8111/resourceGroups/anfavsval2/providers/Microsoft.AVS/privateClouds/ANFDataClus/clusters/Cluster-1/datastores/ANFRecoDSU002",
  "name": "ANFRecoDSU002",
  "netAppVolume": {
    "id": "/subscriptions/0efa2dfb-917c-4497-b56a-b3f4eadb8111/resourceGroups/anfavsval2/providers/Microsoft.NetApp/netAppAccounts/anfdatastoreacct/capacityPools/anfrecodsu/volumes/anfrecodsU002",
    "resourceGroup": "anfavsval2"
  },
  "provisioningState": "Succeeded",
  "resourceGroup": "anfavsval2",
  "type": "Microsoft.AVS/privateClouds/clusters/datastores"
}

  3. List all the datastores in a private cloud cluster.

 
C:\Users\niyaz>az vmware datastore list --resource-group anfavsval2 --cluster Cluster-1 --private-cloud ANFDataClus
[
  {
    "diskPoolVolume": null,
    "id": "/subscriptions/0efa2dfb-917c-4497-b56a-b3f4eadb8111/resourceGroups/anfavsval2/providers/Microsoft.AVS/privateClouds/ANFDataClus/clusters/Cluster-1/datastores/ANFRecoDS001",
    "name": "ANFRecoDS001",
    "netAppVolume": {
      "id": "/subscriptions/0efa2dfb-917c-4497-b56a-b3f4eadb8111/resourceGroups/anfavsval2/providers/Microsoft.NetApp/netAppAccounts/anfdatastoreacct/capacityPools/anfrecods/volumes/ANFRecoDS001",
      "resourceGroup": "anfavsval2"
    },
    "provisioningState": "Succeeded",
    "resourceGroup": "anfavsval2",
    "type": "Microsoft.AVS/privateClouds/clusters/datastores"
  },
  {
    "diskPoolVolume": null,
    "id": "/subscriptions/0efa2dfb-917c-4497-b56a-b3f4eadb8111/resourceGroups/anfavsval2/providers/Microsoft.AVS/privateClouds/ANFDataClus/clusters/Cluster-1/datastores/ANFRecoDSU002",
    "name": "ANFRecoDSU002",
    "netAppVolume": {
      "id": "/subscriptions/0efa2dfb-917c-4497-b56a-b3f4eadb8111/resourceGroups/anfavsval2/providers/Microsoft.NetApp/netAppAccounts/anfdatastoreacct/capacityPools/anfrecodsu/volumes/anfrecodsU002",
      "resourceGroup": "anfavsval2"
    },
    "provisioningState": "Succeeded",
    "resourceGroup": "anfavsval2",
    "type": "Microsoft.AVS/privateClouds/clusters/datastores"
  }
]

  4. After the necessary connectivity is in place, the volumes are mounted as datastores.


Sizing and performance optimization

Azure NetApp Files supports three service levels: Standard (16 MBps per terabyte), Premium (64 MBps per terabyte), and Ultra (128 MBps per terabyte). Provisioning the right volume size is important for optimal performance of the database workload. With Azure NetApp Files, volume performance and the throughput limit are determined by the following factors (see the example after this list):

  • The service level of the capacity pool to which the volume belongs

  • The quota assigned to the volume

  • The quality of service (QoS) type (auto or manual) of the capacity pool
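For example, assuming the automatic QoS type, a 4-TiB volume in a Premium capacity pool (64 MBps per terabyte) would have a throughput limit of roughly 256 MBps, whereas the same quota in an Ultra capacity pool would allow roughly 512 MBps.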


For more information, see Service levels for Azure NetApp Files.

Refer to this link from Microsoft for detailed performance benchmarks that can be used during a sizing exercise.

Points to Remember
  • Use the Premium or Standard tier for datastore volumes for optimal capacity and performance. If higher performance is required, the Ultra tier can be used.

  • For guest mount requirements, use Premium or Ultra tier volumes; for file-share requirements for guest VMs, use Standard or Premium tier volumes.

Performance considerations

It is important to understand that with NFS version 3 there is only one active pipe for the connection between the ESXi host and a single storage target. This means that although there might be alternate connections available for failover, the bandwidth for a single datastore and the underlying storage is limited to what a single connection can provide.

To leverage more available bandwidth with Azure NetApp Files volumes, an ESXi host must have multiple connections to the storage targets. To address this issue, you can configure multiple datastores, with each datastore using separate connections between the ESXi host and the storage.

For higher bandwidth, the best practice is to create multiple datastores backed by multiple Azure NetApp Files volumes, create VMDKs on them, and stripe the logical volumes across those VMDKs, as in the sketch below.
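As an illustrative sketch from a Bash shell, several existing Azure NetApp Files volumes could be attached as separate datastores in a loop; the datastore names, volume names, and subscription ID below are placeholders.

# Attach four hypothetical ANF volumes as four separate datastores.
for i in 1 2 3 4; do
  az vmware datastore netapp-volume create --name "ANFDatastore0${i}" --resource-group anfavsval2 --cluster Cluster-1 --private-cloud ANFDataClus --volume-id "/subscriptions/<subscription-id>/resourceGroups/anfavsval2/providers/Microsoft.NetApp/netAppAccounts/anfdatastoreacct/capacityPools/anfrecods/volumes/anfds0${i}"
done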

Refer to this link from Microsoft for detailed performance benchmarks that can be used during a sizing exercise.

Points to Remember
  • Azure VMware Solution allows eight NFS datastores by default. This limit can be increased through a support request.

  • Leverage ExpressRoute FastPath along with the Ultra SKU for higher bandwidth and lower latency.

More information

  • With the "Basic" network features in Azure NetApp Files, connectivity from Azure VMware Solution is bound by the bandwidth of the ExpressRoute circuit and the ExpressRoute gateway.

  • For Azure NetApp Files volumes with "Standard" network features, ExpressRoute FastPath is supported. When enabled, FastPath sends network traffic directly to Azure NetApp Files volumes, bypassing the gateway and providing higher bandwidth and lower latency.

Increasing the size of the datastore

Volume reshaping and dynamic service-level changes are completely transparent to the SDDC. In Azure NetApp Files, these capabilities provide continuous performance, capacity, and cost optimization. Increase the size of an NFS datastore by resizing the underlying volume from the Azure Portal or with the CLI (see the sketch below). When the resize is complete, access vCenter, go to the datastore tab, right-click the appropriate datastore, and select Refresh Capacity Information. This approach can be used to increase the capacity and the performance of a datastore dynamically, with no downtime, and is completely transparent to applications.
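A minimal sketch of resizing a volume from the CLI; the new quota of 8192 GiB is an arbitrary example value.

# The new quota (GiB) is an arbitrary example value.
az netappfiles volume update --resource-group anfavsval2 --account-name anfdatastoreacct --pool-name anfrecods --name ANFRecoDS001 --usage-threshold 8192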

Points to remember
  • Volume reshaping and dynamic service level capability allow you to optimize cost by sizing for steady-state workloads and thus avoid overprovisioning.

  • VAAI is not enabled.

Workloads

Migration

One of the most common use cases is migration. Use VMware HCX or vMotion to move on-premises VMs. Alternatively, you can use RiverMeadow to migrate VMs to Azure NetApp Files datastores.

Data Protection

Backing up VMs and quickly recovering them are among the great strengths of ANF datastores. Use Snapshot copies to make quick copies of your VM or datastore without affecting performance, and then send them to Azure storage for longer-term data protection or to a secondary region by using cross-region replication for disaster recovery. This approach minimizes storage space and network bandwidth by storing only changed information.

Use Azure NetApp Files Snapshot copies for general protection, and use application tools to protect transactional data such as SQL Server or Oracle residing on the guest VMs. These Snapshot copies are different from VMware (consistency) snapshots and are suitable for longer term protection.

Note With ANF datastores, the Restore to New Volume option can be used to clone an entire datastore volume, and the restored volume can be mounted as another datastore to hosts within AVS SDDC. After a datastore is mounted, VMs inside it can be registered, reconfigured, and customized as if they were individually cloned VMs.
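As an illustrative CLI sketch of this workflow, a snapshot could be created and then cloned to a new volume; the snapshot and clone names are hypothetical, and the --snapshot-id value must reference the resource ID of the snapshot created in the first command.

# Snapshot and clone names are placeholders; <snapshot-resource-id> must be filled in.
az netappfiles snapshot create --resource-group anfavsval2 --account-name anfdatastoreacct --pool-name anfrecods --volume-name ANFRecoDS001 --name anfrecods001-snap01 --location eastus
az netappfiles volume create --resource-group anfavsval2 --account-name anfdatastoreacct --pool-name anfrecods --name ANFRecoDS001clone --location eastus --service-level Premium --usage-threshold 4096 --file-path "anfrecods001clone" --vnet anfavsvnet --subnet anfdelegatedsubnet --protocol-types NFSv3 --snapshot-id <snapshot-resource-id>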
BlueXP backup and recovery for Virtual Machines

BlueXP backup and recovery for Virtual Machines provides a vSphere web client GUI in vCenter to protect Azure VMware Solution virtual machines and Azure NetApp Files datastores through backup policies. These policies define the schedule, retention, and other settings. The BlueXP backup and recovery for Virtual Machines functionality can be deployed by using the Run command.

The setup and protection policies can be installed by completing the following steps:

  1. Install BlueXP backup and recovery for Virtual Machines in the Azure VMware Solution private cloud by using the Run command.

  2. Add cloud subscription credentials (client and secret value), and then add a cloud subscription account (NetApp account and associated resource group) that contains the resources that you would like to protect.

  3. Create one or more backup policies that manage the retention, frequency, and other settings for resource group backups.

  4. Create a container to add one or more resources that need to be protected with backup policies.

  5. In the event of a failure, restore the entire VM or specific individual VMDKs to the same location.

Note With Azure NetApp Files Snapshot technology, backups and restores are very fast.


Disaster Recovery with Azure NetApp Files, JetStream DR, and Azure VMware Solution

Disaster recovery to the cloud is a resilient and cost-effective way of protecting workloads against site outages and data corruption events (for example, ransomware). Using the VMware VAIO framework, on-premises VMware workloads can be replicated to Azure Blob storage and recovered, enabling minimal or close to no data loss and near-zero RTO. JetStream DR can be used to seamlessly recover workloads replicated from on-premises to AVS, and specifically to Azure NetApp Files. It enables cost-effective disaster recovery by using minimal resources at the DR site and cost-effective cloud storage. JetStream DR automates recovery to ANF datastores via Azure Blob Storage. JetStream DR recovers independent VMs or groups of related VMs into the recovery site infrastructure according to the network mapping, and it provides point-in-time recovery for ransomware protection.