High-availability pairs in Azure
A Cloud Volumes ONTAP high-availability (HA) pair provides enterprise reliability and continuous operations in case of failures in your cloud environment. In Azure, storage is shared between the two nodes.
HA components
HA single availability zone configuration with page blobs
A Cloud Volumes ONTAP HA page blob configuration in Azure includes the following components:
Note the following about the Azure components that BlueXP deploys for you:
- Azure Standard Load Balancer: The load balancer manages incoming traffic to the Cloud Volumes ONTAP HA pair.
- VMs in single availability zones: Beginning with Cloud Volumes ONTAP 9.15.1, you can create and manage heterogeneous virtual machines (VMs) in a single availability zone (AZ). You can deploy high-availability (HA) nodes in separate fault domains within the same AZ, which improves availability. To learn more about the flexible orchestration mode that enables this capability, refer to the Microsoft Azure documentation: Virtual Machine Scale Sets.
- Disks: Customer data resides on Premium Storage page blobs. Each node has access to the other node's storage. Additional storage is also required for boot, root, and core data.
- Storage accounts:
  - One storage account is required for managed disks.
  - One or more storage accounts are required for the Premium Storage page blobs; additional accounts are needed as the disk capacity limit per storage account is reached.
  - One storage account is required for data tiering to Azure Blob storage.
  - Starting with Cloud Volumes ONTAP 9.7, the storage accounts that BlueXP creates for HA pairs are general-purpose v2 storage accounts.
  - You can enable an HTTPS connection from a Cloud Volumes ONTAP 9.7 HA pair to Azure storage accounts when creating a working environment. Note that enabling this option can impact write performance. You can't change the setting after you create the working environment. (A verification sketch follows this list.)
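If you want to confirm these storage account properties after deployment, one option is a quick check with the Azure SDK for Python. The following is a minimal sketch, not part of BlueXP or the deployment workflow; it assumes the azure-identity and azure-mgmt-storage packages are installed, and the subscription ID and resource group name are placeholders you must replace.

```python
# Minimal sketch: list the storage accounts in the HA pair's resource group and show
# whether each one is general-purpose v2 ("StorageV2") and enforces HTTPS-only traffic.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<subscription-id>"        # placeholder
resource_group = "<cvo-ha-resource-group>"   # placeholder

client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

for account in client.storage_accounts.list_by_resource_group(resource_group):
    # kind "StorageV2" indicates a general-purpose v2 account; enable_https_traffic_only
    # reflects the HTTPS option chosen when the working environment was created.
    print(account.name, account.kind, account.enable_https_traffic_only)
```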
HA single availability zone configuration with shared managed disks
A Cloud Volumes ONTAP HA single availability zone configuration running on top of shared managed disks includes the following components:
Note the following about the Azure components that BlueXP deploys for you:
- Azure Standard Load Balancer: The load balancer manages incoming traffic to the Cloud Volumes ONTAP HA pair.
- VMs in single availability zones: Beginning with Cloud Volumes ONTAP 9.15.1, you can create and manage heterogeneous virtual machines (VMs) in a single availability zone (AZ). You can deploy high-availability (HA) nodes in separate fault domains within the same AZ, which improves availability. To learn more about the flexible orchestration mode that enables this capability, refer to the Microsoft Azure documentation: Virtual Machine Scale Sets.
  The zonal deployment uses Premium SSD v2 Managed Disks when the following conditions are met (a verification sketch follows this components list):
  - The version of Cloud Volumes ONTAP is 9.15.1 or later.
  - The selected region and zone support Premium SSD v2 Managed Disks. For information about the supported regions, refer to the Microsoft Azure website: Products available by region.
  - The subscription is registered for the Microsoft.Compute/VMOrchestratorZonalMultiFD feature.
- Disks: Customer data resides on locally redundant storage (LRS) managed disks. Each node has access to the other node's storage. Additional storage is also required for boot, root, partner root, core, and NVRAM data.
- Storage accounts: In managed-disk-based deployments, storage accounts are used for diagnostic logs and for data tiering to Azure Blob storage.
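One way to check the Premium SSD v2 prerequisites listed above is with the Azure SDK for Python. This is a minimal sketch under stated assumptions, not a BlueXP workflow: it assumes the azure-identity, azure-mgmt-resource, and azure-mgmt-compute packages are installed, and the subscription ID and region values are placeholders you must replace.

```python
# Minimal sketch: check the prerequisites for Premium SSD v2 zonal deployments.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import FeatureClient
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"   # placeholder
region = "eastus"                       # placeholder region

credential = DefaultAzureCredential()

# 1. Confirm (or request) registration of the VMOrchestratorZonalMultiFD feature.
#    Newly registered features can take time to propagate across the subscription.
features = FeatureClient(credential, subscription_id).features
state = features.get("Microsoft.Compute", "VMOrchestratorZonalMultiFD").properties.state
print("Feature state:", state)
if state != "Registered":
    features.register("Microsoft.Compute", "VMOrchestratorZonalMultiFD")

# 2. List the zones in the region that offer Premium SSD v2 (PremiumV2_LRS) disks.
compute = ComputeManagementClient(credential, subscription_id)
for sku in compute.resource_skus.list(filter=f"location eq '{region}'"):
    if sku.resource_type == "disks" and sku.name == "PremiumV2_LRS":
        zones = sku.location_info[0].zones if sku.location_info else []
        print("Premium SSD v2 zones in", region, ":", zones)
```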
HA multiple availability zone configuration
A Cloud Volumes ONTAP HA multiple availability zone configuration in Azure includes the following components:
Note the following about the Azure components that BlueXP deploys for you:
- Azure Standard Load Balancer: The load balancer manages incoming traffic to the Cloud Volumes ONTAP HA pair.
- Availability Zones: An HA multiple availability zone configuration uses a deployment model in which the two Cloud Volumes ONTAP nodes are deployed into different availability zones, which places the nodes in different fault domains to provide redundancy and availability. To learn how Virtual Machine Scale Sets in Flexible orchestration mode can use availability zones in Azure, refer to the Microsoft Azure documentation: Create a Virtual Machine Scale Set that uses Availability Zones.
- Disks: Customer data resides on zone-redundant storage (ZRS) managed disks. Each node has access to the other node's storage. Additional storage is also required for boot, root, partner root, and core data. (A verification sketch follows this list.)
- Storage accounts: In managed-disk-based deployments, storage accounts are used for diagnostic logs and for data tiering to Azure Blob storage.
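To confirm that a multiple availability zone deployment looks as described above, you can inspect the VMs and managed disks with the Azure SDK for Python. This is a minimal illustrative sketch; it assumes the azure-identity and azure-mgmt-compute packages are installed, and the subscription ID and resource group name are placeholders you must replace.

```python
# Minimal sketch: confirm that the HA nodes sit in different availability zones and
# that the data disks use zone-redundant storage (ZRS) SKUs such as Premium_ZRS.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"        # placeholder
resource_group = "<cvo-ha-resource-group>"   # placeholder

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Each Cloud Volumes ONTAP node should report a different availability zone.
for vm in compute.virtual_machines.list(resource_group):
    print("VM:", vm.name, "zones:", vm.zones)

# ZRS managed disk SKU names end in "_ZRS" (for example, Premium_ZRS).
for disk in compute.disks.list_by_resource_group(resource_group):
    print("Disk:", disk.name, "sku:", disk.sku.name, "zones:", disk.zones)
```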
RPO and RTO
An HA configuration maintains high availability of your data as follows:
- The recovery point objective (RPO) is 0 seconds. Your data is transactionally consistent with no data loss.
- The recovery time objective (RTO) is 120 seconds. In the event of an outage, data should be available in 120 seconds or less.
Storage takeover and giveback
Similar to a physical ONTAP cluster, storage in an Azure HA pair is shared between nodes. Connections to the partner's storage allow each node to access the other's storage in the event of a takeover. Network path failover mechanisms ensure that clients and hosts continue to communicate with the surviving node. The partner gives back storage when the node is brought back online.
For NAS configurations, data IP addresses automatically migrate between HA nodes if failures occur.
For iSCSI, Cloud Volumes ONTAP uses multipath I/O (MPIO) and Asymmetric Logical Unit Access (ALUA) to manage path failover between the active-optimized and non-optimized paths.
For information about which specific host configurations support ALUA, refer to the NetApp Interoperability Matrix Tool and the SAN hosts and cloud clients guide for your host operating system.
Storage takeover, resync, and giveback are all automatic by default. No user action is required.
Storage configurations
You can use an HA pair as an active-active configuration, in which both nodes serve data to clients, or as an active-passive configuration, in which the passive node responds to data requests only if it has taken over the storage for the active node.