
VMware vSphere Metro Storage Cluster with SnapMirror active sync

Contributors sureshthoppay kevin-hoke reno

VMware vSphere Metro Storage Cluster (vMSC) is a stretched cluster solution across different fault domains that provides:
* Workload mobility across availability zones or sites
* Downtime avoidance
* Disaster avoidance
* Fast recovery

This document provides the vMSC implementation details with SnapMirror active sync (SM-as) using System Manager and ONTAP Tools. It also shows how VMs can be protected by replicating to a third site and managed with the SnapCenter Plug-in for VMware vSphere.

vMSC with SnapMirror active sync architecture

SnapMirror active sync supports ASA, AFF, and FAS storage arrays. It is recommended to use the same type (Performance/Capacity models) in both fault domains. Currently, only block protocols such as FC and iSCSI are supported. For further support guidelines, refer to the Interoperability Matrix Tool and Hardware Universe.

vMSC supports two different deployment models, named Uniform host access and Non-uniform host access. In a Uniform host access configuration, every host in the cluster has access to LUNs in both fault domains. It is typically used across availability zones in the same datacenter.

vMSC Uniform vs Non-Uniform host access mode

In a Non-uniform host access configuration, each host has access only to its local fault domain. It is typically used across different sites where running multiple cables between the fault domains is not a practical option.

Note In Non-uniform host access mode, VMs will be restarted in the other fault domain by vSphere HA. Application availability will be impacted depending on the application design. Non-uniform host access mode is supported only with ONTAP 9.15 and later.
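
To confirm which access mode a host is actually seeing, the per-device path count can be inspected programmatically. Below is a minimal sketch using pyVmomi (the VMware vSphere Python SDK); the host name and credentials are placeholders, and the expectation that uniform access shows paths to both fault domains while non-uniform shows only local paths follows from the description above.

```python
# Minimal sketch: list iSCSI device path counts on an ESXi host with pyVmomi.
# Host name and credentials are placeholders. In Uniform host access each device
# should show paths to both fault domains; in Non-uniform, only local paths.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
si = SmartConnect(host="esxi01.example.com", user="root", pwd="password", sslContext=ctx)
try:
    host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", vmSearch=False)
    multipath = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
    for lun in multipath.lun:
        active = sum(1 for p in lun.path if p.pathState == "active")
        print(f"{lun.id}: {len(lun.path)} paths, {active} active")
finally:
    Disconnect(si)
```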

vMSC non-uniform host access with ONTAP System Manager UI.

Note: ONTAP Tools 10.2 or later can be used to provision a stretched datastore with non-uniform host access mode without switching between multiple user interfaces. This section is provided for reference when ONTAP Tools is not used.

  1. Note down one of the iSCSI data LIF IP addresses from the local fault domain storage array.
    System Manager iSCSI Lifs

  2. On the vSphere host's iSCSI Storage Adapter, add that iSCSI IP under the Dynamic Discovery tab.
    Add iSCSI server for dynamic discovery

    Note For Uniform access mode, you need to provide both the source and target fault domain iSCSI data LIF addresses.
  3. Repeat the above step on the vSphere hosts in the other fault domain, adding its local iSCSI data LIF IP on the Dynamic Discovery tab (a scripted sketch of this workflow is included after this list).

  4. With proper network connectivity, four iSCSI connections should exist per vSphere host, given two iSCSI VMkernel NICs per host and two iSCSI data LIFs per storage controller.
    iSCSI connection info

  5. Create the LUN using ONTAP System Manager, set up SnapMirror with the AutomatedFailOverDuplex replication policy, pick the host initiators, and set the host proximity.
    Create LUN with AutomatedFailOverDuplex

  6. On the other fault domain storage array, create the SAN initiator group with its vSphere host initiators and set the host proximity.
    SAN initiator group

    Note For Uniform access mode, the igroup can be replicated from the source fault domain.
  7. Map the replicated LUN with the same mapping ID as in the source fault domain.
    LUN Mapping ID

  8. In vCenter, right-click the vSphere cluster and select the Rescan Storage option.
    Rescan storage

  9. On one of the vSphere hosts in the cluster, check that the newly created device shows up, with the datastore column showing Not Consumed.
    iSCSI Device list on vSphere host

  10. In vCenter, right-click the vSphere cluster and select the New Datastore option.
    New Datastore

  11. In the wizard, provide the datastore name and select the device with the right capacity and device ID.
    Datastore creation on iSCSI device

  12. Verify that the datastore is mounted on all hosts in the cluster across both fault domains.
    Datastore on source host

    Datastore on destination host

    Note The above screenshots show Active I/O on a single controller because AFF was used. On ASA, there will be Active I/O on all paths.
  13. When additional datastores are added, remember to expand the existing Consistency Group so that protection stays consistent across the vSphere cluster.
    CG protection policy
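
The iSCSI dynamic discovery and rescan steps above (steps 2, 3, 4, and 8) can also be scripted per host. The following is a minimal sketch with pyVmomi; the host name, credentials, and data LIF address are assumptions for this environment. For Uniform access mode, the send targets of both fault domains would be added.

```python
# Minimal sketch: add an iSCSI dynamic discovery (send) target and rescan storage
# on an ESXi host with pyVmomi. Host name, credentials, and the data LIF address
# are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

DATA_LIF = "192.168.10.21"   # local fault domain iSCSI data LIF (assumption)

ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
si = SmartConnect(host="esxi01.example.com", user="root", pwd="password", sslContext=ctx)
try:
    host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", vmSearch=False)
    storage = host.configManager.storageSystem

    # Locate the software iSCSI adapter (for example, vmhba64).
    iscsi_hba = next(h for h in host.config.storageDevice.hostBusAdapter
                     if isinstance(h, vim.host.InternetScsiHba))

    # Add the data LIF under Dynamic Discovery (steps 2 and 3).
    target = vim.host.InternetScsiHba.SendTarget(address=DATA_LIF, port=3260)
    storage.AddInternetScsiSendTargets(iScsiHbaDevice=iscsi_hba.device, targets=[target])

    # Equivalent of the vCenter Rescan Storage action (step 8).
    storage.RescanAllHba()
    storage.RescanVmfs()
finally:
    Disconnect(si)
```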

vMSC uniform host access mode with ONTAP Tools.

  1. Ensure NetApp ONTAP Tools is deployed and registered to vCenter.
    ONTAP Tools Plug-in registered to vCenter
    If not, follow ONTAP Tools deployment and Add a vCenter server instance

  2. Ensure the ONTAP storage systems are registered with ONTAP Tools. This includes both fault domain storage systems and the third one used for asynchronous remote replication for VM protection with the SnapCenter Plug-in for VMware vSphere.
    Registered storage backends
    If not, follow Add storage backend using vSphere client UI

  3. Update hosts data to sync with ONTAP Tools, and then create a datastore.
    Update hosts data

  4. To enable SM-as, right-click the vSphere cluster and pick Protect cluster under NetApp ONTAP Tools (refer to the above screenshot).

  5. It will show the existing datastores for that cluster along with the SVM details. The default CG name is <vSphere Cluster name>_<SVM name>. Click the Add Relationship button.
    Protect Cluster

  6. Pick the target SVM and set the policy to AutomatedFailOverDuplex for SM-as. There is a toggle switch for Uniform host configuration. Set the proximity for each host.
    Add SnapMirror Relationship

  7. Verify the host proximity info and other details. Add another relationship to the third site with a replication policy of Asynchronous if required. Then, click Protect.
    Add Relationship
    NOTE: If you plan to use SnapCenter Plug-in for VMware vSphere 6.0, the replication needs to be set up at the volume level rather than at the Consistency Group level.

  8. With Uniform host access, the host has iSCSI connections to both fault domain storage arrays.
    iSCSI Multipath info
    NOTE: The above screenshot is from AFF. On ASA, Active I/O should be on all paths with proper network connections.

  9. The ONTAP Tools plug-in also indicates whether the volume is protected (a REST API sketch to confirm the relationship state follows this list).
    Volume protection status

  10. For more details and to update the host proximity info, the Host cluster relationships option under ONTAP Tools can be used.
    Host cluster relationships
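
The relationship created by ONTAP Tools can also be checked directly against the ONTAP REST API, as referenced in step 9. This is a minimal sketch; the cluster management address and credentials are assumptions, and the items to confirm are the AutomatedFailOverDuplex policy and an in_sync state.

```python
# Minimal sketch: check the SnapMirror active sync relationship state through the
# ONTAP REST API. Cluster address and credentials are placeholders.
import requests

CLUSTER = "https://cluster2.example.com"   # secondary fault domain cluster (assumption)
AUTH = ("admin", "password")

resp = requests.get(
    f"{CLUSTER}/api/snapmirror/relationships",
    params={"fields": "source.path,destination.path,policy.name,state"},
    auth=AUTH,
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()

for rel in resp.json().get("records", []):
    # For SM-as, expect policy AutomatedFailOverDuplex and state "in_sync".
    print(rel["source"]["path"], "->", rel["destination"]["path"],
          rel["policy"]["name"], rel["state"])
```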

VM protection with SnapCenter plug-in for VMware vSphere.

SnapCenter Plug-in for VMware vSphere (SCV) 6.0 or later supports SnapMirror active sync, including in combination with SnapMirror Async to replicate to a third fault domain.

Three site topology

Three site topology with async failover

Supported use cases include:
* Backup and restore of a VM or datastore from either fault domain with SnapMirror active sync.
* Restore of resources from the third fault domain.

  1. Add all the ONTAP storage systems that you plan to use in SCV.
    Register storage arrays

  2. Create a policy. Ensure Update SnapMirror after backup is checked for SM-as, and also Update SnapVault after backup for asynchronous replication to the third fault domain.
    Backup Policy

  3. Create a Resource Group with the desired items that need to be protected, and associate the policy and schedule.
    Resource Group
    NOTE: Snapshot names ending with _recent are not supported with SM-as.

  4. Backups occur at the scheduled time based on the policy associated with the Resource Group. Jobs can be monitored from the Dashboard job monitor or from the backup info on those resources (a sketch to confirm replica updates to the third site follows this list).
    SCV Dashboard
    Resource Backup info for Datastore
    Resource Backup info for VM

  5. VMs can be restored to the same or an alternate vCenter from the SVM on the primary fault domain or from one of the secondary locations.
    VM restore location options

  6. A similar option is also available for the datastore mount operation.
    Datastore restore location options
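
As referenced in step 4, whether the scheduled backups are driving replica updates to the third fault domain can be confirmed from the asynchronous relationship's lag time. The following is a minimal sketch against the ONTAP REST API; the third-site cluster address and credentials are assumptions.

```python
# Minimal sketch: confirm the asynchronous relationship to the third fault domain
# is updated after SCV backups by checking its lag time and last transfer type.
# Cluster address and credentials are placeholders.
import requests

THIRD_SITE = "https://cluster3.example.com"   # third fault domain cluster (assumption)
AUTH = ("admin", "password")

resp = requests.get(
    f"{THIRD_SITE}/api/snapmirror/relationships",
    params={"fields": "destination.path,state,lag_time,last_transfer_type"},
    auth=AUTH,
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()

for rel in resp.json().get("records", []):
    # A small lag_time shortly after the scheduled backup indicates the
    # "Update SnapMirror/SnapVault after backup" policy options are working.
    print(rel["destination"]["path"], rel.get("state"),
          rel.get("lag_time"), rel.get("last_transfer_type"))
```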

For assistance with additional operations in SCV, refer to the SnapCenter Plug-in for VMware vSphere documentation.