Use case 2: Backup and disaster recovery from the cloud to on-premises
This use case is based on a broadcasting customer that needs to back up cloud-based analytics data to its on-premises data center, as illustrated in the figure below.
Scenario
In this scenario, IoT sensor data is ingested into the cloud and analyzed with an open source Apache Spark cluster running in AWS. The requirement is to back up the processed data from the cloud to the on-premises data center.
Requirements and challenges
The main requirements and challenges for this use case include:
- Enabling data protection should not affect the performance of the production Spark/Hadoop cluster in the cloud.
- Cloud sensor data needs to be moved to on-premises storage and protected there in an efficient and secure way.
- Data must be transferable from the cloud to on-premises under different conditions, such as on demand, instantaneously, and during periods of low cluster load.
Solution
The customer uses Amazon Elastic Block Store (EBS) for its Spark cluster's HDFS storage, which receives and ingests data from remote sensors through Kafka. Consequently, the HDFS storage acts as the source for the backup data.
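The following sketch illustrates this ingest path with PySpark Structured Streaming, assuming the Spark Kafka connector package is available on the cluster. The broker address, topic name, record schema, and HDFS paths are placeholders for illustration, not values from the customer environment.

```python
# Minimal sketch: read IoT sensor records from Kafka and persist them to the
# EBS-backed HDFS storage, which later serves as the source for the backup.
# Broker, topic, schema, and paths below are assumptions, not actual values.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("iot-sensor-ingest").getOrCreate()

# Assumed shape of each JSON sensor reading.
schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Stream raw sensor messages from Kafka.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "iot-sensors")
       .load())

# Parse the JSON payload into structured columns.
parsed = (raw
          .select(from_json(col("value").cast("string"), schema).alias("r"))
          .select("r.*"))

# Write the processed data to HDFS (backed by EBS volumes); this directory
# becomes the source for the copy into the ONTAP NFS share.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "hdfs://namenode:8020/data/processed/sensor_metrics")
         .option("checkpointLocation",
                 "hdfs://namenode:8020/checkpoints/sensor_metrics")
         .start())

query.awaitTermination()
```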
To fulfill these requirements, NetApp ONTAP Cloud is deployed in AWS, and an NFS share is created to act as the backup target for the Spark/Hadoop cluster.
After the NFS share is created, the data is copied from the EBS-backed HDFS storage into the ONTAP NFS share. Once the data resides on NFS in ONTAP Cloud, SnapMirror technology can mirror it from the cloud to on-premises storage as needed, in a secure and efficient way.
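One way to drive the HDFS-to-NFS copy step is with Hadoop's built-in distcp tool, as in the sketch below. The NFS mount point, HDFS namenode address, and directory names are assumptions for illustration; the SnapMirror relationship itself is configured and updated from ONTAP, not from this script.

```python
# Minimal sketch: copy processed data from HDFS to the ONTAP NFS share using
# "hadoop distcp". Paths and hostnames are placeholders. The NFS share is
# assumed to be mounted at the same path on all cluster nodes so that the
# distcp map tasks can write to it.
import subprocess

HDFS_SOURCE = "hdfs://namenode:8020/data/processed/sensor_metrics"
NFS_TARGET = "file:///mnt/ontap_backup/sensor_metrics"

def copy_hdfs_to_nfs(source: str, target: str) -> None:
    """Run distcp to copy only new or changed files (-update) from the
    EBS-backed HDFS storage into the NFS share on ONTAP Cloud."""
    subprocess.run(
        ["hadoop", "distcp", "-update", source, target],
        check=True,
    )

if __name__ == "__main__":
    # This step can be scheduled on demand or during low-load windows.
    copy_hdfs_to_nfs(HDFS_SOURCE, NFS_TARGET)
    # Once the data lands on the NFS share, SnapMirror replicates it to the
    # on-premises system according to the relationship defined in ONTAP.
```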
The following image shows the solution for backup and disaster recovery from the cloud to on-premises.