Technology overview
This page provides an overview of the technology used in this solution.
Microsoft and NetApp
Since May 2019, Microsoft has delivered an Azure native, first-party portal service for enterprise NFS and SMB file services based on NetApp ONTAP technology. This development is driven by a strategic partnership between Microsoft and NetApp and further extends the reach of world-class ONTAP data services to Azure.
Azure NetApp Files
The Azure NetApp Files service is an enterprise-class, high-performance, metered file storage service. Azure NetApp Files supports any workload type and is highly available by default. You can select service and performance levels and set up Snapshot copies through the service. Azure NetApp Files is an Azure first-party service for migrating and running the most demanding enterprise file workloads in the cloud, including databases, SAP, and high-performance computing applications, with no code changes.
This reference architecture gives IT organizations the following advantages:
- Eliminates design complexities
- Enables independent scaling of compute and storage
- Enables customers to start small and scale seamlessly
- Offers a range of storage tiers for various performance and cost points
Dask and NVIDIA RAPIDS overview
Dask is an open-source, parallel computing tool that scales Python libraries across multiple machines and provides faster processing of large amounts of data. It provides an API similar to conventional single-threaded Python libraries, such as pandas, NumPy, and scikit-learn. As a result, native Python users do not need to change much in their existing code to use resources across the cluster.
NVIDIA RAPIDS is a suite of open-source libraries that makes it possible to run end-to-end ML and data analytics workflows entirely on GPUs. Together with Dask, it enables you to scale easily from a GPU workstation (scale up) to multinode, multi-GPU clusters (scale out).
To deploy Dask on a cluster, you can use Kubernetes for resource orchestration. You can also scale the worker nodes up or down according to workload requirements, which in turn helps optimize cluster resource consumption, as shown in the following figure.
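The scale-up/scale-down behavior described above can be sketched with Dask's own cluster interface. The example below uses a `LocalCluster` so it runs on a single machine (it assumes `dask[distributed]` is installed); on Kubernetes, the `dask-kubernetes` project exposes an analogous cluster object with the same `scale()` call.

```python
# Sketch of elastic Dask worker scaling, using a local cluster as a
# stand-in for a Kubernetes-managed one.
from dask.distributed import Client, LocalCluster

# Start with 2 in-process workers (processes=False keeps the demo lightweight)
cluster = LocalCluster(n_workers=2, threads_per_worker=1, processes=False)
client = Client(cluster)

# Scale up to 4 workers for a demanding stage of the pipeline
cluster.scale(4)
client.wait_for_workers(4)
print(len(cluster.workers))  # 4

# Scale back down to release resources when the heavy stage finishes
cluster.scale(1)

client.close()
cluster.close()
```

On Kubernetes, each worker would be a pod, so scaling the cluster object translates directly into pods being created or torn down by the orchestrator.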