RDMA overview
ONTAP's Remote Direct Memory Access (RDMA) offerings support latency-sensitive and high-bandwidth workloads. RDMA allows data to be copied directly between storage system memory and host system memory, bypassing CPU involvement and its associated overhead.
NFS over RDMA
Beginning with ONTAP 9.10.1, you can configure NFS over RDMA to enable the use of NVIDIA GPUDirect Storage for GPU-accelerated workloads on hosts with supported NVIDIA GPUs.
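As a rough sketch of what this configuration can look like, RDMA is enabled on the SVM's NFS server and the Linux client mounts with the RDMA transport. The SVM name `vs1`, data LIF name `svm1-lif`, volume path, and mount point below are placeholders for illustration, not values from this page:

```shell
# On ONTAP: enable RDMA for the NFS server of an SVM
# (SVM name "vs1" is a placeholder).
vserver nfs modify -vserver vs1 -rdma enabled

# On a Linux NFS client: mount over RDMA. NFS over RDMA
# conventionally listens on port 20049; the server LIF name
# and paths are placeholders.
mount -t nfs -o vers=4.0,proto=rdma,port=20049 svm1-lif:/vol1 /mnt/vol1
```

The data LIF must be hosted on RoCE-capable ports for the RDMA mount to succeed.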
RDMA cluster interconnect
RDMA cluster interconnect reduces latency, decreases failover times, and accelerates communication between nodes in a cluster.
Beginning with ONTAP 9.10.1, cluster interconnect RDMA is supported for certain hardware systems when used with X1151A cluster NICs. Beginning with ONTAP 9.13.1, X91153A NICs also support cluster interconnect RDMA. Consult the table to learn what systems are supported in different ONTAP releases.
| Systems | Supported ONTAP versions |
| --- | --- |
| | ONTAP 9.10.1 and later |
| | ONTAP 9.13.1 and later |
Provided the storage system is set up appropriately, no additional configuration is needed to use RDMA cluster interconnect.
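Although no extra configuration is required, you may still want to confirm that the cluster ports and cluster LIFs on the RDMA-capable NICs are healthy. A minimal sketch using standard ONTAP CLI commands (output varies by platform):

```shell
# List ports in the Cluster IPspace and check their link status
# (port names differ by platform and NIC slot).
network port show -ipspace Cluster

# List cluster LIFs and confirm each is up and on its home port.
network interface show -vserver Cluster
```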