NFS over RDMA

NFS over RDMA utilizes RDMA adapters, allowing data to be copied directly between storage system memory and host system memory, circumventing CPU interruptions and overhead.

NFS over RDMA configurations are designed for customers with latency-sensitive or high-bandwidth workloads such as machine learning and analytics. NVIDIA has extended NFS over RDMA to enable GPUDirect Storage (GDS). GDS further accelerates GPU-enabled workloads by bypassing the CPU and main memory altogether, using RDMA to transfer data directly between the storage system and GPU memory.

In ONTAP 9.10.1, this configuration is only supported for the NFSv4.0 protocol when used with the Mellanox CX-5 or CX-6 adapter, which provides support for RDMA using version 2 of the RoCE protocol. GDS is only supported using NVIDIA Tesla- and Ampere-family GPUs with Mellanox NIC cards and MOFED software.
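For illustration only, a Linux client with a Mellanox RDMA-capable NIC and MOFED installed would typically mount such an export over RoCE using the standard NFS/RDMA mount options; the server name, export path, and mount point below are placeholders for your environment.

    # Mount an NFSv4.0 export over RDMA; 20049 is the standard NFS/RDMA port.
    mount -t nfs -o vers=4.0,proto=rdma,port=20049 svm_data_lif:/vol_ml /mnt/vol_ml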

Requirements
  • Storage systems must be running ONTAP 9.10.1.

  • Both nodes in the HA pair must be running the same ONTAP version.

  • Storage system controllers must have RDMA support (currently A400, A700, and A800).

  • Storage appliances must be configured with RDMA-capable hardware (e.g., Mellanox CX-5 or CX-6).

  • Data LIFs must be configured to support RDMA (see the example after this list).

  • Clients must be using Mellanox RDMA-capable NIC cards and Mellanox OFED (MOFED) network software.
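As a sketch of the data LIF requirement above, RoCE is enabled per LIF through the -rdma-protocols parameter introduced in ONTAP 9.10.1; the cluster, SVM, and LIF names below are placeholders, and the exact syntax should be verified against your ONTAP release.

    cluster1::> network interface modify -vserver svm_ml -lif lif_nfs_rdma -rdma-protocols roce
    cluster1::> network interface show -vserver svm_ml -fields rdma-protocols

With the LIF configured, the client mounts the export with proto=rdma as shown earlier.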