BeeGFS on NetApp with E-Series Storage

Technical requirements


To implement the BeeGFS on NetApp solution, make sure your environment meets the technology requirements.

Hardware requirements

The following table lists the hardware components that are required to implement a single second-generation building block design of the BeeGFS on NetApp solution.

Note The hardware components used in any particular implementation of the solution might vary based on customer requirements.
2x BeeGFS file nodes

Each file node should meet or exceed the following configuration to achieve expected performance.

Processors:

  • 2x AMD EPYC 7343 16C 3.2 GHz.

  • Configured as two NUMA zones.

Memory:

  • 256GB.

  • 16x 16GB TruDDR4 3200MHz (2Rx8, 1.2V) RDIMM-A (prefer a larger number of smaller DIMMs over fewer larger DIMMs).

  • Populated to maximize memory bandwidth.

PCIe Expansion: Four PCIe Gen4 x16 slots:

  • Two slots per NUMA zone.

  • Each slot should provide enough power/cooling for the Mellanox MCX653106A-HDAT adapter.

Miscellaneous:

  • Two 1TB 7.2K SATA drives (or comparable) configured in RAID 1 for the OS.

  • 10GbE OCP 3.0 adapter (or comparable) for in-band OS management.

  • 1GbE BMC with Redfish API for out-of-band server management.

  • Dual hot swap power supplies and performance fans.

  • Must support Mellanox optical InfiniBand cables if required to reach storage InfiniBand switches.

Lenovo SR665:

  • A custom NetApp model includes the XClarity controller firmware version required to support dual-port Mellanox ConnectX-6 adapters. Contact NetApp for ordering details.

8x Mellanox ConnectX-6 HCAs (for file nodes)

  • MCX653106A-HDAT Host Channel Adapters (HDR IB 200Gb, Dual-port QSFP56, PCIe4.0 x16).

8x 1m HDR InfiniBand cables (for file/block node direct connects)

  • MCP1650-H001E30 (1m Mellanox Passive Copper cable, IB HDR, up to 200Gbps, QSFP56, 30AWG).

The length can be adjusted to account for longer distances between the file and block nodes if required.

8x HDR InfiniBand cables (for file node/storage switch connections)

Requires InfiniBand HDR cables (QSFP56 transceivers) of the appropriate length to connect file nodes to storage leaf switches. Possible options include:

  • MCP1650-H002E26 (2m Mellanox Passive Copper cable, IB HDR, up to 200Gb/s, QSFP56, 30AWG).

  • MFS1S00-H003E (3m Mellanox active fiber cable, IB HDR, up to 200Gb/s, QSFP56).

2x E-Series block nodes

Each EF600 block node contains two controllers and is configured as follows:

  • Memory: 256GB (128GB per controller).

  • Adapter: 2-port 200Gb/HDR (NVMe/IB).

  • Drives: Configured to match desired capacity.

Software requirements

For predictable performance and reliability, releases of the BeeGFS on NetApp solution are tested with specific versions of the software components required to implement the solution.

Software deployment requirements

The following table lists the software that is deployed automatically as part of the Ansible-based BeeGFS deployment.

  • BeeGFS: 7.2.6

  • Corosync: 3.1.5-1

  • Pacemaker: 2.1.0-8

  • OpenSM: opensm-5.9.0 (from mlnx_ofed 5.4-1.0.3.0)

Note The OpenSM subnet manager is only required for the direct connects between file and block nodes, where no InfiniBand switch is available to provide one.
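
The Ansible-based deployment installs and configures these components automatically, so no manual installation is needed. If you want to confirm the deployed versions after a playbook run, a minimal sketch such as the following can be used; the file_nodes inventory group and the package names queried are illustrative and should be adjusted to your inventory and to the BeeGFS services running on each node.

  # check_deployed_versions.yml (illustrative only)
  - name: Verify BeeGFS HA software versions on file nodes
    hosts: file_nodes                      # hypothetical inventory group name
    gather_facts: false
    tasks:
      - name: Collect installed package information
        ansible.builtin.package_facts:
          manager: auto

      - name: Report versions of the HA stack components
        ansible.builtin.debug:
          msg: "{{ item }}: {{ ansible_facts.packages[item] | map(attribute='version') | list }}"
        loop:
          - beegfs-meta
          - beegfs-storage
          - corosync
          - pacemaker
        when: item in ansible_facts.packages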

Ansible control node requirements

The BeeGFS on NetApp solution is deployed and managed from an Ansible control node. For more information, see the Ansible documentation.

The software requirements listed in the following tables are specific to the version of the NetApp BeeGFS Ansible collection listed below.

  • Ansible: 2.11 (when installed through pip: ansible 4.7.0, with ansible-core >= 2.11.6 and < 2.12)

  • Python: 3.9

  • Additional Python packages: cryptography 35.0.0, netaddr 0.8.0

  • BeeGFS Ansible collection: 3.0.0
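
As a sketch of how a control node might be prepared with these pinned versions: the Python packages can be installed with pip (for example, pip install ansible==4.7.0 cryptography==35.0.0 netaddr==0.8.0), and the collection can then be installed from a requirements file with ansible-galaxy collection install -r requirements.yml. The collection name below is assumed to be netapp_eseries.beegfs as published on Ansible Galaxy; verify it against the release you are deploying.

  # requirements.yml (illustrative)
  collections:
    - name: netapp_eseries.beegfs    # assumed collection name; confirm on Ansible Galaxy
      version: 3.0.0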

File node requirements

  • Red Hat Enterprise Linux: Red Hat 8.4 Server Physical with High Availability (2 socket).

Important File nodes require a valid Red Hat Enterprise Linux Server subscription and the Red Hat Enterprise Linux High Availability Add-On.

  • Linux kernel: 4.18.0-305.25.1.el8_4.x86_64

  • InfiniBand/RDMA drivers: Inbox

  • ConnectX-6 HCA firmware: FW 20.31.1014, PXE 3.6.0403, UEFI 14.24.0013
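
Before deploying, it can be useful to confirm that each file node is running the tested OS release and kernel. The following is a minimal sketch using standard Ansible facts; the file_nodes group name is illustrative. HCA firmware is typically checked separately with the NVIDIA/Mellanox firmware tools (for example, mlxfwmanager).

  # illustrative pre-flight check; expected values are taken from the table above
  - name: Confirm file node OS and kernel match the tested versions
    hosts: file_nodes                      # hypothetical inventory group name
    gather_facts: true
    tasks:
      - name: Assert RHEL release and kernel version
        ansible.builtin.assert:
          that:
            - ansible_facts['distribution'] == "RedHat"
            - ansible_facts['distribution_version'] == "8.4"
            - ansible_facts['kernel'] == "4.18.0-305.25.1.el8_4.x86_64"
          fail_msg: "This node does not match the tested RHEL 8.4 configuration."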

EF600 block node requirements

  • SANtricity OS: 11.70.2

  • NVSRAM: N6000-872834-D06.dlp

  • Drive firmware: latest available for the drive models in use.
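
The SANtricity OS and NVSRAM levels can be confirmed in SANtricity System Manager, or with Ansible as sketched below using the na_santricity_facts module from the netapp_eseries.santricity collection. The URL, credentials, and system ID shown are placeholders, and the collection and module names should be verified against the versions available in your environment.

  # illustrative only; replace the URL and credentials with your own
  - name: Gather EF600 block node information
    hosts: localhost
    gather_facts: false
    tasks:
      - name: Query the array through embedded SANtricity Web Services
        netapp_eseries.santricity.na_santricity_facts:
          ssid: "1"                              # "1" typically addresses the local array on embedded Web Services
          api_url: "https://ef600-a.example.com:8443/devmgr/v2"
          api_username: "admin"
          api_password: "{{ ef600_password }}"   # supply securely, for example with Ansible Vault
          validate_certs: false
        register: ef600_info

      - name: Show the collected information
        ansible.builtin.debug:
          var: ef600_info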

Additional requirements

The equipment listed in the following table was used for the validation, but appropriate alternatives can be used as needed. In general, NetApp recommends running the latest software versions to avoid unanticipated issues.

2x Mellanox MQM8700 200Gb InfiniBand switches:

Installed software:

  • Firmware 3.9.2110

1x Ansible control node (virtualized):

  • Processors: Intel® Xeon® Gold 6146 CPU @ 3.20GHz

  • Memory: 8GB

  • Local storage: 24GB

Installed software:

  • CentOS Linux 8.4.2105

  • Kernel 4.18.0-305.3.1.el8.x86_64

  • Ansible and Python versions matching those listed in the table above.

10x BeeGFS Clients (CPU nodes):

  • Processor: 1x AMD EPYC 7302 16-Core CPU at 3.0GHz

  • Memory: 128GB

  • Network: 2x Mellanox MCX653106A-HDAT (one port connected per adapter).

Installed software:

  • Ubuntu 20.04

  • Kernel: 5.4.0-100-generic

  • InfiniBand drivers: Mellanox OFED 5.4-1.0.3.0

1x BeeGFS Client (GPU node):

  • Processors: 2x AMD EPYC 7742 64-Core CPUs at 2.25GHz

  • Memory: 1TB

  • Network: 2x Mellanox MCX653106A-HDAT (one port connected per adapter).

This system is based on NVIDIA's HGX A100 platform and includes four A100 GPUs.

Installed software:

  • Ubuntu 20.04

  • Kernel: 5.4.0-100-generic

  • InfiniBand drivers: Mellanox OFED 5.4-1.0.3.0