BeeGFS on NetApp with E-Series Storage

Terms and concepts


The following terms and concepts apply to the BeeGFS on NetApp solution.

Tip: See the Administer BeeGFS clusters section for additional details on terms and concepts specific to interacting with BeeGFS high availability (HA) clusters.

AI

Artificial Intelligence.

Ansible Inventory

Directory structure containing YAML files that are used to describe the desired BeeGFS HA cluster.
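
For illustration only, the sketch below shows how such an inventory directory might be organized; the file and host names are hypothetical, and the exact layout used in a real deployment is defined by the BeeGFS HA Ansible roles.

    inventory.yml        # Maps file nodes and block nodes to groups (hypothetical hostnames).
    group_vars/
        all.yml          # Settings that apply to the whole HA cluster.
        meta_01.yml      # Settings for an individual BeeGFS service (example name).
    host_vars/
        ftnode1.yml      # Per-node settings, such as management and storage interfaces.
        ftnode2.yml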

BMC

Baseboard management controller. Sometimes referred to as a service processor.

block nodes

Storage systems that provide block storage to the file nodes; in this solution, NetApp E-Series storage systems.

clients

Nodes in the HPC cluster running applications that need to utilize the file system. Sometimes also referred to as compute or GPU nodes.

DL

Deep Learning.

file nodes

Servers that run the BeeGFS file services (management, metadata, and storage).

HA

High Availability.

HIC

Host Interface Card.

HPC

High-Performance Computing.

HPC-style workloads

HPC-style workloads are typically characterized by multiple compute nodes or GPUs that all need to access the same dataset in parallel to facilitate a distributed compute or training job. These datasets often consist of large files that should be striped across multiple physical storage nodes to eliminate the traditional hardware bottlenecks that would otherwise prevent concurrent access to a single file.
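
As a concrete (but hedged) illustration of striping, the commands below use the standard beegfs-ctl tool to inspect and set the stripe pattern on a directory; the mount point, target count, and chunk size are example values only.

    # Show how files created under this directory are striped (example mount point).
    beegfs-ctl --getentryinfo /mnt/beegfs/dataset

    # Stripe new files across 4 storage targets in 1 MiB chunks (example values).
    beegfs-ctl --setpattern --numtargets=4 --chunksize=1m /mnt/beegfs/dataset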

ML

Machine Learning.

NLP

Natural Language Processing.

NLU

Natural Language Understanding.

NVA

The NetApp Verified Architecture (NVA) program provides reference configurations and sizing guidance for specific workloads and use cases. These solutions are thoroughly tested and designed to minimize deployment risk and accelerate time to market.

storage network / client network

Network used by clients to communicate with the BeeGFS file system. This is often the same network used for parallel Message Passing Interface (MPI) and other application communication between HPC cluster nodes.
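
As one hedged example of how a client is directed to this network, the BeeGFS client configuration supports a connInterfacesFile that lists the interfaces to use in order of preference; the file paths and interface names below are illustrative.

    # Excerpt from /etc/beegfs/beegfs-client.conf (illustrative path).
    connInterfacesFile = /etc/beegfs/connInterfacesFile

    # /etc/beegfs/connInterfacesFile: one interface per line, most preferred first (example names).
    ib0
    eth1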