BeeGFS on NetApp with E-Series Storage

Review best practices


Follow the best practice guidelines when deploying the BeeGFS on NetApp solution.

Standard conventions

When physically assembling the solution and creating the Ansible inventory file, follow these standard conventions (for more information, see Create the Ansible inventory). A short sketch illustrating the naming scheme follows this list.

  • File node host names are sequentially numbered (h01-hN) with lower numbers at the top of the rack and higher numbers at the bottom.

    For example, the naming convention [location][row][rack]hN looks like: ictad22h01.

  • Each block node consists of two storage controllers, each with its own host name.

    A storage array name refers to the whole block storage system in the Ansible inventory. Storage array names should be sequentially numbered (a01-aN), and the host names for the individual controllers are derived from that name.

    For example, a block node named ictad22a01 might have host names configured for its controllers, such as ictad22a01-a and ictad22a01-b, while being referred to in the Ansible inventory as ictad22a01.

  • File and block nodes within the same building block share the same numbering scheme and are adjacent to each other in the rack with both file nodes on top and both block nodes directly underneath them.

    For example, in the first building block, file nodes h01 and h02 are both directly connected to block nodes a01 and a02. From top to bottom, the host names are h01, h02, a01, and a02.

  • Building blocks are installed in sequential order based on their host names, so that lower numbered host names are at the top of the rack and higher numbered host names are at the bottom.

    The intent is to minimize the length of cable running to the top-of-rack switches and to define a standard deployment practice that simplifies troubleshooting. In data centers where this is not allowed due to concerns about rack stability, the inverse is acceptable: populate the rack from the bottom up.
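To make these naming conventions concrete, the following minimal Python sketch generates rack-ordered host names for a set of building blocks. The prefix ictad22 and the building-block count are illustrative values, not part of the solution.

```python
# Sketch: generate rack-ordered host names for a set of building blocks,
# following the [location][row][rack] prefix convention (for example, ictad22).
prefix = "ictad22"      # illustrative location/row/rack prefix
building_blocks = 2     # illustrative count

for bb in range(building_blocks):
    file_nodes = [f"{prefix}h{2 * bb + n:02d}" for n in (1, 2)]
    block_nodes = [f"{prefix}a{2 * bb + n:02d}" for n in (1, 2)]
    # Rack order, top to bottom: both file nodes, then both block nodes.
    print(*file_nodes, *block_nodes)
    # Each block node (storage array) name yields two controller host names.
    for array in block_nodes:
        print(f"  {array} controllers: {array}-a, {array}-b")
```

For the first building block, this prints ictad22h01, ictad22h02, ictad22a01, and ictad22a02 in rack order, matching the example above.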

InfiniBand storage network configuration

Half of the InfiniBand ports on each file node connect directly to block nodes. The other half connect to the InfiniBand switches and are used for BeeGFS client-server connectivity. When sizing the IPoIB subnets used for BeeGFS clients and servers, account for the anticipated growth of your compute/GPU cluster and BeeGFS file system. If you must deviate from the recommended IP ranges, make sure that each direct connect in a single building block uses a unique subnet and that there is no overlap with the subnets used for client-server connectivity.

Direct connects

File and block nodes within each building block always use the IPs in the following table for their direct connects.

Note This addressing scheme adheres to the following rule: the third octet is odd for odd file nodes and even for even file nodes.
| File node | IB port | IP address | Block node | IB port | Physical IP | Virtual IP |
| --- | --- | --- | --- | --- | --- | --- |
| Odd (h1) | i1a | 192.168.1.10 | Odd (c1) | 2a | 192.168.1.100 | 192.168.1.101 |
| Odd (h1) | i2a | 192.168.3.10 | Odd (c1) | 2a | 192.168.3.100 | 192.168.3.101 |
| Odd (h1) | i3a | 192.168.5.10 | Even (c2) | 2a | 192.168.5.100 | 192.168.5.101 |
| Odd (h1) | i4a | 192.168.7.10 | Even (c2) | 2a | 192.168.7.100 | 192.168.7.101 |
| Even (h2) | i1a | 192.168.2.10 | Odd (c1) | 2b | 192.168.2.100 | 192.168.2.101 |
| Even (h2) | i2a | 192.168.4.10 | Odd (c1) | 2b | 192.168.4.100 | 192.168.4.101 |
| Even (h2) | i3a | 192.168.6.10 | Even (c2) | 2b | 192.168.6.100 | 192.168.6.101 |
| Even (h2) | i4a | 192.168.8.10 | Even (c2) | 2b | 192.168.8.100 | 192.168.8.101 |
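For scripting or verification, the direct-connect addressing can be derived from the file node's parity and the port index. The following minimal Python sketch reproduces the table above; the direct_connect helper is a hypothetical illustration, not part of the solution tooling.

```python
# Sketch: derive the direct-connect subnets shown in the table above.
# file_node is 1 (odd, h1) or 2 (even, h2); port index k is 1..4 (i1a..i4a).
def direct_connect(file_node: int, k: int) -> dict:
    third_octet = 2 * (k - 1) + file_node   # odd octets for h1, even for h2
    base = f"192.168.{third_octet}"
    return {
        "file_node_ip": f"{base}.10",
        "block_physical_ip": f"{base}.100",
        "block_virtual_ip": f"{base}.101",
    }

for fn in (1, 2):
    for k in range(1, 5):
        print(f"h{fn} i{k}a ->", direct_connect(fn, k))
```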

BeeGFS client-server IPoIB addressing scheme (two subnets)

To allow BeeGFS clients to use two InfiniBand ports, two IPoIB subnets are required. Half of the BeeGFS server services are configured with a preferred IP on one subnet and half on the other, ensuring that clients use both InfiniBand ports and maximizing redundancy and possible throughput to the file system.

Each file node runs multiple BeeGFS server services (management, metadata, or storage). To allow each service to fail over independently to the other file node, each is configured with unique IP addresses that can float between both nodes (sometimes referred to as a logical interface or LIF).

While not mandatory, this deployment presumes that the following IPoIB subnet ranges are used for these connections and defines a standard addressing scheme that applies the following rules:

  • The second octet is always odd or even, based on whether the file node InfiniBand port is odd or even.

  • BeeGFS cluster IPs are always at xxx.yyy.100.zzz.

Note In addition to the interface used for in-band OS management, additional interfaces can be used by Corosync for cluster heartbeating and synchronization. This ensures that the loss of a single interface does not bring down the entire cluster.
  • The BeeGFS Management service is always at xxx.yyy.101.0 or xxx.yyy.102.0.

  • BeeGFS Metadata services are always at xxx.yyy.101.zzz or xxx.yyy.102.zzz.

  • BeeGFS Storage services are always at xxx.yyy.103.zzz or xxx.yyy.104.zzz.

  • Addresses in the range xxx.yyy.1.1 through xxx.yyy.99.255 are reserved for clients.

Subnet A: 100.127.0.0/16

The following table provides the range for Subnet A: 100.127.0.0/16.

| Purpose | InfiniBand port | IP address or range |
| --- | --- | --- |
| BeeGFS Cluster IP | i1b | 100.127.100.1 - 100.127.100.255 |
| BeeGFS Management | i1b | 100.127.101.0 |
| BeeGFS Metadata | i1b or i3b | 100.127.101.1 - 100.127.101.255 |
| BeeGFS Storage | i1b or i3b | 100.127.103.1 - 100.127.103.255 |
| BeeGFS Clients | (varies by client) | 100.127.1.1 - 100.127.99.255 |

Subnet B: 100.128.0.0/16

The following table provides the range for Subnet B: 100.128.0.0/16.

| Purpose | InfiniBand port | IP address or range |
| --- | --- | --- |
| BeeGFS Cluster IP | i4b | 100.128.100.1 - 100.128.100.255 |
| BeeGFS Management | i2b | 100.128.102.0 |
| BeeGFS Metadata | i2b or i4b | 100.128.102.1 - 100.128.102.255 |
| BeeGFS Storage | i2b or i4b | 100.128.104.1 - 100.128.104.255 |
| BeeGFS Clients | (varies by client) | 100.128.1.1 - 100.128.99.255 |
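As a sanity check on the addressing rules, the following minimal Python sketch derives the service IPs shown in the two tables above. The SCHEME mapping and the service_ip helper are hypothetical illustrations, not part of the solution tooling.

```python
# Sketch: per-subnet third octets used by the addressing scheme above.
# zzz is the node or service ID (the fourth octet); values are illustrative.
SCHEME = {
    "A": {"second_octet": 127, "cluster": 100, "mgmt": 101, "meta": 101, "storage": 103},
    "B": {"second_octet": 128, "cluster": 100, "mgmt": 102, "meta": 102, "storage": 104},
}

def service_ip(subnet: str, role: str, zzz: int) -> str:
    s = SCHEME[subnet]
    return f"100.{s['second_octet']}.{s[role]}.{zzz}"

print(service_ip("A", "meta", 1))      # 100.127.101.1
print(service_ip("B", "storage", 8))   # 100.128.104.8
print(service_ip("B", "mgmt", 0))      # 100.128.102.0
```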

Note Not all IPs in the above ranges are used in this NetApp Verified Architecture. They demonstrate how IP addresses can be preallocated to allow easy file system expansion using a consistent IP addressing scheme. In this scheme, BeeGFS file nodes and service IDs correspond to the fourth octet of a well-known range of IPs. The file system can scale beyond 255 nodes or services if needed.