BeeGFS on NetApp with E-Series Storage

Define Ansible inventory for BeeGFS building blocks


After defining the general Ansible inventory structure, define the configuration for each building block in the BeeGFS file system.

These deployment instructions demonstrate how to deploy a file system that consists of a base building block including management, metadata, and storage services; a second building block with metadata and storage services; and a third storage-only building block.

These steps demonstrate the full range of typical configuration profiles that you can use to configure NetApp BeeGFS building blocks to meet the requirements of the overall BeeGFS file system.

Note In this and subsequent sections, adjust the inventory as needed to represent the BeeGFS file system that you want to deploy. In particular, use Ansible host names that represent each block or file node, and choose an IP addressing scheme for the storage network that can scale to the number of BeeGFS file nodes and clients.

Step 1: Create the Ansible inventory file

Steps
  1. Create a new inventory.yml file, and then insert the following parameters, replacing the hosts under eseries_storage_systems as needed to represent the block nodes in your deployment. The host names must match the names used for the corresponding host_vars/<FILENAME>.yml files.

    # BeeGFS HA (High Availability) cluster inventory.
    all:
      children:
        # Ansible group representing all block nodes:
        eseries_storage_systems:
          hosts:
            netapp_01:
            netapp_02:
            netapp_03:
            netapp_04:
            netapp_05:
            netapp_06:
        # Ansible group representing all file nodes:
        ha_cluster:
          children:

    In the subsequent sections, you will create additional Ansible groups under ha_cluster that represent the BeeGFS services you want to run in the cluster.
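
    After you finish adding those groups, you can confirm that the completed inventory parses correctly by running ansible-inventory -i inventory.yml --graph from the Ansible control node (this assumes Ansible is already installed there).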

Step 2: Configure the inventory for a management, metadata, and storage building block

The first building block in the cluster, referred to as the base building block, must include the BeeGFS management service along with metadata and storage services:

Steps
  1. In inventory.yml, populate the following parameters under ha_cluster: children:

          # beegfs_01/beegfs_02 HA Pair (mgmt/meta/storage building block):
            mgmt:
              hosts:
                beegfs_01:
                beegfs_02:
            meta_01:
              hosts:
                beegfs_01:
                beegfs_02:
            stor_01:
              hosts:
                beegfs_01:
                beegfs_02:
            meta_02:
              hosts:
                beegfs_01:
                beegfs_02:
            stor_02:
              hosts:
                beegfs_01:
                beegfs_02:
            meta_03:
              hosts:
                beegfs_01:
                beegfs_02:
            stor_03:
              hosts:
                beegfs_01:
                beegfs_02:
            meta_04:
              hosts:
                beegfs_01:
                beegfs_02:
            stor_04:
              hosts:
                beegfs_01:
                beegfs_02:
            meta_05:
              hosts:
                beegfs_02:
                beegfs_01:
            stor_05:
              hosts:
                beegfs_02:
                beegfs_01:
            meta_06:
              hosts:
                beegfs_02:
                beegfs_01:
            stor_06:
              hosts:
                beegfs_02:
                beegfs_01:
            meta_07:
              hosts:
                beegfs_02:
                beegfs_01:
            stor_07:
              hosts:
                beegfs_02:
                beegfs_01:
            meta_08:
              hosts:
                beegfs_02:
                beegfs_01:
            stor_08:
              hosts:
                beegfs_02:
                beegfs_01:
  2. Create the file group_vars/mgmt.yml and include the following:

    # mgmt - BeeGFS HA Management Resource Group
    # OPTIONAL: Override default BeeGFS management configuration:
    # beegfs_ha_beegfs_mgmtd_conf_resource_group_options:
    #  <beegfs-mgmt.conf:key>:<beegfs-mgmt.conf:value>
    floating_ips:
      - i1b: 100.127.101.0/16
      - i2b: 100.127.102.0/16
    beegfs_service: management
    beegfs_targets:
      netapp_01:
        eseries_storage_pool_configuration:
          - name: beegfs_m1_m2_m5_m6
            raid_level: raid1
            criteria_drive_count: 4
            common_volume_configuration:
              segment_size_kb: 128
            volumes:
              - size: 1
                owning_controller: A
  3. Under group_vars/, create files for resource groups meta_01 through meta_08 using the following template, and then fill in the placeholder values for each service by referencing the table that follows (a completed example for meta_01.yml is shown after the table):

    # meta_0X - BeeGFS HA Metadata Resource Group
    beegfs_ha_beegfs_meta_conf_resource_group_options:
      connMetaPortTCP: <PORT>
      connMetaPortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET> # Example: i1b:192.168.120.1/16
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: metadata
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid1
            criteria_drive_count: 4
            common_volume_configuration:
              segment_size_kb: 128
            volumes:
              - size: 21.25 # SEE NOTE BELOW!
                owning_controller: <OWNING CONTROLLER>
    Note The volume size is specified as a percentage of the overall storage pool (also referred to as a volume group). NetApp highly recommends that you leave some free capacity in each pool to allow room for SSD overprovisioning (for more information, see Introduction to NetApp EF600 array). The beegfs_m1_m2_m5_m6 storage pool also allocates 1% of the pool's capacity for the management service. Thus, for metadata volumes in the beegfs_m1_m2_m5_m6 storage pool, set this value to 21.25 when 1.92TB or 3.84TB drives are used, to 22.25 for 7.65TB drives, and to 23.75 for 15.3TB drives.
    | File name | Port | Floating IPs | NUMA zone | Block node | Storage pool | Owning controller |
    | --- | --- | --- | --- | --- | --- | --- |
    | meta_01.yml | 8015 | i1b:100.127.101.1/16, i2b:100.127.102.1/16 | 0 | netapp_01 | beegfs_m1_m2_m5_m6 | A |
    | meta_02.yml | 8025 | i2b:100.127.102.2/16, i1b:100.127.101.2/16 | 0 | netapp_01 | beegfs_m1_m2_m5_m6 | B |
    | meta_03.yml | 8035 | i3b:100.127.101.3/16, i4b:100.127.102.3/16 | 1 | netapp_02 | beegfs_m3_m4_m7_m8 | A |
    | meta_04.yml | 8045 | i4b:100.127.102.4/16, i3b:100.127.101.4/16 | 1 | netapp_02 | beegfs_m3_m4_m7_m8 | B |
    | meta_05.yml | 8055 | i1b:100.127.101.5/16, i2b:100.127.102.5/16 | 0 | netapp_01 | beegfs_m1_m2_m5_m6 | A |
    | meta_06.yml | 8065 | i2b:100.127.102.6/16, i1b:100.127.101.6/16 | 0 | netapp_01 | beegfs_m1_m2_m5_m6 | B |
    | meta_07.yml | 8075 | i3b:100.127.101.7/16, i4b:100.127.102.7/16 | 1 | netapp_02 | beegfs_m3_m4_m7_m8 | A |
    | meta_08.yml | 8085 | i4b:100.127.102.8/16, i3b:100.127.101.8/16 | 1 | netapp_02 | beegfs_m3_m4_m7_m8 | B |
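
    For reference, a completed group_vars/meta_01.yml built from the first row of the table would look similar to the following sketch (this assumes 1.92TB or 3.84TB drives, so the volume size is 21.25, and uses the same port for TCP and UDP; adjust the values to match your deployment):

    # meta_01 - BeeGFS HA Metadata Resource Group
    beegfs_ha_beegfs_meta_conf_resource_group_options:
      connMetaPortTCP: 8015
      connMetaPortUDP: 8015
      tuneBindToNumaZone: 0
    floating_ips:
      - i1b:100.127.101.1/16
      - i2b:100.127.102.1/16
    beegfs_service: metadata
    beegfs_targets:
      netapp_01:
        eseries_storage_pool_configuration:
          - name: beegfs_m1_m2_m5_m6
            raid_level: raid1
            criteria_drive_count: 4
            common_volume_configuration:
              segment_size_kb: 128
            volumes:
              - size: 21.25
                owning_controller: A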

  4. Under group_vars/, create files for resource groups stor_01 through stor_08 using the following template, and then fill in the placeholder values for each service by referencing the table that follows (a completed example for stor_01.yml is shown after the table):

    # stor_0X - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: <PORT>
      connStoragePortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET>
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: storage
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid6
            criteria_drive_count: 10
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50 # See note below!
                owning_controller: <OWNING CONTROLLER>
              - size: 21.50
                owning_controller: <OWNING CONTROLLER>
    Note For the correct size to use, see Recommended storage pool overprovisioning percentages.
    | File name | Port | Floating IPs | NUMA zone | Block node | Storage pool | Owning controller |
    | --- | --- | --- | --- | --- | --- | --- |
    | stor_01.yml | 8013 | i1b:100.127.103.1/16, i2b:100.127.104.1/16 | 0 | netapp_01 | beegfs_s1_s2 | A |
    | stor_02.yml | 8023 | i2b:100.127.104.2/16, i1b:100.127.103.2/16 | 0 | netapp_01 | beegfs_s1_s2 | B |
    | stor_03.yml | 8033 | i3b:100.127.103.3/16, i4b:100.127.104.3/16 | 1 | netapp_02 | beegfs_s3_s4 | A |
    | stor_04.yml | 8043 | i4b:100.127.104.4/16, i3b:100.127.103.4/16 | 1 | netapp_02 | beegfs_s3_s4 | B |
    | stor_05.yml | 8053 | i1b:100.127.103.5/16, i2b:100.127.104.5/16 | 0 | netapp_01 | beegfs_s5_s6 | A |
    | stor_06.yml | 8063 | i2b:100.127.104.6/16, i1b:100.127.103.6/16 | 0 | netapp_01 | beegfs_s5_s6 | B |
    | stor_07.yml | 8073 | i3b:100.127.103.7/16, i4b:100.127.104.7/16 | 1 | netapp_02 | beegfs_s7_s8 | A |
    | stor_08.yml | 8083 | i4b:100.127.104.8/16, i3b:100.127.103.8/16 | 1 | netapp_02 | beegfs_s7_s8 | B |
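
    For reference, a completed group_vars/stor_01.yml built from the first row of the table would look similar to the following sketch (both volumes are owned by the controller listed in the table, and the 21.50 size assumes the recommended overprovisioning percentages; adjust the values to match your deployment):

    # stor_01 - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: 8013
      connStoragePortUDP: 8013
      tuneBindToNumaZone: 0
    floating_ips:
      - i1b:100.127.103.1/16
      - i2b:100.127.104.1/16
    beegfs_service: storage
    beegfs_targets:
      netapp_01:
        eseries_storage_pool_configuration:
          - name: beegfs_s1_s2
            raid_level: raid6
            criteria_drive_count: 10
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50
                owning_controller: A
              - size: 21.50
                owning_controller: A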

Step 3: Configure the inventory for a metadata + storage building block

These steps describe how to set up an Ansible inventory for a BeeGFS metadata + storage building block.

Steps
  1. In inventory.yml, populate the following parameters under the existing configuration:

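          # beegfs_03/beegfs_04 HA Pair (meta/storage building block):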
            meta_09:
              hosts:
                beegfs_03:
                beegfs_04:
            stor_09:
              hosts:
                beegfs_03:
                beegfs_04:
            meta_10:
              hosts:
                beegfs_03:
                beegfs_04:
            stor_10:
              hosts:
                beegfs_03:
                beegfs_04:
            meta_11:
              hosts:
                beegfs_03:
                beegfs_04:
            stor_11:
              hosts:
                beegfs_03:
                beegfs_04:
            meta_12:
              hosts:
                beegfs_03:
                beegfs_04:
            stor_12:
              hosts:
                beegfs_03:
                beegfs_04:
            meta_13:
              hosts:
                beegfs_04:
                beegfs_03:
            stor_13:
              hosts:
                beegfs_04:
                beegfs_03:
            meta_14:
              hosts:
                beegfs_04:
                beegfs_03:
            stor_14:
              hosts:
                beegfs_04:
                beegfs_03:
            meta_15:
              hosts:
                beegfs_04:
                beegfs_03:
            stor_15:
              hosts:
                beegfs_04:
                beegfs_03:
            meta_16:
              hosts:
                beegfs_04:
                beegfs_03:
            stor_16:
              hosts:
                beegfs_04:
                beegfs_03:
  2. Under group_vars/, create files for resource groups meta_09 through meta_16 using the following template, and then fill in the placeholder values for each service by referencing the table that follows:

    # meta_XX - BeeGFS HA Metadata Resource Group
    beegfs_ha_beegfs_meta_conf_resource_group_options:
      connMetaPortTCP: <PORT>
      connMetaPortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET>
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: metadata
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid1
            criteria_drive_count: 4
            common_volume_configuration:
              segment_size_kb: 128
            volumes:
              - size: 21.5 # SEE NOTE BELOW!
                owning_controller: <OWNING CONTROLLER>
    Note For the correct size to use, see Recommended storage pool overprovisioning percentages.
    | File name | Port | Floating IPs | NUMA zone | Block node | Storage pool | Owning controller |
    | --- | --- | --- | --- | --- | --- | --- |
    | meta_09.yml | 8015 | i1b:100.127.101.9/16, i2b:100.127.102.9/16 | 0 | netapp_03 | beegfs_m9_m10_m13_m14 | A |
    | meta_10.yml | 8025 | i2b:100.127.102.10/16, i1b:100.127.101.10/16 | 0 | netapp_03 | beegfs_m9_m10_m13_m14 | B |
    | meta_11.yml | 8035 | i3b:100.127.101.11/16, i4b:100.127.102.11/16 | 1 | netapp_04 | beegfs_m11_m12_m15_m16 | A |
    | meta_12.yml | 8045 | i4b:100.127.102.12/16, i3b:100.127.101.12/16 | 1 | netapp_04 | beegfs_m11_m12_m15_m16 | B |
    | meta_13.yml | 8055 | i1b:100.127.101.13/16, i2b:100.127.102.13/16 | 0 | netapp_03 | beegfs_m9_m10_m13_m14 | A |
    | meta_14.yml | 8065 | i2b:100.127.102.14/16, i1b:100.127.101.14/16 | 0 | netapp_03 | beegfs_m9_m10_m13_m14 | B |
    | meta_15.yml | 8075 | i3b:100.127.101.15/16, i4b:100.127.102.15/16 | 1 | netapp_04 | beegfs_m11_m12_m15_m16 | A |
    | meta_16.yml | 8085 | i4b:100.127.102.16/16, i3b:100.127.101.16/16 | 1 | netapp_04 | beegfs_m11_m12_m15_m16 | B |

  3. Under group_vars/, create files for resource groups stor_09 through stor_16 using the following template, and then fill in the placeholder values for each service by referencing the table that follows:

    # stor_XX - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: <PORT>
      connStoragePortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET>
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: storage
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid6
            criteria_drive_count: 10
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50 # See note below!
                owning_controller: <OWNING CONTROLLER>
              - size: 21.50
                owning_controller: <OWNING CONTROLLER>
    Note For the correct size to use, see Recommended storage pool overprovisioning percentages.
    | File name | Port | Floating IPs | NUMA zone | Block node | Storage pool | Owning controller |
    | --- | --- | --- | --- | --- | --- | --- |
    | stor_09.yml | 8013 | i1b:100.127.103.9/16, i2b:100.127.104.9/16 | 0 | netapp_03 | beegfs_s9_s10 | A |
    | stor_10.yml | 8023 | i2b:100.127.104.10/16, i1b:100.127.103.10/16 | 0 | netapp_03 | beegfs_s9_s10 | B |
    | stor_11.yml | 8033 | i3b:100.127.103.11/16, i4b:100.127.104.11/16 | 1 | netapp_04 | beegfs_s11_s12 | A |
    | stor_12.yml | 8043 | i4b:100.127.104.12/16, i3b:100.127.103.12/16 | 1 | netapp_04 | beegfs_s11_s12 | B |
    | stor_13.yml | 8053 | i1b:100.127.103.13/16, i2b:100.127.104.13/16 | 0 | netapp_03 | beegfs_s13_s14 | A |
    | stor_14.yml | 8063 | i2b:100.127.104.14/16, i1b:100.127.103.14/16 | 0 | netapp_03 | beegfs_s13_s14 | B |
    | stor_15.yml | 8073 | i3b:100.127.103.15/16, i4b:100.127.104.15/16 | 1 | netapp_04 | beegfs_s15_s16 | A |
    | stor_16.yml | 8083 | i4b:100.127.104.16/16, i3b:100.127.103.16/16 | 1 | netapp_04 | beegfs_s15_s16 | B |

Step 4: Configure the inventory for a storage-only building block

These steps describe how to set up an Ansible inventory for a BeeGFS storage-only building block. Compared with a metadata + storage building block, the configuration omits all metadata resource groups and changes criteria_drive_count from 10 to 12 for each storage pool.

Steps
  1. In inventory.yml, populate the following parameters under the existing configuration:

          # beegfs_05/beegfs_06 HA Pair (storage only building block):
            stor_17:
              hosts:
                beegfs_05:
                beegfs_06:
            stor_18:
              hosts:
                beegfs_05:
                beegfs_06:
            stor_19:
              hosts:
                beegfs_05:
                beegfs_06:
            stor_20:
              hosts:
                beegfs_05:
                beegfs_06:
            stor_21:
              hosts:
                beegfs_06:
                beegfs_05:
            stor_22:
              hosts:
                beegfs_06:
                beegfs_05:
            stor_23:
              hosts:
                beegfs_06:
                beegfs_05:
            stor_24:
              hosts:
                beegfs_06:
                beegfs_05:
  2. Under group_vars/, create files for resource groups stor_17 through stor_24 using the following template, and then fill in the placeholder values for each service by referencing the table that follows:

    # stor_XX - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: <PORT>
      connStoragePortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET>
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: storage
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid6
            criteria_drive_count: 12
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50 # See note below!
                owning_controller: <OWNING CONTROLLER>
              - size: 21.50
                owning_controller: <OWNING CONTROLLER>
    Note For the correct size to use, see Recommended storage pool overprovisioning percentages.
    | File name | Port | Floating IPs | NUMA zone | Block node | Storage pool | Owning controller |
    | --- | --- | --- | --- | --- | --- | --- |
    | stor_17.yml | 8013 | i1b:100.127.103.17/16, i2b:100.127.104.17/16 | 0 | netapp_05 | beegfs_s17_s18 | A |
    | stor_18.yml | 8023 | i2b:100.127.104.18/16, i1b:100.127.103.18/16 | 0 | netapp_05 | beegfs_s17_s18 | B |
    | stor_19.yml | 8033 | i3b:100.127.103.19/16, i4b:100.127.104.19/16 | 1 | netapp_06 | beegfs_s19_s20 | A |
    | stor_20.yml | 8043 | i4b:100.127.104.20/16, i3b:100.127.103.20/16 | 1 | netapp_06 | beegfs_s19_s20 | B |
    | stor_21.yml | 8053 | i1b:100.127.103.21/16, i2b:100.127.104.21/16 | 0 | netapp_05 | beegfs_s21_s22 | A |
    | stor_22.yml | 8063 | i2b:100.127.104.22/16, i1b:100.127.103.22/16 | 0 | netapp_05 | beegfs_s21_s22 | B |
    | stor_23.yml | 8073 | i3b:100.127.103.23/16, i4b:100.127.104.23/16 | 1 | netapp_06 | beegfs_s23_s24 | A |
    | stor_24.yml | 8083 | i4b:100.127.104.24/16, i3b:100.127.103.24/16 | 1 | netapp_06 | beegfs_s23_s24 | B |