BeeGFS on NetApp with E-Series Storage

Define Ansible inventory for BeeGFS building blocks


After defining the general Ansible inventory structure, define the configuration for each building block in the BeeGFS file system.

These deployment instructions demonstrate how to deploy a file system that consists of a base building block including management, metadata, and storage services; a second building block with metadata and storage services; and a third storage-only building block.

These steps cover the full range of typical configuration profiles that you can use to tailor NetApp BeeGFS building blocks to the requirements of the overall BeeGFS file system.

Note In this and subsequent sections, adjust the examples as needed to build the inventory that represents the BeeGFS file system you want to deploy. In particular, use Ansible host names that represent each block or file node, and choose an IP addressing scheme for the storage network that can scale to the number of BeeGFS file nodes and clients.

Step 1: Create the Ansible inventory file

Steps
  1. Create a new inventory.yml file, and then insert the following parameters, replacing the hosts under eseries_storage_systems as needed to represent the block nodes in your deployment. Each host name must match the name used for its corresponding host_vars/<FILENAME>.yml file.

    # BeeGFS HA (High Availability) cluster inventory.
    all:
      children:
        # Ansible group representing all block nodes:
        eseries_storage_systems:
          hosts:
            ictad22a01:
            ictad22a02:
            ictad22a03:
            ictad22a04:
            ictad22a05:
            ictad22a06:
        # Ansible group representing all file nodes:
        ha_cluster:
          children:

    In the subsequent sections, you will create additional Ansible groups under ha_cluster that represent the BeeGFS services you want to run in the cluster.
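
    After adding groups, you can optionally sanity-check the inventory structure at any point by running ansible-inventory -i inventory.yml --graph, which prints the resulting group hierarchy without contacting any hosts.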

Step 2: Configure the inventory for a management, metadata, and storage building block

The first building block in the cluster, also called the base building block, must include the BeeGFS management service along with metadata and storage services:

Steps
  1. In inventory.yml, populate the following parameters under ha_cluster: children:

          # ictad22h01/ictad22h02 HA Pair (mgmt/meta/storage building block):
            mgmt:
              hosts:
                ictad22h01:
                ictad22h02:
            meta_01:
              hosts:
                ictad22h01:
                ictad22h02:
            stor_01:
              hosts:
                ictad22h01:
                ictad22h02:
            meta_02:
              hosts:
                ictad22h01:
                ictad22h02:
            stor_02:
              hosts:
                ictad22h01:
                ictad22h02:
            meta_03:
              hosts:
                ictad22h01:
                ictad22h02:
            stor_03:
              hosts:
                ictad22h01:
                ictad22h02:
            meta_04:
              hosts:
                ictad22h01:
                ictad22h02:
            stor_04:
              hosts:
                ictad22h01:
                ictad22h02:
            meta_05:
              hosts:
                ictad22h02:
                ictad22h01:
            stor_05:
              hosts:
                ictad22h02:
                ictad22h01:
            meta_06:
              hosts:
                ictad22h02:
                ictad22h01:
            stor_06:
              hosts:
                ictad22h02:
                ictad22h01:
            meta_07:
              hosts:
                ictad22h02:
                ictad22h01:
            stor_07:
              hosts:
                ictad22h02:
                ictad22h01:
            meta_08:
              hosts:
                ictad22h02:
                ictad22h01:
            stor_08:
              hosts:
                ictad22h02:
                ictad22h01:
  2. Create the file group_vars/mgmt.yml and include the following:

    # mgmt - BeeGFS HA Management Resource Group
    # OPTIONAL: Override default BeeGFS management configuration:
    # beegfs_ha_beegfs_mgmtd_conf_resource_group_options:
    #  <beegfs-mgmt.conf:key>:<beegfs-mgmt.conf:value>
    floating_ips:
      - i1b:100.127.101.0/16
      - i2b:100.128.102.0/16
    beegfs_service: management
    beegfs_targets:
      ictad22a01:
        eseries_storage_pool_configuration:
          - name: beegfs_m1_m2_m5_m6
            raid_level: raid1
            criteria_drive_count: 4
            common_volume_configuration:
              segment_size_kb: 128
            volumes:
              - size: 1
                owning_controller: A
  3. Under group_vars/, create files for resource groups meta_01 through meta_08 using the following template, and then fill in the placeholder values for each service by referencing the table below; a completed example for meta_01.yml follows the table:

    # meta_0X - BeeGFS HA Metadata Resource Group
    beegfs_ha_beegfs_meta_conf_resource_group_options:
      connMetaPortTCP: <PORT>
      connMetaPortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET> # Example: i1b:192.168.120.1/16
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: metadata
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid1
            criteria_drive_count: 4
            common_volume_configuration:
              segment_size_kb: 128
            volumes:
              - size: 21.25 # SEE NOTE BELOW!
                owning_controller: <OWNING CONTROLLER>
    Note The volume size is specified as a percentage of the overall storage pool (also referred to as a volume group). NetApp highly recommends leaving some free capacity in each pool to allow room for SSD overprovisioning (for more information, see Introduction to NetApp EF600 array). The beegfs_m1_m2_m5_m6 storage pool also allocates 1% of its capacity for the management service. Thus, for metadata volumes in this pool, set this value to 21.25 when using 1.92TB or 3.84TB drives, 22.25 for 7.65TB drives, and 23.75 for 15.3TB drives.
    File name   | Port | Floating IPs (preferred, secondary)        | NUMA zone | Block node | Storage pool       | Owning controller
    ----------- | ---- | ------------------------------------------ | --------- | ---------- | ------------------ | -----------------
    meta_01.yml | 8015 | i1b:100.127.101.1/16, i2b:100.128.102.1/16 | 0         | ictad22a01 | beegfs_m1_m2_m5_m6 | A
    meta_02.yml | 8025 | i2b:100.128.102.2/16, i1b:100.127.101.2/16 | 0         | ictad22a01 | beegfs_m1_m2_m5_m6 | B
    meta_03.yml | 8035 | i3b:100.127.101.3/16, i4b:100.128.102.3/16 | 1         | ictad22a02 | beegfs_m3_m4_m7_m8 | A
    meta_04.yml | 8045 | i4b:100.128.102.4/16, i3b:100.127.101.4/16 | 1         | ictad22a02 | beegfs_m3_m4_m7_m8 | B
    meta_05.yml | 8055 | i1b:100.127.101.5/16, i2b:100.128.102.5/16 | 0         | ictad22a01 | beegfs_m1_m2_m5_m6 | A
    meta_06.yml | 8065 | i2b:100.128.102.6/16, i1b:100.127.101.6/16 | 0         | ictad22a01 | beegfs_m1_m2_m5_m6 | B
    meta_07.yml | 8075 | i3b:100.127.101.7/16, i4b:100.128.102.7/16 | 1         | ictad22a02 | beegfs_m3_m4_m7_m8 | A
    meta_08.yml | 8085 | i4b:100.128.102.8/16, i3b:100.127.101.8/16 | 1         | ictad22a02 | beegfs_m3_m4_m7_m8 | B
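
    For reference, a completed group_vars/meta_01.yml based on the first row of the table looks like the following sketch (the volume size of 21.25 assumes 1.92TB or 3.84TB drives, per the note above; both the TCP and UDP settings use the single port listed in the table):

    # meta_01 - BeeGFS HA Metadata Resource Group
    beegfs_ha_beegfs_meta_conf_resource_group_options:
      connMetaPortTCP: 8015
      connMetaPortUDP: 8015
      tuneBindToNumaZone: 0
    floating_ips:
      - i1b:100.127.101.1/16 # Preferred port
      - i2b:100.128.102.1/16 # Secondary port
    beegfs_service: metadata
    beegfs_targets:
      ictad22a01:
        eseries_storage_pool_configuration:
          - name: beegfs_m1_m2_m5_m6
            raid_level: raid1
            criteria_drive_count: 4
            common_volume_configuration:
              segment_size_kb: 128
            volumes:
              - size: 21.25 # Percentage of the storage pool; depends on drive size
                owning_controller: A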

  4. Under group_vars/, create files for resource groups stor_01 through stor_08 using the following template, and then fill in the placeholder values for each service by referencing the table below; a completed example for stor_01.yml follows the table:

    # stor_0X - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: <PORT>
      connStoragePortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET>
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: storage
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid6
            criteria_drive_count: 10
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50 # See note below!
                owning_controller: <OWNING CONTROLLER>
              - size: 21.50
                owning_controller: <OWNING CONTROLLER>
    Note For the correct size to use, see Recommended storage pool overprovisioning percentages.
    File name   | Port | Floating IPs (preferred, secondary)        | NUMA zone | Block node | Storage pool | Owning controller
    ----------- | ---- | ------------------------------------------ | --------- | ---------- | ------------ | -----------------
    stor_01.yml | 8013 | i1b:100.127.103.1/16, i2b:100.128.104.1/16 | 0         | ictad22a01 | beegfs_s1_s2 | A
    stor_02.yml | 8023 | i2b:100.128.104.2/16, i1b:100.127.103.2/16 | 0         | ictad22a01 | beegfs_s1_s2 | B
    stor_03.yml | 8033 | i3b:100.127.103.3/16, i4b:100.128.104.3/16 | 1         | ictad22a02 | beegfs_s3_s4 | A
    stor_04.yml | 8043 | i4b:100.128.104.4/16, i3b:100.127.103.4/16 | 1         | ictad22a02 | beegfs_s3_s4 | B
    stor_05.yml | 8053 | i1b:100.127.103.5/16, i2b:100.128.104.5/16 | 0         | ictad22a01 | beegfs_s5_s6 | A
    stor_06.yml | 8063 | i2b:100.128.104.6/16, i1b:100.127.103.6/16 | 0         | ictad22a01 | beegfs_s5_s6 | B
    stor_07.yml | 8073 | i3b:100.127.103.7/16, i4b:100.128.104.7/16 | 1         | ictad22a02 | beegfs_s7_s8 | A
    stor_08.yml | 8083 | i4b:100.128.104.8/16, i3b:100.127.103.8/16 | 1         | ictad22a02 | beegfs_s7_s8 | B
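
    Likewise, here is a completed group_vars/stor_01.yml based on the first row of the table (the table lists one owning controller per service, so both volume entries use it):

    # stor_01 - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: 8013
      connStoragePortUDP: 8013
      tuneBindToNumaZone: 0
    floating_ips:
      - i1b:100.127.103.1/16 # Preferred port
      - i2b:100.128.104.1/16 # Secondary port
    beegfs_service: storage
    beegfs_targets:
      ictad22a01:
        eseries_storage_pool_configuration:
          - name: beegfs_s1_s2
            raid_level: raid6
            criteria_drive_count: 10
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50 # See the overprovisioning note above
                owning_controller: A
              - size: 21.50
                owning_controller: A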

Step 3: Configure the inventory for a metadata + storage building block

These steps describe how to set up an Ansible inventory for a BeeGFS metadata + storage building block.

Steps
  1. In inventory.yml, populate the following parameters under the existing configuration:

          # ictad22h03/ictad22h04 HA Pair (meta/storage building block):
            meta_09:
              hosts:
                ictad22h03:
                ictad22h04:
            stor_09:
              hosts:
                ictad22h03:
                ictad22h04:
            meta_10:
              hosts:
                ictad22h03:
                ictad22h04:
            stor_10:
              hosts:
                ictad22h03:
                ictad22h04:
            meta_11:
              hosts:
                ictad22h03:
                ictad22h04:
            stor_11:
              hosts:
                ictad22h03:
                ictad22h04:
            meta_12:
              hosts:
                ictad22h03:
                ictad22h04:
            stor_12:
              hosts:
                ictad22h03:
                ictad22h04:
            meta_13:
              hosts:
                ictad22h04:
                ictad22h03:
            stor_13:
              hosts:
                ictad22h04:
                ictad22h03:
            meta_14:
              hosts:
                ictad22h04:
                ictad22h03:
            stor_14:
              hosts:
                ictad22h04:
                ictad22h03:
            meta_15:
              hosts:
                ictad22h04:
                ictad22h03:
            stor_15:
              hosts:
                ictad22h04:
                ictad22h03:
            meta_16:
              hosts:
                ictad22h04:
                ictad22h03:
            stor_16:
              hosts:
                ictad22h04:
                ictad22h03:
  2. Under group_vars/, create files for resource groups meta_09 through meta_16 using the following template, and then fill in the placeholder values for each service by referencing the table below:

    # meta_XX - BeeGFS HA Metadata Resource Group
    beegfs_ha_beegfs_meta_conf_resource_group_options:
      connMetaPortTCP: <PORT>
      connMetaPortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET>
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: metadata
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid1
            criteria_drive_count: 4
            common_volume_configuration:
              segment_size_kb: 128
            volumes:
              - size: 21.5 # SEE NOTE BELOW!
                owning_controller: <OWNING CONTROLLER>
    Note For the correct size to use, see Recommended storage pool overprovisioning percentages.
    File name   | Port | Floating IPs (preferred, secondary)          | NUMA zone | Block node | Storage pool           | Owning controller
    ----------- | ---- | -------------------------------------------- | --------- | ---------- | ---------------------- | -----------------
    meta_09.yml | 8015 | i1b:100.127.101.9/16, i2b:100.128.102.9/16   | 0         | ictad22a03 | beegfs_m9_m10_m13_m14  | A
    meta_10.yml | 8025 | i2b:100.128.102.10/16, i1b:100.127.101.10/16 | 0         | ictad22a03 | beegfs_m9_m10_m13_m14  | B
    meta_11.yml | 8035 | i3b:100.127.101.11/16, i4b:100.128.102.11/16 | 1         | ictad22a04 | beegfs_m11_m12_m15_m16 | A
    meta_12.yml | 8045 | i4b:100.128.102.12/16, i3b:100.127.101.12/16 | 1         | ictad22a04 | beegfs_m11_m12_m15_m16 | B
    meta_13.yml | 8055 | i1b:100.127.101.13/16, i2b:100.128.102.13/16 | 0         | ictad22a03 | beegfs_m9_m10_m13_m14  | A
    meta_14.yml | 8065 | i2b:100.128.102.14/16, i1b:100.127.101.14/16 | 0         | ictad22a03 | beegfs_m9_m10_m13_m14  | B
    meta_15.yml | 8075 | i3b:100.127.101.15/16, i4b:100.128.102.15/16 | 1         | ictad22a04 | beegfs_m11_m12_m15_m16 | A
    meta_16.yml | 8085 | i4b:100.128.102.16/16, i3b:100.127.101.16/16 | 1         | ictad22a04 | beegfs_m11_m12_m15_m16 | B
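
    Note The ports repeat the values used for meta_01 through meta_08 (for example, both meta_01 and meta_09 use port 8015). This is safe because the two building blocks use different HA pairs (ictad22h01/ictad22h02 versus ictad22h03/ictad22h04), so the two services can never run on the same file node.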

  3. Under group_vars/, create files for resource groups stor_09 through stor_16 using the following template, and then fill in the placeholder values for each service by referencing the table below:

    # stor_XX - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: <PORT>
      connStoragePortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET>
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: storage
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid6
            criteria_drive_count: 10
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50 # See note below!
                owning_controller: <OWNING CONTROLLER>
              - size: 21.50
                owning_controller: <OWNING CONTROLLER>
    Note For the correct size to use, see Recommended storage pool overprovisioning percentages.
    File name   | Port | Floating IPs (preferred, secondary)          | NUMA zone | Block node | Storage pool   | Owning controller
    ----------- | ---- | -------------------------------------------- | --------- | ---------- | -------------- | -----------------
    stor_09.yml | 8013 | i1b:100.127.103.9/16, i2b:100.128.104.9/16   | 0         | ictad22a03 | beegfs_s9_s10  | A
    stor_10.yml | 8023 | i2b:100.128.104.10/16, i1b:100.127.103.10/16 | 0         | ictad22a03 | beegfs_s9_s10  | B
    stor_11.yml | 8033 | i3b:100.127.103.11/16, i4b:100.128.104.11/16 | 1         | ictad22a04 | beegfs_s11_s12 | A
    stor_12.yml | 8043 | i4b:100.128.104.12/16, i3b:100.127.103.12/16 | 1         | ictad22a04 | beegfs_s11_s12 | B
    stor_13.yml | 8053 | i1b:100.127.103.13/16, i2b:100.128.104.13/16 | 0         | ictad22a03 | beegfs_s13_s14 | A
    stor_14.yml | 8063 | i2b:100.128.104.14/16, i1b:100.127.103.14/16 | 0         | ictad22a03 | beegfs_s13_s14 | B
    stor_15.yml | 8073 | i3b:100.127.103.15/16, i4b:100.128.104.15/16 | 1         | ictad22a04 | beegfs_s15_s16 | A
    stor_16.yml | 8083 | i4b:100.128.104.16/16, i3b:100.127.103.16/16 | 1         | ictad22a04 | beegfs_s15_s16 | B

Step 4: Configure the inventory for a storage-only building block

These steps describe how to set up an Ansible inventory for a BeeGFS storage-only building block. The major differences from a metadata + storage building block are the omission of all metadata resource groups and a change in criteria_drive_count from 10 to 12 for each storage pool.

Steps
  1. In inventory.yml, populate the following parameters under the existing configuration:

          # ictad22h05/ictad22h06 HA Pair (storage only building block):
            stor_17:
              hosts:
                ictad22h05:
                ictad22h06:
            stor_18:
              hosts:
                ictad22h05:
                ictad22h06:
            stor_19:
              hosts:
                ictad22h05:
                ictad22h06:
            stor_20:
              hosts:
                ictad22h05:
                ictad22h06:
            stor_21:
              hosts:
                ictad22h06:
                ictad22h05:
            stor_22:
              hosts:
                ictad22h06:
                ictad22h05:
            stor_23:
              hosts:
                ictad22h06:
                ictad22h05:
            stor_24:
              hosts:
                ictad22h06:
                ictad22h05:
  2. Under group_vars/, create files for resource groups stor_17 through stor_24 using the following template, and then fill in the placeholder values for each service by referencing the table below; a completed example for stor_17.yml follows the table:

    # stor_XX - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: <PORT>
      connStoragePortUDP: <PORT>
      tuneBindToNumaZone: <NUMA ZONE>
    floating_ips:
      - <PREFERRED PORT:IP/SUBNET>
      - <SECONDARY PORT:IP/SUBNET>
    beegfs_service: storage
    beegfs_targets:
      <BLOCK NODE>:
        eseries_storage_pool_configuration:
          - name: <STORAGE POOL>
            raid_level: raid6
            criteria_drive_count: 12
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50 # See note below!
                owning_controller: <OWNING CONTROLLER>
              - size: 21.50
                owning_controller: <OWNING CONTROLLER>
    Note For the correct size to use, see Recommended storage pool overprovisioning percentages.
    File name   | Port | Floating IPs (preferred, secondary)          | NUMA zone | Block node | Storage pool   | Owning controller
    ----------- | ---- | -------------------------------------------- | --------- | ---------- | -------------- | -----------------
    stor_17.yml | 8013 | i1b:100.127.103.17/16, i2b:100.128.104.17/16 | 0         | ictad22a05 | beegfs_s17_s18 | A
    stor_18.yml | 8023 | i2b:100.128.104.18/16, i1b:100.127.103.18/16 | 0         | ictad22a05 | beegfs_s17_s18 | B
    stor_19.yml | 8033 | i3b:100.127.103.19/16, i4b:100.128.104.19/16 | 1         | ictad22a06 | beegfs_s19_s20 | A
    stor_20.yml | 8043 | i4b:100.128.104.20/16, i3b:100.127.103.20/16 | 1         | ictad22a06 | beegfs_s19_s20 | B
    stor_21.yml | 8053 | i1b:100.127.103.21/16, i2b:100.128.104.21/16 | 0         | ictad22a05 | beegfs_s21_s22 | A
    stor_22.yml | 8063 | i2b:100.128.104.22/16, i1b:100.127.103.22/16 | 0         | ictad22a05 | beegfs_s21_s22 | B
    stor_23.yml | 8073 | i3b:100.127.103.23/16, i4b:100.128.104.23/16 | 1         | ictad22a06 | beegfs_s23_s24 | A
    stor_24.yml | 8083 | i4b:100.128.104.24/16, i3b:100.127.103.24/16 | 1         | ictad22a06 | beegfs_s23_s24 | B
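
    For reference, a completed group_vars/stor_17.yml based on the first row of the table is identical in shape to the storage examples in the previous building blocks, apart from the larger drive count (again, both volume entries use the single owning controller listed for the service):

    # stor_17 - BeeGFS HA Storage Resource Group
    beegfs_ha_beegfs_storage_conf_resource_group_options:
      connStoragePortTCP: 8013
      connStoragePortUDP: 8013
      tuneBindToNumaZone: 0
    floating_ips:
      - i1b:100.127.103.17/16 # Preferred port
      - i2b:100.128.104.17/16 # Secondary port
    beegfs_service: storage
    beegfs_targets:
      ictad22a05:
        eseries_storage_pool_configuration:
          - name: beegfs_s17_s18
            raid_level: raid6
            criteria_drive_count: 12 # 12 drives instead of 10 in a storage-only building block
            common_volume_configuration:
              segment_size_kb: 512
            volumes:
              - size: 21.50 # See the overprovisioning note above
                owning_controller: A
              - size: 21.50
                owning_controller: A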