Astra Data Store requirements


Get started by verifying that your environment meets Astra Data Store requirements.

Astra Data Store supports both bare-metal and VM-based deployments. The Astra Data Store cluster can run on a Kubernetes cluster with four or more worker nodes. Astra Data Store software can coexist with other applications running on the same Kubernetes cluster.

Note If you plan to manage your Astra Data Store cluster from Astra Control Center, make sure the cluster also meets the requirements for clusters managed by Astra Control Center, in addition to the requirements outlined here.

Kubernetes worker node resources

The following resources must be assigned to the Astra Data Store software on each worker node in the Kubernetes cluster:

| Resource | Minimum | Maximum |
| --- | --- | --- |
| Number of nodes in Astra Data Store cluster | 4 | 16 |
| Number of data drives | 3 (with a separate cache device present); 4 (if there is no cache device present) | 14 |
| Data drive size | 100GiB | 4TiB |
| Number of optional cache devices | 1 (8GiB in size) | N/A |

Astra Data Store supports the following vCPU and RAM combinations for each Kubernetes worker node:

  • 9 vCPUs with 38GiB of RAM

  • 23 vCPUs with 94GiB of RAM

Note For the best write performance, you should configure a dedicated high-endurance, low-latency, low-capacity cache device.

Each worker node has the following additional requirements:

  • 100GiB or more of free space on the host (boot) disk for storing Astra Data Store log files.

  • At least one 10GbE or faster network interface for cluster, data, and management traffic. Optionally, an additional 1GbE or faster interface can be used to separate management traffic.
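
You can verify the node count and the per-node CPU and memory capacity directly from the Kubernetes API before deployment. The following is a minimal check using standard kubectl; nothing Astra-specific is assumed:

  # Count the worker nodes available in the cluster.
  kubectl get nodes --no-headers | wc -l

  # Show each node's CPU and memory capacity.
  kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory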

Hardware and software

Astra Data Store software is validated on the following hardware platforms, software, and storage configurations. Visit NetApp community support if your Kubernetes cluster configuration differs.

Hardware platforms

  • Dell R640

  • Dell R740

  • HPE DL360

  • HPE DL380

  • Lenovo SR630

  • Lenovo SR650

Note VM-based deployments can also use servers that are listed as compatible on the VMware Compatibility Guide.

Storage

Astra Data Store has been validated with SATA, SAS, and NVMe TLC SSDs.

For VM-based deployments, you can use drives presented as virtual disks or passthrough drives.

Note
  • If your host uses SSDs behind a hardware RAID controller, configure the hardware RAID controller to use "passthrough" mode.

  • Each drive should have a unique serial number. Add the attribute disk.enableuuid=TRUE in the virtual machine advanced settings during VM creation (see the sketch after this note).
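
For example, the attribute can be applied to an existing VM with VMware's govc CLI rather than through the vSphere UI. This is a minimal sketch; the VM name ads-node-01 is a hypothetical placeholder:

  # Set disk.enableUUID in the VM's advanced settings (ExtraConfig)
  # so that each virtual disk reports a unique serial number.
  govc vm.change -vm ads-node-01 -e disk.enableUUID=TRUE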

Software

  • Hypervisor: Astra Data Store is validated for VMware-based VM deployments on vSphere 7.0. KVM-based deployments are not supported by Astra Data Store.

  • Astra Data Store has been validated on the following host operating systems:

    • Red Hat Enterprise Linux 8.4

    • Red Hat Enterprise Linux 8.2

    • Red Hat Enterprise Linux 7.9

    • Red Hat Enterprise Linux CoreOS (RHCOS)

    • CentOS 8

    • Ubuntu 20.04

  • Astra Data Store has been validated with the following Kubernetes distributions:

    • Red Hat OpenShift 4.6 through 4.9

    • Google Anthos 1.8 through 1.10

    • Kubernetes 1.20 through 1.23

    • Rancher RKE 1.3.3

Note Astra Data Store requires Astra Trident for storage provisioning and orchestration. Astra Trident versions 21.10.1 through 22.04 are supported. See the Astra Trident installation instructions.

Networking

Astra Data Store requires one IP address per cluster for the MVIP. This must be an unused or unconfigured IP address in the same subnet as the MIP. The Astra Data Store management interface should be the same as the Kubernetes node’s management interface.

In addition, each node can be configured as described below.

Note The following abbreviations are used:
MIP: Management IP address
CIP: Cluster IP address
MVIP: Management virtual IP address

With one network interface per node, two (2) IP addresses are needed per node:

  • MIP/CIP: One (1) pre-configured IP address on the management interface per node

  • Data IP: One (1) unused or unconfigured IP address per node in the same subnet as the MIP

With two network interfaces per node, three (3) IP addresses are needed per node:

  • MIP: One (1) pre-configured IP address on the management interface per node

  • CIP: One (1) pre-configured IP address on the data interface per node, in a different subnet from the MIP

  • Data IP: One (1) unused or unconfigured IP address per node in the same subnet as the CIP

Note No VLAN tags are used in these configurations.

Firewall considerations

In environments that enforce network firewall traffic filtering, the firewall must be configured to allow incoming traffic to the ports and protocols listed in the table below. The IP address column uses the following abbreviations:

  • MIP: Primary IP address on the management interface of each node

  • CIP: Primary IP address on the cluster interface of each node

  • DIP: One or more data IP addresses configured on a node

  • MVIP: The management virtual IP address configured on one cluster node

| Port/protocol | IP address | Purpose | Notes |
| --- | --- | --- | --- |
| 111/TCP | DIP | NFS | Data IPs move between cluster nodes at runtime. Each node should allow this traffic for all data IPs (or for the entire subnet). |
| 442/TCP | MIP | API | |
| 635/TCP | DIP | NFS | Data IPs move between cluster nodes at runtime. Each node should allow this traffic for all data IPs (or for the entire subnet). |
| 2010/UDP | CIP | Cluster/node discovery | Includes both unicast and broadcast traffic to and from UDP port 2010, including the replies. |
| 2049/TCP | DIP | NFS | Data IPs move between cluster nodes at runtime. Each node should allow this traffic for all data IPs (or for the entire subnet). |
| 2181-2183/TCP | CIP | Distributed communication | |
| 2443/TCP | MIP | API | |
| 2443/TCP | MVIP | API | The MVIP address can be hosted by any cluster node and is relocated at runtime when needed. |
| 4000-4006/TCP | CIP | Intra-cluster RPC | |
| 4045-4046/TCP | DIP | NFS | Data IPs move between cluster nodes at runtime. Each node should allow this traffic for all data IPs (or for the entire subnet). |
| 7700/TCP | CIP | Session manager | |
| 9919/TCP | MIP | DMS API | |
| 9920/TCP | DIP | DMS REST server | |
| ICMP | CIP + DIP | Intra-node communication, health check | Data IPs move between cluster nodes at runtime. Each node should allow this traffic for all data IPs (or for the entire subnet). |
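
For example, on a host that uses firewalld (such as Red Hat Enterprise Linux or CentOS), the TCP and UDP ports from the table could be opened as follows. This is a minimal sketch against the default zone; adapt it to your firewall tooling, zones, and any source-address restrictions:

  # Open the Astra Data Store ports listed in the table above.
  sudo firewall-cmd --permanent --add-port=111/tcp
  sudo firewall-cmd --permanent --add-port=442/tcp
  sudo firewall-cmd --permanent --add-port=635/tcp
  sudo firewall-cmd --permanent --add-port=2010/udp
  sudo firewall-cmd --permanent --add-port=2049/tcp
  sudo firewall-cmd --permanent --add-port=2181-2183/tcp
  sudo firewall-cmd --permanent --add-port=2443/tcp
  sudo firewall-cmd --permanent --add-port=4000-4006/tcp
  sudo firewall-cmd --permanent --add-port=4045-4046/tcp
  sudo firewall-cmd --permanent --add-port=7700/tcp
  sudo firewall-cmd --permanent --add-port=9919/tcp
  sudo firewall-cmd --permanent --add-port=9920/tcp
  # ICMP is typically allowed by default in firewalld zones; verify it is not blocked.
  sudo firewall-cmd --reload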

Astra Trident

Astra Data Store requires the application Kubernetes clusters to be running Astra Trident for storage provisioning and orchestration. Astra Trident versions 21.10.1 through 22.04 are supported. Astra Data Store can be configured as a storage backend with Astra Trident to provision persistent volumes.
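
A hedged illustration of that provisioning path: the manifest below requests an NFS-backed volume through a Trident StorageClass. The StorageClass name ads-nfs is a hypothetical placeholder that would be created when you configure Astra Data Store as a backend in Astra Trident:

  # demo-pvc.yaml: request a shared (RWX) volume from a Trident-backed StorageClass
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: demo-pvc
  spec:
    accessModes:
      - ReadWriteMany              # NFS volumes can be mounted read-write by many pods
    storageClassName: ads-nfs      # hypothetical StorageClass backed by Astra Data Store
    resources:
      requests:
        storage: 10Gi

Apply the claim with kubectl apply -f demo-pvc.yaml; Astra Trident then provisions the persistent volume from the Astra Data Store backend.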

Container Network Interfaces

Astra Data Store has been validated with the following Container Network Interfaces (CNIs):

  • Calico for RKE clusters

  • Calico and Weave Net CNIs for vanilla Kubernetes clusters

  • OpenShift SDN for Red Hat OpenShift Container Platform (OCP)

  • Cilium for Google Anthos

Note
  • Astra Data Store deployed with the Cilium CNI requires the portmap plugin for HostPort support. You can enable CNI chaining mode by adding cni-chaining-mode: portmap to the cilium-config configMap and restarting the Cilium pods (see the sketch after this note).

  • Firewall-enabled configurations are supported only for the Calico and OpenShift SDN CNIs.
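
A minimal sketch of that change, assuming a standard Cilium installation where the cilium-config configMap lives in the kube-system namespace and Cilium runs as the cilium DaemonSet (adjust the names to your installation):

  # Add the portmap chaining mode to the Cilium configuration.
  kubectl -n kube-system patch configmap cilium-config \
    --type merge -p '{"data":{"cni-chaining-mode":"portmap"}}'

  # Restart the Cilium pods so they pick up the new setting.
  kubectl -n kube-system rollout restart daemonset/cilium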

Licensing

Astra Data Store requires a valid license to enable full functionality.

Sign up here to obtain the Astra Data Store license. Instructions to download the license will be sent to you after you sign up.

What’s next

View the quick start overview.
