
Set up Astra Control Center

Contributors: netapp-mwallis

After you install Astra Control Center, log in to the UI, and change your password, you'll want to set up a license, add clusters, manage storage, and add buckets.

Add a license for Astra Control Center

You can add a new license using the Astra Control UI or API to gain full Astra Control Center functionality. Without a license, your usage of Astra Control Center is limited to managing users and adding new clusters.

Astra Control Center licenses measure CPU resources using Kubernetes CPU units and account for the CPU resources assigned to the worker nodes of all the managed Kubernetes clusters. Licenses are based on vCPU usage. For more information on how licenses are calculated, refer to Licensing.
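As a rough illustration of how CPU units add up, the sketch below sums the CPU capacity of a set of worker nodes. The node names and CPU counts are hypothetical sample data; on a live cluster you would feed in output from `kubectl get nodes` instead.

```shell
# Sketch only: estimate licensed CPU units by summing worker-node CPU
# capacity. The node list below is hypothetical sample data; on a real
# cluster you could generate it with something like:
#   kubectl get nodes --no-headers \
#     -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu
sample_nodes="worker-1 8
worker-2 8
worker-3 16"

total=0
while read -r name cpu; do
  total=$((total + cpu))
done <<EOF
$sample_nodes
EOF

echo "Estimated licensed CPU units: $total"
```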

Note If your installation grows to exceed the licensed number of CPU units, Astra Control Center prevents you from managing new applications. An alert is displayed when capacity is exceeded.
Note To update an existing evaluation or full license, refer to Update an existing license.
What you'll need
  • Access to a newly installed Astra Control Center instance.

  • Administrator role permissions.

  • A NetApp License File (NLF).

Steps
  1. Log in to the Astra Control Center UI.

  2. Select Account > License.

  3. Select Add License.

  4. Browse to the license file (NLF) that you downloaded.

  5. Select Add License.

The Account > License page displays the license information, expiration date, license serial number, account ID, and CPU units used.

Note If you have an evaluation license and are not sending data to AutoSupport, be sure that you store your account ID to avoid data loss in the event of Astra Control Center failure.

Prepare your environment for cluster management using Astra Control

Ensure that the following prerequisites are met before you add a cluster. Then run eligibility checks to verify that your cluster is ready to be added to Astra Control Center, and create roles for cluster management.

What you'll need
  • Ensure that the worker nodes in your cluster are configured with the appropriate storage drivers so that the pods can interact with the backend storage.

  • Your environment meets the operational environment requirements for Astra Trident and Astra Control Center.

  • A version of Astra Trident that is supported by Astra Control Center is installed:

    Note You can deploy Astra Trident using either Trident operator (manually or using Helm chart) or tridentctl. Prior to installing or upgrading Astra Trident, review the supported frontends, backends, and host configurations.
    • Trident storage backend configured: At least one Astra Trident storage backend must be configured on the cluster.

    • Trident storage classes configured: At least one Astra Trident storage class must be configured on the cluster. If a default storage class is configured, ensure that it is the only storage class that has the default annotation.

    • Astra Trident volume snapshot controller and volume snapshot class installed and configured: The volume snapshot controller must be installed so that snapshots can be created in Astra Control, and at least one Astra Trident VolumeSnapshotClass must be set up by an administrator.

  • Kubeconfig accessible: You have access to the cluster kubeconfig that includes only one context element.

  • ONTAP credentials: You need ONTAP credentials, with a superuser and user ID set on the backing ONTAP system, to back up and restore apps with Astra Control Center.

    Run the following commands in the ONTAP command line:

    export-policy rule modify -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -superuser sys
    export-policy rule modify -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -anon 65534
  • Rancher only: When managing application clusters in a Rancher environment, modify the application cluster's default context in the kubeconfig file provided by Rancher to use a control plane context instead of the Rancher API server context. This reduces load on the Rancher API server and improves performance.
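The single-context kubeconfig requirement above can be checked before you hand a kubeconfig to Astra Control. The sketch below writes a minimal, hypothetical sample kubeconfig and counts its context entries; substitute the path of the kubeconfig you actually plan to use.

```shell
# Sketch only: verify that a kubeconfig defines exactly one context.
# A minimal hypothetical kubeconfig is written to a temp file here;
# substitute the path of the kubeconfig you plan to upload.
KUBECONFIG_PATH=sample-kubeconfig.yaml
cat > "$KUBECONFIG_PATH" <<'EOF'
apiVersion: v1
kind: Config
contexts:
- context:
    cluster: prod-cluster
    user: prod-admin
  name: prod-context
current-context: prod-context
EOF

# Each context entry starts a '- context:' list item.
context_count=$(grep -c -- '- context:' "$KUBECONFIG_PATH")
if [ "$context_count" -eq 1 ]; then
  echo "OK: exactly one context"
else
  echo "WARNING: $context_count contexts found; trim to one before adding"
fi
rm -f "$KUBECONFIG_PATH"
```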

Run eligibility checks

Run the following eligibility checks to ensure that your cluster is ready to be added to Astra Control Center.

Steps
  1. Check the Trident version.

    kubectl get tridentversions -n trident

    If Trident exists, you see output similar to the following:

    NAME      VERSION
    trident   22.10.0

    If Trident does not exist, you see output similar to the following:

    error: the server doesn't have a resource type "tridentversions"
    Note If Trident is not installed or the installed version is not the latest, you need to install the latest version of Trident before proceeding. Refer to the Trident documentation for instructions.
  2. Ensure that the pods are running:

    kubectl get pods -n trident
  3. Determine if the storage classes are using the supported Trident drivers. The provisioner name should be csi.trident.netapp.io. See the following example:

    kubectl get sc

    Sample response:

    NAME                  PROVISIONER            RECLAIMPOLICY  VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
    ontap-gold (default)  csi.trident.netapp.io  Delete         Immediate          true                  5d23h
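The check in step 3 can be scripted as a quick sketch. Sample `kubectl get sc` output is inlined below so the filter is reproducible; the `legacy-nfs` class is hypothetical, included only to show how a non-Trident provisioner would be flagged.

```shell
# Sketch only: flag storage classes whose provisioner is not the
# supported Trident CSI driver. The 'legacy-nfs' class is hypothetical.
# On a real cluster, pipe from: kubectl get sc --no-headers
sample_sc="ontap-gold csi.trident.netapp.io
legacy-nfs example.com/nfs"

unsupported=$(echo "$sample_sc" | awk '$2 != "csi.trident.netapp.io" {print $1}')
if [ -n "$unsupported" ]; then
  echo "Not backed by Trident CSI: $unsupported"
else
  echo "All storage classes use csi.trident.netapp.io"
fi
```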

Create a limited cluster role kubeconfig

You can optionally create a limited administrator role for Astra Control Center; this procedure is not required for Astra Control Center setup. It creates a separate kubeconfig that limits Astra Control's permissions on the clusters it manages.

What you'll need

Ensure that you have the following for the cluster you intend to manage before completing the procedure steps:

  • kubectl v1.23 or later installed

  • kubectl access to the cluster that you intend to add and manage with Astra Control Center

    Note For this procedure, you do not need kubectl access to the cluster that is running Astra Control Center.
  • An active kubeconfig for the cluster you intend to manage with cluster admin rights for the active context

Steps
  1. Create a service account:

    1. Create a service account file called astracontrol-service-account.yaml.

      Adjust the name and namespace as needed. If changes are made here, you should apply the same changes in the following steps.

      astracontrol-service-account.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: astracontrol-service-account
        namespace: default
    2. Apply the service account:

      kubectl apply -f astracontrol-service-account.yaml
  2. Create a limited cluster role with the minimum permissions necessary for a cluster to be managed by Astra Control:

    1. Create a ClusterRole file called astra-admin-account.yaml.

      Adjust the name and namespace as needed. If changes are made here, you should apply the same changes in the following steps.

      astra-admin-account.yaml
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: astra-admin-account
      rules:
      
      # Get, List, Create, and Update all resources
      # Necessary to backup and restore all resources in an app
      - apiGroups:
        - '*'
        resources:
        - '*'
        verbs:
        - get
        - list
        - create
        - patch
      
      # Delete Resources
      # Necessary for in-place restore and AppMirror failover
      - apiGroups:
        - ""
        - apps
        - autoscaling
        - batch
        - crd.projectcalico.org
        - extensions
        - networking.k8s.io
        - policy
        - rbac.authorization.k8s.io
        - snapshot.storage.k8s.io
        - trident.netapp.io
        resources:
        - configmaps
        - cronjobs
        - daemonsets
        - deployments
        - horizontalpodautoscalers
        - ingresses
        - jobs
        - namespaces
        - networkpolicies
        - persistentvolumeclaims
        - poddisruptionbudgets
        - pods
        - podtemplates
        - podsecuritypolicies
        - replicasets
        - replicationcontrollers
        - replicationcontrollers/scale
        - rolebindings
        - roles
        - secrets
        - serviceaccounts
        - services
        - statefulsets
        - tridentmirrorrelationships
        - tridentsnapshotinfos
        - volumesnapshots
        - volumesnapshotcontents
        verbs:
        - delete
      
      # Watch resources
      # Necessary to monitor progress
      - apiGroups:
        - ""
        resources:
        - pods
        - replicationcontrollers
        - replicationcontrollers/scale
        verbs:
        - watch
      
      # Update resources
      - apiGroups:
        - ""
        - build.openshift.io
        - image.openshift.io
        resources:
        - builds/details
        - replicationcontrollers
        - replicationcontrollers/scale
        - imagestreams/layers
        - imagestreamtags
        - imagetags
        verbs:
        - update
      
      # Use PodSecurityPolicies
      - apiGroups:
        - extensions
        - policy
        resources:
        - podsecuritypolicies
        verbs:
        - use
    2. Apply the cluster role:

      kubectl apply -f astra-admin-account.yaml
  3. Create the cluster role binding for the cluster role to the service account:

    1. Create a ClusterRoleBinding file called astracontrol-clusterrolebinding.yaml.

      Adjust any names and namespaces modified when creating the service account as needed.

      astracontrol-clusterrolebinding.yaml
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: astracontrol-admin
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: astra-admin-account
      subjects:
      - kind: ServiceAccount
        name: astracontrol-service-account
        namespace: default
    2. Apply the cluster role binding:

      kubectl apply -f astracontrol-clusterrolebinding.yaml
  4. List the service account secrets, replacing <context> with the correct context for your installation:

    kubectl get serviceaccount astracontrol-service-account --context <context> --namespace default -o json

    The end of the output should look similar to the following:

    "secrets": [
    { "name": "astracontrol-service-account-dockercfg-vhz87"},
    { "name": "astracontrol-service-account-token-r59kr"}
    ]

    The indices for each element in the secrets array begin with 0. In the above example, the index for astracontrol-service-account-dockercfg-vhz87 would be 0 and the index for astracontrol-service-account-token-r59kr would be 1. In your output, make note of the index for the service account name that has the word "token" in it.
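Rather than counting by eye, the token index can be derived with a short sketch. The secret names below mirror the sample output above; in practice you would parse the JSON returned by the kubectl command.

```shell
# Sketch only: derive TOKEN_INDEX from the service account's secret
# names instead of counting by eye. The names below mirror the sample
# output above; in practice, parse the JSON from the kubectl command.
secrets='astracontrol-service-account-dockercfg-vhz87
astracontrol-service-account-token-r59kr'

index=0
TOKEN_INDEX=''
for name in $secrets; do       # word-splits on whitespace/newlines
  case "$name" in
    *token*) TOKEN_INDEX=$index ;;
  esac
  index=$((index + 1))
done
echo "TOKEN_INDEX=$TOKEN_INDEX"
```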

  5. Generate the kubeconfig as follows:

    1. Create a create-kubeconfig.sh file. Replace TOKEN_INDEX in the beginning of the following script with the correct value.

      create-kubeconfig.sh
      # Update these to match your environment.
      # Replace TOKEN_INDEX with the correct value
      # from the output in the previous step. If you
      # didn't change anything else above, don't change
      # anything else here.
      
      SERVICE_ACCOUNT_NAME=astracontrol-service-account
      NAMESPACE=default
      NEW_CONTEXT=astracontrol
      KUBECONFIG_FILE='kubeconfig-sa'
      
      CONTEXT=$(kubectl config current-context)
      
      SECRET_NAME=$(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} \
        --context ${CONTEXT} \
        --namespace ${NAMESPACE} \
        -o jsonpath='{.secrets[TOKEN_INDEX].name}')
      TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} \
        --context ${CONTEXT} \
        --namespace ${NAMESPACE} \
        -o jsonpath='{.data.token}')
      
      TOKEN=$(echo ${TOKEN_DATA} | base64 -d)
      
      # Create dedicated kubeconfig
      # Create a full copy
      kubectl config view --raw > ${KUBECONFIG_FILE}.full.tmp
      
      # Switch working context to correct context
      kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp config use-context ${CONTEXT}
      
      # Minify
      kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp \
        config view --flatten --minify > ${KUBECONFIG_FILE}.tmp
      
      # Rename context
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        rename-context ${CONTEXT} ${NEW_CONTEXT}
      
      # Create token user
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        set-credentials ${CONTEXT}-${NAMESPACE}-token-user \
        --token ${TOKEN}
      
      # Set context to use token user
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        set-context ${NEW_CONTEXT} --user ${CONTEXT}-${NAMESPACE}-token-user
      
      # Set context to correct namespace
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        set-context ${NEW_CONTEXT} --namespace ${NAMESPACE}
      
      # Flatten/minify kubeconfig
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        view --flatten --minify > ${KUBECONFIG_FILE}
      
      # Remove tmp
      rm ${KUBECONFIG_FILE}.full.tmp
      rm ${KUBECONFIG_FILE}.tmp
    2. Source the commands to apply them to your Kubernetes cluster.

      source create-kubeconfig.sh
  6. (Optional) Rename the kubeconfig to a meaningful name for your cluster.

    mv kubeconfig-sa YOUR_CLUSTER_NAME_kubeconfig

What's next?

Now that you've verified that the prerequisites are met, you're ready to add a cluster.

Add cluster

To begin managing your apps, add a Kubernetes cluster and manage it as a compute resource. You must add a cluster before Astra Control Center can discover your Kubernetes applications.

Tip We recommend that Astra Control Center manage the cluster it is deployed on first before you add other clusters to Astra Control Center to manage. Having the initial cluster under management is necessary to send Kubemetrics data and cluster-associated data for metrics and troubleshooting.
Steps
  1. Navigate from either the Dashboard or the Clusters menu:

    • From Dashboard in the Resource Summary, select Add from the Clusters pane.

    • In the left navigation area, select Clusters and then select Add Cluster from the Clusters page.

  2. In the Add Cluster window that opens, upload a kubeconfig.yaml file or paste the contents of a kubeconfig.yaml file.

    Note The kubeconfig.yaml file should include only the cluster credential for one cluster.
    Important If you create your own kubeconfig file, you should define only one context element in it. Refer to Kubernetes documentation for information about creating kubeconfig files. If you created a kubeconfig for a limited cluster role using the process above, be sure to upload or paste that kubeconfig in this step.
  3. Provide a credential name. By default, the credential name is auto-populated as the name of the cluster.

  4. Select Next.

  5. Select the default storage class to be used for this Kubernetes cluster, and select Next.

    Note You should select a Trident storage class backed by ONTAP storage.
  6. Review the information, and if everything looks good, select Add.

Result

The cluster enters Discovering state and then changes to Healthy. You are now managing the cluster with Astra Control Center.

Important After you add a cluster to be managed in Astra Control Center, it might take a few minutes to deploy the monitoring operator. Until then, the Notification icon turns red and logs a Monitoring Agent Status Check Failed event. You can ignore this, because the issue resolves when Astra Control Center obtains the correct status. If the issue does not resolve in a few minutes, go to the cluster, and run oc get pods -n netapp-monitoring as the starting point. You will need to look into the monitoring operator logs to debug the problem.

Add a storage backend

You can add an existing ONTAP storage backend to Astra Control Center to manage its resources.

Managing storage clusters in Astra Control as a storage backend enables you to get linkages between persistent volumes (PVs) and the storage backend as well as additional storage metrics.

Steps
  1. From the Dashboard in the left-navigation area, select Backends.

  2. Do one of the following:

    • New backends: Select Add to manage an existing backend, select ONTAP, and select Next.

    • Discovered backends: From the Actions menu, select Manage on a discovered backend from the managed cluster.

  3. Enter the ONTAP cluster management IP address and admin credentials. The credentials must be cluster-wide credentials.

    Note The user whose credentials you enter here must have the ontapi user login access method enabled within ONTAP System Manager on the ONTAP cluster. If you plan to use SnapMirror replication, apply user credentials with the "admin" role, which has the access methods ontapi and http, on both source and destination ONTAP clusters. Refer to Manage User Accounts in ONTAP documentation for more information.
  4. Select Next.

  5. Confirm the backend details and select Manage.

Result

The backend appears in the Healthy state in the list with summary information.

Note You might need to refresh the page for the backend to appear.

Add a bucket

You can add a bucket using the Astra Control UI or API. Adding object store bucket providers is essential if you want to back up your applications and persistent storage or if you want to clone applications across clusters. Astra Control stores those backups or clones in the object store buckets that you define.

You don't need a bucket in Astra Control if you are cloning your application configuration and persistent storage to the same cluster. Application snapshots functionality does not require a bucket.

What you'll need
  • A bucket that is reachable from your clusters managed by Astra Control Center.

  • Credentials for the bucket.

  • A bucket of the following types:

    • NetApp ONTAP S3

    • NetApp StorageGRID S3

    • Microsoft Azure

    • Generic S3

Note Amazon Web Services (AWS) and Google Cloud Platform (GCP) use the Generic S3 bucket type.
Note Although Astra Control Center supports Amazon S3 as a Generic S3 bucket provider, Astra Control Center might not support all object store vendors that claim Amazon's S3 support.
Steps
  1. In the left navigation area, select Buckets.

  2. Select Add.

  3. Select the bucket type.

    Note When you add a bucket, select the correct bucket provider and provide the right credentials for that provider. For example, the UI will accept NetApp ONTAP S3 as the type with StorageGRID credentials, but this causes all future app backups and restores that use this bucket to fail.
  4. Enter an existing bucket name and optional description.

    Tip The bucket name and description appear as a backup location that you can choose later when you're creating a backup. The name also appears during protection policy configuration.
  5. Enter the name or IP address of the S3 endpoint.

  6. Under Select Credentials, choose either the Add or Use existing tab.

    • If you chose Add:

      1. Enter a name for the credential that distinguishes it from other credentials in Astra Control.

      2. Enter the access ID and secret key by pasting the contents from your clipboard.

    • If you chose Use existing:

      1. Select the existing credentials you want to use with the bucket.

  7. Select Add.

    Note When you add a bucket, Astra Control marks one bucket with the default bucket indicator. The first bucket that you create becomes the default bucket. As you add buckets, you can later decide to set another default bucket.

What's next?

Now that you've logged in and added clusters to Astra Control Center, you're ready to start using Astra Control Center's application data management features.
