Set up Astra Control Center
After you install Astra Control Center, log in to the UI, and change your password, you'll want to set up a license, add clusters, enable authentication, manage storage, and add buckets.
Add a license for Astra Control Center
When you install Astra Control Center, an embedded evaluation license is already installed. If you are evaluating Astra Control Center, you can skip this step.
You can add a new license using the Astra Control UI or Astra Control API.
Astra Control Center licenses measure CPU resources using Kubernetes CPU units and account for the CPU resources assigned to the worker nodes of all the managed Kubernetes clusters. Licenses are based on vCPU usage. For more information on how licenses are calculated, refer to Licensing.
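To estimate usage before you apply a license, you can total the CPU capacity reported by the worker nodes of each cluster you plan to manage. This is a hedged example; the selector assumes control plane nodes carry the node-role.kubernetes.io/control-plane label, which can differ by distribution:
kubectl get nodes --selector='!node-role.kubernetes.io/control-plane' -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}{end}'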
If your installation grows to exceed the licensed number of CPU units, Astra Control Center prevents you from managing new applications. An alert is displayed when capacity is exceeded.
To update an existing evaluation or full license, refer to Update an existing license.
-
Access to a newly installed Astra Control Center instance.
-
Administrator role permissions.
-
A NetApp License File (NLF).
-
Log in to the Astra Control Center UI.
-
Select Account > License.
-
Select Add License.
-
Browse to the license file (NLF) that you downloaded.
-
Select Add License.
The Account > License page displays the license information, expiration date, license serial number, account ID, and CPU units used.
If you have an evaluation license and are not sending data to AutoSupport, be sure that you store your account ID to avoid data loss in the event of Astra Control Center failure.
Prepare your environment for cluster management using Astra Control
You should ensure that the following prerequisite conditions are met before you add a cluster. You should also run eligibility checks to ensure that your cluster is ready to be added to Astra Control Center and create roles for cluster management.
-
Ensure that the worker nodes in your cluster are configured with the appropriate storage drivers so that the pods can interact with the backend storage.
-
Your environment meets the operational environment requirements for Astra Trident and Astra Control Center.
-
If you are adding the cluster using a kubeconfig file that references a private Certificate Authority (CA), add the following line to the cluster section of the kubeconfig file. This enables Astra Control to add the cluster (see the kubeconfig excerpt after this list):
insecure-skip-tls-verify: true
-
A version of Astra Trident that is supported by Astra Control Center is installed:
You can deploy Astra Trident using either the Astra Trident operator (manually or using a Helm chart) or tridentctl. Prior to installing or upgrading Astra Trident, review the supported frontends, backends, and host configurations.
-
Astra Trident storage backend configured: At least one Astra Trident storage backend must be configured on the cluster.
-
Astra Trident storage classes configured: At least one Astra Trident storage class must be configured on the cluster. If a default storage class is configured, ensure that it is the only storage class that has the default annotation.
-
Astra Trident volume snapshot controller and volume snapshot class installed and configured: The volume snapshot controller must be installed so that snapshots can be created in Astra Control. At least one Astra Trident VolumeSnapshotClass has been set up by an administrator (see the example after this list).
-
-
Kubeconfig accessible: You have access to the default cluster kubeconfig that you configured during installation.
-
ONTAP credentials: You need ONTAP credentials and a superuser and user ID set on the backing ONTAP system to back up and restore apps with Astra Control Center.
Run the following commands in the ONTAP command line:
export-policy rule modify -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -superuser sys
export-policy rule modify -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -anon 65534
-
Rancher only: When managing application clusters in a Rancher environment, modify the application cluster's default context in the kubeconfig file provided by Rancher to use a control plane context instead of the Rancher API server context. This reduces load on the Rancher API server and improves performance.
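As a reference for the private CA prerequisite above, here is a minimal sketch of where the insecure-skip-tls-verify line sits in a kubeconfig file; the cluster name and server address are placeholders:
apiVersion: v1
kind: Config
clusters:
- name: example-cluster
  cluster:
    server: https://<cluster-api-server>:6443
    insecure-skip-tls-verify: true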
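Also referenced above, at least one VolumeSnapshotClass must use the Astra Trident CSI driver. A minimal example, assuming the class name csi-snapclass (your administrator might use a different name or deletion policy):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: csi.trident.netapp.io
deletionPolicy: Delete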
Run eligibility checks
Run the following eligibility checks to ensure that your cluster is ready to be added to Astra Control Center.
-
Check the Astra Trident version.
kubectl get tridentversions -n trident
If Astra Trident exists, you see output similar to the following:
NAME      VERSION
trident   23.XX.X
If Astra Trident does not exist, you see output similar to the following:
error: the server doesn't have a resource type "tridentversions"
If Astra Trident is not installed or the installed version is not the latest, you need to install the latest version of Astra Trident before proceeding. Refer to the Astra Trident documentation for instructions.
-
Ensure that the pods are running:
kubectl get pods -n trident
-
Determine if the storage classes are using the supported Astra Trident drivers. The provisioner name should be csi.trident.netapp.io. See the following example:
kubectl get sc
Sample response:
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ontap-gold (default) csi.trident.netapp.io   Delete          Immediate           true                   5d23h
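If kubectl get sc shows more than one storage class marked (default), you can clear the default annotation from the extra classes before adding the cluster. A hedged example; replace <storage-class-name> with the class that should no longer be the default:
kubectl patch storageclass <storage-class-name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'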
Create a cluster role kubeconfig
You can optionally create a limited-permission or expanded-permission administrator role for Astra Control Center. This is not a required procedure for Astra Control Center setup, because you already configured a kubeconfig as part of the installation process.
This procedure helps you to create a separate kubeconfig if either of the following scenarios applies to your environment:
-
You want to limit Astra Control permissions on the clusters it manages
-
You use multiple contexts and cannot use the default Astra Control kubeconfig configured during installation, or a limited role with a single context won't work in your environment
Ensure that you have the following for the cluster you intend to manage before completing the procedure steps:
-
kubectl v1.23 or later installed
-
kubectl access to the cluster that you intend to add and manage with Astra Control Center
For this procedure, you do not need kubectl access to the cluster that is running Astra Control Center.
-
An active kubeconfig for the cluster you intend to manage with cluster admin rights for the active context
-
Create a service account:
-
Create a service account file called astracontrol-service-account.yaml. Adjust the name and namespace as needed. If changes are made here, you should apply the same changes in the following steps.
astracontrol-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: astracontrol-service-account
  namespace: default
-
Apply the service account:
kubectl apply -f astracontrol-service-account.yaml
-
-
Create one of the following cluster roles with sufficient permissions for a cluster to be managed by Astra Control:
-
Limited cluster role: This role contains the minimum permissions necessary for a cluster to be managed by Astra Control:
Expand for steps
-
Create a ClusterRole file called, for example, astra-admin-account.yaml. Adjust the name and namespace as needed. If changes are made here, you should apply the same changes in the following steps.
astra-admin-account.yaml
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: astra-admin-account rules: # Get, List, Create, and Update all resources # Necessary to backup and restore all resources in an app - apiGroups: - '*' resources: - '*' verbs: - get - list - create - patch # Delete Resources # Necessary for in-place restore and AppMirror failover - apiGroups: - "" - apps - autoscaling - batch - crd.projectcalico.org - extensions - networking.k8s.io - policy - rbac.authorization.k8s.io - snapshot.storage.k8s.io - trident.netapp.io resources: - configmaps - cronjobs - daemonsets - deployments - horizontalpodautoscalers - ingresses - jobs - namespaces - networkpolicies - persistentvolumeclaims - poddisruptionbudgets - pods - podtemplates - podsecuritypolicies - replicasets - replicationcontrollers - replicationcontrollers/scale - rolebindings - roles - secrets - serviceaccounts - services - statefulsets - tridentmirrorrelationships - tridentsnapshotinfos - volumesnapshots - volumesnapshotcontents verbs: - delete # Watch resources # Necessary to monitor progress - apiGroups: - "" resources: - pods - replicationcontrollers - replicationcontrollers/scale verbs: - watch # Update resources - apiGroups: - "" - build.openshift.io - image.openshift.io resources: - builds/details - replicationcontrollers - replicationcontrollers/scale - imagestreams/layers - imagestreamtags - imagetags verbs: - update # Use PodSecurityPolicies - apiGroups: - extensions - policy resources: - podsecuritypolicies verbs: - use
-
(For OpenShift clusters only) Append the following at the end of the astra-admin-account.yaml file, after the # Use PodSecurityPolicies section:
# OpenShift security
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  verbs:
  - use
-
Apply the cluster role:
kubectl apply -f astra-admin-account.yaml
-
-
Expanded cluster role: This role contains expanded permissions for a cluster to be managed by Astra Control. You might use this role if you use multiple contexts and cannot use the default Astra Control kubeconfig configured during installation, or if a limited role with a single context won't work in your environment:
The following ClusterRole steps are a general Kubernetes example. Refer to the documentation for your Kubernetes distribution for instructions specific to your environment.
Expand for steps
-
Create a ClusterRole file called, for example, astra-admin-account.yaml. Adjust the name and namespace as needed. If changes are made here, you should apply the same changes in the following steps.
astra-admin-account.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: astra-admin-account
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
-
Apply the cluster role:
kubectl apply -f astra-admin-account.yaml
-
-
-
Create the cluster role binding for the cluster role to the service account:
-
Create a ClusterRoleBinding file called astracontrol-clusterrolebinding.yaml. Adjust any names and namespaces modified when creating the service account as needed.
astracontrol-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: astracontrol-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: astra-admin-account
subjects:
- kind: ServiceAccount
  name: astracontrol-service-account
  namespace: default
-
Apply the cluster role binding:
kubectl apply -f astracontrol-clusterrolebinding.yaml
-
-
Create and apply the token secret:
-
Create a token secret file called secret-astracontrol-service-account.yaml.
secret-astracontrol-service-account.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-astracontrol-service-account
  namespace: default
  annotations:
    kubernetes.io/service-account.name: "astracontrol-service-account"
type: kubernetes.io/service-account-token
-
Apply the token secret:
kubectl apply -f secret-astracontrol-service-account.yaml
-
-
Add the token secret to the service account by adding its name to the secrets array (the last line in the following example):
kubectl edit sa astracontrol-service-account
apiVersion: v1
imagePullSecrets:
- name: astracontrol-service-account-dockercfg-48xhx
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"astracontrol-service-account","namespace":"default"}}
  creationTimestamp: "2023-06-14T15:25:45Z"
  name: astracontrol-service-account
  namespace: default
  resourceVersion: "2767069"
  uid: 2ce068c4-810e-4a96-ada3-49cbf9ec3f89
secrets:
- name: astracontrol-service-account-dockercfg-48xhx
- name: secret-astracontrol-service-account
-
List the service account secrets, replacing <context> with the correct context for your installation:
kubectl get serviceaccount astracontrol-service-account --context <context> --namespace default -o json
The end of the output should look similar to the following:
"secrets": [
  { "name": "astracontrol-service-account-dockercfg-48xhx" },
  { "name": "secret-astracontrol-service-account" }
]
The indices for each element in the secrets array begin with 0. In the above example, the index for astracontrol-service-account-dockercfg-48xhx would be 0 and the index for secret-astracontrol-service-account would be 1. In your output, make note of the index number for the service account secret. You will need this index number in the next step.
-
Generate the kubeconfig as follows:
-
Create a create-kubeconfig.sh file. Replace TOKEN_INDEX in the beginning of the following script with the correct value.
create-kubeconfig.sh
# Update these to match your environment.
# Replace TOKEN_INDEX with the correct value
# from the output in the previous step. If you
# didn't change anything else above, don't change
# anything else here.

SERVICE_ACCOUNT_NAME=astracontrol-service-account
NAMESPACE=default
NEW_CONTEXT=astracontrol
KUBECONFIG_FILE='kubeconfig-sa'

CONTEXT=$(kubectl config current-context)

SECRET_NAME=$(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} \
  --context ${CONTEXT} \
  --namespace ${NAMESPACE} \
  -o jsonpath='{.secrets[TOKEN_INDEX].name}')
TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} \
  --context ${CONTEXT} \
  --namespace ${NAMESPACE} \
  -o jsonpath='{.data.token}')

TOKEN=$(echo ${TOKEN_DATA} | base64 -d)

# Create dedicated kubeconfig
# Create a full copy
kubectl config view --raw > ${KUBECONFIG_FILE}.full.tmp

# Switch working context to correct context
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp config use-context ${CONTEXT}

# Minify
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp \
  config view --flatten --minify > ${KUBECONFIG_FILE}.tmp

# Rename context
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  rename-context ${CONTEXT} ${NEW_CONTEXT}

# Create token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-credentials ${CONTEXT}-${NAMESPACE}-token-user \
  --token ${TOKEN}

# Set context to use token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-context ${NEW_CONTEXT} --user ${CONTEXT}-${NAMESPACE}-token-user

# Set context to correct namespace
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-context ${NEW_CONTEXT} --namespace ${NAMESPACE}

# Flatten/minify kubeconfig
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  view --flatten --minify > ${KUBECONFIG_FILE}

# Remove tmp
rm ${KUBECONFIG_FILE}.full.tmp
rm ${KUBECONFIG_FILE}.tmp
-
Source the commands to apply them to your Kubernetes cluster.
source create-kubeconfig.sh
-
-
(Optional) Rename the kubeconfig to a meaningful name for your cluster.
mv kubeconfig-sa YOUR_CLUSTER_NAME_kubeconfig
What's next?
Now that you've verified that the prerequisites are met, you're ready to add a cluster.
Add cluster
To begin managing your apps, add a Kubernetes cluster and manage it as a compute resource. You have to add a cluster for Astra Control Center to discover your Kubernetes applications.
We recommend that Astra Control Center manage the cluster it is deployed on first, before you add other clusters to Astra Control Center to manage. Having the initial cluster under management is necessary to send Kubemetrics data and cluster-associated data for metrics and troubleshooting.
-
Before you add a cluster, review and perform the necessary prerequisite tasks.
-
Navigate from either the Dashboard or the Clusters menu:
-
From Dashboard in the Resource Summary, select Add from the Clusters pane.
-
In the left navigation area, select Clusters and then select Add Cluster from the Clusters page.
-
-
In the Add Cluster window that opens, upload a kubeconfig.yaml file or paste the contents of a kubeconfig.yaml file.
The kubeconfig.yaml file should include only the cluster credential for one cluster. If you create your own kubeconfig file, you should define only one context element in it. Refer to Kubernetes documentation for information about creating kubeconfig files. If you created a kubeconfig for a limited cluster role using the process above, be sure to upload or paste that kubeconfig in this step.
-
Provide a credential name. By default, the credential name is auto-populated as the name of the cluster.
-
Select Next.
-
Select the default storage class to be used for this Kubernetes cluster, and select Next.
You should select an Astra Trident storage class backed by ONTAP storage.
-
Review the information, and if everything looks good, select Add.
The cluster enters Discovering state and then changes to Healthy. You are now managing the cluster with Astra Control Center.
After you add a cluster to be managed in Astra Control Center, it might take a few minutes to deploy the monitoring operator. Until then, the Notification icon turns red and logs a Monitoring Agent Status Check Failed event. You can ignore this, because the issue resolves when Astra Control Center obtains the correct status. If the issue does not resolve in a few minutes, go to the cluster and run oc get pods -n netapp-monitoring as the starting point. You will need to look into the monitoring operator logs to debug the problem.
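A minimal sequence for that investigation might look like the following; the pod name is a placeholder taken from the get pods output, and kubectl can be used in place of oc:
kubectl get pods -n netapp-monitoring
kubectl logs -n netapp-monitoring <monitoring-operator-pod-name>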
Enable authentication on the ONTAP storage backend
Astra Control Center offers two modes of authenticating an ONTAP backend:
-
Credential-based authentication: The username and password to an ONTAP user with the required permissions. You should use a pre-defined security login role, such as admin or vsadmin to ensure maximum compatibility with ONTAP versions.
-
Certificate-based authentication: Astra Control Center can also communicate with an ONTAP cluster using a certificate installed on the backend. You should provide the client certificate, the key, and the trusted CA certificate if one is used (recommended).
You can later update existing backends to move from one type of authentication to another method. Only one authentication method is supported at a time.
Enable credential-based authentication
Astra Control Center requires the credentials of a cluster-scoped admin to communicate with the ONTAP backend. You should use standard, pre-defined roles such as admin. This ensures forward compatibility with future ONTAP releases that might expose feature APIs to be used by future Astra Control Center releases.
A custom security login role can be created and used with Astra Control Center, but is not recommended.
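To confirm the role and access methods of the account you plan to use, you can check it from the ONTAP CLI before adding the backend (a sketch; substitute your account name):
security login show -user-or-group-name admin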
A sample backend definition looks like this:
{ "version": 1, "backendName": "ExampleBackend", "storageDriverName": "ontap-nas", "managementLIF": "10.0.0.1", "dataLIF": "10.0.0.2", "svm": "svm_nfs", "username": "admin", "password": "secret" }
The backend definition is the only place the credentials are stored in plain text. The creation or update of a backend is the only step that requires knowledge of the credentials. As such, it is an admin-only operation, to be performed by the Kubernetes or storage administrator.
Enable certificate-based authentication
Astra Control Center can use certificates to communicate with new and existing ONTAP backends. You should enter the following information in the backend definition.
-
clientCertificate: Client certificate.
-
clientPrivateKey: Associated private key.
-
trustedCACertificate: Trusted CA certificate. If using a trusted CA, this parameter must be provided. This can be ignored if no trusted CA is used.
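For reference, a certificate-based backend definition mirrors the credential-based sample shown earlier, with the certificate fields carrying base64-encoded values instead of a username and password. This is a sketch only; the backend name, management LIF, and SVM are placeholders:
{
  "version": 1,
  "backendName": "ExampleCertBackend",
  "storageDriverName": "ontap-nas",
  "managementLIF": "10.0.0.1",
  "svm": "svm_nfs",
  "clientCertificate": "<base64-encoded client certificate>",
  "clientPrivateKey": "<base64-encoded private key>",
  "trustedCACertificate": "<base64-encoded trusted CA certificate>"
}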
You can use one of the following types of certificates:
-
Self-signed certificate
-
Third-party certificate
Enable authentication with a self-signed certificate
A typical workflow involves the following steps.
-
Generate a client certificate and key. When generating, set the Common Name (CN) to the ONTAP user to authenticate as.
openssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout k8senv.key -out k8senv.pem -subj "/C=US/ST=NC/L=RTP/O=NetApp/CN=<common-name>"
-
Install the client certificate of type client-ca and key on the ONTAP cluster.
security certificate install -type client-ca -cert-name <certificate-name> -vserver <vserver-name>
security ssl modify -vserver <vserver-name> -client-enabled true
-
Confirm that the ONTAP security login role supports the certificate authentication method.
security login create -user-or-group-name vsadmin -application ontapi -authentication-method cert -vserver <vserver-name>
security login create -user-or-group-name vsadmin -application http -authentication-method cert -vserver <vserver-name>
-
Test authentication using the generated certificate. Replace <ONTAP Management LIF> and <vserver name> with the management LIF IP address and SVM name. You must ensure the LIF has its service policy set to default-data-management.
curl -X POST -Lk https://<ONTAP-Management-LIF>/servlets/netapp.servlets.admin.XMLrequest_filer --key k8senv.key --cert ~/k8senv.pem -d '<?xml version="1.0" encoding="UTF-8"?><netapp xmlns="http://www.netapp.com/filer/admin" version="1.21" vfiler="<vserver-name>"><vserver-get></vserver-get></netapp>'
-
Using the values obtained from the previous step, add the storage backend in the Astra Control Center UI.
Enable authentication with a third-party certificate
If you have a third-party certificate, you can set up certificate-based authentication with these steps.
-
Generate the private key and CSR:
openssl req -new -newkey rsa:4096 -nodes -sha256 -subj "/" -outform pem -out ontap_cert_request.csr -keyout ontap_cert_request.key -addext "subjectAltName = DNS:<ONTAP_CLUSTER_FQDN_NAME>,IP:<ONTAP_MGMT_IP>"
-
Pass the CSR to the Windows CA (third-party CA) and issue the signed certificate.
-
Download the signed certificate and name it ontap_signed_cert.crt.
-
Export the root certificate from Windows CA (third-party CA).
-
Name this file ca_root.crt.
You now have the following three files:
-
Private key: ontap_cert_request.key (This is the corresponding key for the server certificate in ONTAP. It is needed when installing the server certificate.)
-
Signed certificate: ontap_signed_cert.crt (This is also called the server certificate in ONTAP.)
-
Root CA certificate: ca_root.crt (This is also called the server-ca certificate in ONTAP.)
-
-
Install these certificates in ONTAP. Generate and install server and server-ca certificates on ONTAP.
Expand for sample.yaml
# Copy the contents of ca_root.crt and use it here. security certificate install -type server-ca Please enter Certificate: Press <Enter> when done -----BEGIN CERTIFICATE----- <certificate details> -----END CERTIFICATE----- You should keep a copy of the CA-signed digital certificate for future reference. The installed certificate's CA and serial number for reference: CA: serial: The certificate's generated name for reference: === # Copy the contents of ontap_signed_cert.crt and use it here. For key, use the contents of ontap_cert_request.key file. security certificate install -type server Please enter Certificate: Press <Enter> when done -----BEGIN CERTIFICATE----- <certificate details> -----END CERTIFICATE----- Please enter Private Key: Press <Enter> when done -----BEGIN PRIVATE KEY----- <private key details> -----END PRIVATE KEY----- Enter certificates of certification authorities (CA) which form the certificate chain of the server certificate. This starts with the issuing CA certificate of the server certificate and can range up to the root CA certificate. Do you want to continue entering root and/or intermediate certificates {y|n}: n The provided certificate does not have a common name in the subject field. Enter a valid common name to continue installation of the certificate: <ONTAP_CLUSTER_FQDN_NAME> You should keep a copy of the private key and the CA-signed digital certificate for future reference. The installed certificate's CA and serial number for reference: CA: serial: The certificate's generated name for reference: == # Modify the vserver settings to enable SSL for the installed certificate ssl modify -vserver <vserver_name> -ca <CA> -server-enabled true -serial <serial number> (security ssl modify) == # Verify if the certificate works fine: openssl s_client -CAfile ca_root.crt -showcerts -servername server -connect <ONTAP_CLUSTER_FQDN_NAME>:443 CONNECTED(00000005) depth=1 DC = local, DC = umca, CN = <CA> verify return:1 depth=0 verify return:1 write W BLOCK --- Certificate chain 0 s: i:/DC=local/DC=umca/<CA> -----BEGIN CERTIFICATE----- <Certificate details>
-
Create the client certificate for the same host for passwordless communication. Astra Control Center uses this process to communicate with ONTAP.
-
Generate and install the client certificates on ONTAP:
Expand for sample.yaml
# Use /CN=admin or use some other account which has privileges. openssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout ontap_test_client.key -out ontap_test_client.pem -subj "/CN=admin" Copy the content of ontap_test_client.pem file and use it in the below command: security certificate install -type client-ca -vserver <vserver_name> Please enter Certificate: Press <Enter> when done -----BEGIN CERTIFICATE----- <Certificate details> -----END CERTIFICATE----- You should keep a copy of the CA-signed digital certificate for future reference. The installed certificate’s CA and serial number for reference: CA: serial: The certificate’s generated name for reference: == ssl modify -vserver <vserver_name> -client-enabled true (security ssl modify) # Setting permissions for certificates security login create -user-or-group-name admin -application ontapi -authentication-method cert -role admin -vserver <vserver_name> security login create -user-or-group-name admin -application http -authentication-method cert -role admin -vserver <vserver_name> == #Verify passwordless communication works fine with the use of only certificates: curl --cacert ontap_signed_cert.crt --key ontap_test_client.key --cert ontap_test_client.pem https://<ONTAP_CLUSTER_FQDN_NAME>/api/storage/aggregates { "records": [ { "uuid": "f84e0a9b-e72f-4431-88c4-4bf5378b41bd", "name": "<aggr_name>", "node": { "uuid": "7835876c-3484-11ed-97bb-d039ea50375c", "name": "<node_name>", "_links": { "self": { "href": "/api/cluster/nodes/7835876c-3484-11ed-97bb-d039ea50375c" } } }, "_links": { "self": { "href": "/api/storage/aggregates/f84e0a9b-e72f-4431-88c4-4bf5378b41bd" } } } ], "num_records": 1, "_links": { "self": { "href": "/api/storage/aggregates" } } }%
-
Add the storage backend in the Astra Control Center UI and provide the following values:
-
Client Certificate: ontap_test_client.pem
-
Private Key: ontap_test_client.key
-
Trusted CA Certificate: ontap_signed_cert.crt
-
Add a storage backend
You can add an existing ONTAP storage backend to Astra Control Center to manage its resources.
Managing storage clusters in Astra Control as a storage backend enables you to get linkages between persistent volumes (PVs) and the storage backend as well as additional storage metrics.
After you set up the credentials or certificate authentication information, you can add an existing ONTAP storage backend to Astra Control Center to manage its resources.
-
From the Dashboard in the left-navigation area, select Backends.
-
Select Add.
-
In the Use Existing section of the Add storage backend page, select ONTAP.
-
Select one of the following:
-
Use administrator credentials: Enter the ONTAP cluster management IP address and admin credentials. The credentials must be cluster-wide credentials.
The user whose credentials you enter here must have the ontapi user login access method enabled within ONTAP System Manager on the ONTAP cluster. If you plan to use SnapMirror replication, apply user credentials with the "admin" role, which has the access methods ontapi and http, on both source and destination ONTAP clusters. Refer to Manage User Accounts in ONTAP documentation for more information.
-
Use a certificate: Upload the certificate .pem file, the certificate key .key file, and optionally the certificate authority file.
-
-
Select Next.
-
Confirm the backend details and select Manage.
The backend appears in the online state in the list with summary information.
You might need to refresh the page for the backend to appear.
Add a bucket
You can add a bucket using the Astra Control UI or Astra Control API. Adding object store bucket providers is essential if you want to back up your applications and persistent storage or if you want to clone applications across clusters. Astra Control stores those backups or clones in the object store buckets that you define.
You don't need a bucket in Astra Control if you are cloning your application configuration and persistent storage to the same cluster. Application snapshot functionality does not require a bucket.
-
A bucket that is reachable from your clusters managed by Astra Control Center.
-
Credentials for the bucket.
-
A bucket of the following types:
-
NetApp ONTAP S3
-
NetApp StorageGRID S3
-
Microsoft Azure
-
Generic S3
-
Amazon Web Services (AWS) and Google Cloud Platform (GCP) use the Generic S3 bucket type.
Although Astra Control Center supports Amazon S3 as a Generic S3 bucket provider, Astra Control Center might not support all object store vendors that claim Amazon's S3 support.
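One way to confirm that a bucket is reachable and that its credentials work, before adding it to Astra Control, is to list it with a generic S3 client from a host on the same network as your managed clusters. A hedged example using the AWS CLI; the endpoint and bucket name are placeholders, and the access ID and secret key are assumed to be configured in your environment:
aws s3 ls s3://<bucket-name> --endpoint-url https://<s3-endpoint>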
-
In the left navigation area, select Buckets.
-
Select Add.
-
Select the bucket type.
When you add a bucket, select the correct bucket provider and provide the right credentials for that provider. For example, the UI accepts NetApp ONTAP S3 as the type and accepts StorageGRID credentials; however, this will cause all future app backups and restores using this bucket to fail.
-
Enter an existing bucket name and optional description.
The bucket name and description appear as a backup location that you can choose later when you're creating a backup. The name also appears during protection policy configuration.
-
Enter the name or IP address of the S3 endpoint.
-
Under Select Credentials, choose either the Add or Use existing tab.
-
If you chose Add:
-
Enter a name for the credential that distinguishes it from other credentials in Astra Control.
-
Enter the access ID and secret key by pasting the contents from your clipboard.
-
-
If you chose Use existing:
-
Select the existing credentials you want to use with the bucket.
-
-
-
Select Add.
When you add a bucket, Astra Control marks one bucket with the default bucket indicator. The first bucket that you create becomes the default bucket. As you add buckets, you can later decide to set another default bucket.
What's next?
Now that you've logged in and added clusters to Astra Control Center, you're ready to start using Astra Control Center's application data management features.