Prepare your environment for cluster management using Astra Control
You should ensure that the following prerequisite conditions are met before you add a cluster. You should also run eligibility checks to ensure that your cluster is ready to be added to Astra Control Center and create kubeconfig cluster roles as needed.
Astra Control allows you to add clusters managed by custom resource (CR) or kubeconfig, depending on your environment and preferences.
-
Meet environmental prerequisites: Your environment meets operational environment requirements for Astra Control Center.
-
Configure worker nodes: Ensure that you configure the worker nodes in your cluster with the appropriate storage drivers so that the pods can interact with the backend storage.
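If Astra Trident is already installed, one quick, non-authoritative way to confirm that the Trident CSI driver has registered on each worker node is to list the CSINode objects; this is a general Kubernetes check, not an Astra-specific requirement:
kubectl get csinode -o custom-columns=NODE:.metadata.name,DRIVERS:.spec.drivers[*].name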
-
Remove PSA restrictions: If your cluster has pod security admission (PSA) enforcement enabled, which is standard for Kubernetes 1.25 and later clusters, you need to remove the PSA restrictions on these namespaces by setting them to the privileged enforcement level (a verification command follows this list):
-
netapp-acc-operator namespace:
kubectl label --overwrite ns netapp-acc-operator pod-security.kubernetes.io/enforce=privileged
-
netapp-monitoring namespace:
kubectl label --overwrite ns netapp-monitoring pod-security.kubernetes.io/enforce=privileged
-
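To confirm that the labels were applied (this assumes both namespaces already exist on the cluster), you can list the namespaces with their labels:
kubectl get namespace netapp-acc-operator netapp-monitoring --show-labels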
-
ONTAP credentials: You need ONTAP credentials and a superuser and user ID set on the backing ONTAP system to back up and restore apps with Astra Control Center.
Run the following commands in the ONTAP command line:
export-policy rule modify -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -superuser sys
export-policy rule modify -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -anon 65534
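You can optionally confirm the rule changes afterward. The following show command is a sketch that assumes the same storage VM and policy names used above:
export-policy rule show -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -fields superuser,anon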
-
kubeconfig-managed cluster requirements: These requirements are specific to app clusters managed by kubeconfig.
-
Make kubeconfig accessible: You have access to the default cluster kubeconfig that you configured during installation.
-
Certificate Authority considerations: If you are adding the cluster using a kubeconfig file that references a private Certificate Authority (CA), add the following line to the cluster section of the kubeconfig file. This enables Astra Control to add the cluster:
insecure-skip-tls-verify: true
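For illustration, a minimal clusters entry might look like the following after the change; the server address and cluster name are placeholders for your own values:
clusters:
- cluster:
    server: https://<cluster-api-endpoint>:6443
    insecure-skip-tls-verify: true
  name: <cluster-name>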
-
Rancher only: When managing application clusters in a Rancher environment, modify the application cluster's default context in the kubeconfig file provided by Rancher to use a control plane context instead of the Rancher API server context. This reduces load on the Rancher API server and improves performance.
-
-
Astra Control Provisioner requirements: You should have a properly configured Astra Control Provisioner, including its Astra Trident components, to manage clusters.
-
Review Astra Trident environment requirements: Prior to installing or upgrading Astra Control Provisioner, review the supported frontends, backends, and host configurations.
-
Enable Astra Control Provisioner functionality: It's highly recommended that you install Astra Trident 23.10 or later and enable Astra Control Provisioner advanced storage functionality. In upcoming releases, Astra Control will not support Astra Trident if the Astra Control Provisioner is not also enabled.
-
Configure a storage backend: At least one storage backend must be configured in Astra Trident on the cluster.
-
Configure a storage class: At least one storage class must be configured in Astra Trident on the cluster. If a default storage class is configured, ensure that it is the only storage class that has the default annotation.
-
Configure a volume snapshot controller and install a volume snapshot class: Install a volume snapshot controller so that snapshots can be created in Astra Control. Create at least one VolumeSnapshotClass using Astra Trident (a sample manifest and verification commands follow below).
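As a quick check on the last three items, you can list what is already configured (this assumes Astra Trident is installed in the trident namespace):
kubectl get tridentbackends -n trident
kubectl get sc
kubectl get volumesnapshotclass
If no volume snapshot class exists yet, a minimal VolumeSnapshotClass sketch such as the following can be applied; the name csi-snapclass is only an example, while the driver name is the Astra Trident CSI provisioner:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: csi.trident.netapp.io
deletionPolicy: Delete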
-
Run eligibility checks
Run the following eligibility checks to ensure that your cluster is ready to be added to Astra Control Center.
-
Determine the Astra Trident version you are running:
kubectl get tridentversion -n trident
If Astra Trident exists, you see output similar to the following:
NAME      VERSION
trident   24.02.0
If Astra Trident does not exist, you see output similar to the following:
error: the server doesn't have a resource type "tridentversions"
-
Do one of the following:
-
If you are running Astra Trident 23.01 or earlier, use these instructions to upgrade to a more recent version of Astra Trident before upgrading to the Astra Control Provisioner. You can perform a direct upgrade to Astra Control Provisioner 24.02 if your Astra Trident is within a four-release window of version 24.02. For example, you can directly upgrade from Astra Trident 23.04 to Astra Control Provisioner 24.02.
-
If you are running Astra Trident 23.10 or later, verify that Astra Control Provisioner has been enabled. Astra Control Provisioner will not work with releases of Astra Control Center earlier than 23.10. Upgrade your Astra Control Provisioner to the same version as the Astra Control Center you are upgrading to, so that you can access the latest functionality.
-
-
Ensure that all pods (including trident-acp) are running:
kubectl get pods -n trident
-
Determine if the storage classes are using the supported Astra Trident drivers. The provisioner name should be csi.trident.netapp.io. See the following example:
kubectl get sc
Sample response:
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ontap-gold (default)   csi.trident.netapp.io   Delete          Immediate           true                   5d23h
Create a cluster role kubeconfig
For clusters that are managed using kubeconfig, you can optionally create a limited permission or expanded permission administrator role for Astra Control Center. This is not a required procedure for Astra Control Center setup as you already configured a kubeconfig as part of the installation process.
This procedure helps you to create a separate kubeconfig if either of the following scenarios applies to your environment:
-
You want to limit Astra Control permissions on the clusters it manages
-
You use multiple contexts and cannot use the default Astra Control kubeconfig configured during installation, or a limited role with a single context won't work in your environment
Ensure that you have the following for the cluster you intend to manage before completing the procedure steps:
-
kubectl v1.23 or later installed
-
kubectl access to the cluster that you intend to add and manage with Astra Control Center
For this procedure, you do not need kubectl access to the cluster that is running Astra Control Center.
-
An active kubeconfig for the cluster you intend to manage with cluster admin rights for the active context
-
Create a service account:
-
Create a service account file called astracontrol-service-account.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: astracontrol-service-account
  namespace: default
-
Apply the service account:
kubectl apply -f astracontrol-service-account.yaml
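If you want, confirm that the service account was created before continuing:
kubectl get serviceaccount astracontrol-service-account --namespace default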
-
-
Create one of the following cluster roles with sufficient permissions for a cluster to be managed by Astra Control:
Limited cluster role
This role contains the minimum permissions necessary for a cluster to be managed by Astra Control:
-
Create a ClusterRole file called, for example, astra-admin-account.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: astra-admin-account
rules:
# Get, List, Create, and Update all resources
# Necessary to backup and restore all resources in an app
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - get
  - list
  - create
  - patch
# Delete Resources
# Necessary for in-place restore and AppMirror failover
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - crd.projectcalico.org
  - extensions
  - networking.k8s.io
  - policy
  - rbac.authorization.k8s.io
  - snapshot.storage.k8s.io
  - trident.netapp.io
  resources:
  - configmaps
  - cronjobs
  - daemonsets
  - deployments
  - horizontalpodautoscalers
  - ingresses
  - jobs
  - namespaces
  - networkpolicies
  - persistentvolumeclaims
  - poddisruptionbudgets
  - pods
  - podtemplates
  - replicasets
  - replicationcontrollers
  - replicationcontrollers/scale
  - rolebindings
  - roles
  - secrets
  - serviceaccounts
  - services
  - statefulsets
  - tridentmirrorrelationships
  - tridentsnapshotinfos
  - volumesnapshots
  - volumesnapshotcontents
  verbs:
  - delete
# Watch resources
# Necessary to monitor progress
- apiGroups:
  - ""
  resources:
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  verbs:
  - watch
# Update resources
- apiGroups:
  - ""
  - build.openshift.io
  - image.openshift.io
  resources:
  - builds/details
  - replicationcontrollers
  - replicationcontrollers/scale
  - imagestreams/layers
  - imagestreamtags
  - imagetags
  verbs:
  - update
-
(For OpenShift clusters only) Append the following at the end of the astra-admin-account.yaml file:
# OpenShift security
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  verbs:
  - use
  - update
-
Apply the cluster role:
kubectl apply -f astra-admin-account.yaml
Expanded cluster role
This role contains expanded permissions for a cluster to be managed by Astra Control. You might use this role if you use multiple contexts and cannot use the default Astra Control kubeconfig configured during installation, or a limited role with a single context won't work in your environment:
The following ClusterRole steps are a general Kubernetes example. Refer to the documentation for your Kubernetes distribution for instructions specific to your environment.
-
Create a ClusterRole file called, for example, astra-admin-account.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: astra-admin-account
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
-
Apply the cluster role:
kubectl apply -f astra-admin-account.yaml
-
-
Create the cluster role binding for the cluster role to the service account:
-
Create a ClusterRoleBinding file called astracontrol-clusterrolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: astracontrol-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: astra-admin-account
subjects:
- kind: ServiceAccount
  name: astracontrol-service-account
  namespace: default
-
Apply the cluster role binding:
kubectl apply -f astracontrol-clusterrolebinding.yaml
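Optionally, you can spot-check the effective permissions of the service account using kubectl auth can-i with impersonation; the two checks below are illustrative examples only (the volumesnapshots check assumes the snapshot CRDs are installed):
kubectl auth can-i list pods --as=system:serviceaccount:default:astracontrol-service-account
kubectl auth can-i create volumesnapshots --as=system:serviceaccount:default:astracontrol-service-account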
-
-
Create and apply the token secret:
-
Create a token secret file called secret-astracontrol-service-account.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: secret-astracontrol-service-account
  namespace: default
  annotations:
    kubernetes.io/service-account.name: "astracontrol-service-account"
type: kubernetes.io/service-account-token
-
Apply the token secret:
kubectl apply -f secret-astracontrol-service-account.yaml
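To confirm that the token controller has populated the secret (there can be a short delay), describe it and check that a token entry is listed:
kubectl describe secret secret-astracontrol-service-account --namespace default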
-
-
Add the token secret to the service account by adding its name to the secrets array (the last line in the following example):
kubectl edit sa astracontrol-service-account
apiVersion: v1
imagePullSecrets:
- name: astracontrol-service-account-dockercfg-48xhx
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"astracontrol-service-account","namespace":"default"}}
  creationTimestamp: "2023-06-14T15:25:45Z"
  name: astracontrol-service-account
  namespace: default
  resourceVersion: "2767069"
  uid: 2ce068c4-810e-4a96-ada3-49cbf9ec3f89
secrets:
- name: astracontrol-service-account-dockercfg-48xhx
- name: secret-astracontrol-service-account
-
List the service account secrets, replacing <context> with the correct context for your installation:
kubectl get serviceaccount astracontrol-service-account --context <context> --namespace default -o json
The end of the output should look similar to the following:
"secrets": [
  {
    "name": "astracontrol-service-account-dockercfg-48xhx"
  },
  {
    "name": "secret-astracontrol-service-account"
  }
]
The indices for each element in the secrets array begin with 0. In the above example, the index for astracontrol-service-account-dockercfg-48xhx would be 0 and the index for secret-astracontrol-service-account would be 1. In your output, make note of the index number for the service account secret. You'll need this index number in the next step.
-
Generate the kubeconfig as follows:
-
Create a create-kubeconfig.sh file.
-
Replace TOKEN_INDEX in the beginning of the following script with the correct value:
# Update these to match your environment.
# Replace TOKEN_INDEX with the correct value
# from the output in the previous step. If you
# didn't change anything else above, don't change
# anything else here.

SERVICE_ACCOUNT_NAME=astracontrol-service-account
NAMESPACE=default
NEW_CONTEXT=astracontrol
KUBECONFIG_FILE='kubeconfig-sa'

CONTEXT=$(kubectl config current-context)

SECRET_NAME=$(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} \
  --context ${CONTEXT} \
  --namespace ${NAMESPACE} \
  -o jsonpath='{.secrets[TOKEN_INDEX].name}')
TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} \
  --context ${CONTEXT} \
  --namespace ${NAMESPACE} \
  -o jsonpath='{.data.token}')

TOKEN=$(echo ${TOKEN_DATA} | base64 -d)

# Create dedicated kubeconfig
# Create a full copy
kubectl config view --raw > ${KUBECONFIG_FILE}.full.tmp

# Switch working context to correct context
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp config use-context ${CONTEXT}

# Minify
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp \
  config view --flatten --minify > ${KUBECONFIG_FILE}.tmp

# Rename context
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  rename-context ${CONTEXT} ${NEW_CONTEXT}

# Create token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-credentials ${CONTEXT}-${NAMESPACE}-token-user \
  --token ${TOKEN}

# Set context to use token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-context ${NEW_CONTEXT} --user ${CONTEXT}-${NAMESPACE}-token-user

# Set context to correct namespace
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  set-context ${NEW_CONTEXT} --namespace ${NAMESPACE}

# Flatten/minify kubeconfig
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
  view --flatten --minify > ${KUBECONFIG_FILE}

# Remove tmp
rm ${KUBECONFIG_FILE}.full.tmp
rm ${KUBECONFIG_FILE}.tmp
-
Source the commands to apply them to your Kubernetes cluster.
source create-kubeconfig.sh
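As a quick sanity check on the generated kubeconfig (the file name kubeconfig-sa comes from the script above), you can run a read-only command with it:
kubectl --kubeconfig kubeconfig-sa get namespaces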
-
-
(Optional) Rename the kubeconfig to a meaningful name for your cluster.
mv kubeconfig-sa YOUR_CLUSTER_NAME_kubeconfig