Prerequisites for adding a cluster
Before you add a cluster, ensure that the prerequisite conditions are met and then run the eligibility checks to verify that your cluster is ready to be added to Astra Control Center.
What you'll need before you add a cluster
- One of the following types of clusters:
  - Clusters running OpenShift 4.6.8, 4.7, 4.8, or 4.9
  - Clusters running Rancher 2.5.8, 2.5.9, or 2.6 with RKE1
  - Clusters running Kubernetes 1.20 to 1.23
  - Clusters running VMware Tanzu Kubernetes Grid 1.4
  - Clusters running VMware Tanzu Kubernetes Grid Integrated Edition 1.12.2

  Make sure your clusters have one or more worker nodes with at least 1 GB of RAM available for running telemetry services.

  If you plan to add a second OpenShift 4.6, 4.7, or 4.8 cluster as a managed compute resource, ensure that the Astra Trident Volume Snapshot feature is enabled. See the official Astra Trident instructions to enable and test Volume Snapshots with Astra Trident.
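  To confirm that your worker nodes have enough memory, you can inspect each node's allocatable memory with kubectl (a quick sketch; `kubectl top nodes` would show live usage instead, but requires the metrics-server add-on):

  ```
  kubectl get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory
  ```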
- Astra Trident StorageClasses configured with a supported storage backend (required for any type of cluster)
- The superuser and user ID set on the backing ONTAP system to back up and restore apps with Astra Control Center. Run the following command in the ONTAP command line:

  ```
  export-policy rule modify -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -superuser sys -anon 65534
  ```
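  To verify the rule after modifying it, you can display it from the same ONTAP command line (a quick check; substitute your own storage VM and policy names):

  ```
  export-policy rule show -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1
  ```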
- An Astra Trident `volumesnapshotclass` object that has been defined by an administrator. See the Astra Trident instructions to enable and test Volume Snapshots with Astra Trident. An example definition is sketched below.
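  A minimal sketch of such an object, assuming the Trident CSI driver name shown later on this page and an example class name of `astra-snapclass`; the Astra Trident instructions are authoritative:

  ```yaml
  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshotClass
  metadata:
    name: astra-snapclass   # example name; adjust to your environment
  driver: csi.trident.netapp.io
  deletionPolicy: Delete
  ```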
- Ensure that you have only a single default storage class defined for your Kubernetes cluster (a way to check this is sketched below).
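  To check, list the storage classes and look for more than one `(default)` entry; if necessary, you can clear the default annotation on the extra class (a sketch using the standard Kubernetes annotation; `<storage class name>` is a placeholder):

  ```
  kubectl get sc
  kubectl patch storageclass <storage class name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
  ```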
Run eligibility checks
Run the following eligibility checks to ensure that your cluster is ready to be added to Astra Control Center.
- Check the Trident version.

  ```
  kubectl get tridentversions -n trident
  ```

  If Trident exists, you see output similar to the following:

  ```
  NAME      VERSION
  trident   21.04.0
  ```

  If Trident does not exist, you see output similar to the following:

  ```
  error: the server doesn't have a resource type "tridentversions"
  ```

  If Trident is not installed or the installed version is not the latest, you need to install the latest version of Trident before proceeding. See the Trident documentation for instructions.
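  After installing or upgrading Trident, you can confirm that its pods are running before repeating the check (a quick sanity check, assuming the default `trident` namespace):

  ```
  kubectl get pods -n trident
  ```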
- Check if the storage classes are using the supported Trident drivers. The provisioner name should be `csi.trident.netapp.io`. See the following example:

  ```
  kubectl get sc
  NAME                   PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
  ontap-gold (default)   csi.trident.netapp.io          Delete          Immediate           true                   5d23h
  thin                   kubernetes.io/vsphere-volume   Delete          Immediate           false                  6d
  ```
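  If you have many storage classes, a jsonpath query can list just the name-to-provisioner mapping (a convenience sketch, not required by the procedure):

  ```
  kubectl get sc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.provisioner}{"\n"}{end}'
  ```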
Create an admin-role kubeconfig
Ensure that you have the following on your machine before you do the steps:

- `kubectl` v1.19 or later installed
- An active kubeconfig with cluster admin rights for the active context
- Create a service account as follows:

  - Create a service account file called `astracontrol-service-account.yaml`. Adjust the name and namespace as needed. If changes are made here, you should apply the same changes in the following steps.

    astracontrol-service-account.yaml

    ```yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: astracontrol-service-account
      namespace: default
    ```

  - Apply the service account:

    ```
    kubectl apply -f astracontrol-service-account.yaml
    ```
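  You can confirm that the service account exists (a quick check; adjust the namespace if you changed it above):

  ```
  kubectl get serviceaccount astracontrol-service-account --namespace default
  ```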
- (Optional) If your cluster uses a restrictive pod security policy that doesn't allow privileged pod creation or allow processes within the pod containers to run as the root user, create a custom pod security policy for the cluster that enables Astra Control to create and manage pods. For instructions, see Create a custom pod security policy. A permissive example is sketched below.
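  A minimal sketch of a fully permissive `PodSecurityPolicy`, modeled on the upstream Kubernetes privileged example; the policy name here is an assumption, and the linked instructions define what Astra Control actually requires:

  ```yaml
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: astracontrol-permissive   # example name; not prescribed by Astra Control
  spec:
    privileged: true
    allowPrivilegeEscalation: true
    allowedCapabilities: ['*']
    volumes: ['*']
    hostNetwork: true
    hostIPC: true
    hostPID: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    fsGroup:
      rule: RunAsAny
  ```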
- Grant cluster admin permissions as follows:

  - Create a `ClusterRoleBinding` file called `astracontrol-clusterrolebinding.yaml`. Adjust any names and namespaces modified when creating the service account as needed.

    astracontrol-clusterrolebinding.yaml

    ```yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: astracontrol-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: astracontrol-service-account
      namespace: default
    ```

  - Apply the cluster role binding:

    ```
    kubectl apply -f astracontrol-clusterrolebinding.yaml
    ```
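  To confirm that the binding took effect, you can impersonate the service account (a quick check, assuming the default namespace; the expected answer is `yes`):

  ```
  kubectl auth can-i '*' '*' --as=system:serviceaccount:default:astracontrol-service-account
  ```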
- List the service account secrets, replacing `<context>` with the correct context for your installation:

  ```
  kubectl get serviceaccount astracontrol-service-account --context <context> --namespace default -o json
  ```

  The end of the output should look similar to the following:

  ```
  "secrets": [
    {
      "name": "astracontrol-service-account-dockercfg-vhz87"
    },
    {
      "name": "astracontrol-service-account-token-r59kr"
    }
  ]
  ```

  The indices for each element in the `secrets` array begin with 0. In the above example, the index for `astracontrol-service-account-dockercfg-vhz87` would be 0 and the index for `astracontrol-service-account-token-r59kr` would be 1. In your output, make note of the index for the service account name that has the word "token" in it.
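  Equivalently, a jsonpath query can pull the secret name directly once you know the index (a convenience sketch; index 1 matches the example output above):

  ```
  kubectl get serviceaccount astracontrol-service-account --context <context> --namespace default -o jsonpath='{.secrets[1].name}'
  ```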
- Generate the kubeconfig as follows:

  - Create a `create-kubeconfig.sh` file. Replace `TOKEN_INDEX` in the beginning of the following script with the correct value.

    create-kubeconfig.sh

    ```bash
    # Update these to match your environment.
    # Replace TOKEN_INDEX with the correct value
    # from the output in the previous step. If you
    # didn't change anything else above, don't change
    # anything else here.

    SERVICE_ACCOUNT_NAME=astracontrol-service-account
    NAMESPACE=default
    NEW_CONTEXT=astracontrol
    KUBECONFIG_FILE='kubeconfig-sa'

    CONTEXT=$(kubectl config current-context)

    SECRET_NAME=$(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} \
      --context ${CONTEXT} \
      --namespace ${NAMESPACE} \
      -o jsonpath='{.secrets[TOKEN_INDEX].name}')
    TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} \
      --context ${CONTEXT} \
      --namespace ${NAMESPACE} \
      -o jsonpath='{.data.token}')

    TOKEN=$(echo ${TOKEN_DATA} | base64 -d)

    # Create dedicated kubeconfig
    # Create a full copy
    kubectl config view --raw > ${KUBECONFIG_FILE}.full.tmp

    # Switch working context to correct context
    kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp config use-context ${CONTEXT}

    # Minify
    kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp \
      config view --flatten --minify > ${KUBECONFIG_FILE}.tmp

    # Rename context
    kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
      rename-context ${CONTEXT} ${NEW_CONTEXT}

    # Create token user
    kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
      set-credentials ${CONTEXT}-${NAMESPACE}-token-user \
      --token ${TOKEN}

    # Set context to use token user
    kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
      set-context ${NEW_CONTEXT} --user ${CONTEXT}-${NAMESPACE}-token-user

    # Set context to correct namespace
    kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
      set-context ${NEW_CONTEXT} --namespace ${NAMESPACE}

    # Flatten/minify kubeconfig
    kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
      view --flatten --minify > ${KUBECONFIG_FILE}

    # Remove tmp files
    rm ${KUBECONFIG_FILE}.full.tmp
    rm ${KUBECONFIG_FILE}.tmp
    ```

  - Source the commands to apply them to your Kubernetes cluster.

    ```
    source create-kubeconfig.sh
    ```
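  You can verify that the generated kubeconfig works before renaming it (a quick check using the new `astracontrol` context):

  ```
  kubectl get nodes --kubeconfig kubeconfig-sa
  ```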
- (Optional) Rename the kubeconfig to a meaningful name for your cluster. Protect your cluster credential.

  ```
  chmod 700 create-kubeconfig.sh
  mv kubeconfig-sa YOUR_CLUSTER_NAME_kubeconfig
  ```
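  Because the generated kubeconfig embeds the service account token, you may also want to restrict read access to the renamed file (a suggested precaution beyond the steps above):

  ```
  chmod 600 YOUR_CLUSTER_NAME_kubeconfig
  ```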
What's next?
Now that you’ve verified that the prerequisites are met, you're ready to add a cluster.