
Prerequisites for adding a cluster


You should ensure that the prerequisite conditions are met before you add a cluster. You should also run the eligibility checks to ensure that your cluster is ready to be added to Astra Control Center.

What you'll need before you add a cluster

  • One of the following types of clusters:

    • Clusters running OpenShift 4.6, 4.7, or 4.8 with Astra Trident storage classes backed by Astra Data Store or ONTAP 9.5 or later

    • Clusters running Rancher 2.5

    • Clusters running Kubernetes 1.19 to 1.21 (including 1.21.x)

      Make sure that your clusters have one or more worker nodes with at least 1 GB of RAM available for running telemetry services.

      Note If you plan to add a second OpenShift 4.6, 4.7, or 4.8 cluster as a managed compute resource, you should ensure that the Astra Trident Volume Snapshot feature is enabled. See the official Astra Trident instructions to enable and test Volume Snapshots with Astra Trident.
  • The superuser and user ID set on the backing ONTAP system so that you can back up and restore apps with Astra Control Center. Run the following command in the ONTAP command line:
    export-policy rule modify -vserver <storage virtual machine name> -policyname <policy name> -ruleindex 1 -superuser sys -anon 65534

  • An Astra Trident VolumeSnapshotClass object that has been defined by an administrator. See the Astra Trident instructions to enable and test Volume Snapshots with Astra Trident.

  • Ensure that you have only a single default storage class defined for your Kubernetes cluster.
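
You can verify the last two prerequisites from the command line. This is a minimal sketch that assumes kubectl is already configured against the cluster you plan to add and that the volume snapshot CRDs are installed:

    # A VolumeSnapshotClass should exist (defined by an administrator)
    kubectl get volumesnapshotclass

    # Exactly one storage class should be marked "(default)";
    # this count should print 1
    kubectl get storageclass | grep -c '(default)'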

Run eligibility checks

Run the following eligibility checks to ensure that your cluster is ready to be added to Astra Control Center.

Steps
  1. Check the Trident version.

    kubectl get tridentversions -n trident

    If Trident exists, you see output similar to the following:

    NAME      VERSION
    trident   21.04.0

    If Trident does not exist, you see output similar to the following:

    error: the server doesn't have a resource type "tridentversions"
    Note If Trident is not installed or the installed version is not the latest, you need to install the latest version of Trident before proceeding. See the Trident documentation for instructions.
  2. Check that the storage classes use the supported Trident drivers. The provisioner name should be csi.trident.netapp.io, as in the following example:

    kubectl get sc
    NAME                   PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    ontap-gold (default)   csi.trident.netapp.io          Delete          Immediate           true                   5d23h
    thin                   kubernetes.io/vsphere-volume   Delete          Immediate           false                  6d
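
    If you have many storage classes, you can list just the name and provisioner of each one. This is a convenience sketch, not part of the official procedure:

    kubectl get sc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.provisioner}{"\n"}{end}'

    Any storage class that you intend Astra Control Center to manage should report csi.trident.netapp.io as its provisioner.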

Create an admin-role kubeconfig

Ensure that you have the following on your machine before you perform these steps:

  • kubectl v1.19 or later installed

  • An active kubeconfig with cluster admin rights for the active context

Steps
  1. Create a service account as follows:

    1. Create a service account file called astracontrol-service-account.yaml.

      Adjust the name and namespace as needed. If you change them here, make the same changes in the following steps.

      astracontrol-service-account.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: astracontrol-service-account
        namespace: default
    2. Apply the service account:

      kubectl apply -f astracontrol-service-account.yaml
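
      You can confirm that the service account exists before continuing (an optional sanity check):

      kubectl get serviceaccount astracontrol-service-account --namespace default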
  2. Grant cluster admin permissions as follows:

    1. Create a ClusterRoleBinding file called astracontrol-clusterrolebinding.yaml.

      If you modified any names or namespaces when you created the service account, make the same changes here.

      astracontrol-clusterrolebinding.yaml
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: astracontrol-admin
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: astracontrol-service-account
        namespace: default
    2. Apply the cluster role binding:

      kubectl apply -f astracontrol-clusterrolebinding.yaml
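
      To confirm that the binding grants the expected rights, you can impersonate the service account (an optional check; the command should answer yes):

      kubectl auth can-i '*' '*' --as=system:serviceaccount:default:astracontrol-service-account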
  3. List the service account secrets, replacing <context> with the correct context for your installation:

    kubectl get serviceaccount astracontrol-service-account --context <context> --namespace default -o json

    The end of the output should look similar to the following:

    "secrets": [
    { "name": "astracontrol-service-account-dockercfg-vhz87"},
    { "name": "astracontrol-service-account-token-r59kr"}
    ]

    The indices for each element in the secrets array begin at 0. In the example above, the index for astracontrol-service-account-dockercfg-vhz87 is 0, and the index for astracontrol-service-account-token-r59kr is 1. In your output, make a note of the index of the secret whose name contains the word "token".
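
    Alternatively, you can extract the name of the token secret directly instead of counting indices. This sketch assumes that exactly one secret name contains the word "token":

    kubectl get serviceaccount astracontrol-service-account --context <context> --namespace default -o jsonpath='{.secrets[*].name}' | tr ' ' '\n' | grep token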

  4. Generate the kubeconfig as follows:

    1. Create a file called create-kubeconfig.sh. Replace TOKEN_INDEX at the beginning of the following script with the value you noted in the previous step.

      create-kubeconfig.sh
      # Update these to match your environment.
      # Replace TOKEN_INDEX with the correct value
      # from the output in the previous step. If you
      # didn't change anything else above, don't change
      # anything else here.
      
      SERVICE_ACCOUNT_NAME=astracontrol-service-account
      NAMESPACE=default
      NEW_CONTEXT=astracontrol
      KUBECONFIG_FILE='kubeconfig-sa'
      
      CONTEXT=$(kubectl config current-context)
      
      SECRET_NAME=$(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} \
        --context ${CONTEXT} \
        --namespace ${NAMESPACE} \
        -o jsonpath='{.secrets[TOKEN_INDEX].name}')
      TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} \
        --context ${CONTEXT} \
        --namespace ${NAMESPACE} \
        -o jsonpath='{.data.token}')
      
      TOKEN=$(echo ${TOKEN_DATA} | base64 -d)
      
      # Create dedicated kubeconfig
      # Create a full copy
      kubectl config view --raw > ${KUBECONFIG_FILE}.full.tmp
      
      # Switch working context to correct context
      kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp config use-context ${CONTEXT}
      
      # Minify
      kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp \
        config view --flatten --minify > ${KUBECONFIG_FILE}.tmp
      
      # Rename context
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        rename-context ${CONTEXT} ${NEW_CONTEXT}
      
      # Create token user
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        set-credentials ${CONTEXT}-${NAMESPACE}-token-user \
        --token ${TOKEN}
      
      # Set context to use token user
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        set-context ${NEW_CONTEXT} --user ${CONTEXT}-${NAMESPACE}-token-user
      
      # Set context to correct namespace
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        set-context ${NEW_CONTEXT} --namespace ${NAMESPACE}
      
      # Flatten/minify kubeconfig
      kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
        view --flatten --minify > ${KUBECONFIG_FILE}
      
      # Remove tmp
      rm ${KUBECONFIG_FILE}.full.tmp
      rm ${KUBECONFIG_FILE}.tmp
    2. Source the commands to apply them to your Kubernetes cluster.

      source create-kubeconfig.sh
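
      You can verify that the generated kubeconfig works before handing it to Astra Control Center. kubeconfig-sa is the file name set by KUBECONFIG_FILE in the script:

      kubectl --kubeconfig kubeconfig-sa get namespaces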
  5. (Optional) Rename the kubeconfig to a meaningful name for your cluster, and protect your cluster credential:

    chmod 700 create-kubeconfig.sh
    mv kubeconfig-sa YOUR_CLUSTER_NAME_kubeconfig
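
    Because the renamed file contains a cluster credential, you might also want to restrict its permissions. This is an additional precaution beyond the procedure above:

    chmod 600 YOUR_CLUSTER_NAME_kubeconfig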

What's next?

Now that you’ve verified that the prerequisites are met, you're ready to add a cluster.
