Known issues with this release

Known issues identify problems that might prevent you from using this release of the product successfully.

App with user-defined label goes into "removed" state

If you define an app with a non-existent k8s label, Astra Control Center will create, manage, and then immediately remove the app. To avoid this, add the k8s label to pods and resources after the app is managed by Astra Control Center.
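
As an illustration, assuming an app defined by the hypothetical label app=my-app in the namespace my-namespace, the label could be applied to an existing deployment like this:

    # label, deployment, and namespace names are placeholders
    kubectl label deployment my-app-deployment app=my-app -n my-namespace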

Unable to stop running app backup

There is no way to stop a running backup. If you need to delete the backup, wait until it has completed and then use the instructions in Delete backups. To delete a failed backup, use the Astra API.
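
As a rough sketch of the API approach, a failed backup could be removed with a REST DELETE call; the endpoint path, IDs, and token shown here are placeholders and should be verified against the Astra API reference for your release:

    # placeholder endpoint and IDs; verify against the Astra API reference
    curl -X DELETE \
      -H "Authorization: Bearer <API_TOKEN>" \
      "https://<astra-control-center>/accounts/<ACCOUNT_ID>/k8s/v1/apps/<APP_ID>/appBackups/<BACKUP_ID>"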

Backup or clone fails for apps using PVCs with decimal units in Astra Control Center

The Astra Control Center backup and clone processes fail for volumes that were created with decimal units. See the knowledgebase article for more information.

Astra Control Center UI slow to show changes to app resources such as persistent volume changes

After a data protection operation (clone, backup, or restore) and a subsequent persistent volume resize, there is a delay of up to twenty minutes before the new volume size is shown in the UI. The same delay can occur when any app resources are added or modified. The data protection operation itself completes successfully within minutes, and you can use the management software for the storage backend to confirm the change in volume size.
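
As an additional check from the Kubernetes side, assuming the app runs in a hypothetical namespace my-namespace, the resized capacity is also visible on the persistent volume claims:

    # namespace name is a placeholder
    kubectl get pvc -n my-namespace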

During app restore from backup, Trident creates a larger PV than the original PV

If you resize a persistent volume after creating a backup and then restore from that backup, Trident creates the restored persistent volume at the new (resized) size instead of the size that was captured in the backup.

Clone performance impacted by large persistent volumes

Clones of very large persistent volumes with a high amount of consumed space might be intermittently slow, depending on cluster access to the object store. If the clone hangs and no data has been copied for more than 30 minutes, Astra Control terminates the clone action.

App clones fail using a specific version of PostgreSQL

App clones within the same cluster consistently fail with the Bitnami PostgreSQL 11.5.0 chart. To clone successfully, use an earlier or later version of the chart.
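
For example, if the app is deployed with Helm and the Bitnami repository is already added, you can list the available chart versions and pin one other than 11.5.0; the release and namespace names here are hypothetical:

    # release and namespace names are placeholders; choose any chart version other than 11.5.0
    helm search repo bitnami/postgresql --versions
    helm install my-postgresql bitnami/postgresql --version <version-other-than-11.5.0> -n my-namespace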

S3 buckets in Astra Control Center do not report available capacity

Before backing up or cloning apps managed by Astra Control Center, check bucket information in the ONTAP or StorageGRID management system.

Reusing buckets between instances of Astra Control Center causes failures

If you try to reuse a bucket used by another or previous installation of Astra Control Center, backup and restore will fail. You must use a different bucket or completely clean out the previously used bucket. You can’t share buckets between instances of Astra Control Center.
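
As one way to completely clean out a previously used bucket, assuming the AWS CLI is configured for your S3-compatible endpoint (the bucket name and endpoint URL are placeholders):

    # bucket name and endpoint URL are placeholders; this permanently deletes all objects in the bucket
    aws s3 rm s3://my-astra-bucket --recursive --endpoint-url https://<s3-endpoint>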

Selecting a bucket provider type with credentials for another type causes data protection failures

When you add a bucket, select the correct bucket provider type with credentials that are correct for that provider. For example, the UI accepts NetApp ONTAP S3 as the type with StorageGRID credentials; however, this will cause all future app backups and restores using this bucket to fail.

Backups and snapshots might not be retained during removal of an Astra Control Center instance

If you have an evaluation license and you are not sending ASUPs, be sure to store your account ID to avoid data loss in the event of an Astra Control Center failure.

Clone operation can’t use other buckets besides the default

During an app backup or app restore, you can optionally specify a bucket ID. An app clone operation, however, always uses the default bucket that has been defined. There is no option to change buckets for a clone. If you want control over which bucket is used, you can either change the bucket default or do a backup followed by a restore separately.

Managing a cluster with Astra Control Center fails when default kubeconfig file contains more than one context

You cannot use a kubeconfig with more than one cluster and context in it. See the knowledgebase article for more information.
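
One common workaround is to extract the desired context into its own kubeconfig file; this sketch assumes a context named target-cluster:

    # context name and output file name are placeholders
    kubectl config view --minify --flatten --context=target-cluster > kubeconfig-target-cluster.yaml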

Can’t determine ASUP tar bundle status in scaled environment

During ASUP collection, the status of the bundle in the UI is reported as either collecting or done. Collection can take up to an hour for large environments. During ASUP download, the network file transfer speed for the bundle might be insufficient, and the download might time out after 15 minutes without any indication in the UI. Download issues depend on the size of the ASUP, the size of the scaled cluster, and whether collection time goes beyond the seven-day limit.

Uninstall of Astra Control Center fails to clean up the monitoring-operator pod on the managed cluster

If you did not unmanage your clusters before you uninstalled Astra Control Center, you can manually delete the pods in the netapp-monitoring namespace and the namespace itself with the following commands:

Steps
  1. Delete acc-monitoring agent:

    oc delete agents acc-monitoring -n netapp-monitoring

    Result:

    agent.monitoring.netapp.com "acc-monitoring" deleted
  2. Delete the namespace:

    oc delete ns netapp-monitoring

    Result:

    namespace "netapp-monitoring" deleted
  3. Confirm resources removed:

    oc get pods -n netapp-monitoring

    Result:

    No resources found in netapp-monitoring namespace.
  4. Confirm monitoring agent removed:

    oc get crd | grep agent

    Sample result:

    agents.monitoring.netapp.com                     2021-07-21T06:08:13Z
  5. Delete custom resource definition (CRD) information:

    oc delete crds agents.monitoring.netapp.com

    Result:

    customresourcedefinition.apiextensions.k8s.io "agents.monitoring.netapp.com" deleted

Uninstall of Astra Control Center fails to clean up Traefik CRDs

You can manually delete the Traefik CRDs:

Steps
  1. Confirm which CRDs were not deleted by the uninstall process:

    kubectl get crds | grep -E 'traefik'

    Sample result:

    ingressroutes.traefik.containo.us             2021-06-23T23:29:11Z
    ingressroutetcps.traefik.containo.us          2021-06-23T23:29:11Z
    ingressrouteudps.traefik.containo.us          2021-06-23T23:29:12Z
    middlewares.traefik.containo.us               2021-06-23T23:29:12Z
    serverstransports.traefik.containo.us         2021-06-23T23:29:13Z
    tlsoptions.traefik.containo.us                2021-06-23T23:29:13Z
    tlsstores.traefik.containo.us                 2021-06-23T23:29:14Z
    traefikservices.traefik.containo.us           2021-06-23T23:29:15Z
  2. Delete the CRDs:

    kubectl delete crd ingressroutes.traefik.containo.us ingressroutetcps.traefik.containo.us ingressrouteudps.traefik.containo.us middlewares.traefik.containo.us serverstransports.traefik.containo.us tlsoptions.traefik.containo.us tlsstores.traefik.containo.us traefikservices.traefik.containo.us
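  3. Optionally, re-run the command from step 1 to confirm that no Traefik CRDs remain:

    kubectl get crds | grep -E 'traefik'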

Find more information