Known issues identify problems that might prevent you from using this release of the product successfully.
The following known issues affect the current release:
Restore of an app results in PV size larger than original PV
If you resize a persistent volume after creating a backup and then restore from that backup, the restored persistent volume matches the new PV size rather than the size recorded in the backup.
App clones fail using a specific version of PostgreSQL
App clones within the same cluster consistently fail with the Bitnami PostgreSQL 11.5.0 chart. To clone successfully, use an earlier or later version of the chart.
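One hedged way to avoid the affected chart is to pin the chart version explicitly at install time; the release name my-postgres and version 11.6.0 below are illustrative placeholders, not a tested combination.

```shell
# Add the Bitnami repo and install PostgreSQL with an explicit chart
# version other than 11.5.0 (release name and version are illustrative).
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql --version 11.6.0
```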
App clones fail when using Service Account level OCP Security Context Constraints (SCC)
An application clone might fail if the original security context constraints are configured at the service account level within the namespace on the OpenShift Container Platform cluster. When the application clone fails, it appears in the Managed Applications area in Astra Control Center with the status Removed. See the knowledgebase article for more information.
App backups and snapshots fail if the volumesnapshotclass is added after a cluster is managed
Backups and snapshots fail with a UI 500 error in this scenario. As a workaround, refresh the app list.
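As a sketch, you can confirm that a VolumeSnapshotClass already exists before bringing the cluster under management (assumes kubectl access to the app cluster):

```shell
# List VolumeSnapshotClass objects; an empty result means snapshots and
# backups will fail until one is created and the app list is refreshed.
kubectl get volumesnapshotclass
```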
App clones fail after an application is deployed with a set storage class
After an application is deployed with a storage class explicitly set (for example, helm install … --set global.storageClass=netapp-cvs-perf-extreme), subsequent attempts to clone the application require that the target cluster have the originally specified storage class.
Cloning an application with an explicitly set storage class to a cluster that does not have the same storage class will fail. There are no recovery steps in this scenario.
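Because there are no recovery steps, a quick check on the target cluster before cloning avoids the failure; the storage class name below mirrors the example above and may differ in your environment:

```shell
# Fails with NotFound if the target cluster lacks the storage class
# the app was originally installed with.
kubectl get storageclass netapp-cvs-perf-extreme
```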
Managing a cluster with Astra Control Center fails when default kubeconfig file contains more than one context
You cannot use a kubeconfig with more than one cluster and context in it. See the knowledgebase article for more information.
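A hedged workaround is to extract the single context you need into its own kubeconfig file before adding the cluster; the context name my-context and the output path below are placeholders:

```shell
# Write a kubeconfig containing only the named context and its
# embedded credentials (--flatten inlines certificate data).
kubectl config view --minify --flatten --context=my-context \
  > my-context-kubeconfig.yaml
```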
App data management operations fail with Internal Service Error (500) when Astra Trident is offline
If Astra Trident on an app cluster goes offline and is later brought back online, and app data management operations then return 500 internal service errors, restart all of the Kubernetes nodes in the app cluster to restore functionality.
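A node restart can be sketched as a drain/reboot/uncordon cycle, repeated for each node; the node name my-node is a placeholder, and the reboot itself happens through your infrastructure tooling:

```shell
# Safely evict workloads from the node before rebooting it.
kubectl drain my-node --ignore-daemonsets --delete-emptydir-data
# ...reboot the node out-of-band (cloud console, IPMI, etc.)...
# Return the node to service once it is back up.
kubectl uncordon my-node
```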
Snapshots might fail with snapshot controller version 4.2.0
When you use Kubernetes snapshot-controller (also known as external-snapshotter) version 4.2.0 with Kubernetes 1.20 or 1.21, snapshots can eventually begin to fail. To prevent this, use a different supported version of external-snapshotter, such as version 4.2.1, with Kubernetes versions 1.20 or 1.21.
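To check which snapshot-controller version is actually running, you can inspect the deployed image tag; the namespace and deployment name below are common defaults and vary by distribution:

```shell
# Print the running snapshot-controller container image, whose tag
# indicates the external-snapshotter version in use.
kubectl get deployment snapshot-controller -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```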
1. Run a POST call to add an updated kubeconfig file to the /credentials endpoint and retrieve the assigned id from the response body.
2. Run a PUT call to the /clusters endpoint using the appropriate cluster ID and set the id value from the previous step.
After you complete these steps, the credential associated with the cluster is updated and the cluster should reconnect and update its state to
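The two calls above can be sketched with curl; the host, account ID, token, payload file, cluster ID, and the exact endpoint paths and payload field names are assumptions to adapt to your Astra Control API version:

```shell
# Step 1: POST the updated kubeconfig credential and note the id
# returned in the response body (paths and fields are illustrative).
curl -s -X POST "https://$ACC_HOST/accounts/$ACCOUNT_ID/core/v1/credentials" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @updated-credential.json

# Step 2: PUT the cluster resource, pointing it at the new credential id.
curl -s -X PUT "https://$ACC_HOST/accounts/$ACCOUNT_ID/topology/v1/clusters/$CLUSTER_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"credentialID": "<id-from-step-1>"}'
```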