Known issues
Known issues identify problems that might prevent you from using this release of the product successfully.
The following known issues affect the current release:
- Restore of an app results in PV size larger than original PV
- App clones fail using a specific version of PostgreSQL
- App clones fail when using Service Account level OCP Security Context Constraints (SCC)
- App backups and snapshots fail if the volumesnapshotclass is added after a cluster is managed
- App clones fail after an application is deployed with a set storage class
- Managing a cluster with Astra Control Center fails when default kubeconfig file contains more than one context
- Some pods fail to start after upgrading to Astra Control Center 23.04
- Some pods show an error state after the purge stage of upgrade from 23.04 to 23.04.2
- A monitoring pod can crash in Istio environments
- Managed clusters do not appear in NetApp Cloud Insights when connecting through a proxy
- App data management operations fail with Internal Service Error (500) when Astra Trident is offline
Restore of an app results in PV size larger than original PV
If you resize a persistent volume after creating a backup and then restore from that backup, the restored persistent volume uses the new, larger size of the PV instead of the size recorded in the backup.
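For reference, a PV is commonly resized by expanding its PVC, as in the following sketch (the PVC name, namespace, and size are hypothetical, and the storage class must support volume expansion):
kubectl patch pvc my-app-data -n my-app-namespace -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
A backup taken before an expansion like this restores at the expanded size.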
App clones fail using a specific version of PostgreSQL
App clones within the same cluster consistently fail with the Bitnami PostgreSQL 11.5.0 chart. To clone successfully, use an earlier or later version of the chart.
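For example, a Helm install that pins a different chart version might look like the following sketch (the release name and version are only illustrative, and this assumes the Bitnami repository is already added):
helm install my-postgresql bitnami/postgresql --version 11.6.0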
App clones fail when using Service Account level OCP Security Context Constraints (SCC)
An application clone might fail if the original security context constraints are configured at the service account level within the namespace on the OpenShift Container Platform cluster. When the application clone fails, it appears in the Managed Applications area in Astra Control Center with the status Removed. See the knowledgebase article for more information.
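For context, an SCC is bound at the service account level with a command like the following sketch (the SCC, service account, and namespace names are purely illustrative):
oc adm policy add-scc-to-user anyuid -z my-service-account -n my-app-namespace
Clones of applications running under a service-account-level SCC binding like this are the ones affected by this issue.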
App backups and snapshots fail if the volumesnapshotclass is added after a cluster is managed
Backups and snapshots fail with a UI 500 error in this scenario. As a workaround, refresh the app list.
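To avoid the issue, you can confirm that a VolumeSnapshotClass exists before managing the cluster; a minimal check, assuming the CSI snapshot CRDs are installed on the cluster:
kubectl get volumesnapshotclass
If the command returns no resources, add the volumesnapshotclass before managing the cluster with Astra Control Center.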
App clones fail after an application is deployed with a set storage class
After an application is deployed with a storage class explicitly set (for example, helm install … --set global.storageClass=netapp-cvs-perf-extreme), subsequent attempts to clone the application require that the target cluster have the originally specified storage class.
Cloning an application with an explicitly set storage class to a cluster that does not have the same storage class will fail. There are no recovery steps in this scenario.
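Before cloning such an app, you can check whether the target cluster offers the original storage class; a minimal check (the storage class name above is only an example of the pattern):
kubectl get storageclass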
Managing a cluster with Astra Control Center fails when default kubeconfig file contains more than one context
You cannot use a kubeconfig with more than one cluster and context in it. See the knowledgebase article for more information.
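As a possible workaround, a single-context kubeconfig can be extracted from a multi-context file with standard kubectl options; a minimal sketch, where <context-name> is the context of the cluster you want to manage:
kubectl config view --minify --flatten --context=<context-name> > single-context-kubeconfig.yaml
The resulting self-contained file includes only the selected cluster and context.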
Some pods fail to start after upgrading to Astra Control Center 23.04
After you upgrade to Astra Control Center 23.04, some pods might fail to start. As a workaround, manually restart the affected pods using the following steps:
- Find the affected pods, replacing <namespace> with the current namespace:
kubectl get pods -n <namespace> | grep au-pod
The affected pods will have results similar to the following:
pcloud-astra-control-center-au-pod-0 0/1 CreateContainerConfigError 0 13s
- Restart each affected pod, replacing <namespace> with the current namespace:
kubectl delete pod pcloud-astra-control-center-au-pod-0 -n <namespace>
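After the restart, you can confirm that the affected pods come up cleanly; a minimal check, again replacing <namespace> with the current namespace:
kubectl get pods -n <namespace> | grep au-pod
The pods should report a Running status instead of CreateContainerConfigError.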
Some pods show an error state after the purge stage of upgrade from 23.04 to 23.04.2
After you upgrade to Astra Control Center 23.04.2, some pods might show an error in logs related to task-service-task-purge:
kubectl get all -n netapp-acc -o wide | grep purge
pod/task-service-task-purge-28282828-ab1cd   0/1   Error   0   48m   10.111.0.111   openshift-clstr-ol-07-zwlj8-worker-jhp2b   <none>   <none>
This error state means that a cleanup step did not properly execute. The overall upgrade to 23.04.2 is successful. Run the following command to clean up the task and remove the error state:
kubectl delete job task-service-task-purge-[system-generated task ID] -n <netapp-acc or custom namespace>
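To confirm the cleanup, you can run the earlier check again (substituting your custom namespace for netapp-acc if needed); the errored purge pod should no longer be listed:
kubectl get all -n netapp-acc -o wide | grep purge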
A monitoring pod can crash in Istio environments
If you pair Astra Control Center with Cloud Insights in an Istio environment, the telegraf-rs
pod can crash. As a workaround, perform the following steps:
- Find the crashed pod:
kubectl -n netapp-monitoring get pod | grep Error
You should see output similar to the following:
NAME                READY   STATUS   RESTARTS      AGE
telegraf-rs-fhhrh   1/2     Error    2 (26s ago)   32s
- Restart the crashed pod, replacing <pod_name_from_output> with the name of the affected pod:
kubectl -n netapp-monitoring delete pod <pod_name_from_output>
You should see output similar to the following:
pod "telegraf-rs-fhhrh" deleted
- Verify that the pod has restarted and is not in an Error state:
kubectl -n netapp-monitoring get pod
You should see output similar to the following:
NAME                READY   STATUS    RESTARTS   AGE
telegraf-rs-rrnsb   2/2     Running   0          11s
Managed clusters do not appear in NetApp Cloud Insights when connecting through a proxy
When Astra Control Center connects to NetApp Cloud Insights through a proxy, managed clusters might not appear in Cloud Insights. As a workaround, run the following commands on each managed cluster:
kubectl get cm telegraf-conf -o yaml -n netapp-monitoring | sed '/\[\[outputs.http\]\]/c\ [[outputs.http]]\n use_system_proxy = true' | kubectl replace -f -
kubectl get cm telegraf-conf-rs -o yaml -n netapp-monitoring | sed '/\[\[outputs.http\]\]/c\ [[outputs.http]]\n use_system_proxy = true' | kubectl replace -f -
kubectl get pods -n netapp-monitoring --no-headers=true | grep 'telegraf-ds\|telegraf-rs' | awk '{print $1}' | xargs kubectl delete -n netapp-monitoring pod
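You can verify that the change took effect by checking the updated ConfigMaps; a minimal check, assuming the default netapp-monitoring namespace:
kubectl get cm telegraf-conf telegraf-conf-rs -n netapp-monitoring -o yaml | grep use_system_proxy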
App data management operations fail with Internal Service Error (500) when Astra Trident is offline
If Astra Trident on an app cluster goes offline (and is later brought back online) and you encounter 500 internal service errors when attempting app data management, restart all of the Kubernetes nodes in the app cluster to restore functionality.
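One common way to restart nodes with minimal disruption is to drain, reboot, and uncordon each node in turn; a minimal sketch using standard kubectl commands (the node name is a placeholder, and the drain flags may need adjusting for your workloads):
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# reboot the node out of band, then return it to scheduling
kubectl uncordon <node-name>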