ROSA: RosaMachinePool and MachinePool CRs stuck at de-provision ROSA-HCP #4936
Comments
This issue is currently awaiting triage. If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can you share which finalizer specifically?
The RosaMachinePool finalizer, then the MachinePool finalizer.
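For reference, the remaining finalizers on the stuck CRs can be inspected with a JSONPath query. A minimal sketch; the resource and namespace names are placeholders, and the cluster-dependent kubectl calls are shown commented out:

```shell
# JSONPath expression that selects the finalizer list on a CR's metadata.
JP='{.metadata.finalizers}'
echo "$JP"

# With access to the installer cluster (names are hypothetical placeholders):
# kubectl get rosamachinepool <pool-name> -n <namespace> -o jsonpath="$JP"
# kubectl get machinepool <pool-name> -n <namespace> -o jsonpath="$JP"
```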
/kind bug
What steps did you take and what happened:
We use a GitOps workflow to provision ROSA-HCP. The required CRs (RosaControlPlane, RosaCluster, RosaMachinePool, Cluster, and MachinePool) are stored in a Git repo, and ArgoCD is used to sync them to the installer cluster.
To de-provision the ROSA-HCP:
1- Delete all the required CRs from the Git repo.
2- Let ArgoCD sync the Git repo state and delete all the required CRs from the installer cluster.
3- The ROSA-HCP starts the uninstall process.
4- The RosaControlPlane and RosaCluster CRs are deleted; however, the RosaMachinePool, MachinePool, and Cluster CRs are stuck and never get deleted.
5- Checking the AWS console and the Red Hat console, all the ROSA-HCP and AWS resources are destroyed.
6- After manually removing the finalizers from the RosaMachinePool and MachinePool CRs, the Cluster CR is deleted.
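The manual cleanup in step 6 can be sketched as a JSON merge patch that clears the finalizer list. This is an assumed workaround sketch, not part of the original report; the resource and namespace names are placeholders:

```shell
# JSON merge patch that clears the finalizers so Kubernetes can
# garbage-collect the stuck CRs.
PATCH='{"metadata":{"finalizers":[]}}'
echo "$PATCH"

# Apply it (requires access to the installer cluster; names are hypothetical):
# kubectl patch rosamachinepool <pool-name> -n <namespace> --type merge -p "$PATCH"
# kubectl patch machinepool <pool-name> -n <namespace> --type merge -p "$PATCH"
```

Note that removing a finalizer by hand skips whatever cleanup the controller intended to do, so it should only be used once the external AWS/ROSA resources are confirmed gone, as in step 5.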
What did you expect to happen:
We expect all the CRs to be deleted when the cluster is uninstalled.
Anything else you would like to add:
The logs of the capa-controller-manager deployment during the uninstall process, the logs after the AWS and ROSA-HCP resources were destroyed, and the logs of the capi-controller-manager deployment during the uninstall process are attached:
capi-logs.txt
capa-logs.txt
rosa-hcp-2.txt
Environment:
Cluster-api-provider-aws version: v2.4.2
Kubernetes version (use kubectl version): v1.27.3
OS (e.g. from /etc/os-release):