
Allow modification of configs if the first pod of rack fails #583

Open

burmanm opened this issue Oct 11, 2023 · 1 comment
Labels: assess (Issues in the state 'assess'), enhancement (New feature or request)

Comments

@burmanm (Contributor) commented Oct 11, 2023

What is missing?

Currently, if a user deploys something broken that causes the first pod of a rack to not start, we require the user to manually set the "forceUpgradeRacks" property, which then overrides the current configuration even though the datacenter never became ready after the previous change.
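For context, the manual workaround today looks roughly like the following. This is only a sketch, assuming the CassandraDatacenter API's ForceUpgradeRacks field and a controller-runtime client; the namespace, datacenter and rack names are invented for the example (most users would do the equivalent with kubectl edit or patch):

```go
// Hypothetical sketch of today's manual workaround; namespace, datacenter
// and rack names are illustrative only.
package example

import (
	"context"

	cassdcapi "github.com/k8ssandra/cass-operator/apis/cassandra/v1beta1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// forceUpgradeRack marks a single rack for a forced config upgrade so the
// operator pushes the corrected spec even though the datacenter never became
// Ready after the broken change.
func forceUpgradeRack(ctx context.Context, c client.Client, rack string) error {
	var dc cassdcapi.CassandraDatacenter
	if err := c.Get(ctx, types.NamespacedName{Namespace: "cass-operator", Name: "dc1"}, &dc); err != nil {
		return err
	}
	patch := client.MergeFrom(dc.DeepCopy())
	dc.Spec.ForceUpgradeRacks = append(dc.Spec.ForceUpgradeRacks, rack)
	return c.Patch(ctx, &dc, patch)
}
```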

This process feels like something we could automate. If we're still processing the first pod (but the rest are up), we should probably allow the user to fix the configuration and then apply it to make the cluster healthy again. In other words, try to detect when "forceUpgradeRacks" is going to be required and simply apply it in our internal process.
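Something along these lines is what I mean by detecting it internally. This is only a rough sketch, not actual operator code; the "stuck" heuristic and the pod ordering are simplified assumptions:

```go
// Rough sketch of the proposed detection: decide whether an updated config may
// be applied even though the datacenter is not Ready, because only the first
// pod of the rack is stuck on the previous (broken) change.
package example

import (
	"sort"

	corev1 "k8s.io/api/core/v1"
)

// podIsStuck treats crash-looping or unpullable containers as a sign the
// previous config can never converge on its own.
func podIsStuck(pod corev1.Pod) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			switch w.Reason {
			case "CrashLoopBackOff", "ImagePullBackOff", "ErrImagePull", "CreateContainerConfigError":
				return true
			}
		}
	}
	return false
}

func podIsReady(pod corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// canAutoForceUpgrade returns true when the first pod of the rack is stuck
// while the remaining pods are Ready, i.e. the situation where
// forceUpgradeRacks would otherwise be required. Ordering by name is a
// simplification of resolving the StatefulSet ordinal.
func canAutoForceUpgrade(rackPods []corev1.Pod) bool {
	if len(rackPods) == 0 {
		return false
	}
	sort.Slice(rackPods, func(i, j int) bool { return rackPods[i].Name < rackPods[j].Name })
	if !podIsStuck(rackPods[0]) {
		return false
	}
	for _, pod := range rackPods[1:] {
		if !podIsReady(pod) {
			return false
		}
	}
	return true
}
```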

Even better would be to roll back to the previous setting, but we don't want to downgrade the user-set CRDs ourselves, as that could interfere with tools like Argo.

Why is this needed?

Automatic recovery is not easy if it requires the user to modify the CRD, after which cass-operator modifies the CRD again so the setting doesn't apply the next time. We want to simplify the user experience by detecting this issue automatically, as these incorrect settings do appear every now and then (user mistakes happen).

@adejanovski (Contributor) commented:
> If we're still processing the first pod (but rest are up)

Why would we need the rest of the pods to be up?
I'm thinking that if all pods are down, then it's OK to apply the upgrade as well. I ran into a case where the serverImage provided was wrong and the image couldn't be pulled, which cannot be fixed unless you force the rack upgrade.
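For that scenario, the same sketch could be extended: if every pod in the rack is stuck (for example on ImagePullBackOff because the serverImage can't be pulled), applying the corrected config should also be safe, since there is nothing healthy left to disrupt. podIsStuck here is the hypothetical helper from the earlier sketch:

```go
// Extension of the earlier sketch for the all-pods-down case (e.g. a wrong
// serverImage that can never be pulled).
package example

import corev1 "k8s.io/api/core/v1"

func allPodsStuck(rackPods []corev1.Pod) bool {
	if len(rackPods) == 0 {
		return false
	}
	for _, pod := range rackPods {
		if !podIsStuck(pod) {
			return false
		}
	}
	return true
}
```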
