What is missing?
Currently, if a user deploys something broken that prevents the first pod of a rack from starting, we require the user to manually set the "forceUpgradeRacks" property, which then overrides the current configuration even though the datacenter is not Ready after the previous change.
This process feels like something we could automate. If we're still processing the first pod (but the rest are up), we should let the user fix the configuration and then apply it ourselves to make the cluster healthy again. In other words, try to detect when "forceUpgradeRacks" is going to be required and simply apply that behavior in our internal process.
Even better would be rolling back to the previous settings, but we don't want to downgrade user-set CRDs ourselves, as that could interfere with tools like Argo.
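For reference, the manual workaround today looks roughly like this (a sketch based on the cass-operator CassandraDatacenter CRD; the datacenter and rack names are illustrative):

```yaml
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  # ... existing cluster settings, with the corrected configuration ...
  forceUpgradeRacks:
    - rack1   # rack whose first pod is stuck; forces the rollout even though the dc is not Ready
```

Note that after the forced rollout, the user (or the operator) still has to clear this field again so it doesn't keep overriding future upgrades.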
Why is this needed?
Automatic recovery is not easy if it requires modifying the CRD, because cass-operator would then also have to modify the CRD afterwards to ensure the setting doesn't apply again on the next reconcile. We want to simplify the user experience by detecting this situation automatically, as these incorrect settings do show up every now and then (user mistakes happen).
"If we're still processing the first pod (but rest are up)"
Why would we need the rest of the pods to be up?
I'm thinking that if all pods are down, it's also OK to apply the upgrade. I ran into this case when the serverImage provided was wrong and the image couldn't be pulled, which can't be fixed unless you force the rack upgrade.
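Putting the issue and this comment together, the detection heuristic could be sketched like this (hypothetical names throughout — `RackState` and `shouldAutoForceUpgrade` are illustrative, not actual cass-operator types):

```go
package main

import "fmt"

// RackState is a hypothetical summary of one rack's rollout status.
type RackState struct {
	DesiredPods   int  // pods the rack should run
	ReadyPods     int  // pods currently Ready
	FirstPodStuck bool // first pod failing to start (e.g. CrashLoopBackOff, ImagePullBackOff)
	SpecChanged   bool // the user has pushed a corrected spec since the failure
}

// shouldAutoForceUpgrade returns true when the operator could behave as if
// forceUpgradeRacks had been set: either only the first pod is broken while
// the rest are up, or every pod is already down (nothing left to protect).
func shouldAutoForceUpgrade(r RackState) bool {
	if !r.SpecChanged {
		return false // nothing new to roll out; forcing would change nothing
	}
	allDown := r.ReadyPods == 0
	onlyFirstBroken := r.FirstPodStuck && r.ReadyPods == r.DesiredPods-1
	return allDown || onlyFirstBroken
}

func main() {
	// First pod stuck on a bad config, rest of the rack healthy.
	fmt.Println(shouldAutoForceUpgrade(RackState{DesiredPods: 3, ReadyPods: 2, FirstPodStuck: true, SpecChanged: true}))
	// Whole rack down, e.g. an unpullable serverImage.
	fmt.Println(shouldAutoForceUpgrade(RackState{DesiredPods: 3, ReadyPods: 0, FirstPodStuck: true, SpecChanged: true}))
	// No new spec pushed yet: keep waiting for the user's fix.
	fmt.Println(shouldAutoForceUpgrade(RackState{DesiredPods: 3, ReadyPods: 2, FirstPodStuck: true, SpecChanged: false}))
}
```

The `SpecChanged` guard captures the "allow user to fix the configuration first" part: the operator only forces the rollout once there is actually a new spec to apply, rather than endlessly retrying the broken one.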