K8SSAND-559 ⁃ PVC can be deleted mistakenly when reading stale deletionTimestamp information #118
Hi @jdonenine, I have issued a PR #122 (but it seems there is some transient error in the build?). Besides the fix, I was asked in the original issue (in the old repo) how to reproduce the bug reliably. We have recently been building a tool, https://github.com/sieve-project/sieve, to reliably detect and reproduce bugs like this one in various k8s controllers. This bug can be reproduced automatically with just a few commands using our tool, and you can also use the tool to verify the patch.
If the checker passes, please build the Kubernetes and controller images using the commands below, as our tool needs to set up a kind cluster to reproduce the bug:
Finally, run
and the bug is successfully reproduced if you can see the message below.
For more information, please refer to https://github.com/sieve-project/sieve#sieve-testing-datacenter-infrastructures-using-partial-histories and https://github.com/sieve-project/sieve/blob/main/docs/reprod.md#k8ssandra-cass-operator-originally-datastax-cass-operator-412. Please let me know if you are interested in using or deploying our tool, or if you encounter any problems when reproducing the bugs with it.
This issue was originally reported in datastax/cass-operator #412 by srteam2020.

Description
We find that in an HA k8s cluster, cass-operator can mistakenly delete the PVCs of the Cassandra pods after cass-operator experiences a restart. After diagnosis and inspection, we find it is caused by potential staleness in some of the apiservers.
More concretely, if cass-operator receives a stale update event from an apiserver saying "CassandraDatacenter has a non-nil deletion timestamp", the controller will delete the PVCs returned by `listPVCs`, and `listPVCs` lists PVCs using the name(space) instead of the UID. Note that the stale event actually comes from the deletion of a previous CassandraDatacenter sharing the same name (but with a different UID) as the currently running one, so the PVCs of the existing CassandraDatacenter will be listed and deleted mistakenly. One potential approach to fix this is to label each PVC with the UID of the CassandraDatacenter when creating it, and to list the PVCs using the UID of the CassandraDatacenter in `listPVCs`, to ensure that we always delete the right PVCs.

Reproduction
We list concrete reproduction steps in an HA cluster below:

1. Create `cdc`. PVCs will be created in the cluster.
2. Delete `cdc`. Apiserver1 will send the update events with a non-nil deletion timestamp to the controller, and the controller will trigger `deletePVCs()` to delete the related PVCs. Meanwhile, apiserver2 is partitioned, so its watch cache stops at the moment that `cdc` is tagged with a deletion timestamp.
3. Create `cdc` again. Now the CassandraDatacenter comes back with a different UID, and so does its PVC. However, apiserver2 still holds the stale view that `cdc` has a non-nil deletion timestamp and is about to be deleted.
4. Restart the controller so that it talks to the stale apiserver2. Since the stale view says `cdc` has a deletion timestamp, the controller lists all the PVCs belonging to the currently running `cdc` (as mentioned above) and deletes them all.

Fix
We are willing to help fix this bug by issuing a PR.
As mentioned above, the bug can be avoided by tagging each PVC with the UID of its CassandraDatacenter and listing PVCs by UID. Each CassandraDatacenter will always have a different UID, even when it reuses the same name, so the PVCs belonging to the newly created CassandraDatacenter will not be deleted by stale events from the old one.
┆Issue is synchronized with this Jira Bug by Unito
┆friendlyId: K8SSAND-559
┆priority: Medium