P1 total data loss bug in google_storage_bucket lifecycle rule for days_since_noncurrent_time #17990
Comments
Hi @philip-harvey! According to your configuration and the description in the official documentation of the property
I'm not following the logic there. If you set this via the console, it correctly creates a rule that deletes objects 0 days after they become noncurrent, which is the correct and expected behavior. If you set this via Terraform, however, it creates a totally different rule that deletes ALL objects in the bucket, both current and noncurrent. The documentation doesn't call out this behavior anywhere that I can find, it's inconsistent with how this works in the GUI, and it causes total data loss.
This could be a documentation update of the description of this field (
I'm still not following this. The documentation says: "This condition is satisfied when an object has been noncurrent for more than the specified number of days." The bug is that Terraform creates a lifecycle rule that does not do this: it creates a rule that deletes ALL objects, not just noncurrent ones. As I said previously, this works correctly outside of Terraform, and it's a very severe bug that causes total data loss.
This is not a documentation issue; this is a bug, and a very major one.
I think this issue stems from a provider bug that has already been fixed: #14044.
#14044 looks like a similar bug, but this bug is not yet fixed in the provider. |
Actually, it looks like #14044 is not fixed either.
It is! You can try putting no_age = true in the lifecycle rule condition block; it won't add the unexpected condition of
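For reference, a minimal sketch of that workaround, assuming a versioned bucket; the bucket name and location are placeholders, and the field names follow the `google` provider's `google_storage_bucket` schema at the v5.x series mentioned in this report:

```hcl
resource "google_storage_bucket" "example" {
  name     = "example-bucket" # placeholder
  location = "US"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      days_since_noncurrent_time = 0
      # Workaround: keep the provider from also sending age = 0
      no_age = true
    }
  }
}
```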
This shouldn't be the case: objects are determined to be noncurrent when they are replaced by a newer version, per https://cloud.google.com/storage/docs/object-versioning#intro. Live objects should never be deleted by this rule (at least, reading this as a caller of the API). I suspect this is a side effect of #14044 w/o

Our effective rule as a result is:
@philip-harvey do you have the REST response from a Get call on the bucket, specifically the
Thanks! That's much worse; we're not sending

(FYI: I edited your comment to add a code fence, to make it easier to read.)
I wondered whether, if you send both age = 0 and days_since_noncurrent_time = 0, the API just ignores the days_since_noncurrent_time = 0 part since it's redundant, but I haven't verified this.
That works fine:
@philip-harvey we're shipping a docs change immediately while we figure out the longer-term fix here (@NickElliot and @kautikdk are discussing offline). Mind giving GoogleCloudPlatform/magic-modules#10626 a once-over to see if you think it would have helped in your case / make any suggestions if not? |
Community Note
Terraform Version
Terraform v1.8.2
google v5.26.0
Affected Resource(s)
google_storage_bucket
Terraform Configuration
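The original configuration block did not survive formatting. Based on the expected behavior described below, a minimal configuration matching this report would look roughly like the following sketch (bucket name and location are placeholders):

```hcl
resource "google_storage_bucket" "example" {
  name     = "example-bucket" # placeholder
  location = "US"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    action {
      type = "Delete"
    }
    # Intended: delete only noncurrent objects 0 days after they
    # become noncurrent. Reported: the provider also sends age = 0,
    # so ALL objects in the bucket are deleted.
    condition {
      days_since_noncurrent_time = 0
    }
  }
}
```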
Debug Output
No response
Expected Behavior
This should create a lifecycle rule that deletes noncurrent objects after 0 days.
Actual Behavior
Creates a lifecycle rule that deletes ALL objects in the bucket after 0 days, causing total data loss in the bucket.
Steps to reproduce
terraform apply
Important Factoids
This is a P1 issue causing total data loss
References
No response
b/339089840