S3 bucket list: max_keys: 0 is ignored #1953
Comments
Any other info needed for this issue? The bug has been around for over six months and drives up the cost of AWS bills.
Just in case it's not clear in the links above, there's already a PR for this: #1954
Summary
The max_keys parameter of the s3_object module is ignored when set to zero. This causes the AWS API default to take effect, which for the max-keys URI request parameter is 1000. See the AWS API documentation for S3 object listing: ListObjects, ListObjectsV2.

Note that by ignoring max_keys: 0, Ansible will retrieve page after page of results, up to the last object in the bucket, possibly leading to considerable costs for the AWS account owner and consuming considerable CPU time and bandwidth.

Using one of the documented examples and setting max_keys: 0, the resulting AWS API request lacks the expected &max-keys=0 URI request parameter in the GET line.

This issue seems to be present in the following releases:
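A plausible cause (an assumption on my part, not confirmed against the module source) is a truthiness check when building the request parameters, which treats 0 the same as unset. A minimal sketch of that pattern and its fix; the function and parameter names here are illustrative, not the module's actual code:

```python
def build_list_params_buggy(bucket, max_keys=None):
    """Hypothetical illustration of the suspected bug."""
    params = {"Bucket": bucket}
    if max_keys:  # BUG: 0 is falsy, so max_keys: 0 is silently dropped
        params["MaxKeys"] = max_keys
    return params

def build_list_params_fixed(bucket, max_keys=None):
    """Explicit None check preserves a legitimate max_keys of 0."""
    params = {"Bucket": bucket}
    if max_keys is not None:
        params["MaxKeys"] = max_keys
    return params

print(build_list_params_buggy("my-bucket", max_keys=0))
# {'Bucket': 'my-bucket'}          <- MaxKeys missing, API default of 1000 applies
print(build_list_params_fixed("my-bucket", max_keys=0))
# {'Bucket': 'my-bucket', 'MaxKeys': 0}
```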
Issue Type
Bug Report
Component Name
modules.plugin.s3_object
Ansible Version
Collection Versions
AWS SDK versions
Configuration
OS / Environment
No response
Steps to Reproduce
Expected Results
I expected max-keys: 0 to be respected in the API request. It was ignored.
Actual Results
Ansible execution hung as the bucket I used never completed full pagination before the process was killed.
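To see why execution hangs and costs grow, consider the pagination behavior: with max_keys dropped, the listing falls back to the API default page size of 1000 and loops through the whole bucket. A rough cost model (a sketch under stated assumptions; the function name is hypothetical):

```python
import math

# API default page size for ListObjects/ListObjectsV2 when max-keys is absent.
DEFAULT_PAGE_SIZE = 1000

def list_requests_needed(total_objects, max_keys=None):
    """Approximate number of List API requests issued for a bucket.

    Assumes an honored max_keys caps the listing at a single bounded
    request, while an ignored one forces full pagination.
    """
    if max_keys is not None and max_keys >= 0:
        return 1
    return max(1, math.ceil(total_objects / DEFAULT_PAGE_SIZE))

# A bucket with 50 million objects means ~50,000 billed List requests
# when max_keys: 0 is ignored, versus 1 when it is honored.
print(list_requests_needed(50_000_000))              # 50000
print(list_requests_needed(50_000_000, max_keys=0))  # 1
```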