
[FEATURE] Automatic ratelimit adjustment #695

Open · wants to merge 1 commit into base: master
Conversation

aristosMiliaressis (Contributor)
Description

Adds the option -ar to enable automatic rate limit adjustment based on the number of 429 responses received.
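As a rough illustration of the approach (a Python sketch only; ffuf itself is written in Go, and the 10% threshold, halving step, and window length here are made-up values, not the PR's actual parameters):

```python
import time

class AdaptiveRateLimiter:
    """Illustrative 429-driven rate adjustment, not ffuf's implementation."""

    def __init__(self, initial_rps=120, min_rps=1, window=5.0):
        self.rps = initial_rps          # current requests-per-second cap
        self.min_rps = min_rps          # never throttle below this floor
        self.window = window            # seconds between adjustment checks
        self._count_429 = 0             # 429 responses in the current window
        self._count_total = 0           # all responses in the current window
        self._window_start = time.monotonic()

    def record(self, status_code):
        """Feed each response status into the limiter."""
        self._count_total += 1
        if status_code == 429:
            self._count_429 += 1
        self._maybe_adjust()

    def _maybe_adjust(self):
        if time.monotonic() - self._window_start < self.window:
            return
        if self._count_total and self._count_429 / self._count_total > 0.1:
            # More than 10% of responses were 429: halve the rate cap.
            self.rps = max(self.min_rps, self.rps // 2)
        self._count_429 = self._count_total = 0
        self._window_start = time.monotonic()
```

The fuzzer's request loop would then sleep just enough to stay under `rps`, so a burst of 429s quickly drags the effective rate down, matching the 120 → 9 req/s drop shown below.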

I used the following script to test it:

#!/usr/bin/env python
# encoding: utf-8
import time
from flask import Flask, redirect

app = Flask(__name__)
clients = {}  # request counts, bucketed per second

@app.route('/admin', methods=['GET'])
def admin():
	return redirect("/admin/", code=302)

@app.route('/admin/<path>', methods=['GET'])
def nested_admin(path):
	# Bucket requests by the current second; allow at most 10 per bucket.
	bucket = str(int(time.time()))
	clients[bucket] = clients.get(bucket, 0) + 1

	if clients[bucket] > 10:
		return "Too Fast", 429

	return "Not Found", 404

app.run(debug=True)

and the requests per second dropped from 120 to 9 once the adjustment kicked in on the /admin/ path.
[Screenshot 2023-06-16 103600]

@aristosMiliaressis aristosMiliaressis changed the title [FEATURE] Automatic ratelimit adjustments [FEATURE] Automatic ratelimit adjustment Jun 16, 2023
joohoi (Member) commented Jul 3, 2023

Thanks for the PR!

This looks good, but I have some additional ideas that could make it awesome. Some of them may require some larger changes though:

  • In addition to looking at HTTP response status codes, there is an IETF draft (on track to become an RFC) for a standardized rate limit header, plus the X-Rate-Limit family currently used in the wild (the X- prefix marking it as non-standard). We could use these to "predict" the exact rate limit and pace ourselves accordingly.
  • We could create a "to-retry-later" cache of payloads that got a 429 response, to be retried after the wordlist(s) have been exhausted. For this I suggest using an additional structure, or even extending the InputProvider interface, rather than trying to add those entries back into the InputProvider (that gets really hairy really fast, especially with the multiple-wordlist algorithm we use at the moment).
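The first idea could look something like this, sketched in Python for brevity (the RateLimit-* names follow the IETF draft and X-RateLimit-* is the common non-standard variant; the dict-based interface and function name here are illustrative, not ffuf's API):

```python
def parse_rate_limit_headers(headers):
    """Extract a server-advertised request budget from response headers.

    Checks the IETF draft 'RateLimit-*' fields first, then the widely used
    non-standard 'X-RateLimit-*' variants. Returns (remaining_requests,
    reset_seconds) or None if no usable headers are present.
    """
    for prefix in ("RateLimit", "X-RateLimit"):
        remaining = headers.get(f"{prefix}-Remaining")
        reset = headers.get(f"{prefix}-Reset")
        if remaining is not None and reset is not None:
            try:
                return int(remaining), int(reset)
            except ValueError:
                continue  # malformed value; try the next header family
    return None
```

With, say, 40 requests remaining and a 10-second reset window, the fuzzer could proactively pace itself at 4 req/s instead of waiting for 429s to arrive.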
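And the second idea, as a minimal sketch (again in Python; the class and method names are hypothetical, and a real version would extend the InputProvider interface in Go as suggested above):

```python
from collections import deque

class RetryQueue:
    """Hypothetical 'to-retry-later' cache for payloads that drew a 429.

    Kept as a separate structure rather than pushing entries back into the
    input provider, so deferred payloads are replayed only after the main
    wordlist run has been exhausted.
    """

    def __init__(self):
        self._pending = deque()

    def defer(self, payload):
        """Record a payload whose request was rejected with 429."""
        self._pending.append(payload)

    def drain(self):
        """Yield deferred payloads once the wordlist(s) are exhausted."""
        while self._pending:
            yield self._pending.popleft()
```

Keeping this outside the input provider sidesteps the multi-wordlist bookkeeping entirely: the main loop finishes as usual, then makes one extra pass over the drained queue at the (possibly reduced) rate.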

What do you think?
