
Allow throttle queues #146

Open
ixti opened this issue May 3, 2023 · 4 comments
Labels
enhancement, help wanted

Comments

ixti (Owner) commented May 3, 2023

It's possible to throttle a queue by using a shared throttling key, but that causes throttled jobs to be pushed back to the end of the queue. It would be nice to have real queue throttling, where a throttled job would be pushed back to the head of the queue and the queue would be paused from fetching for some time.

See: #122

ixti added the help wanted label on Nov 20, 2023

joevandyk commented Jan 25, 2024

Can you give an example of throttling a queue by a shared key? The docs only seem to mention a key_suffix?

My scenario: I have multiple job classes that run on the expensive queue. I only want one job at a time to run on the expensive queue.

mnovelo (Contributor) commented Jan 26, 2024

@joevandyk below is what we're doing right now to throttle multiple jobs with a shared key. It doesn't throttle the entire queue as such, but if the only jobs put in the queue are ones carrying this throttle, then the queue is effectively throttled.

In a Rails initializer file:

# For Users::Engagement::SetSendsJob, SetOpensJob, and SetClicksJob
Sidekiq::Throttled::Registry.add(:users_engagement_set_etl,
                                 # Only allow 1 job per tenant at a time, for up to 20 tenants at a time
                                 concurrency: [
                                   { limit: 1, key_suffix: ->(args) { args['tenant'] } },
                                   { limit: 20 },
                                 ])

Then, in the individual jobs, we add

sidekiq_throttle_as :users_engagement_set_etl
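
For illustration (not our exact code), one of those jobs would look roughly like the sketch below, assuming sidekiq-throttled 1.x, where requiring sidekiq/throttled makes the throttling DSL available on Sidekiq::Job:

# Illustrative sketch -- only the class name and the strategy name come from
# the initializer above; everything else here is assumed.
class Users::Engagement::SetSendsJob
  include Sidekiq::Job

  # Reuse the shared strategy registered in the initializer; SetOpensJob and
  # SetClicksJob declare the same name, so all three share the same limits.
  sidekiq_throttle_as :users_engagement_set_etl

  # The key_suffix proc reads args['tenant'], so the job is enqueued with a
  # hash argument that includes a 'tenant' key.
  def perform(args)
    # do the per-tenant work here
  end
end

With this in place, any mix of the three jobs respects both the per-tenant limit and the overall limit.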


jcsrb commented Mar 19, 2024

I am testing this as a migration path from sidekiq-limit_fetch. I followed @mnovelo's sample, registered multiple concurrency throttles based on the queues, and applied them to the relevant jobs.

Having concurrency-limited jobs in different queues causes an unexpected effect. Let's say we have 3 queues, high/normal/low, and 10 threads in this example.

If all jobs enqueued in the high queue share a concurrency limit of 3, Sidekiq runs 3 of them as expected, but does not fill the rest of the threads with the remaining low jobs.

This is a consequence of how Sidekiq's queue ordering works (https://github.com/sidekiq/sidekiq/wiki/Advanced-Options#queues), but it needs to be considered for queue-based throttling. Queue weights might help adjust this; see the sketch below.
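
As a rough sketch (the weights and queue names are just examples, and the exact syntax depends on your Sidekiq version), a weighted configuration in config/sidekiq.yml would look something like:

# config/sidekiq.yml -- with weights, Sidekiq samples queues probabilistically
# instead of draining them strictly in order, so "low" still gets fetched
# while the throttled "high" jobs are being requeued.
queues:
  - [high, 3]
  - [normal, 2]
  - [low, 1]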

mnovelo (Contributor) commented Mar 20, 2024

Oh, that's interesting, @jcsrb. We use weighted queues exclusively, so we've not run into that.
