retry() inside a Celery task does not wait as long as the default_retry_delay value #1977
@hidbpark I'm trying to establish whether a recent issue we had is related to what you've reported here. We use retries, and as soon as we upgraded to 5.3.6 our RabbitMQ queues filled up very quickly; we rolled back to 5.3.5 and everything stabilised again. However, looking at the version diff we can't see anything that could have caused this. What did you see on your side that made you raise this issue?
Thank you for your response.
There is a problem where a task retried via retry() does not wait for the default_retry_delay value before being re-executed.
With multiple pre-forked workers, the retried task appears to be picked up immediately by another worker.
With kombu 5.3.5, the retried task was executed by a worker only after waiting default_retry_delay seconds (the expected behaviour).
Was there a change in kombu 5.3.6 that could cause this?
django: 4.2.11
redis: 5.0.3
celery: 5.3.6
kombu: 5.3.6
```python
@shared_task(bind=True, default_retry_delay=5, max_retries=3)
def my_task(self, some_data):
    # do something
    if some_retry_case:
        raise self.retry()
    else:
        return
```
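To make the expected behaviour concrete: a retried task should be scheduled with an ETA of roughly now + default_retry_delay (or now + countdown, if one is passed to retry()), and no worker should execute it before that ETA. Below is a toy sketch of those semantics only; `compute_retry_eta` is a hypothetical helper written for illustration, not Celery's or kombu's actual implementation.

```python
from datetime import datetime, timedelta


def compute_retry_eta(now, default_retry_delay=5, countdown=None):
    """Sketch of the expected retry-scheduling semantics:
    an explicit countdown overrides default_retry_delay, and the
    retried task must not run before now + delay."""
    delay = countdown if countdown is not None else default_retry_delay
    return now + timedelta(seconds=delay)


# With the reporter's settings (default_retry_delay=5), a retry
# raised at 12:00:00 should not be executed before 12:00:05.
now = datetime(2024, 1, 1, 12, 0, 0)
eta = compute_retry_eta(now)
```

The bug report is that in 5.3.6 the retried message is consumed immediately, i.e. as if the delay were effectively zero, while 5.3.5 honoured the ETA.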