bug: crons with concurrency limits set cause the engine to crash #483
Comments
Hey @trisongz, thanks for the report - I'll be taking a look at this today. This looks like an issue with the workflow run not being created properly from cron workflows when a concurrency limit is set on the workflow. This isn't an issue with RabbitMQ, so no need to restart things on that side (the methods are just being triggered by a RabbitMQ message).
Thanks for the response. I was able to confirm that after removing concurrency from the workflow, the latest version works.
This is fixed in
I have Hatchet self-hosted in a K8s cluster.

Container images:
- ghcr.io/hatchet-dev/hatchet/hatchet-engine:v0.26.1
- ghcr.io/hatchet-dev/hatchet/hatchet-api:v0.26.1
- docker.io/bitnami/rabbitmq:3.13.2-debian-12-r0

SDK: Python - hatchet-sdk-0.23.0 (0.22.5 prior)

After version 0.23.0, I've consistently run into the following issue whenever a cron task gets triggered, which then causes a reboot loop on the engine container.

I've attempted the following to debug:
- Restarting the engine container to get back up.
- Recreating rabbitmq, including the persistent data, which doesn't do anything.

I am able to trigger the workflow manually, but whenever the cron schedule triggers the workflow, the issue occurs.
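For reference, the failing configuration can be sketched roughly as below with the pre-1.0 Hatchet Python SDK. This is a hypothetical reconstruction, not the reporter's actual code: the workflow class, step name, cron expression, and concurrency key are made up, and exact decorator signatures may differ between SDK versions.

```python
from hatchet_sdk import Hatchet, Context

hatchet = Hatchet()

# A cron-triggered workflow that also sets a concurrency limit --
# the combination reported here to crash the engine when the cron fires.
@hatchet.workflow(on_crons=["*/5 * * * *"])
class MyCronWorkflow:
    # Concurrency key function limiting parallel runs; removing this
    # (i.e. dropping the concurrency limit) is the workaround confirmed above.
    @hatchet.concurrency(max_runs=1)
    def concurrency(self, context: Context) -> str:
        return "my-cron-workflow"

    @hatchet.step()
    def step1(self, context: Context):
        return {"status": "ok"}
```

Triggering the same workflow manually (e.g. via the dashboard or `hatchet.client.admin.run_workflow`) works; only the cron-scheduled run hits the crash.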