Replies: 1 comment
-
Scaling CapRover isn't possible since its data is shared atomically; CapRover isn't designed for this use case, and it would take a major rewrite to queue the requests sent via the API. There are lots of edge cases (if a request fails due to some error, should the queue proceed or not, etc.). Instead, you should add the business logic to your scheduler, something like:
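The snippet that originally followed appears to be missing from this copy. As a rough sketch of the idea, not the author's actual code, the scheduler could serialize its CapRover calls and retry with backoff whenever CapRover answers 429 because an operation is already in progress (`call_caprover` and `send_request` are hypothetical names, not part of the CapRover API):

```python
import time

def call_caprover(send_request, max_retries=5, base_delay=2.0, sleep=time.sleep):
    """Call the CapRover API via send_request(), retrying with exponential
    backoff while CapRover returns HTTP 429 (operation in progress).

    send_request is a zero-argument callable returning (status_code, body);
    it stands in for whatever HTTP client the scheduler already uses.
    """
    for attempt in range(max_retries):
        status, body = send_request()
        if status != 429:
            # Success, or a real error the caller should decide about
            # (one of the edge cases mentioned above).
            return status, body
        # Captain is busy with another operation; wait and try again.
        sleep(base_delay * 2 ** attempt)
    raise RuntimeError("CapRover still busy after %d attempts" % max_retries)
```

In Celery this would typically live inside the task itself (or use Celery's own retry mechanism), so that concurrent tasks end up waiting their turn instead of failing on 429.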
-
Currently I am interfacing with CapRover only via the API, through a scheduler (Python Celery).
Sometimes I would like to send multiple requests to CapRover, e.g. to deploy and build an image and to update the config for another app, but then I get a 429 error saying an operation is in progress.
So I am trying to find ways to scale the CapRover API, as it seems the API is blocking and can only take one request at a time.
Can I make the API non-blocking somehow, or can I scale Captain to run multiple instances via Docker Swarm?
When I tried to increase the instance count, I got the error that port 3000 is already in use, so that did not work for me so far.
Or can I scale out CapRover to another server/node in order to parallelize requests?
Any help appreciated.