
Knex: Timeout acquiring a connection #4598

Closed
2 tasks done
rjeevaram opened this issue Mar 20, 2024 · 8 comments
Labels
area:core (issues describing changes to the core of uptime kuma) · help · question (Further information is requested)

Comments

@rjeevaram

⚠️ Please verify that this question has NOT been raised before.

  • I checked and didn't find a similar issue

🛡️ Security Policy

📝 Describe your problem

I've been seeing the error below in my Uptime Kuma monitors recently. Has anyone run into this, and is there a solution?

Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?

📝 Error Message(s) or Log

Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?

🐻 Uptime-Kuma Version

1.23.11

💻 Operating System and Arch

Running as K8s pods on a Debian 11 node

🌐 Browser

Chrome 122.0.6261.94

🖥️ Deployment Environment

  • Runtime:
  • Database:
  • Filesystem used to store the database on:
  • number of monitors:
@rjeevaram rjeevaram added the help label Mar 20, 2024
@CommanderStorm CommanderStorm added question Further information is requested area:core issues describing changes to the core of uptime kuma labels Mar 20, 2024
@CommanderStorm
Collaborator

Please fill out the Deployment Environment to help us debug this.
In addition, what retention have you configured?

TL;DR:
In v1, reducing retention or ping rate, or moving to a faster storage solution, is the only way to work around this problem.
In v2, we have likely resolved this problem. See #4500 for further details of what needs to happen before we can release this version.
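For context on why slow storage produces this exact message: every query has to acquire a connection from Knex's pool, and when slow disk I/O keeps all pooled connections busy, the acquire timer expires. A minimal sketch of the pool settings involved (the values and the database path here are illustrative assumptions, not Uptime Kuma's actual configuration):

```javascript
// Sketch of the Knex pool settings behind this error. Values and the
// database path are illustrative, not Uptime Kuma's real config.
const knexConfig = {
  client: "sqlite3",
  connection: { filename: "./data/kuma.db" },
  pool: {
    min: 2,
    max: 10, // with slow storage, all 10 connections can stay busy at once
    // tarn.js option: how long an acquire waits for a free connection (ms)
    acquireTimeoutMillis: 60000,
  },
  // Knex-level acquire timeout (ms); when it expires, Knex throws
  // "Knex: Timeout acquiring a connection. The pool is probably full."
  acquireConnectionTimeout: 60000,
};

module.exports = knexConfig;
```

Raising these timeouts only hides the symptom; the actual fix in v1 is reducing the write load (retention, ping rate) or using faster storage, as noted above.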

@Lanhild

This comment was marked as off-topic.

@CommanderStorm

This comment was marked as off-topic.

@Lanhild

This comment was marked as off-topic.

@jiriteach

This comment was marked as duplicate.

@CommanderStorm
Collaborator

I am going to close this as resolved by v2.0, as I don't see evidence to the contrary.

TL;DR:
In v1, reducing retention or ping rate, or moving to a faster storage solution, is the only way to work around this problem.
In v2, we have likely resolved this problem. See #4500 for further details of what needs to happen before we can release this version.

@jiriteach

Ended up having to bin the db. The SQLite db was over 6 GB. Unfortunate, but after starting again there were no problems.

@Lanhild

This comment was marked as resolved.
