Draft: Testing LRU caching of datasource clients #13561
Description
Implements a basic cache for Postgres clients, using an LRU cache and merkle-json (https://www.npmjs.com/package/merkle-json) to hash the datasource configuration. The goal is to reduce wasteful client creation/destruction; in the past we've seen over-connecting when using search-heavy pages in Budibase, which this should help with. In theory it should also speed up self-host deployments, as they'll have an essentially permanent connection while the app is in use.
By default the cache has a TTL of 10 minutes and a maximum of 50 clients. For self host these numbers should be fine, but in the cloud they may need some tweaking. In theory the worst case is that we start connecting too often, but this would need some QA to confirm.
This still requires implementation for all the other datasource+ types. Other datasources won't really benefit from this, as they are executed within a thread that has a limited lifetime; it's not really possible to cache those clients.