
Semantically cache LLM Queries and reduce Costs #22

Open
momegas opened this issue Apr 18, 2023 · 2 comments
Labels
bot (This issue is about bot features) · enhancement (New feature or request)

Comments

momegas (Owner) commented Apr 18, 2023

No description provided.

momegas added the enhancement and bot labels on Apr 18, 2023

SimFG commented Apr 19, 2023

You can try using GPTCache 😆
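
For context, GPTCache handles this kind of request by wiring together an embedding model, a cache store, and a vector index, so that semantically similar prompts are served from the cache instead of hitting the API. Below is a minimal sketch following GPTCache's 2023 quickstart; the Onnx embeddings, SQLite store, and FAISS index are illustrative defaults from its docs, not choices made in this issue:

```python
# Semantic-cache sketch using GPTCache's OpenAI adapter.
# Configuration mirrors GPTCache's documented quickstart; the embedding
# and storage choices below are assumptions, not decisions from this thread.
from gptcache import cache
from gptcache.adapter import openai  # drop-in wrapper for the openai SDK
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# Embed prompts so similar questions land near each other in vector space.
onnx = Onnx()

# Cached answers go to SQLite; their embeddings go to a FAISS index.
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=onnx.dimension),
)

cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

# The first question pays for an API call; the reworded second one
# should be close enough in embedding space to be answered from cache.
for question in ["What is GitHub?", "Can you explain what GitHub is?"]:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    print(response["choices"][0]["message"]["content"])
```

The cost reduction comes from the second call: near-duplicate queries never reach the model, so they stop incurring token charges.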

momegas (Owner, Author) commented Apr 19, 2023

Yeah, that's the idea. I have it in the README 😁
Thanks for the heads-up!

Projects
Status: 👀 Todo
Development

No branches or pull requests

2 participants