
llamacpp --prompt-cache-all < -- more than a year passed and still is not fully implemented #7179

Open
mirek190 opened this issue May 9, 2024 · 1 comment
Labels
enhancement New feature or request

Comments


mirek190 commented May 9, 2024

Hello,

Regarding this option:

--prompt-cache-all: if specified, saves user input and generations to cache as well;
not supported with --interactive or other interactive options.

This still does not work in interactive mode, so in practice it is not very useful ...

It would be amazing to save the conversation state and later restore it from where we left off; as context windows keep growing, an LLM could remember really long sessions with the user...
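
For reference, a minimal sketch of the usage that does work today, i.e. non-interactive runs. The flag names follow the help text quoted above; the binary name, model path, cache file, and prompts are placeholders (the binary is ./main in older builds, llama-cli in newer ones):

```sh
# First run: evaluate the prompt, generate, and (because of --prompt-cache-all)
# save both the prompt and the generated tokens to session.bin.
./llama-cli -m ./models/model.gguf \
  --prompt-cache session.bin --prompt-cache-all \
  -p "Once upon a time" -n 128

# A later run that extends the same text reuses session.bin, so the cached
# prefix is not re-evaluated; only the new tokens are processed.
./llama-cli -m ./models/model.gguf \
  --prompt-cache session.bin --prompt-cache-all \
  -p "Once upon a time, there was a dragon who" -n 128
```

What the issue asks for is the same mechanism, but available while -i / --interactive is active.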

mirek190 added the enhancement label May 9, 2024
scottstirling commented

Managing a cache long term for a project or product is a nontrivial technical commitment. It requires policies and configuration parameters for size and history limits, code to manage the cached data, and APIs to access it; often, over the lifetime of a project, it eventually means offloading the cache and passing configuration through to alternate plugin implementations.

It is already possible for scripts, client apps, browsers, or middleware to cache all or any part of an LLM chat in many ways: logs or flat files, databases, in-memory stores, etc. It is arguably a separate domain of concerns that could overcomplicate llama.cpp; it could also be a separate product.
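
As a rough illustration of that point, a hypothetical client-side sketch that keeps the chat history outside llama.cpp in a flat JSONL file. It assumes the llama.cpp HTTP server is running on localhost:8080 and exposes the /completion endpoint described in the server README (verify the endpoint and field names against your build); it requires curl and jq, and the file name and prompt are placeholders:

```sh
PROMPT="Explain in one sentence what a KV cache is."

# Ask the running llama.cpp server for a completion.
RESPONSE=$(curl -s http://localhost:8080/completion \
  -H 'Content-Type: application/json' \
  -d "$(jq -n --arg p "$PROMPT" '{prompt: $p, n_predict: 128}')" \
  | jq -r '.content')

# Append the exchange to a flat-file history that a later session
# (or another tool) can reload and replay as context.
jq -n --arg p "$PROMPT" --arg r "$RESPONSE" '{prompt: $p, response: $r}' \
  >> chat_history.jsonl
```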
