Hello,

From the command help:

```
--prompt-cache-all    if specified, saves user input and generations to cache as well.
                      not supported with --interactive or other interactive options
```

This still does not work in interactive mode, so it is actually not useful at all...

It would be amazing to save the conversation state and later restore it from where we left off. Contexts are getting bigger and bigger, and the LLM could remember really long sessions with the user...
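For reference, here is roughly what I mean (model path is just a placeholder for whatever model you run):

```
# Works: single-shot generation, prompt + output saved to the cache file
./main -m models/7B/ggml-model.bin --prompt-cache session.bin --prompt-cache-all \
    -p "Once upon a time"

# Does not work: --prompt-cache-all is rejected together with --interactive
./main -m models/7B/ggml-model.bin --prompt-cache session.bin --prompt-cache-all -i
```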
Managing a cache long term for a project or product is a nontrivial technical commitment. It requires policies and configuration parameters for size and history limits, code to manage the cached data, APIs to access it, and often, eventually in the lifetime of a project, offloading the cache and passing configuration through to alternate plugin implementations.
It is already possible for scripts, client apps, browsers, or middleware to cache all or any part of LLM chats in many ways: logs or flat files, databases, in-memory stores, etc. It is arguably a separate domain of concerns that could overcomplicate llama.cpp. It could be a separate product, too.
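For example, a thin client-side wrapper can already persist the transcript to a flat file and replay it as the prompt on the next run. This is a minimal sketch, not a proposed API; the binary path, model path, and file layout are all illustrative:

```python
import json
import subprocess
from pathlib import Path

HISTORY = Path("chat_history.json")  # flat-file "cache" of the conversation

def load_history() -> list[dict]:
    return json.loads(HISTORY.read_text()) if HISTORY.exists() else []

def save_history(turns: list[dict]) -> None:
    HISTORY.write_text(json.dumps(turns, indent=2))

def build_prompt(turns: list[dict], user_msg: str) -> str:
    # Replay the saved conversation so the model "remembers" prior turns.
    lines = [f"{t['role']}: {t['text']}" for t in turns]
    lines.append(f"user: {user_msg}")
    lines.append("assistant:")
    return "\n".join(lines)

def chat(user_msg: str) -> str:
    turns = load_history()
    prompt = build_prompt(turns, user_msg)
    # Single-shot, non-interactive llama.cpp call; paths are placeholders.
    out = subprocess.run(
        ["./main", "-m", "models/7B/ggml-model.bin", "-p", prompt],
        capture_output=True, text=True, check=True,
    ).stdout
    # main typically echoes the prompt before generating; strip it off.
    reply = out[len(prompt):].strip()
    turns += [{"role": "user", "text": user_msg},
              {"role": "assistant", "text": reply}]
    save_history(turns)
    return reply

if __name__ == "__main__":
    print(chat("Hello, do you remember what we discussed earlier?"))
```

The point is only that history persistence, trimming, and storage policy can live entirely in the client, with llama.cpp doing stateless generation.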