[BUG] Summarizer should skip summarization if new message is empty #285
Comments
Thanks @danielchalef. What I've also noticed is that new chat messages don't seem to get summarised into the summary once 'The New Lines do not provide meaningful content.' appears, so essentially the memory gets stuck in this state.
Fixed in #287. Will be available in the next release of Zep, likely today.
Hi @danielchalef, thanks for this. I did some preliminary testing and was pretty happy with it, as the issue seemed to have been resolved. However, I've noticed that as the user continues to chat and multiple summaries are created, I see the bold text below. Just wondering if this is expected behaviour? I did do some testing with new lines (which occur because we use the metadata to store S3 links to images, video, etc.). Is this still related to new lines? I hadn't seen this until we started using the metadata.
@ksrath Which LLM API provider and model are you using? Hoping to replicate the issue here.
Hi @danielchalef, we're using OpenAI gpt-3.5-turbo. After looking further, I'm not sure it has to do with metadata usage anymore. It seems that after roughly the 4th/5th summary, there is a message along the lines of the one above, which I think is incorrect: the 5 (or so) chats that fell out of the chat history window have relevant content to be summarised. Let me know if you need more info. Here is our code for saving our S3 URL against our chat metadata.
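For anyone following along: the code screenshot above didn't survive, but the shape of a message carrying an S3 link in its metadata is roughly this (a hypothetical sketch, not the actual zep-python SDK call; the `s3_url` key is our own convention, not a Zep-defined field):

```python
# Hypothetical sketch: attaching an S3 link to a chat message via metadata.
# This mirrors the Memory/Message JSON shape, not a specific SDK signature.

def build_message(role, content, s3_url=None):
    """Build a Zep-style message dict, optionally carrying an S3 link in metadata."""
    message = {"role": role, "content": content}
    if s3_url is not None:
        message["metadata"] = {"s3_url": s3_url}
    return message

msg = build_message("ai", "Here is the image you asked for.",
                    s3_url="s3://our-bucket/images/example.png")
```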
Hi @danielchalef, just wondering if there's any update on this? We're continuously seeing it as users chat more and multiple Zep summaries are created, see below:
Is it possible for me to query old summaries? Perhaps that would help with replicating it for you.
Hey @ksrath, sorry I missed your message. You can search over historical summaries for a session: https://docs.getzep.com/sdk/chat_history/search/ I'm unsure why the LLM is responding this way. Are you able to share sanitized examples of dialogs where this occurs?
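A minimal sketch of what a summary-scoped search request body might look like (field names are assumptions based on the linked search docs, not verified against a specific SDK or server version):

```python
# Hypothetical sketch of a session-search request body scoped to summaries.
# The "search_scope" field name is an assumption taken from Zep's search docs;
# confirm it against your SDK/server version before relying on it.

def build_search_payload(query, scope="summary"):
    """Assemble a search request body; scope="summary" targets past summaries."""
    return {"text": query, "search_scope": scope}

payload = build_search_payload("closest cafe")
```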
Hi @danielchalef, no worries. Here are some details. Model: "gpt-3.5-turbo". I was able to replicate it, and here are some examples of the conversation history where it occurred.
Once I'd replied "can you search for the closest cafe to me?", the first summary took in the first 6 messages, and it then looked like this:
I think it could occur when there are consecutive 'AI' messages without a Human message.
Describe the bug
Hey @danielchalef! We're currently using metadata to store the S3 URL for images, and we've noticed that when the memory chat history is sent via Langchain, the URL seems to be replaced with a newline. The summary then starts to ingest this as "The New Lines do not provide meaningful content.".
Is it possible to have the summary ignore new lines somehow, or at least not comment on it as part of the summary?
To Reproduce
For example, this is what the prompt via Langsmith will look like:
System: The Current Content contains a lighthearted exchange about financial stress. The New Lines do not provide meaningful content.
Human: chat
AI: image
Human: chat
AI: image
Human: chat
Expected behavior
Is it possible to have the summary ignore new lines somehow, or at least not comment on it as part of the summary?
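The requested behaviour could be sketched as a simple guard before summarization (illustrative Python only; Zep's server is written in Go, and this helper is hypothetical, not the actual fix later shipped in #287):

```python
# Hypothetical guard: skip summarization when the new messages contain no
# meaningful text (e.g. only the newlines left behind by stripped image URLs).

def should_summarize(new_messages):
    """Return True only if the new messages carry non-whitespace content."""
    combined = " ".join(m.get("content", "") for m in new_messages)
    return bool(combined.strip())

# A message whose content collapsed to a bare newline would be skipped:
skip = should_summarize([{"role": "ai", "content": "\n"}])       # False
run = should_summarize([{"role": "human", "content": "chat"}])   # True
```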
Logs
Please provide your Zep server and application logs.
Environment (please complete the following information):
- SDK: [e.g. zep-js or zep-python, and vX.X.X]
- Deployment: [e.g. docker compose, or a hosted environment such as Render]

Note: The Zep server version is available in the Zep server logs at startup:
Starting zep server version 0.11.0-cbf4fe4 (2023-08-30T12:49:03+0000)
Additional context
Add any other context about the problem here.