Replies: 3 comments 4 replies
-
The behavior you're observing could have a few causes. Firstly, it's important to ensure that the model itself is configured for streaming. Secondly, compatibility issues between the versions of the packages you're using could be affecting the expected behavior; the core package and the provider packages need to be compatible with each other. Lastly, the order in which the components are composed can matter. Unfortunately, there isn't a known issue or bug directly related to your problem in the LangChain JS framework documentation or issue tracker. However, considering these points and ensuring that your setup aligns with the requirements for streaming and compatibility might help resolve the issue. If the problem persists, it might be beneficial to reach out to the LangChain community or maintainers for more specific guidance, or to check whether any updates or patches address this issue.
-
It doesn't work with AgentExecutors either. It seems that as soon as I wrap ChatAnthropic, it stops streaming and instead just invokes the model.
-
Ok, so somehow it now works with the history. I don't know why, but here's what ended up working:

import { ChatAnthropic } from '@langchain/anthropic';
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
} from '@langchain/core/prompts';
import { RunnableWithMessageHistory } from '@langchain/core/runnables';
import { InMemoryChatMessageHistory } from '@langchain/core/chat_history';

const chat = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  modelName: 'claude-3-haiku-20240307',
  temperature: 0,
  streaming: true,
});

const prompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate('You are a helpful assistant'),
  HumanMessagePromptTemplate.fromTemplate('{input}'),
  new MessagesPlaceholder('chat_history'),
]);

const runnable = new RunnableWithMessageHistory({
  runnable: prompt.pipe(chat),
  getMessageHistory: () => new InMemoryChatMessageHistory([]),
  inputMessagesKey: 'input',
  historyMessagesKey: 'chat_history',
});

const stream = runnable.streamEvents(
  { input: 'Hello' },
  {
    configurable: { sessionId: '1' },
    version: 'v1',
  }
);

for await (const message of stream) {
  console.log(message);
}

The fact that it doesn't work with the agent is, I guess, a limitation of the Claude API, since the new tool use API is in beta and does not support streaming. It might be a good idea to output some kind of warning when this is the case, since it can be confusing. Is there a way to fall back to legacy tool use with streaming? It would be nice to still offer streaming by not using the new tool use API.
-
Description
It seems that ChatAnthropic unexpectedly does not work with the RunnableWithMessageHistory runnable.
When streaming events from just the model, everything works as expected. However, when the model is wrapped in a RunnableWithMessageHistory, the output is no longer streamed: no 'on_llm_stream' events are produced, and the full output arrives in the 'on_llm_end' event instead.
I've tested the exact same code with ChatOpenAI, which behaves as expected (i.e. it streams) when a history is included.
I'm not sure if this is a problem on my end, but I'd expect other people to have run into this. Can someone confirm this behaviour?
System Info
langchain: 0.1.36
@langchain/core: 0.1.61
@langchain/community: 0.0.53
@langchain/anthropic: 0.1.17
@langchain/openai: 0.0.28