ai/rsc with langchain #288
Comments
I have used the `submitUserMessage` method with LangChain's `streamEvents`. I may provide a code example later.

Hey @nikohann,
There are a couple of serious bugs, but I think you'll find them: https://js.langchain.com/docs/expression_language/streaming#event-reference

I have used `streamEvents` with streaming output as JSON from function calling.

```tsx
// Imports assume the Next.js AI Chatbot template layout; your paths may differ.
import { getMutableAIState, createStreamableValue } from 'ai/rsc'
import { nanoid } from 'nanoid'
import { ChatPromptTemplate } from '@langchain/core/prompts'
import { ChatOpenAI } from '@langchain/openai'
// Template helpers:
import { BotMessage } from '@/components/stocks'
import { runAsyncFnWithoutBlocking } from '@/lib/utils'

async function submitUserMessage(content: string) {
  'use server'

  const aiState = getMutableAIState<typeof AI>()

  // Record the user's message in the AI state.
  aiState.update({
    ...aiState.get(),
    messages: [
      ...aiState.get().messages,
      {
        id: nanoid(),
        role: 'user',
        content
      }
    ]
  })

  // LangChain: prompt piped into a streaming ChatOpenAI model.
  const prompt = ChatPromptTemplate.fromMessages([
    [
      'system',
      'You are a helpful assistant. Be positive and speak about unicorns.'
    ],
    ['human', '{input}']
  ])

  const llm = new ChatOpenAI({
    modelName: 'gpt-4-0125-preview',
    streaming: true,
    temperature: 0.4
  })

  const chain = prompt.pipe(llm)

  let textStream: undefined | ReturnType<typeof createStreamableValue<string>>
  let textNode: undefined | React.ReactNode

  runAsyncFnWithoutBlocking(async () => {
    if (!textStream) {
      textStream = createStreamableValue('')
      textNode = <BotMessage content={textStream.value} />
    }

    const response = chain.streamEvents(
      { input: content },
      { version: 'v1' }
    )

    // `update` replaces the streamable's value rather than appending,
    // so accumulate the tokens and pass the running text each time.
    let text = ''
    for await (const event of response) {
      const eventType = event.event
      if (eventType === 'on_chain_stream') {
        text += event.data.chunk.content
        textStream.update(text)
      } else if (eventType === 'on_llm_end') {
        const message = event.data.output.generations[0][0].text
        textStream.done()
        aiState.done({
          ...aiState.get(),
          messages: [
            ...aiState.get().messages,
            {
              id: nanoid(),
              role: 'assistant',
              content: message
            }
          ]
        })
      }
    }
  })

  return {
    id: nanoid(),
    display: textNode
  }
}
```
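To illustrate the core pattern in isolation, here is a minimal, framework-free sketch of the event-handling loop. The names `fakeStreamEvents` and `collectText` are illustrative only (not part of LangChain or the AI SDK); the mocked events only approximate the shape that `chain.streamEvents` yields. The point is that each `update` should receive the accumulated text, not the bare chunk.

```typescript
// Simplified event shape, modeled loosely on LangChain's streamEvents output.
type StreamEvent = {
  event: string
  data: { chunk?: { content: string }; output?: string }
}

// Mock stream: simulates token chunks followed by a terminal event.
async function* fakeStreamEvents(): AsyncGenerator<StreamEvent> {
  for (const token of ['Uni', 'corns ', 'are ', 'great.']) {
    yield { event: 'on_chain_stream', data: { chunk: { content: token } } }
  }
  yield { event: 'on_llm_end', data: { output: 'Unicorns are great.' } }
}

// Accumulates streamed chunks into the full message text, calling
// `onUpdate` with the running total each time (the way a streamable
// value's `update` expects the whole value, not just the delta).
async function collectText(
  events: AsyncIterable<StreamEvent>,
  onUpdate: (text: string) => void
): Promise<string> {
  let text = ''
  for await (const ev of events) {
    if (ev.event === 'on_chain_stream' && ev.data.chunk) {
      text += ev.data.chunk.content
      onUpdate(text)
    }
  }
  return text
}
```

Driving the mock through `collectText` produces the intermediate values `'Uni'`, `'Unicorns '`, `'Unicorns are '`, and finally the complete sentence, which is what a `<BotMessage>` bound to the streamable would render incrementally.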
Yeah, I think I'm running into some really strange bugs. This works totally fine when running locally, but as soon as I push it to production it stops working. For some reason production doesn't seem to stream the results. Not sure what's going on.
In the docs, the only LangChain example we have gets a `StreamingTextResponse` from the `api/chat` route and consumes it on the frontend with `useChat`:
https://sdk.vercel.ai/docs/guides/providers/langchain
How can we do that without `useChat`, using `ai/rsc` and the current version of the bot's `render` method?