
ai/rsc with langchain #288

Open

rogerodipo opened this issue Mar 25, 2024 · 4 comments

Comments

rogerodipo commented Mar 25, 2024

In the docs, the only LangChain example returns a StreamingTextResponse from the api/chat route and consumes it on the frontend with useChat.

https://sdk.vercel.ai/docs/guides/providers/langchain

How can we do that without useChat, using ai/rsc and its render method in the current version?
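
For reference, the pattern in that guide looks roughly like this (a sketch from memory, not the exact docs code; StreamingTextResponse and LangChainStream come from the ai package, and the message mapping is illustrative):

// app/api/chat/route.ts — the documented useChat pattern, sketched
import { StreamingTextResponse, LangChainStream, type Message } from 'ai'
import { ChatOpenAI } from '@langchain/openai'
import { AIMessage, HumanMessage } from '@langchain/core/messages'

export async function POST(req: Request) {
  const { messages } = await req.json()

  // stream: a ReadableStream of tokens; handlers: LangChain callbacks
  // that write each new token into that stream.
  const { stream, handlers } = LangChainStream()

  const llm = new ChatOpenAI({ streaming: true })

  // Fire and forget: tokens flow to the client through `stream`.
  llm
    .invoke(
      (messages as Message[]).map(m =>
        m.role === 'user' ? new HumanMessage(m.content) : new AIMessage(m.content)
      ),
      { callbacks: [handlers] }
    )
    .catch(console.error)

  // useChat on the client consumes this streaming response.
  return new StreamingTextResponse(stream)
}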

@nikohann

I have used the submitUserMessage method with LangChain's streamEvents. I may provide a code example later.

@rogerodipo (Author)

Hey @nikohann,
Great! Could you share the code example when you get a chance, even if it's just an outline? I'm up against a deadline, and this would help me out a lot.
Thanks.


nikohann commented Mar 26, 2024

There are a couple of serious bugs, but I think you'll figure them out.

https://js.langchain.com/docs/expression_language/streaming#event-reference

I have used streamEvents to stream output as JSON from function calling.

async function submitUserMessage(content: string) {
  'use server'

  // getMutableAIState and createStreamableValue come from 'ai/rsc';
  // AI, BotMessage and runAsyncFnWithoutBlocking are the helpers from
  // the vercel/ai-chatbot template this is based on
  // (runAsyncFnWithoutBlocking just calls the async fn without awaiting it).
  const aiState = getMutableAIState<typeof AI>()

  // Append the user message to the AI state.
  aiState.update({
    ...aiState.get(),
    messages: [
      ...aiState.get().messages,
      {
        id: nanoid(),
        role: 'user',
        content
      }
    ]
  })

  // LangChain
  const prompt = ChatPromptTemplate.fromMessages([
    [
      "system",
      "You are a helpful assistant. Be positive and speak about unicorns."
    ],
    ["human", "{input}"],
  ]);

  const llm = new ChatOpenAI({
    modelName: "gpt-4-0125-preview",
    streaming: true,
    temperature: 0.4,
  });

  const chain = prompt.pipe(llm);

  // Create the streamable value and its React node before kicking off
  // the background work, so `textNode` is defined when we return below.
  const textStream = createStreamableValue('')
  // BotMessage is assumed to accumulate the streamed deltas on the
  // client (the template's useStreamableText hook does exactly that).
  const textNode = <BotMessage content={textStream.value} />

  runAsyncFnWithoutBlocking(async () => {
    const response = chain.streamEvents({
      input: content,
    }, { version: "v1" })

    for await (const event of response) {
      const eventType = event.event;

      if (eventType === "on_chain_stream") {
        // Each chunk is a message-content delta; forward it to the client.
        textStream.update(event.data.chunk.content);
      } else if (eventType === "on_llm_end") {
        // The full completion text, for persisting into the AI state.
        const message = event.data.output.generations[0][0].text;

        textStream.done();

        aiState.done({
          ...aiState.get(),
          messages: [
            ...aiState.get().messages,
            {
              id: nanoid(),
              role: 'assistant',
              content: message
            }
          ]
        })
      }
    }
  })

  return {
    id: nanoid(),
    display: textNode
  }
}
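
For the function-calling / JSON output I mentioned, the chain itself looks roughly like this (a sketch, not my production code; the extract_profile function and its schema are made up for illustration, and JsonOutputFunctionsParser is from langchain/output_parsers):

import { ChatOpenAI } from '@langchain/openai'
import { ChatPromptTemplate } from '@langchain/core/prompts'
import { JsonOutputFunctionsParser } from 'langchain/output_parsers'

// Illustrative function definition — replace with your own.
const extractProfileFn = {
  name: 'extract_profile',
  description: 'Extract a profile from the user input.',
  parameters: {
    type: 'object',
    properties: {
      name: { type: 'string' },
      mood: { type: 'string' }
    },
    required: ['name']
  }
}

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'Extract the requested fields from the input.'],
  ['human', '{input}']
])

// Bind the function so the model streams function-call arguments.
const llm = new ChatOpenAI({
  modelName: 'gpt-4-0125-preview',
  streaming: true
}).bind({
  functions: [extractProfileFn],
  function_call: { name: 'extract_profile' }
})

// The parser aggregates the streamed arguments into progressively
// more complete JSON objects.
const chain = prompt.pipe(llm).pipe(new JsonOutputFunctionsParser())

The streamEvents loop is the same as above, except the on_chain_stream chunks are partial JSON objects rather than text deltas, so you would serialize them before pushing them into the streamable value.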

@AmmarByFar

Yeah, I think I'm running into some really strange bugs. This works totally fine locally, but as soon as I push it to production it stops working. For some reason production doesn't seem to stream the results...

Not sure what's going on.
