Checked other resources
I searched the LangChain documentation with the integrated search.
I used the GitHub search to find a similar question and didn't find it.
I am sure that this is a bug in LangChain rather than my code.
The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
import asyncio

from langchain_google_vertexai import ChatVertexAI
from langchain_openai import ChatOpenAI
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import HumanMessage


class MyCallbackHandler(BaseCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(" on_llm_start")

    def on_llm_end(self, response, **kwargs):
        print(" on_llm_end")

    def on_llm_new_token(self, token, **kwargs):
        print(" on_llm_new_token", token)


async def main():
    # works
    print("openai: start")
    await ChatOpenAI(streaming=True).agenerate(
        [[HumanMessage(content="Hello, how are you?")]],
        callbacks=[MyCallbackHandler()],
    )
    print("openai: done")

    # fails
    print("vertex: start")
    await ChatVertexAI(streaming=True, model_name="gemini-pro").agenerate(
        [[HumanMessage(content="Hello, how are you?")]],
        callbacks=[MyCallbackHandler()],
        # stream=True  ## <--- works if you add this, but isn't necessary for OpenAI
    )
    print("vertex: done")


if __name__ == "__main__":
    asyncio.run(main())
Error Message and Stack Trace (if applicable)
No response
Description
I expect ChatVertexAI to behave like other providers: accept streaming=True in the constructor and then invoke the on_llm_new_token callback for streaming operations.
The provided example gives the following output, which shows the inconsistency:
openai: start
on_llm_start
on_llm_new_token
on_llm_new_token Hello
on_llm_new_token !
on_llm_new_token I
on_llm_new_token 'm
on_llm_new_token just
on_llm_new_token a
on_llm_new_token computer
on_llm_new_token program
on_llm_new_token ,
on_llm_new_token so
on_llm_new_token I
on_llm_new_token don
on_llm_new_token 't
on_llm_new_token have
on_llm_new_token feelings
on_llm_new_token ,
on_llm_new_token but
on_llm_new_token I
on_llm_new_token 'm
on_llm_new_token here
on_llm_new_token to
on_llm_new_token help
on_llm_new_token you
on_llm_new_token .
on_llm_new_token How
on_llm_new_token can
on_llm_new_token I
on_llm_new_token assist
on_llm_new_token you
on_llm_new_token today
on_llm_new_token ?
on_llm_new_token
on_llm_end
openai: done
vertex: start
on_llm_start
on_llm_end
vertex: done
If you add stream=True to the agenerate call, the callbacks are invoked correctly, so it seems the handling of the constructor param is what's inconsistent with other providers.
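For reference, a minimal sketch of that per-call workaround, reusing the imports and callback handler from the example above (it is simply the commented-out stream=True line from the example, enabled):

await ChatVertexAI(streaming=True, model_name="gemini-pro").agenerate(
    [[HumanMessage(content="Hello, how are you?")]],
    callbacks=[MyCallbackHandler()],
    stream=True,  # per-call flag; should be redundant given streaming=True in the constructor
)

With stream=True passed per call, the Vertex run emits on_llm_new_token for each chunk, matching the OpenAI output above.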
System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 22.6.0: Thu Nov 2 07:43:57 PDT 2023; root:xnu-8796.141.3.701.17~6/RELEASE_ARM64_T6000
Python Version: 3.11.6 (main, Nov 2 2023, 04:39:43) [Clang 14.0.3 (clang-1403.0.22.14.1)]