I searched existing ideas and did not find a similar one
I added a very descriptive title
I've clearly described the feature request and motivation for it
Feature request
Hi, I love the LangChain library, but it seems to be lacking support for OpenAI's newest Assistant capabilities, which makes LangChain hard to use for Assistant-based apps.
For example, the OpenAI Threads API now supports streaming, but when I try to stream with the existing implementation, I just get the fully baked result back; it's not actually a stream of tokens, which is my mental model of a stream.
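To make the distinction concrete, here is a minimal sketch of the two behaviors. Everything in it is a hypothetical stand-in (`streamTokens`, `fullyBaked`, `countChunks` are not LangChain or OpenAI APIs): the first generator is what I'd expect token streaming to look like; the second is the current "one finished chunk" behavior.

```typescript
// Hypothetical illustration: incremental token stream vs. fully baked result.

async function* streamTokens(answer: string): AsyncGenerator<string> {
  // A true token stream yields pieces as the model produces them.
  for (const token of answer.split(" ")) {
    yield token + " ";
  }
}

async function* fullyBaked(answer: string): AsyncGenerator<string> {
  // The current behavior: a single chunk containing the complete result.
  yield answer;
}

async function countChunks(gen: AsyncGenerator<string>): Promise<number> {
  let n = 0;
  for await (const _chunk of gen) n++;
  return n;
}

async function demoStream(): Promise<[number, number]> {
  const answer = "streaming should arrive token by token";
  return [
    await countChunks(streamTokens(answer)), // many chunks
    await countChunks(fullyBaked(answer)), // exactly one chunk
  ];
}
```

A consumer rendering a chat UI cares about exactly this difference: many small chunks can be painted as they arrive, while one big chunk cannot.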
Also, if something fails (perhaps the schema is rejected), the OpenAI Assistant API complains that a run is active and won't accept any more requests until a developer manually cancels the run, or until the run expires after a time period the developer can't configure.
A better approach: if agentExecutor crashes or rejects, the OpenAIAssistantRunnable wrapper should auto-cancel the crashed run. Alternatively (worse, but still workable), agentExecutor.invoke could return the runId, allowing a developer to try/catch and cancel the run themselves. Without this level of error handling, a dev has to wait for a failure, then find the last run that's still active and manually cancel it. DX is much better in the scenario I'm suggesting.
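The auto-cancel idea above can be sketched as a small wrapper. This is not LangChain code: `RunsClient`, `StubRunsClient`, and `invokeWithAutoCancel` are all hypothetical names standing in for the real OpenAI runs client and agentExecutor.invoke, just to show the try/catch-then-cancel shape I'm proposing.

```typescript
// Hypothetical sketch of the proposed auto-cancel behavior.

interface RunsClient {
  cancel(threadId: string, runId: string): Promise<void>;
}

// Stub in place of the real OpenAI client, so the pattern is testable offline.
class StubRunsClient implements RunsClient {
  cancelled: string[] = [];
  async cancel(_threadId: string, runId: string): Promise<void> {
    this.cancelled.push(runId);
  }
}

// If the inner call throws, cancel the run so the thread is freed,
// then rethrow so the caller still sees the original error.
async function invokeWithAutoCancel<T>(
  client: RunsClient,
  threadId: string,
  runId: string,
  run: () => Promise<T>,
): Promise<T> {
  try {
    return await run();
  } catch (err) {
    await client.cancel(threadId, runId);
    throw err;
  }
}

async function demoAutoCancel(): Promise<string[]> {
  const client = new StubRunsClient();
  try {
    await invokeWithAutoCancel(client, "thread_1", "run_1", async () => {
      throw new Error("schema rejected");
    });
  } catch {
    // The error is rethrown only after the run has been cancelled.
  }
  return client.cancelled;
}
```

With this shape, a failed invoke never leaves a run pinning the thread, and subsequent requests aren't rejected with "a run is active".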
Finally, the OpenAI Assistant API supports "truncationStrategy" and a number of other parameters that are hidden or not settable by developers. In practice, setting a truncation strategy is super important, as are the other options that devs are supposed to have access to.
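What I'm asking for is essentially a pass-through. A minimal sketch, assuming nothing about LangChain's internals: `createRunOptions` is a hypothetical helper, and the option names follow the Assistants REST API's snake_case (`truncation_strategy`, `max_prompt_tokens`); the actual wrapper might camelCase them.

```typescript
// Hypothetical sketch: forward caller-supplied run options instead of hiding them.

interface RunOptions {
  truncation_strategy?: { type: "auto" | "last_messages"; last_messages?: number };
  max_prompt_tokens?: number;
  [key: string]: unknown;
}

// Merge arbitrary user options into the run payload.
function createRunOptions(
  assistantId: string,
  extra: RunOptions = {},
): Record<string, unknown> {
  return { assistant_id: assistantId, ...extra };
}

const payload = createRunOptions("asst_123", {
  truncation_strategy: { type: "last_messages", last_messages: 10 },
});
```

The point is that any parameter the underlying run endpoint accepts should be reachable from the wrapper without a library change.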
I'd love it if LangChain could fully support the three features above, as OpenAI Assistant + LangChain seems like an indispensable combination, if done right.
Review:
1. Make streaming actually stream tokens, now that the OpenAI Assistant API supports it.
2. Better error handling for crashed runs (either pass back the runId or auto-cancel crashed runs).
3. Allow a dev to pass any parameter directly to the run, such as a truncation strategy.
Motivation
I'm using a combo of OpenAI + LangChain, and I'm at a fork in the road because I need streaming and better error handling. Do I stop using agentExecutor and switch to the OpenAI Assistant V2 API directly, or can I fully rely on LangChain for these basics?
I'd like to rely on LangChain, so I wrote this ticket... and I'm sure I'm not the only one wishing for these features to be supported, as I think OpenAI will double down on its Assistant API.
Proposal (If applicable)
See above