How to ask user for approval/confirmation in the middle of a workflow for LLM agent #21524
-
To address the issue of your LLM Agent not pausing for user approval during multi-action workflows, you can integrate a `HumanApprovalCallbackHandler` into the tools the agent calls:
```python
from langchain.callbacks import HumanApprovalCallbackHandler
from langchain_community.tools import ShellTool

def _should_check(serialized_obj: dict) -> bool:
    # Example: require approval only for specific actions,
    # here the shell tool (named "terminal")
    return serialized_obj.get("name") == "terminal"

def _approve(_input: str) -> bool:
    # Implement your approval logic here;
    # for simplicity, ask for confirmation on the console
    resp = input("Do you approve? (Y/N): ")
    return resp.lower() in ("y", "yes")

# Configure ShellTool with a HumanApprovalCallbackHandler
callbacks = [HumanApprovalCallbackHandler(should_check=_should_check, approve=_approve)]
tool = ShellTool(callbacks=callbacks)

agent.run("Your command here", callbacks=callbacks)
```

This approach ensures that your agent genuinely pauses and seeks user confirmation before proceeding with potentially impactful actions. It is a flexible solution that can be adapted to various scenarios and workflows, enhancing the safety and reliability of your automated processes. For more detailed guidance, refer to the LangChain cookbook, which provides examples and best practices for implementing human approval mechanisms in LangChain applications.
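To see why this pauses the agent, here is a dependency-free sketch of the gate pattern the handler implements: before a selected tool runs, a blocking approval callback is consulted, and a rejection raises an exception that aborts the action. The names `ApprovalGate` and `run_tool` are illustrative, not LangChain APIs (though LangChain's handler does raise a `HumanRejectedException`):

```python
from typing import Callable

class HumanRejectedException(Exception):
    """Raised when the human reviewer rejects a pending tool call."""

class ApprovalGate:
    # Illustrative stand-in for HumanApprovalCallbackHandler's logic.
    def __init__(self,
                 should_check: Callable[[dict], bool],
                 approve: Callable[[str], bool]):
        self.should_check = should_check
        self.approve = approve

    def run_tool(self, tool: dict, tool_input: str) -> str:
        # Gate only the tools that should_check selects; because
        # approve() blocks (e.g. on input()), the agent cannot proceed
        # until a human responds.
        if self.should_check(tool) and not self.approve(tool_input):
            raise HumanRejectedException(
                f"Input {tool_input!r} to tool {tool['name']} was rejected."
            )
        return f"executed {tool['name']} with {tool_input!r}"
```

The key point for the original question: the pause is enforced in code, outside the LLM, so the model cannot "fake" the user's answer the way it can when approval is only requested in the prompt.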
-
Hi @yangyang919, You can have a look at
-
Checked other resources
Commit to Help
Example Code
Description
I have used LangChain to build an LLM Agent. In most cases, especially single-action ones, it works quite well, but multi-action cases do not.
For example, in one case the user says: "Please deploy latest release to staging environment".
The agent receives this message and, after thinking, needs to break it into multiple tasks and do the following:
Now the problem is that when the LLM Agent starts to execute, it does not actually stop and ask for user approval. It fakes the user's response and directly finishes the deployment.
Does anyone have better ideas on how to handle this, or how to improve the prompt for such a case?
System Info
langchain==0.1.11
langchain-community==0.0.25
langchain-core==0.1.29
langchain-experimental==0.0.53
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langchainhub==0.1.15