[Docs] Managing Agent Steps (#166)
hinthornw committed May 18, 2024
1 parent 012ce04 commit c0e24d7
Showing 6 changed files with 675 additions and 42 deletions.
5 changes: 1 addition & 4 deletions docs/docs/how-tos/index.md
@@ -2,10 +2,6 @@

Welcome to the LangGraphJS How-to Guides! These guides provide practical, step-by-step instructions for accomplishing key tasks in LangGraphJS.

## In progress

🚧 This section is currently in progress. More updates to come! 🚧

## Core

The core guides show how to address common needs when building out AI workflows, with special focus placed on [ReAct](https://arxiv.org/abs/2210.03629)-style agents with [tool calling](https://js.langchain.com/v0.2/docs/how_to/tool_calling/).
@@ -28,3 +24,4 @@ The following examples are useful especially if you are used to LangChain's Agen
- [Force calling a tool first](force-calling-a-tool-first.ipynb): Define a fixed workflow before ceding control to the ReAct agent
- [Dynamic direct return](dynamically-returning-directly.ipynb): Let the LLM decide whether the graph should finish after a tool is run or whether the LLM should be able to review the output and keep going
- [Respond in structured format](respond-in-format.ipynb): Let the LLM use tools or populate a schema to respond to the user. Useful if your agent should generate structured content
- [Managing agent steps](managing-agent-steps.ipynb): How to format the intermediate steps of your workflow for the agent
1 change: 1 addition & 0 deletions docs/mkdocs.yml
@@ -96,6 +96,7 @@ nav:
- "how-tos/force-calling-a-tool-first.ipynb"
- "how-tos/dynamically-returning-directly.ipynb"
- "how-tos/respond-in-format.ipynb"
- "how-tos/managing-agent-steps.ipynb"
- "Conceptual Guides":
- "concepts/index.md"
- "Reference":
56 changes: 33 additions & 23 deletions examples/how-tos/dynamically-returning-directly.ipynb
@@ -27,7 +27,7 @@
"\n",
"Next, we need to set API keys for OpenAI (the LLM we will use). Optionally, we\n",
"can set API key for [LangSmith tracing](https://smith.langchain.com/), which\n",
"will give us best-in-class observability."
"will give us best-in-class observability.\n"
]
},
{
@@ -60,12 +60,12 @@
"that.\n",
"\n",
"To add a 'return_direct' option, we will create a custom zod schema to use\n",
"**instead of** the schema that would be automatically inferred by the tool."
"**instead of** the schema that would be automatically inferred by the tool.\n"
]
},
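The 'return_direct' idea from the cell above can be sketched without any dependencies. This is an illustrative stand-in for the notebook's zod schema, not its exact code: the tool's input shape gains an optional `return_direct` boolean the model can set, and a small parser validates it. The names `SearchInput` and `parseSearchInput` are assumptions for this sketch.

```typescript
// Illustrative sketch of a tool-input shape extended with `return_direct`.
// The notebook uses a zod schema; this hand-rolled validator models the same idea.
interface SearchInput {
  query: string;
  // If true, the graph should return the tool's output directly
  // instead of passing it back to the LLM for another round.
  return_direct?: boolean;
}

function parseSearchInput(raw: string): SearchInput {
  const parsed = JSON.parse(raw);
  if (typeof parsed.query !== "string") {
    throw new Error("`query` must be a string");
  }
  return { query: parsed.query, return_direct: parsed.return_direct === true };
}
```

Because `return_direct` is part of the tool's schema, the LLM itself decides whether to set it when it emits a tool call.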
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 2,
"id": "481c95ac",
"metadata": {},
"outputs": [],
@@ -111,12 +111,12 @@
"We can now wrap these tools in a simple ToolExecutor.\\\n",
"This is a simple class that takes in a ToolInvocation and calls that tool,\n",
"returning the output. A ToolInvocation is any type with `tool` and `toolInput`\n",
"attributes."
"attributes.\n"
]
},
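The ToolExecutor contract described above can be sketched minimally. This is a hedged approximation of the prebuilt class, not the library's implementation; `SimpleToolExecutor` and `ToolFn` are names invented for this sketch:

```typescript
// Minimal sketch of the ToolInvocation/ToolExecutor idea: an invocation
// names a tool and carries its input; the executor looks the tool up and runs it.
interface ToolInvocation {
  tool: string;
  toolInput: string;
}

type ToolFn = (input: string) => Promise<string>;

class SimpleToolExecutor {
  constructor(private tools: Record<string, ToolFn>) {}

  async invoke(call: ToolInvocation): Promise<string> {
    const fn = this.tools[call.tool];
    if (!fn) throw new Error(`Tool not found: ${call.tool}`);
    return fn(call.toolInput);
  }
}
```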
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 3,
"id": "250415e4",
"metadata": {},
"outputs": [],
@@ -142,12 +142,12 @@
" [tool calling](https://js.langchain.com/v0.2/docs/concepts/#functiontool-calling).\n",
"\n",
"Note: these model requirements are not requirements for using LangGraph - they\n",
"are just requirements for this one example."
"are just requirements for this one example.\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 4,
"id": "2c24d018",
"metadata": {},
"outputs": [],
@@ -181,12 +181,12 @@
"\n",
"For this example, the state we will track will just be a list of messages. We\n",
"want each node to just add messages to that list. Therefore, we will define the\n",
"state as follows:"
"state as follows:\n"
]
},
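The state convention described above — each node returns a partial update whose messages are appended to the running list — can be sketched as a plain reducer (the real graph wires this up through a state channel; the names here are illustrative):

```typescript
// Sketch of a message-list state channel: nodes return { messages: [...] }
// and the reducer concatenates the update onto the existing list.
interface GraphState<M> {
  messages: M[];
}

function addMessages<M>(
  state: GraphState<M>,
  update: GraphState<M>,
): GraphState<M> {
  return { messages: state.messages.concat(update.messages) };
}
```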
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 5,
"id": "24454123",
"metadata": {},
"outputs": [],
@@ -233,12 +233,12 @@
" agent to decide what to do next\n",
"\n",
"Let's define the nodes, as well as a function to decide which conditional\n",
"edge to take."
"edge to take.\n"
]
},
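The routing decision described above can be sketched as a pure function over the message list. This is an assumption-laden simplification of the notebook's conditional edge, with plain object types standing in for LangChain message classes and `"final"` modeling the direct-return branch:

```typescript
// Sketch of the conditional-edge logic: if the last AI message requested no
// tools, end; if any requested tool call set return_direct, route to a node
// that returns the tool output directly; otherwise run the normal tools node.
interface ToolCall {
  name: string;
  args: { return_direct?: boolean };
}

interface Msg {
  role: "human" | "ai";
  tool_calls?: ToolCall[];
}

function routeMessage(messages: Msg[]): "tools" | "final" | "__end__" {
  const last = messages[messages.length - 1];
  if (last.role !== "ai" || !last.tool_calls?.length) return "__end__";
  if (last.tool_calls.some((c) => c.args.return_direct)) return "final";
  return "tools";
}
```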
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 6,
"id": "23a8b9c6",
"metadata": {},
"outputs": [],
@@ -270,7 +270,7 @@
" const response = await boundModel.invoke(messages, config);\n",
" // We return an object, because this will get added to the existing list\n",
" return { messages: [response] };\n",
"};"
"};\n"
]
},
{
@@ -285,7 +285,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 7,
"id": "05203811",
"metadata": {},
"outputs": [
@@ -388,7 +388,7 @@
"}"
]
},
"execution_count": 17,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -436,12 +436,12 @@
"\n",
"We can now use it! This now exposes the\n",
"[same interface](https://js.langchain.com/docs/expression_language/) as all\n",
"other LangChain runnables."
"other LangChain runnables.\n"
]
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 8,
"id": "de5f4864",
"metadata": {},
"outputs": [
@@ -451,7 +451,13 @@
"text": [
"[human]: what is the weather in sf\n",
"-----\n",
"\n",
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[ai]: [object Object] \n",
"Tools: \n",
"- search({\"query\":\"weather in sf\",\"return_direct\":true})\n",
@@ -464,13 +470,17 @@
}
],
"source": [
"import { AIMessage, BaseMessage, HumanMessage } from \"@langchain/core/messages\";\n",
"import {\n",
" AIMessage,\n",
" BaseMessage,\n",
" HumanMessage,\n",
" isAIMessage,\n",
"} from \"@langchain/core/messages\";\n",
"\n",
"const prettyPrint = (message: BaseMessage) => {\n",
" let txt = `[${message._getType()}]: ${message.content}`;\n",
" if (\n",
" (message._getType() === \"ai\" &&\n",
" (message as AIMessage)?.tool_calls?.length) ||\n",
" (isAIMessage(message) && (message as AIMessage)?.tool_calls?.length) ||\n",
" 0 > 0\n",
" ) {\n",
" const tool_calls = (message as AIMessage)?.tool_calls\n",
@@ -486,12 +496,12 @@
" const lastMessage = output.messages[output.messages.length - 1];\n",
" prettyPrint(lastMessage);\n",
" console.log(\"-----\\n\");\n",
"}\n"
"}"
]
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 9,
"id": "986f8cfe",
"metadata": {},
"outputs": [
@@ -535,7 +545,7 @@
"id": "51fa73e6",
"metadata": {},
"source": [
"Done! The graph **stopped** after running the `tools` node!"
"Done! The graph **stopped** after running the `tools` node!\n"
]
}
],
9 changes: 7 additions & 2 deletions examples/how-tos/human-in-the-loop.ipynb
@@ -392,12 +392,17 @@
}
],
"source": [
"import { AIMessage, BaseMessage, HumanMessage } from \"@langchain/core/messages\";\n",
"import {\n",
" AIMessage,\n",
" BaseMessage,\n",
" HumanMessage,\n",
" isAIMessage,\n",
"} from \"@langchain/core/messages\";\n",
"\n",
"const prettyPrint = (message: BaseMessage) => {\n",
" let txt = `[${message._getType()}]: ${message.content}`;\n",
" if (\n",
" message._getType() === \"ai\" && (message as AIMessage)?.tool_calls?.length ||\n",
" isAIMessage(message) && (message as AIMessage)?.tool_calls?.length ||\n",
" 0 > 0\n",
" ) {\n",
" const tool_calls = (message as AIMessage)?.tool_calls?.map(\n",