diff --git a/docs/docs/how-tos/index.md b/docs/docs/how-tos/index.md index bbc07900..7550896f 100644 --- a/docs/docs/how-tos/index.md +++ b/docs/docs/how-tos/index.md @@ -23,3 +23,7 @@ How to apply common design patterns in your workflows: - [Subgraphs](subgraph.ipynb): How to compose subgraphs within a larger graph - [Branching](branching.ipynb): How to create branching logic in your graphs for parallel node execution - [Human-in-the-loop](human-in-the-loop.ipynb): How to incorporate human feedback and intervention + +The following examples are especially useful if you are used to LangChain's AgentExecutor configurations. + +- [Force calling a tool first](force-calling-a-tool-first.ipynb): Define a fixed workflow before ceding control to the ReAct agent \ No newline at end of file diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index b3b093e6..e92c0325 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -93,6 +93,7 @@ nav: - "how-tos/branching.ipynb" - "how-tos/subgraph.ipynb" - "how-tos/human-in-the-loop.ipynb" + - "how-tos/force-calling-a-tool-first.ipynb" - "Conceptual Guides": - "concepts/index.md" - "Reference": diff --git a/examples/agent_executor/base.ipynb b/examples/agent_executor/base.ipynb index d44671a1..603dc618 100644 --- a/examples/agent_executor/base.ipynb +++ b/examples/agent_executor/base.ipynb @@ -7,7 +7,8 @@ "source": [ "# Agent Executor From Scratch\n", "\n", - "In this notebook we will go over how to build a basic agent executor from scratch.\n", + "In this notebook we will go over how to build a basic agent executor from\n", + "scratch.\n", "\n", "![diagram](./img/agent-executor-diagram.png)" ] }, { @@ -18,6 +19,7 @@ "metadata": {}, "source": [ "## Setup¶\n", + "\n", "First we need to install the packages required\n", "\n", "```bash\n", @@ -30,7 +32,8 @@ "id": "5f4179ce-48fa-4aaf-a5a1-027b5229be1a", "metadata": {}, "source": [ - "Next, we need to set API keys for OpenAI (the LLM we will use) and Tavily (the search tool we will use)\n", + "Next, we need to set API keys for OpenAI (the LLM we will use) and Tavily (the\n", + "search tool we will use)\n", "\n", "```bash\n", "export OPENAI_API_KEY=\n", @@ -43,7 +46,9 @@ "id": "37943b1c-2b0a-4c09-bfbd-5dc24b839e3c", "metadata": {}, "source": [ - "Optionally, we can set API key for [LangSmith tracing](https://smith.langchain.com/), which will give us best-in-class observability.\n", + "Optionally, we can set an API key for\n", + "[LangSmith tracing](https://smith.langchain.com/), which will give us\n", + "best-in-class observability.\n", "\n", "```bash\n", "export LANGCHAIN_TRACING_V2=true\n", @@ -58,7 +63,8 @@ "source": [ "## Create the LangChain agent\n", "\n", - "First, we will create the LangChain agent. For more information on LangChain agents, see [this documentation](https://js.langchain.com/docs/modules/agents/)." + "First, we will create the LangChain agent. For more information on LangChain\n", + "agents, see [this documentation](https://js.langchain.com/docs/modules/agents/)."
] }, { @@ -89,20 +95,20 @@ "\n", "// Get the prompt to use - you can modify this!\n", "const prompt = await pull(\n", - " \"hwchase17/openai-functions-agent\"\n", + " \"hwchase17/openai-functions-agent\",\n", ");\n", "\n", "// Choose the LLM that will drive the agent\n", "const llm = new ChatOpenAI({\n", " modelName: \"gpt-4-1106-preview\",\n", - " temperature: 0\n", + " temperature: 0,\n", "});\n", "\n", "// Construct the OpenAI Functions agent\n", "const agentRunnable = await createOpenAIFunctionsAgent({\n", " llm,\n", " tools,\n", - " prompt\n", + " prompt,\n", "});" ] }, @@ -113,11 +119,16 @@ "source": [ "## Define the graph schema\n", "\n", - "We now define the graph state. The state for the traditional LangChain agent has a few attributes:\n", + "We now define the graph state. The state for the traditional LangChain agent has\n", + "a few attributes:\n", "\n", - "1. `input`: This is the input string representing the main ask from the user, passed in as input.\n", - "3. `steps`: This is list of actions and corresponding observations that the agent takes over time. This is updated each iteration of the agent.\n", - "4. `agentOutcome`: This is the response from the agent, either an AgentAction or AgentFinish. The AgentExecutor should finish when this is an AgentFinish, otherwise it should call the requested tools.\n" + "1. `input`: This is the input string representing the main ask from the user,\n", + " passed in as input.\n", + "2. `steps`: This is a list of actions and corresponding observations that the\n", + " agent takes over time. This is updated each iteration of the agent.\n", + "3. `agentOutcome`: This is the response from the agent, either an AgentAction or\n", + " AgentFinish. The AgentExecutor should finish when this is an AgentFinish,\n", + " otherwise it should call the requested tools." ] }, { @@ -129,16 +140,16 @@ "source": [ "const agentState = {\n", " input: {\n", - " value: null\n", + " value: null,\n", " },\n", " steps: {\n", " value: (x, y) => x.concat(y),\n", - " default: () => []\n", + " default: () => [],\n", " },\n", " agentOutcome: {\n", - " value: null\n", - " }\n", - "};\n" + " value: null,\n", + " },\n", + "};" ] }, { @@ -148,24 +159,28 @@ "source": [ "## Define the nodes\n", "\n", - "We now need to define a few different nodes in our graph.\n", - "In `langgraph`, a node can be either a function or a [runnable](https://js.langchain.com/docs/expression_language/).\n", - "There are two main nodes we need for this:\n", + "We now need to define a few different nodes in our graph. In `langgraph`, a node\n", + "can be either a function or a\n", + "[runnable](https://js.langchain.com/docs/expression_language/). There are two\n", + "main nodes we need for this:\n", "\n", "1. The agent: responsible for deciding what (if any) actions to take.\n", - "2. A function to invoke tools: if the agent decides to take an action, this node will then execute that action.\n", + "2. A function to invoke tools: if the agent decides to take an action, this node\n", + " will then execute that action.\n", "\n", - "We will also need to define some edges.\n", - "Some of these edges may be conditional.\n", - "The reason they are conditional is that based on the output of a node, one of several paths may be taken.\n", - "The path that is taken is not known until that node is run (the LLM decides).\n", + "We will also need to define some edges. 
Some of these edges may be conditional.\n", + "The reason they are conditional is that based on the output of a node, one of\n", + "several paths may be taken. The path that is taken is not known until that node\n", + "is run (the LLM decides).\n", "\n", - "1. Conditional Edge: after the agent is called, we should either:\n", - " a. If the agent said to take an action, then the function to invoke tools should be called\n", - " b. If the agent said that it was finished, then it should finish\n", - "2. Normal Edge: after the tools are invoked, it should always go back to the agent to decide what to do next\n", + "1. Conditional Edge: after the agent is called, we should either:\n", + "   a. If the agent said to take an action, then the function to invoke tools\n", + "   should be called\n", + "   b. If the agent said that it was finished, then it should finish\n", + "2. Normal Edge: after the tools are invoked, it should always go back to the\n", + " agent to decide what to do next\n", "\n", - "Let's define the nodes, as well as a function to decide how what conditional edge to take." + "Let's define the nodes, as well as a function to decide what conditional\n", + "edge to take." ] }, { @@ -216,7 +231,7 @@ " }\n", " const output = await toolExecutor.invoke(agentAction, config);\n", " return {\n", - " steps: [{ action: agentAction, observation: JSON.stringify(output) }]\n", + " steps: [{ action: agentAction, observation: JSON.stringify(output) }],\n", " };\n", "};" ] }, @@ -243,7 +258,7 @@ "\n", "// Define a new graph\n", "const workflow = new StateGraph({\n", - " channels: agentState\n", + " channels: agentState,\n", "});\n", "\n", "// Define the two nodes we will cycle between\n", @@ -271,8 +286,8 @@ " // If `tools`, then we call the tool node.\n", " continue: \"action\",\n", " // Otherwise we finish.\n", - " end: END\n", - " }\n", + " end: END,\n", + " },\n", ");\n", "\n", "// We now add a normal edge from `tools` to `agent`.\n", @@ -370,10 +385,10 @@ } ], "source": [ - "const inputs = { input: \"what is the weather in sf\" }\n", + "const inputs = { input: \"what is the weather in sf\" };\n", "for await (const s of await app.stream(inputs)) {\n", - " console.log(s)\n", - " console.log(\"----\\n\")\n", + " console.log(s);\n", + " console.log(\"----\\n\");\n", "}" ] } diff --git a/examples/chat_agent_executor_with_function_calling/base.ipynb b/examples/chat_agent_executor_with_function_calling/base.ipynb index 9abd6cc5..7f05d747 100644 --- a/examples/chat_agent_executor_with_function_calling/base.ipynb +++ b/examples/chat_agent_executor_with_function_calling/base.ipynb @@ -7,7 +7,8 @@ "source": [ "# Chat Agent Executor\n", "\n", - "In this example we will build a chat executor that uses function calling from scratch." + "In this example we will build a chat executor that uses function calling from\n", + "scratch."
] }, { @@ -16,6 +17,7 @@ "metadata": {}, "source": [ "## Setup¶\n", + "\n", "First we need to install the packages required\n", "\n", "```bash\n", @@ -28,7 +30,8 @@ "id": "0abe11f4-62ed-4dc4-8875-3db21e260d1d", "metadata": {}, "source": [ - "Next, we need to set API keys for OpenAI (the LLM we will use) and Tavily (the search tool we will use)\n", + "Next, we need to set API keys for OpenAI (the LLM we will use) and Tavily (the\n", + "search tool we will use)\n", "\n", "```bash\n", "export OPENAI_API_KEY=\n", @@ -41,7 +44,9 @@ "id": "f0ed46a8-effe-4596-b0e1-a6a29ee16f5c", "metadata": {}, "source": [ - "Optionally, we can set API key for [LangSmith tracing](https://smith.langchain.com/), which will give us best-in-class observability.\n", + "Optionally, we can set an API key for\n", + "[LangSmith tracing](https://smith.langchain.com/), which will give us\n", + "best-in-class observability.\n", "\n", "```bash\n", "export LANGCHAIN_TRACING_V2=true\n", @@ -56,9 +61,11 @@ "source": [ "## Set up the tools\n", "\n", - "We will first define the tools we want to use.\n", - "For this simple example, we will use a built-in search tool via Tavily.\n", - "However, it is really easy to create your own tools - see documentation [here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to do that.\n" + "We will first define the tools we want to use. For this simple example, we will\n", + "use a built-in search tool via Tavily. However, it is really easy to create your\n", + "own tools - see documentation\n", + "[here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to do\n", + "that." ] }, { @@ -99,8 +106,8 @@ "id": "01885785-b71a-44d1-b1d6-7b5b14d53b58", "metadata": {}, "source": [ - "We can now wrap these tools in a simple ToolExecutor.\n", - "This is a real simple class that takes in a ToolInvocation and calls that tool, returning the output.\n", + "We can now wrap these tools in a simple ToolExecutor. This is a really simple\n", + "class that takes in a ToolInvocation and calls that tool, returning the output.\n", "\n", "A ToolInvocation is any type with `tool` and `toolInput` attribute." ] }, { @@ -115,7 +122,7 @@ "source": [ "import { ToolExecutor } from \"@langchain/langgraph/prebuilt\";\n", "\n", "const toolExecutor = new ToolExecutor({\n", - " tools\n", + " tools,\n", "});" ] }, { @@ -126,13 +133,16 @@ "source": [ "## Set up the model\n", "\n", - "Now we need to load the chat model we want to use.\n", - "Importantly, this should satisfy two criteria:\n", + "Now we need to load the chat model we want to use. Importantly, this should\n", + "satisfy two criteria:\n", "\n", - "1. It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.\n", - "2. It should work with OpenAI function calling. This means it should either be an OpenAI model or a model that exposes a similar interface.\n", + "1. It should work with messages. We will represent all agent state in the form\n", + " of messages, so it needs to be able to work well with them.\n", + "2. It should work with OpenAI function calling. This means it should either be\n", + " an OpenAI model or a model that exposes a similar interface.\n", "\n", - "Note: these model requirements are not requirements for using LangGraph - they are just requirements for this one example.\n" + "Note: these model requirements are not requirements for using LangGraph - they\n", + "are just requirements for this one example."
] }, { @@ -148,7 +158,7 @@ "// See the streaming section for more information on this.\n", "const model = new ChatOpenAI({\n", " temperature: 0,\n", - " streaming: true\n", + " streaming: true,\n", "});" ] }, { @@ -157,9 +167,9 @@ "id": "a77995c0-bae2-4cee-a036-8688a90f05b9", "metadata": {}, "source": [ - "\n", - "After we've done this, we should make sure the model knows that it has these tools available to call.\n", - "We can do this by converting the LangChain tools into the format for OpenAI function calling, and then bind them to the model class.\n" + "After we've done this, we should make sure the model knows that it has these\n", + "tools available to call. We can do this by converting the LangChain tools into\n", + "the format for OpenAI function calling, and then binding them to the model class." ] }, { @@ -186,17 +196,20 @@ "source": [ "### Define the agent state\n", "\n", - "The main type of graph in `langgraph` is the `StatefulGraph`.\n", - "This graph is parameterized by a state object that it passes around to each node.\n", - "Each node then returns operations to update that state.\n", - "These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute.\n", - "Whether to set or add is denoted by annotating the state object you construct the graph with.\n", + "The main type of graph in `langgraph` is the `StateGraph`. This graph is\n", + "parameterized by a state object that it passes around to each node. Each node\n", + "then returns operations to update that state. These operations can either SET\n", + "specific attributes on the state (e.g. overwrite the existing values) or ADD to\n", + "the existing attribute. Whether to set or add is denoted by annotating the state\n", + "object you construct the graph with.\n", "\n", - "For this example, the state we will track will just be a list of messages.\n", - "We want each node to just add messages to that list.\n", - "Therefore, we will use an object with one key (`messages`) with the value as an object: `{ value: Function, default?: () => any }`\n", + "For this example, the state we will track will just be a list of messages. We\n", + "want each node to just add messages to that list. Therefore, we will use an\n", + "object with one key (`messages`) with the value as an object:\n", + "`{ value: Function, default?: () => any }`\n", "\n", - "The `default` key must be a factory that returns the default value for that attribute." + "The `default` key must be a factory that returns the default value for that\n", + "attribute." ] }, { @@ -212,8 +225,8 @@ " messages: {\n", " value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),\n", " default: () => [],\n", - " }\n", - "}" + " },\n", + "};" ] }, { @@ -223,24 +236,28 @@ "source": [ "## Define the nodes\n", "\n", - "We now need to define a few different nodes in our graph.\n", - "In `langgraph`, a node can be either a function or a [runnable](https://js.langchain.com/docs/expression_language/).\n", - "There are two main nodes we need for this:\n", + "We now need to define a few different nodes in our graph. In `langgraph`, a node\n", + "can be either a function or a\n", + "[runnable](https://js.langchain.com/docs/expression_language/). There are two\n", + "main nodes we need for this:\n", "\n", "1. The agent: responsible for deciding what (if any) actions to take.\n", - "2. 
A function to invoke tools: if the agent decides to take an action, this node will then execute that action.\n", "\n", - "We will also need to define some edges.\n", - "Some of these edges may be conditional.\n", - "The reason they are conditional is that based on the output of a node, one of several paths may be taken.\n", - "The path that is taken is not known until that node is run (the LLM decides).\n", - "\n", - "1. Conditional Edge: after the agent is called, we should either:\n", - " a. If the agent said to take an action, then the function to invoke tools should be called\n", - " b. If the agent said that it was finished, then it should finish\n", - "2. Normal Edge: after the tools are invoked, it should always go back to the agent to decide what to do next\n", - "\n", - "Let's define the nodes, as well as a function to decide how what conditional edge to take.\n" + "2. A function to invoke tools: if the agent decides to take an action, this node\n", + " will then execute that action.\n", + "\n", + "We will also need to define some edges. Some of these edges may be conditional.\n", + "The reason they are conditional is that based on the output of a node, one of\n", + "several paths may be taken. The path that is taken is not known until that node\n", + "is run (the LLM decides).\n", + "\n", + "1. Conditional Edge: after the agent is called, we should either:\n", + "   a. If the agent said to take an action, then the function to invoke tools\n", + "   should be called\n", + "   b. If the agent said that it was finished, then it should finish\n", + "2. Normal Edge: after the tools are invoked, it should always go back to the\n", + " agent to decide what to do next\n", + "\n", + "Let's define the nodes, as well as a function to decide what conditional\n", + "edge to take." ] }, { @@ -285,7 +302,7 @@ " return {\n", " tool: lastMessage.additional_kwargs.function_call.name,\n", " toolInput: JSON.stringify(\n", - " lastMessage.additional_kwargs.function_call.arguments\n", + " lastMessage.additional_kwargs.function_call.arguments,\n", " ),\n", " log: \"\",\n", " };\n", "};\n", "\n", "// Define the function that calls the model\n", "const callModel = async (\n", - " state: { messages: Array }\n", + " state: { messages: Array },\n", ") => {\n", " const { messages } = state;\n", " const response = await newModel.invoke(messages);\n", @@ -304,7 +321,7 @@ "};\n", "\n", "const callTool = async (\n", - " state: { messages: Array }\n", + " state: { messages: Array },\n", ") => {\n", " const action = _getAction(state);\n", " // We call the tool_executor and get back a response\n", @@ -336,7 +353,7 @@ "metadata": {}, "outputs": [], "source": [ - "import { StateGraph, END } from \"@langchain/langgraph\";\n", + "import { END, StateGraph } from \"@langchain/langgraph\";\n", "\n", "// Define a new graph\n", "const workflow = new StateGraph({\n", @@ -353,23 +370,23 @@ "\n", "// We now add a conditional edge\n", "workflow.addConditionalEdges(\n", - "// First, we define the start node. 
We use `agent`.\n", - "// This means these are the edges taken after the `agent` node is called.\n", - "\"agent\",\n", - "// Next, we pass in the function that will determine which node is called next.\n", - "shouldContinue,\n", - "// Finally we pass in a mapping.\n", - "// The keys are strings, and the values are other nodes.\n", - "// END is a special node marking that the graph should finish.\n", - "// What will happen is we will call `should_continue`, and then the output of that\n", - "// will be matched against the keys in this mapping.\n", - "// Based on which one it matches, that node will then be called.\n", - "{\n", - " // If `tools`, then we call the tool node.\n", - " continue: \"action\",\n", - " // Otherwise we finish.\n", - " end: END\n", - "}\n", + " // First, we define the start node. We use `agent`.\n", + " // This means these are the edges taken after the `agent` node is called.\n", + " \"agent\",\n", + " // Next, we pass in the function that will determine which node is called next.\n", + " shouldContinue,\n", + " // Finally we pass in a mapping.\n", + " // The keys are strings, and the values are other nodes.\n", + " // END is a special node marking that the graph should finish.\n", + " // What will happen is we will call `should_continue`, and then the output of that\n", + " // will be matched against the keys in this mapping.\n", + " // Based on which one it matches, that node will then be called.\n", + " {\n", + " // If `tools`, then we call the tool node.\n", + " continue: \"action\",\n", + " // Otherwise we finish.\n", + " end: END,\n", + " },\n", ");\n", "\n", "// We now add a normal edge from `tools` to `agent`.\n", @@ -389,8 +406,9 @@ "source": [ "## Use it!\n", "\n", - "We can now use it!\n", - "This now exposes the [same interface](https://python.langchain.com/docs/expression_language/) as all other LangChain runnables." + "We can now use it! This now exposes the\n", + "[same interface](https://python.langchain.com/docs/expression_language/) as all\n", + "other LangChain runnables." ] }, { @@ -461,8 +479,8 @@ "import { HumanMessage } from \"@langchain/core/messages\";\n", "\n", "const inputs = {\n", - " messages: [new HumanMessage(\"what is the weather in sf\")]\n", - "}\n", + " messages: [new HumanMessage(\"what is the weather in sf\")],\n", + "};\n", "await app.invoke(inputs);" ] }, @@ -471,8 +489,9 @@ "id": "5a9e8155-70c5-4973-912c-dc55104b2acf", "metadata": {}, "source": [ - "This may take a little bit - it's making a few calls behind the scenes.\n", - "In order to start seeing some intermediate results as they happen, we can use streaming. See below for more information on that.\n", + "This may take a little bit - it's making a few calls behind the scenes. In order\n", + "to start seeing some intermediate results as they happen, we can use streaming.\n", + "See below for more information on that.\n", "\n", "## Streaming\n", "\n", @@ -480,7 +499,8 @@ "\n", "### Streaming Node Output\n", "\n", - "One of the benefits of using LangGraph is that it is easy to stream output as it's produced by each node.\n" + "One of the benefits of using LangGraph is that it is easy to stream output as\n", + "it's produced by each node." 
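+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As a rough, hedged sketch (not part of the original notebook - it assumes the\n", + "`streamEvents` v1 API on the compiled `app`, plus the `inputs` defined above),\n", + "you could go one level deeper and stream individual LLM tokens as they are\n", + "generated:\n", + "\n", + "```typescript\n", + "for await (const event of app.streamEvents(inputs, { version: \"v1\" })) {\n", + "  // `on_llm_stream` fires once per chunk emitted by the chat model.\n", + "  if (event.event === \"on_llm_stream\") {\n", + "    const chunk = event.data?.chunk;\n", + "    // Depending on the model wrapper, the chunk may be a message chunk or a\n", + "    // generation chunk, so handle both shapes defensively.\n", + "    const text = chunk?.content ?? chunk?.message?.content;\n", + "    if (text) console.log(text);\n", + "  }\n", + "}\n", + "```"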
] }, { @@ -600,12 +620,12 @@ ], "source": [ "const inputs = {\n", - " messages: [new HumanMessage(\"what is the weather in sf\")]\n", - " };\n", - " for await (const output of await app.stream(inputs)) {\n", - " console.log(\"output\", output);\n", - " console.log(\"-----\\n\");\n", - " }" + " messages: [new HumanMessage(\"what is the weather in sf\")],\n", + "};\n", + "for await (const output of await app.stream(inputs)) {\n", + " console.log(\"output\", output);\n", + " console.log(\"-----\\n\");\n", + "}" ] } ], diff --git a/examples/chatbots/customer_support_mistral.ipynb b/examples/chatbots/customer_support_mistral.ipynb index 3533ad5d..6d2af097 100644 --- a/examples/chatbots/customer_support_mistral.ipynb +++ b/examples/chatbots/customer_support_mistral.ipynb @@ -6,18 +6,29 @@ "source": [ "# Customer support chatbot\n", "\n", - "Below is an example of a customer support chatbot modeled as a state machine. It uses the simpler `MessageGraph` version of LangGraph, and is designed to work with smaller models by reducing the decision space a given LLM call has.\n", + "Below is an example of a customer support chatbot modeled as a state machine. It\n", + "uses the simpler `MessageGraph` version of LangGraph, and is designed to work\n", + "with smaller models by reducing the decision space a given LLM call has.\n", "\n", - "The entrypoint is a node containing a chain that we have prompted to answer basic questions, but delegate questions related to billing or technical support to other \"teams\".\n", + "The entrypoint is a node containing a chain that we have prompted to answer\n", + "basic questions, but delegate questions related to billing or technical support\n", + "to other \"teams\".\n", "\n", - "Depending on this entry node's response, the edge from that node will use an LLM call to determine whether to respond directly to the user or invoke either the `billing_support` or `technical_support` nodes.\n", + "Depending on this entry node's response, the edge from that node will use an LLM\n", + "call to determine whether to respond directly to the user or invoke either the\n", + "`billing_support` or `technical_support` nodes.\n", "\n", - "- The technical support will attempt to answer the user's question with a more focused prompt.\n", - "- The billing agent can choose to answer the user's question, or can authorize a refund (currently just returns directly to the user with an acknowledgement).\n", + "- The technical support will attempt to answer the user's question with a more\n", + " focused prompt.\n", + "- The billing agent can choose to answer the user's question, or can authorize a\n", + " refund (currently just returns directly to the user with an acknowledgement).\n", "\n", "![Diagram](./diagram.png)\n", "\n", - "This is intended as a sample, proof of concept architecture - you could extend this example by giving individual nodes the ability to perform retrieval, other tools, adding human-in-the-loop/prompting the user for responses, delegating to more powerful models at deeper stages etc.\n", + "This is intended as a sample, proof of concept architecture - you could extend\n", + "this example by giving individual nodes the ability to perform retrieval, other\n", + "tools, adding human-in-the-loop/prompting the user for responses, delegating to\n", + "more powerful models at deeper stages etc.\n", "\n", "Let's dive in!" ] @@ -28,13 +39,15 @@ "source": [ "## Setup\n", "\n", - "First we need to install the required packages. 
We'll use Cloudflare's Workers\n", + "AI to run the required inference.\n", "\n", "```bash\n", "yarn add @langchain/langgraph @langchain/cloudflare\n", "```\n", "\n", - "You'll also need to set the following environment variable. You can get them from your Cloudflare dashboard:\n", + "You'll also need to set the following environment variables. You can get them\n", + "from your Cloudflare dashboard:\n", "\n", "```ini\n", "CLOUDFLARE_ACCOUNT_ID=\n", @@ -43,7 +56,8 @@ "\n", "## Initializing the model\n", "\n", - "First, we define the LLM we'll use for all calls and the LangGraph state. We'll use a chat fine-tuned version of Mistral 7B called `neural-chat-7b-v3-1-awq`:" + "First, we define the LLM we'll use for all calls and the LangGraph state. We'll\n", + "use a chat fine-tuned version of Mistral 7B called `neural-chat-7b-v3-1-awq`:" ] }, { @@ -67,7 +81,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "As an exercise, let's see what happens with a naive attempt to get the model to answer questions:" + "As an exercise, let's see what happens with a naive attempt to get the model to\n", + "answer questions:" ] }, { @@ -92,12 +107,14 @@ "import { StringOutputParser } from \"@langchain/core/output_parsers\";\n", "\n", "const naivePrompt = ChatPromptTemplate.fromTemplate(\n", - " `You are an expert support specialist, able to answer any question about LangCorp, a company that sells computers.`\n", + " `You are an expert support specialist, able to answer any question about LangCorp, a company that sells computers.`,\n", ");\n", "\n", "const chain = naivePrompt.pipe(model).pipe(new StringOutputParser());\n", "\n", - "const res = await chain.invoke(\"I've changed my mind and I want a refund for order #182818!\");\n", + "const res = await chain.invoke(\n", + " \"I've changed my mind and I want a refund for order #182818!\",\n", + ");\n", "\n", "console.log(res);" ] }, { @@ -115,9 +132,13 @@ "source": [ "## Laying out the graph\n", "\n", - "Now let's start defining our nodes. Each node's return value will be added to the graph state, which for `MessageGraph` is a list of messages. This state will be passed to the next executed node, or returned if execution has finished.\n", + "Now let's start defining our nodes. Each node's return value will be added to\n", + "the graph state, which for `MessageGraph` is a list of messages. This state will\n", + "be passed to the next executed node, or returned if execution has finished.\n", "\n", - "Let's define our entrypoint node. 
This will be modeled after a secretary who can\n", + "handle incoming questions and respond conversationally or route to a more\n", + "specialized team:" ] }, { @@ -130,7 +151,8 @@ "import type { BaseMessage } from \"@langchain/core/messages\";\n", "\n", "graph.addNode(\"initial_support\", async (state: BaseMessage[]) => {\n", - " const SYSTEM_TEMPLATE = `You are frontline support staff for LangCorp, a company that sells computers.\n", + " const SYSTEM_TEMPLATE =\n", + " `You are frontline support staff for LangCorp, a company that sells computers.\n", "Be concise in your responses.\n", "You can chat with customers and help them with basic questions, but if the customer is having a billing or technical problem,\n", "do not try to answer the question directly or gather information.\n", @@ -152,7 +174,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Next, our nodes representing billing and technical support. We give special instructions in the billing prompt that it can choose to authorize refunds by routing to another agent:" + "Next, our nodes representing billing and technical support. We give special\n", + "instructions in the billing prompt that it can choose to authorize refunds by\n", + "routing to another agent:" ] }, { @@ -162,7 +186,8 @@ "outputs": [], "source": [ "graph.addNode(\"billing_support\", async (state: BaseMessage[]) => {\n", - " const SYSTEM_TEMPLATE = `You are an expert billing support specialist for LangCorp, a company that sells computers.\n", + " const SYSTEM_TEMPLATE =\n", + " `You are an expert billing support specialist for LangCorp, a company that sells computers.\n", "Help the user to the best of your ability, but be concise in your responses.\n", "You have the ability to authorize refunds, which you can do by transferring the user to another agent who will collect the required information.\n", "If you do, assume the other agent has all necessary information about the customer and their order.\n", @@ -183,7 +208,8 @@ "});\n", "\n", "graph.addNode(\"technical_support\", async (state: BaseMessage[]) => {\n", - " const SYSTEM_TEMPLATE = `You are an expert at diagnosing technical computer issues. You work for a company called LangCorp that sells computers.\n", + " const SYSTEM_TEMPLATE =\n", + " `You are an expert at diagnosing technical computer issues. You work for a company called LangCorp that sells computers.\n", "Help the user to the best of your ability, but be concise in your responses.`;\n", "\n", " let messages = state;\n", @@ -205,7 +231,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Finally, a node that can handle refunds. The logic is stubbed out here since it's not a real system:" + "Finally, a node that can handle refunds. The logic is stubbed out here since\n", + "it's not a real system:" ] }, { @@ -227,9 +254,13 @@ "source": [ "## Connecting the nodes\n", "\n", - "Great! Now let's move onto the edges. These edges will evaluate the current state of the graph created by the return values of the individual nodes and route execution accordingly.\n", + "Great! Now let's move onto the edges. These edges will evaluate the current\n", + "state of the graph created by the return values of the individual nodes and\n", + "route execution accordingly.\n", "\n", - "First, we want our `initial_support` node to either delegate to the billing node, technical node, or just respond directly to the user. 
Here's one example of how we might do that:" + "First, we want our `initial_support` node to either delegate to the billing\n", + "node, technical node, or just respond directly to the user. Here's one example\n", + "of how we might do that:" ] }, { @@ -240,12 +271,12 @@ "source": [ "import { END } from \"@langchain/langgraph\";\n", "\n", - "\n", "graph.addConditionalEdges(\"initial_support\", async (state) => {\n", " const mostRecentMessage = state[state.length - 1];\n", " const SYSTEM_TEMPLATE = `You are an expert customer support routing system.\n", "Your job is to detect whether a customer support representative is routing a user to a billing team or a technical team, or if they are just responding conversationally.`;\n", - " const HUMAN_TEMPLATE = `The previous conversation is an interaction between a customer support representative and a user.\n", + " const HUMAN_TEMPLATE =\n", + " `The previous conversation is an interaction between a customer support representative and a user.\n", "Extract whether the representative is routing the user to a billing or technical team, or whether they are just responding conversationally.\n", "\n", "If they want to route the user to the billing team, respond only with the word \"BILLING\".\n", @@ -272,7 +303,7 @@ "}, {\n", " billing: \"billing_support\",\n", " technical: \"technical_support\",\n", - " conversational: END\n", + " conversational: END,\n", "});" ] }, @@ -280,9 +311,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Note:** We do not use function/tool calling here for extraction because our model does not support it, but this would be a reasonable time to use that if your model does.\n", + "**Note:** We do not use function/tool calling here for extraction because our\n", + "model does not support it, but this would be a reasonable time to use that if\n", + "your model does.\n", "\n", - "Let's continue. We add an edge making the technical support node always end, since it has no tools to call. The billing support node uses a conditional edge since it can either call the refund tool or end." + "Let's continue. We add an edge making the technical support node always end,\n", + "since it has no tools to call. The billing support node uses a conditional edge\n", + "since it can either call the refund tool or end." 
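+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For reference, if your model does support tool calling, a hedged sketch of that\n", + "extraction step might look like the following (this is not part of the original\n", + "notebook; `routingPrompt` stands in for a hypothetical prompt carrying the same\n", + "routing instructions as above, and an OpenAI-style model is assumed):\n", + "\n", + "```typescript\n", + "import { ChatOpenAI } from \"@langchain/openai\";\n", + "import { JsonOutputToolsParser } from \"langchain/output_parsers\";\n", + "\n", + "// Force the model to call a `route` function so the decision is always one of\n", + "// the known destinations rather than free-form text.\n", + "const routeToolDef = {\n", + "  type: \"function\",\n", + "  function: {\n", + "    name: \"route\",\n", + "    description: \"Select who should handle the conversation next.\",\n", + "    parameters: {\n", + "      type: \"object\",\n", + "      properties: {\n", + "        destination: { type: \"string\", enum: [\"BILLING\", \"TECHNICAL\", \"RESPOND\"] },\n", + "      },\n", + "      required: [\"destination\"],\n", + "    },\n", + "  },\n", + "};\n", + "\n", + "const routingChain = routingPrompt\n", + "  .pipe(new ChatOpenAI({ temperature: 0 }).bind({\n", + "    tools: [routeToolDef],\n", + "    tool_choice: { type: \"function\", function: { name: \"route\" } },\n", + "  }))\n", + "  .pipe(new JsonOutputToolsParser())\n", + "  // The parser returns one entry per tool call; take the first one's args.\n", + "  .pipe((x) => x[0].args.destination);\n", + "```"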
] }, { @@ -295,8 +330,10 @@ "\n", "graph.addConditionalEdges(\"billing_support\", async (state) => {\n", " const mostRecentMessage = state[state.length - 1];\n", - " const SYSTEM_TEMPLATE = `Your job is to detect whether a billing support representative wants to refund the user.`;\n", - " const HUMAN_TEMPLATE = `The following text is a response from a customer support representative.\n", + " const SYSTEM_TEMPLATE =\n", + " `Your job is to detect whether a billing support representative wants to refund the user.`;\n", + " const HUMAN_TEMPLATE =\n", + " `The following text is a response from a customer support representative.\n", "Extract whether they want to refund the user or not.\n", "If they want to refund the user, respond only with the word \"REFUND\".\n", "Otherwise, respond only with the word \"RESPOND\".\n", @@ -323,7 +360,7 @@ " }\n", "}, {\n", " refund: \"refund_tool\",\n", - " end: END\n", + " end: END,\n", "});\n", "\n", "graph.addEdge(\"refund_tool\", END);" @@ -351,9 +388,14 @@ "source": [ "And now let's test it!\n", "\n", - "We can get the returned value from the executed nodes as they are generated using the `.stream()` runnable method (we also could go even more granular and get output as it is generated using `.streamEvents()`, but this requires a bit more parsing).\n", + "We can get the returned value from the executed nodes as they are generated\n", + "using the `.stream()` runnable method (we also could go even more granular and\n", + "get output as it is generated using `.streamEvents()`, but this requires a bit\n", + "more parsing).\n", "\n", - "Here's an example with a billing related refund query. Because we are using `MessageGraph`, the input must be a message (or a list of messages) representing the user's question:" + "Here's an example with a billing related refund query. Because we are using\n", + "`MessageGraph`, the input must be a message (or a list of messages) representing\n", + "the user's question:" ] }, { @@ -381,16 +423,18 @@ "import { HumanMessage } from \"@langchain/core/messages\";\n", "\n", "const stream = await runnable.stream(\n", - " new HumanMessage(\"I've changed my mind and I want a refund for order #182818!\")\n", + " new HumanMessage(\n", + " \"I've changed my mind and I want a refund for order #182818!\",\n", + " ),\n", ");\n", "\n", "for await (const value of stream) {\n", " // Each node returns only one message\n", " const [nodeName, output] = Object.entries(value)[0];\n", " if (nodeName !== END) {\n", - " console.log(\"---STEP---\")\n", + " console.log(\"---STEP---\");\n", " console.log(nodeName, output.content);\n", - " console.log(\"---END STEP---\")\n", + " console.log(\"---END STEP---\");\n", " }\n", "}" ] @@ -426,16 +470,18 @@ ], "source": [ "const stream = await runnable.stream(\n", - " new HumanMessage(\"My LangCorp computer isn't turning on because I dropped it in water.\")\n", + " new HumanMessage(\n", + " \"My LangCorp computer isn't turning on because I dropped it in water.\",\n", + " ),\n", ");\n", "\n", "for await (const value of stream) {\n", " // Each node returns only one message\n", " const [nodeName, output] = Object.entries(value)[0];\n", " if (nodeName !== END) {\n", - " console.log(\"---STEP---\")\n", + " console.log(\"---STEP---\");\n", " console.log(nodeName, output.content);\n", - " console.log(\"---END STEP---\")\n", + " console.log(\"---END STEP---\");\n", " }\n", "}" ] @@ -468,16 +514,16 @@ ], "source": [ "const stream = await runnable.stream(\n", - " new HumanMessage(\"How are you? 
I'm Cobb.\")\n", + " new HumanMessage(\"How are you? I'm Cobb.\"),\n", ");\n", "\n", "for await (const value of stream) {\n", " // Each node returns only one message\n", " const [nodeName, output] = Object.entries(value)[0];\n", " if (nodeName !== END) {\n", - " console.log(\"---STEP---\")\n", + " console.log(\"---STEP---\");\n", " console.log(nodeName, output.content);\n", - " console.log(\"---END STEP---\")\n", + " console.log(\"---END STEP---\");\n", " }\n", "}" ] diff --git a/examples/how-tos/force-calling-a-tool-first.ipynb b/examples/how-tos/force-calling-a-tool-first.ipynb new file mode 100644 index 00000000..7a3b04fc --- /dev/null +++ b/examples/how-tos/force-calling-a-tool-first.ipynb @@ -0,0 +1,636 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "7d4b6a8d", + "metadata": {}, + "source": [ + "# Force Calling a Tool First\n", + "\n", + "In this example we will build a ReAct agent that **always** calls a certain tool\n", + "first, before making any plans. In this example, we will create an agent with a\n", + "search tool. However, at the start we will force the agent to call the search\n", + "tool (and then let it do whatever it wants after). This is useful when you know\n", + "you want to execute specific actions in your application but also want the\n", + "flexibility of letting the LLM follow up on the user's query after going through\n", + "that fixed sequence." + ] + }, + { + "cell_type": "markdown", + "id": "ee2d626b", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "First we need to install the packages required\n", + "\n", + "```bash\n", + "yarn add @langchain/langgraph @langchain/openai\n", + "```\n", + "\n", + "Next, we need to set API keys for OpenAI (the LLM we will use). Optionally, we\n", + "can set API key for [LangSmith tracing](https://smith.langchain.com/), which\n", + "will give us best-in-class observability." + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "f7d70783", + "metadata": {}, + "outputs": [], + "source": [ + "// Deno.env.set(\"OPENAI*API_KEY\", \"sk*...\");\n", + "\n", + "// Optional, add tracing in LangSmith\n", + "// Deno.env.set(\"LANGCHAIN_API_KEY\", \"ls\\_\\_...\");\n", + "// Deno.env.set(\"LANGCHAIN_CALLBACKS_BACKGROUND\", \"true\");\n", + "Deno.env.set(\"LANGCHAIN_TRACING_V2\", \"true\");\n", + "Deno.env.set(\"LANGCHAIN_PROJECT\", \"Force Calling a Tool First: LangGraphJS\");\n" + ] + }, + { + "cell_type": "markdown", + "id": "7321b035", + "metadata": {}, + "source": [ + "## Set up the tools\n", + "\n", + "We will first define the tools we want to use. For this simple example, we will\n", + "use a built-in search tool via Tavily. However, it is really easy to create your\n", + "own tools - see documentation\n", + "[here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to do\n", + "that." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "c012c726", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "\u001b[32m\"Cold, with a low of 13 ℃\"\u001b[39m" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import { DynamicStructuredTool } from \"@langchain/core/tools\";\n", + "import { z } from \"zod\";\n", + "\n", + "const searchTool = new DynamicStructuredTool({\n", + " name: \"search\",\n", + " description:\n", + " \"Use to surf the web, fetch current information, check the weather, and retrieve other information.\",\n", + " schema: z.object({\n", + " query: z.string().describe(\"The query to use in your search.\"),\n", + " }),\n", + " func: async ({ query }: { query: string }) => {\n", + " // This is a placeholder for the actual implementation\n", + " return \"Cold, with a low of 13 ℃\";\n", + " },\n", + "});\n", + "\n", + "await searchTool.invoke({ query: \"What's the weather like?\" });\n", + "\n", + "const tools = [searchTool];\n" + ] + }, + { + "cell_type": "markdown", + "id": "3e6df03f", + "metadata": {}, + "source": [ + "We can now wrap these tools in a simple ToolNode. This is a prebuilt class that\n", + "takes in the list of messages, runs the tools requested in the most recent AI\n", + "message's tool calls, and returns the output as new tool messages." + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "4b29aeb2", + "metadata": {}, + "outputs": [], + "source": [ + "import { ToolNode } from \"@langchain/langgraph/prebuilt\";\n", + "\n", + "const toolNode = new ToolNode(tools);\n" + ] + }, + { + "cell_type": "markdown", + "id": "29f62fe3", + "metadata": {}, + "source": [ + "## Set up the model\n", + "\n", + "Now we need to load the chat model we want to use. Importantly, this should\n", + "satisfy two criteria:\n", + "\n", + "1. It should work with messages. We will represent all agent state in the form\n", + " of messages, so it needs to be able to work well with them.\n", + "2. It should work with OpenAI function calling. This means it should either be\n", + " an OpenAI model or a model that exposes a similar interface.\n", + "\n", + "Note: these model requirements are not requirements for using LangGraph - they\n", + "are just requirements for this one example." + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "c600af4a", + "metadata": {}, + "outputs": [], + "source": [ + "import { ChatOpenAI } from \"@langchain/openai\";\n", + "\n", + "const model = new ChatOpenAI({\n", + " temperature: 0,\n", + "});\n" + ] + }, + { + "cell_type": "markdown", + "id": "de429bd2", + "metadata": {}, + "source": [ + "After we've done this, we should make sure the model knows that it has these\n", + "tools available to call. We can do this by binding the tools to the model\n", + "class, which converts them into the format for OpenAI tool calling." + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "38310048", + "metadata": {}, + "outputs": [], + "source": [ + "const boundModel = model.bindTools(tools);\n" + ] + }, + { + "cell_type": "markdown", + "id": "a8178642", + "metadata": {}, + "source": [ + "## Define the agent state\n", + "\n", + "The main type of graph in `langgraph` is the `StateGraph`. This graph is\n", + "parameterized by a state object that it passes around to each node. 
Each node\n", + "then returns operations to update that state.\n", + "\n", + "For this example, the state we will track will just be a list of messages. We\n", + "want each node to just add messages to that list. Therefore, we will define the\n", + "agent state as an object with one key (`messages`) with the value specifying how\n", + "to update the state." + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "6e6f8a6e", + "metadata": {}, + "outputs": [], + "source": [ + "import { BaseMessage } from \"@langchain/core/messages\";\n", + "\n", + "const agentState = {\n", + " messages: {\n", + " value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),\n", + " default: () => [],\n", + " },\n", + "};\n" + ] + }, + { + "cell_type": "markdown", + "id": "766f32b5", + "metadata": {}, + "source": [ + "## Define the nodes\n", + "\n", + "We now need to define a few different nodes in our graph. In `langgraph`, a node\n", + "can be either a function or a\n", + "[runnable](https://js.langchain.com/docs/expression_language/). There are two\n", + "main nodes we need for this:\n", + "\n", + "1. The agent: responsible for deciding what (if any) actions to take.\n", + "2. A function to invoke tools: if the agent decides to take an action, this node\n", + " will then execute that action.\n", + "\n", + "We will also need to define some edges. Some of these edges may be conditional.\n", + "The reason they are conditional is that based on the output of a node, one of\n", + "several paths may be taken. The path that is taken is not known until that node\n", + "is run (the LLM decides).\n", + "\n", + "1. Conditional Edge: after the agent is called, we should either:\n", + "   a. If the agent said to take an action, then the function to invoke tools\n", + "   should be called\n", + "   b. If the agent said that it was finished, then it should finish\n", + "2. Normal Edge: after the tools are invoked, it should always go back to the\n", + " agent to decide what to do next\n", + "\n", + "Let's define the nodes, as well as a function to decide what conditional\n", + "edge to take."
+ ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "9a32665b", + "metadata": {}, + "outputs": [], + "source": [ + "import { AIMessage } from \"@langchain/core/messages\";\n", + "import { AgentAction } from \"@langchain/core/agents\";\n", + "import type { RunnableConfig } from \"@langchain/core/runnables\";\n", + "\n", + "// Define logic that will be used to determine which conditional edge to go down\n", + "const shouldContinue = (state: { messages: Array }) => {\n", + " const { messages } = state;\n", + " const lastMessage = messages[messages.length - 1] as AIMessage;\n", + " // If there is no function call, then we finish\n", + " if (!lastMessage.tool_calls || lastMessage.tool_calls.length === 0) {\n", + " return \"end\";\n", + " }\n", + " // Otherwise if there is, we continue\n", + " return \"continue\";\n", + "};\n", + "\n", + "// Define the function that calls the model\n", + "const callModel = async (\n", + " state: { messages: Array },\n", + " config: RunnableConfig,\n", + ") => {\n", + " const { messages } = state;\n", + " let response = undefined;\n", + " for await (const message of await boundModel.stream(messages, config)) {\n", + " if (!response) {\n", + " response = message;\n", + " } else {\n", + " response = response.concat(message);\n", + " }\n", + " }\n", + " // We return an object, because this will get added to the existing list\n", + " return {\n", + " messages: [response],\n", + " };\n", + "};" + ] + }, + { + "cell_type": "markdown", + "id": "19c627b3", + "metadata": {}, + "source": [ + "**MODIFICATION**\n", + "\n", + "Here we create a node that returns an AIMessage with a tool call - we will use\n", + "this at the start to force it to call a tool" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "d5b72426", + "metadata": {}, + "outputs": [], + "source": [ + "// This is the new first node - for the first call of the model we explicitly hard-code a call to the search tool\n", + "const firstModel = async (state: { messages: Array }) => {\n", + " const humanInput = state.messages[state.messages.length - 1].content || \"\";\n", + " return {\n", + " messages: [\n", + " new AIMessage({\n", + " content: \"\",\n", + " tool_calls: [\n", + " {\n", + " name: \"search\",\n", + " args: {\n", + " query: humanInput,\n", + " },\n", + " id: \"tool_abcd123\",\n", + " },\n", + " ],\n", + " }),\n", + " ],\n", + " };\n", + "};\n" + ] + }, + { + "cell_type": "markdown", + "id": "35419e01", + "metadata": {}, + "source": [ + "## Define the graph\n", + "\n", + "We can now put it all together and define the graph!\n", + "\n", + "**MODIFICATION**\n", + "\n", + "We will define a `firstModel` node which we will set as the entrypoint.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "id": "3d27c330", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "StateGraph {\n", + " nodes: {\n", + " first_agent: RunnableLambda {\n", + " lc_serializable: \u001b[33mfalse\u001b[39m,\n", + " lc_kwargs: { func: \u001b[36m[AsyncFunction: firstModel]\u001b[39m },\n", + " lc_runnable: \u001b[33mtrue\u001b[39m,\n", + " name: \u001b[90mundefined\u001b[39m,\n", + " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"runnables\"\u001b[39m ],\n", + " func: \u001b[36m[AsyncFunction: firstModel]\u001b[39m\n", + " },\n", + " agent: RunnableLambda {\n", + " lc_serializable: \u001b[33mfalse\u001b[39m,\n", + " lc_kwargs: { func: \u001b[36m[AsyncFunction: callModel]\u001b[39m },\n", + " lc_runnable: \u001b[33mtrue\u001b[39m,\n", + " name: 
\u001b[90mundefined\u001b[39m,\n", + " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"runnables\"\u001b[39m ],\n", + " func: \u001b[36m[AsyncFunction: callModel]\u001b[39m\n", + " },\n", + " action: ToolNode {\n", + " lc_serializable: \u001b[33mfalse\u001b[39m,\n", + " lc_kwargs: {},\n", + " lc_runnable: \u001b[33mtrue\u001b[39m,\n", + " name: \u001b[32m\"tools\"\u001b[39m,\n", + " lc_namespace: [ \u001b[32m\"langgraph\"\u001b[39m ],\n", + " func: \u001b[36m[Function: func]\u001b[39m,\n", + " tags: \u001b[90mundefined\u001b[39m,\n", + " config: { tags: [] },\n", + " trace: \u001b[33mtrue\u001b[39m,\n", + " recurse: \u001b[33mtrue\u001b[39m,\n", + " tools: [\n", + " DynamicStructuredTool {\n", + " lc_serializable: \u001b[33mfalse\u001b[39m,\n", + " lc_kwargs: \u001b[36m[Object]\u001b[39m,\n", + " lc_runnable: \u001b[33mtrue\u001b[39m,\n", + " name: \u001b[32m\"search\"\u001b[39m,\n", + " verbose: \u001b[33mfalse\u001b[39m,\n", + " callbacks: \u001b[90mundefined\u001b[39m,\n", + " tags: [],\n", + " metadata: {},\n", + " returnDirect: \u001b[33mfalse\u001b[39m,\n", + " description: \u001b[32m\"Use to surf the web, fetch current information, check the weather, and retrieve other information.\"\u001b[39m,\n", + " func: \u001b[36m[AsyncFunction: func]\u001b[39m,\n", + " schema: \u001b[36m[ZodObject]\u001b[39m\n", + " }\n", + " ]\n", + " }\n", + " },\n", + " edges: Set(3) {\n", + " [ \u001b[32m\"__start__\"\u001b[39m, \u001b[32m\"first_agent\"\u001b[39m ],\n", + " [ \u001b[32m\"action\"\u001b[39m, \u001b[32m\"agent\"\u001b[39m ],\n", + " [ \u001b[32m\"first_agent\"\u001b[39m, \u001b[32m\"action\"\u001b[39m ]\n", + " },\n", + " branches: {\n", + " agent: {\n", + " shouldContinue: Branch {\n", + " condition: \u001b[36m[Function: shouldContinue]\u001b[39m,\n", + " ends: { continue: \u001b[32m\"action\"\u001b[39m, end: \u001b[32m\"__end__\"\u001b[39m },\n", + " then: \u001b[90mundefined\u001b[39m\n", + " }\n", + " }\n", + " },\n", + " entryPoint: \u001b[90mundefined\u001b[39m,\n", + " compiled: \u001b[33mtrue\u001b[39m,\n", + " supportMultipleEdges: \u001b[33mtrue\u001b[39m,\n", + " channels: {\n", + " messages: BinaryOperatorAggregate {\n", + " lc_graph_name: \u001b[32m\"BinaryOperatorAggregate\"\u001b[39m,\n", + " value: [],\n", + " operator: \u001b[36m[Function: value]\u001b[39m,\n", + " initialValueFactory: \u001b[36m[Function: default]\u001b[39m\n", + " }\n", + " },\n", + " waitingEdges: Set(0) {}\n", + "}" + ] + }, + "execution_count": 24, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import { END, START, StateGraph } from \"@langchain/langgraph\";\n", + "\n", + "// Define a new graph\n", + "const workflow = new StateGraph({\n", + " channels: agentState,\n", + "});\n", + "\n", + "// Define the new entrypoint\n", + "workflow.addNode(\"first_agent\", firstModel);\n", + "\n", + "// Define the two nodes we will cycle between\n", + "workflow.addNode(\"agent\", callModel);\n", + "workflow.addNode(\"action\", toolNode);\n", + "\n", + "// Set the entrypoint as `first_agent`\n", + "// by creating an edge from the virtual __start__ node to `first_agent`\n", + "workflow.addEdge(START, \"first_agent\");\n", + "\n", + "// We now add a conditional edge\n", + "workflow.addConditionalEdges(\n", + " // First, we define the start node. 
We use `agent`.\n", + " // This means these are the edges taken after the `agent` node is called.\n", + " \"agent\",\n", + " // Next, we pass in the function that will determine which node is called next.\n", + " shouldContinue,\n", + " // Finally we pass in a mapping.\n", + " // The keys are strings, and the values are other nodes.\n", + " // END is a special node marking that the graph should finish.\n", + " // What will happen is we will call `should_continue`, and then the output of that\n", + " // will be matched against the keys in this mapping.\n", + " // Based on which one it matches, that node will then be called.\n", + " {\n", + " // If `tools`, then we call the tool node.\n", + " continue: \"action\",\n", + " // Otherwise we finish.\n", + " end: END,\n", + " },\n", + ");\n", + "\n", + "// We now add a normal edge from `tools` to `agent`.\n", + "// This means that after `tools` is called, `agent` node is called next.\n", + "workflow.addEdge(\"action\", \"agent\");\n", + "\n", + "// After we call the first agent, we know we want to go to action\n", + "workflow.addEdge(\"first_agent\", \"action\");\n", + "\n", + "// Finally, we compile it!\n", + "// This compiles it into a LangChain Runnable,\n", + "// meaning you can use it as you would any other runnable\n", + "const app = workflow.compile();" + ] + }, + { + "cell_type": "markdown", + "id": "a9eea2d0", + "metadata": {}, + "source": [ + "## Use it!\n", + "\n", + "We can now use it! This now exposes the\n", + "[same interface](https://js.langchain.com/docs/expression_language/) as all\n", + "other LangChain runnables." + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "47d10628", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " first_agent: {\n", + " messages: [\n", + " AIMessage {\n", + " lc_serializable: true,\n", + " lc_kwargs: {\n", + " content: \"\",\n", + " tool_calls: [Array],\n", + " invalid_tool_calls: [],\n", + " additional_kwargs: {},\n", + " response_metadata: {}\n", + " },\n", + " lc_namespace: [ \"langchain_core\", \"messages\" ],\n", + " content: \"\",\n", + " name: undefined,\n", + " additional_kwargs: {},\n", + " response_metadata: {},\n", + " tool_calls: [ [Object] ],\n", + " invalid_tool_calls: []\n", + " }\n", + " ]\n", + " }\n", + "}\n", + "-----\n", + "\n", + "{\n", + " action: {\n", + " messages: [\n", + " ToolMessage {\n", + " lc_serializable: true,\n", + " lc_kwargs: {\n", + " name: \"search\",\n", + " content: \"Cold, with a low of 13 ℃\",\n", + " tool_call_id: \"tool_abcd123\",\n", + " additional_kwargs: {},\n", + " response_metadata: {}\n", + " },\n", + " lc_namespace: [ \"langchain_core\", \"messages\" ],\n", + " content: \"Cold, with a low of 13 ℃\",\n", + " name: \"search\",\n", + " additional_kwargs: {},\n", + " response_metadata: {},\n", + " tool_call_id: \"tool_abcd123\"\n", + " }\n", + " ]\n", + " }\n", + "}\n", + "-----\n", + "\n", + "{\n", + " agent: {\n", + " messages: [\n", + " AIMessageChunk {\n", + " lc_serializable: true,\n", + " lc_kwargs: {\n", + " content: \"The weather in San Francisco is currently cold, with a low of 13°C.\",\n", + " additional_kwargs: {},\n", + " response_metadata: [Object],\n", + " tool_call_chunks: [],\n", + " tool_calls: [],\n", + " invalid_tool_calls: []\n", + " },\n", + " lc_namespace: [ \"langchain_core\", \"messages\" ],\n", + " content: \"The weather in San Francisco is currently cold, with a low of 13°C.\",\n", + " name: undefined,\n", + " additional_kwargs: {},\n", + " 
response_metadata: { prompt: 0, completion: 0, finish_reason: \"stop\" },\n", + " tool_calls: [],\n", + " invalid_tool_calls: [],\n", + " tool_call_chunks: []\n", + " }\n", + " ]\n", + " }\n", + "}\n", + "-----\n", + "\n" + ] + } + ], + "source": [ + "import { HumanMessage } from \"@langchain/core/messages\";\n", + "\n", + "const inputs = {\n", + " messages: [new HumanMessage(\"what is the weather in sf\")],\n", + "};\n", + "\n", + "for await (const output of await app.stream(inputs)) {\n", + " console.log(output);\n", + " console.log(\"-----\\n\");\n", + "}\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d7e74d9d", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "text_representation": { + "extension": ".py", + "format_name": "percent", + "format_version": "1.3", + "jupytext_version": "1.16.1" + } + }, + "kernelspec": { + "display_name": "Deno", + "language": "typescript", + "name": "deno" + }, + "language_info": { + "file_extension": ".ts", + "mimetype": "text/x.typescript", + "name": "typescript", + "nb_converter": "script", + "pygments_lexer": "typescript", + "version": "5.4.5" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/examples/multi_agent/agent_supervisor.ipynb b/examples/multi_agent/agent_supervisor.ipynb index f1551eb4..1eec508b 100644 --- a/examples/multi_agent/agent_supervisor.ipynb +++ b/examples/multi_agent/agent_supervisor.ipynb @@ -7,15 +7,21 @@ "source": [ "# Agent Supervisor\n", "\n", - "The [previous example](multi-agent-collaboration.ipynb) routed messages automatically based on the output of the initial researcher agent.\n", + "The [previous example](multi-agent-collaboration.ipynb) routed messages\n", + "automatically based on the output of the initial researcher agent.\n", "\n", "We can also choose to use an LLM to orchestrate the different agents.\n", "\n", - "Below, we will create an agent group, with an agent supervisor to help delegate tasks.\n", + "Below, we will create an agent group, with an agent supervisor to help delegate\n", + "tasks.\n", "\n", "![diagram](./img/supervisor-diagram.png)\n", "\n", - "To simplify the code in each agent node, we will use the AgentExecutor class from LangChain. This and other \"advanced agent\" notebooks are designed to show how you can implement certain design patterns in LangGraph. If the pattern suits your needs, we recommend combining it with some of the other fundamental patterns described elsewhere in the docs for best performance.\n", + "To simplify the code in each agent node, we will use the AgentExecutor class\n", + "from LangChain. This and other \"advanced agent\" notebooks are designed to show\n", + "how you can implement certain design patterns in LangGraph. If the pattern suits\n", + "your needs, we recommend combining it with some of the other fundamental\n", + "patterns described elsewhere in the docs for best performance.\n", "\n", "Before we build, let's configure our environment:" ] @@ -64,7 +70,8 @@ "source": [ "## Create tools\n", "\n", - "For this example, you will make an agent to do web research with a search engine, and one agent to create plots. Define the tools they'll use below:" + "For this example, you will make an agent to do web research with a search\n", + "engine, and one agent to create plots. 
Define the tools they'll use below:" ] }, { @@ -74,7 +81,7 @@ "metadata": {}, "outputs": [], "source": [ - "import { TavilySearchResults } from \"@langchain/community/tools/tavily_search\"\n", + "import { TavilySearchResults } from \"@langchain/community/tools/tavily_search\";\n", "import { DynamicStructuredTool } from \"@langchain/core/tools\";\n", "import * as d3 from \"d3\";\n", "import { z } from \"zod\";\n", @@ -130,7 +137,7 @@ " ctx.fillStyle = colorPalette[idx % colorPalette.length];\n", " ctx.fillRect(\n", " x(d.label),\n", - " y(d.value),\n", + " y(d.value),\n", " x.bandwidth(),\n", " height - margin.bottom - y(d.value),\n", " );\n", @@ -180,7 +187,8 @@ "source": [ "## Helper Utilities\n", "\n", - "Define a helper function below, which make it easier to add new agent worker nodes." + "Define a helper function below, which makes it easier to add new agent worker\n", + "nodes." ] }, { @@ -191,18 +199,21 @@ "outputs": [], "source": [ "import { AgentExecutor, createOpenAIToolsAgent } from \"langchain/agents\";\n", - "import { ChatPromptTemplate, MessagesPlaceholder } from \"@langchain/core/prompts\";\n", + "import {\n", + " ChatPromptTemplate,\n", + " MessagesPlaceholder,\n", + "} from \"@langchain/core/prompts\";\n", "import { ChatOpenAI } from \"@langchain/openai\";\n", "import { Runnable } from \"@langchain/core/runnables\";\n", "\n", "async function createAgent(\n", - " llm: ChatOpenAI, \n", - " tools: any[], \n", - " systemPrompt: string\n", + " llm: ChatOpenAI,\n", + " tools: any[],\n", + " systemPrompt: string,\n", "): Promise {\n", " // Each worker node will be given a name and some tools.\n", " const prompt = await ChatPromptTemplate.fromMessages([\n", - " [\"system\", systemPrompt],\n", + " [\"system\", systemPrompt],\n", " new MessagesPlaceholder(\"messages\"),\n", " new MessagesPlaceholder(\"agent_scratchpad\"),\n", " ]);\n", @@ -228,19 +239,21 @@ "metadata": {}, "outputs": [], "source": [ - "import { ChatPromptTemplate, MessagesPlaceholder } from \"@langchain/core/prompts\";\n", + "import {\n", + " ChatPromptTemplate,\n", + " MessagesPlaceholder,\n", + "} from \"@langchain/core/prompts\";\n", "import { ChatOpenAI } from \"@langchain/openai\";\n", "import { JsonOutputToolsParser } from \"langchain/output_parsers\";\n", "\n", "const members = [\"researcher\", \"chart_generator\"];\n", "\n", - "const systemPrompt = (\n", + "const systemPrompt =\n", " \"You are a supervisor tasked with managing a conversation between the\" +\n", " \" following workers: {members}. Given the following user request,\" +\n", " \" respond with the worker to act next. Each worker will perform a\" +\n", " \" task and respond with their results and status. When finished,\" +\n", - " \" respond with FINISH.\"\n", - ");\n", + " \" respond with FINISH.\";\n", "const options = [\"FINISH\", ...members];\n", "\n", "// Define the routing function\n", @@ -272,8 +285,8 @@ " new MessagesPlaceholder(\"messages\"),\n", " [\n", " \"system\",\n", - " \"Given the conversation above, who should act next?\"\n", - " + \" Or should we FINISH? Select one of: {options}\",\n", + " \"Given the conversation above, who should act next?\" +\n", + " \" Or should we FINISH? 
Select one of: {options}\",\n", " ],\n", "]);\n", "\n", @@ -290,10 +303,10 @@ "const supervisorChain = formattedPrompt\n", " .pipe(llm.bind({\n", " tools: [toolDef],\n", - " tool_choice: {\"type\": \"function\", \"function\": {\"name\": \"route\"}}\n", + " tool_choice: { \"type\": \"function\", \"function\": { \"name\": \"route\" } },\n", " }))\n", " .pipe(new JsonOutputToolsParser())\n", - " // select the first one\n", + " // select the first one\n", " .pipe((x) => (x[0].args));" ] }, @@ -320,9 +333,9 @@ "await supervisorChain.invoke({\n", " messages: [\n", " new HumanMessage({\n", - " content:\"write a report on birds.\"\n", - " })\n", - " ]\n", + " content: \"write a report on birds.\",\n", + " }),\n", + " ],\n", "});" ] }, @@ -333,7 +346,8 @@ "source": [ "## Construct Graph\n", "\n", - "We're ready to start building the graph. First, we'll define the state the graph will track." + "We're ready to start building the graph. First, we'll define the state the graph\n", + "will track." ] }, { @@ -359,7 +373,7 @@ " value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),\n", " default: () => [],\n", " },\n", - " next: 'initialValueForNext', // Replace 'initialValueForNext' with your initial value if needed\n", + " next: \"initialValueForNext\", // Replace 'initialValueForNext' with your initial value if needed\n", "};" ] }, @@ -384,33 +398,33 @@ "});\n", "\n", "const researcherAgent = await createAgent(\n", - " llm, \n", - " [tavilyTool], \n", - " \"You are a web researcher. You may use the Tavily search engine to search the web for\"\n", - " + \" important information, so the Chart Generator in your team can make useful plots.\"\n", + " llm,\n", + " [tavilyTool],\n", + " \"You are a web researcher. You may use the Tavily search engine to search the web for\" +\n", + " \" important information, so the Chart Generator in your team can make useful plots.\",\n", ");\n", "\n", "const researcherNode = async (state, config) => {\n", " const result = await researcherAgent.invoke(state, config);\n", " return {\n", " messages: [\n", - " new HumanMessage({ content: result.output, name: \"Researcher\" })\n", - " ]\n", + " new HumanMessage({ content: result.output, name: \"Researcher\" }),\n", + " ],\n", " };\n", "};\n", "\n", "const chartGenAgent = await createAgent(\n", - " llm, \n", + " llm,\n", " [chartTool],\n", - " \"You excel at generating bar charts. Use the researcher's information to generate the charts.\"\n", + " \"You excel at generating bar charts. Use the researcher's information to generate the charts.\",\n", ");\n", "\n", "const chartGenNode = async (state, config) => {\n", " const result = await chartGenAgent.invoke(state, config);\n", " return {\n", " messages: [\n", - " new HumanMessage({ content: result.output, name: \"ChartGenerator\" })\n", - " ]\n", + " new HumanMessage({ content: result.output, name: \"ChartGenerator\" }),\n", + " ],\n", " };\n", "};" ] @@ -420,7 +434,8 @@ "id": "6ac913ae-0dfc-44bf-a26c-8f23f7e3e4a2", "metadata": {}, "source": [ - "Now we can create the graph itself! Add the nodes, and add edges to define how how work will be performed in the graph." + "Now we can create the graph itself! Add the nodes, and add edges to define how\n", + "work will be performed in the graph." ] }, { @@ -430,7 +445,7 @@ "metadata": {}, "outputs": [], "source": [ - "import { StateGraph, END } from \"@langchain/langgraph\";\n", + "import { END, StateGraph } from \"@langchain/langgraph\";\n", "\n", "// 1. 
Create the graph\n", "const workflow = new StateGraph({\n", @@ -443,21 +458,24 @@ "workflow.addNode(\"supervisor\", supervisorChain);\n", "// 3. Define the edges. We will define both regular and conditional ones\n", "// After a worker completes, report to supervisor\n", - "members.forEach(member => {\n", + "members.forEach((member) => {\n", " workflow.addEdge(member, \"supervisor\");\n", "});\n", "\n", "// When the supervisor returns, route to the agent identified in the supervisor's output\n", - "const conditionalMap: { [key: string]: string } = members.reduce((acc, member) => {\n", - " acc[member] = member;\n", - " return acc;\n", - "}, {});\n", + "const conditionalMap: { [key: string]: string } = members.reduce(\n", + " (acc, member) => {\n", + " acc[member] = member;\n", + " return acc;\n", + " },\n", + " {},\n", + ");\n", "\n", "// Or end work if done\n", "conditionalMap[\"FINISH\"] = END;\n", "\n", "workflow.addConditionalEdges(\n", - " \"supervisor\", \n", + " \"supervisor\",\n", " (x: AgentStateChannels) => x.next,\n", " conditionalMap,\n", ");\n", @@ -559,16 +577,18 @@ "const streamResults = graph.stream(\n", " {\n", " messages: [\n", - " new HumanMessage({ content: \"What were the 3 most popular tv shows in 2023?\" })\n", - " ]\n", + " new HumanMessage({\n", + " content: \"What were the 3 most popular tv shows in 2023?\",\n", + " }),\n", + " ],\n", " },\n", - " {recursionLimit: 100},\n", + " { recursionLimit: 100 },\n", ");\n", "\n", "for await (const output of await streamResults) {\n", - " if (!output?.__end__){\n", + " if (!output?.__end__) {\n", " console.log(output);\n", - " console.log('----');\n", + " console.log(\"----\");\n", " }\n", "}" ] @@ -647,16 +667,18 @@ "const streamResults = await graph.stream(\n", " {\n", " messages: [\n", - " new HumanMessage({ content: \"Generate a bar chart of the US GDP growth from 2021-2023.\" })\n", - " ]\n", + " new HumanMessage({\n", + " content: \"Generate a bar chart of the US GDP growth from 2021-2023.\",\n", + " }),\n", + " ],\n", " },\n", - " {recursionLimit: 150},\n", + " { recursionLimit: 150 },\n", ");\n", "\n", "for await (const output of await streamResults) {\n", " if (!output?.__end__) {\n", " console.log(output);\n", - " console.log('----');\n", + " console.log(\"----\");\n", " }\n", "}" ] @@ -666,7 +688,9 @@ "id": "4fb18b41", "metadata": {}, "source": [ - "You can [click here](https://smith.langchain.com/public/5eaaaaa9-c490-487d-b7f1-984aeea87c0f/r) to see a LangSmith trace of the above query." + "You can\n", + "[click here](https://smith.langchain.com/public/5eaaaaa9-c490-487d-b7f1-984aeea87c0f/r)\n", + "to see a LangSmith trace of the above query." ] } ], diff --git a/examples/multi_agent/hierarchical_agent_teams.ipynb b/examples/multi_agent/hierarchical_agent_teams.ipynb index 02f37895..9f01529a 100644 --- a/examples/multi_agent/hierarchical_agent_teams.ipynb +++ b/examples/multi_agent/hierarchical_agent_teams.ipynb @@ -7,19 +7,27 @@ "source": [ "# Hierarchical Agent Teams\n", "\n", - "In our previous example ([Agent Supervisor](./agent_supervisor.ipynb)), we introduced the concept of a single supervisor node to route work between different worker nodes.\n", + "In our previous example ([Agent Supervisor](./agent_supervisor.ipynb)), we\n", + "introduced the concept of a single supervisor node to route work between\n", + "different worker nodes.\n", "\n", - "But what if the job for a single worker becomes too complex? 
What if the number of workers becomes too large?\n", + "But what if the job for a single worker becomes too complex? What if the number\n", + "of workers becomes too large?\n", "\n", - "For some applications, the system may be more effective if work is distributed _hierarchically_.\n", + "For some applications, the system may be more effective if work is distributed\n", + "_hierarchically_.\n", "\n", - "You can do this by composing different subgraphs and creating a top-level supervisor, along with mid-level supervisors.\n", + "You can do this by composing different subgraphs and creating a top-level\n", + "supervisor, along with mid-level supervisors.\n", "\n", - "To do this, let's build a simple research assistant! The graph will look something like the following:\n", + "To do this, let's build a simple research assistant! The graph will look\n", + "something like the following:\n", "\n", "![diagram](./img/hierarchical-diagram.png)\n", "\n", - "This notebook is inspired by the paper [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://arxiv.org/abs/2308.08155), by Wu, et. al. In the rest of this notebook, you will:\n", + "This notebook is inspired by the paper\n", + "[AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://arxiv.org/abs/2308.08155),\n", + "by Wu, et al. In the rest of this notebook, you will:\n", "\n", "1. Define the agents' tools to access the web and write files\n", "2. Define some utilities to help create the graph and agents\n", @@ -73,13 +81,16 @@ "source": [ "## Create Tools\n", "\n", - "Each team will be composed of one or more agents each with one or more tools. Below, define all the tools to be used by your different teams.\n", + "Each team will be composed of one or more agents, each with one or more tools.\n", + "Below, define all the tools to be used by your different teams.\n", "\n", "We'll start with the research team.\n", "\n", "**Research team tools**\n", "\n", - "The research team can use a search engine and url scraper to find information on the web. Feel free to add additional functionality below to boost the team performance!" + "The research team can use a search engine and URL scraper to find information on\n", + "the web. Feel free to add additional functionality below to boost the team\n", + "performance!" ] }, { @@ -105,14 +116,14 @@ " }),\n", " func: async ({ url }) => {\n", " const loader = new CheerioWebBaseLoader(url);\n", - " const docs = await loader.load();\n", + " const docs = await loader.load();\n", " const formattedDocs = docs.map(\n", " (doc) =>\n", - " `\\n${doc.pageContent}\\n`\n", + " `\\n${doc.pageContent}\\n`,\n", " );\n", " return formattedDocs.join(\"\\n\\n\");\n", " },\n", - "});\n" + "});" ] }, { @@ -122,10 +133,11 @@ "source": [ "**Document writing team tools**\n", "\n", - "Next up, we will give some tools for the doc writing team to use.\n", - "We define some bare-bones file-access tools below.\n", + "Next up, we will give some tools for the doc writing team to use. We define some\n", + "bare-bones file-access tools below.\n", "\n", - "Note that this gives the agents access to your file-system, which can be unsafe. We also haven't optimized the tool descriptions for performance." + "Note that this gives the agents access to your file-system, which can be unsafe.\n", + "We also haven't optimized the tool descriptions for performance.\n",
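+ "\n",
+ "As a rough sketch (hypothetical, and not part of the tool definitions below),\n",
+ "you could scope every file operation to a fixed workspace directory before\n",
+ "exposing the tools to the agents:\n",
+ "\n",
+ "```typescript\n",
+ "import * as path from \"node:path\";\n",
+ "\n",
+ "const WORKING_DIRECTORY = \"./workspace\";\n",
+ "\n",
+ "// Resolve a file name inside the workspace and reject path traversal.\n",
+ "function safePath(fileName: string): string {\n",
+ "  const root = path.resolve(WORKING_DIRECTORY);\n",
+ "  const resolved = path.resolve(root, fileName);\n",
+ "  if (!resolved.startsWith(root + path.sep)) {\n",
+ "    throw new Error(`Refusing to access a path outside the workspace: ${fileName}`);\n",
+ "  }\n",
+ "  return resolved;\n",
+ "}\n",
+ "```"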
] }, { @@ -210,7 +222,7 @@ " let lines = data.split(\"\\n\");\n", "\n", " const sortedInserts = Object.entries(inserts).sort(\n", - " ([a], [b]) => parseInt(a) - parseInt(b)\n", + " ([a], [b]) => parseInt(a) - parseInt(b),\n", " );\n", "\n", " for (const [line_number_str, text] of sortedInserts) {\n", @@ -239,7 +251,7 @@ " })\n", " .array(),\n", " }),\n", - " func: async ({data}) => {\n", + " func: async ({ data }) => {\n", " // const data = input.data;\n", " const width = 500;\n", " const height = 500;\n", @@ -277,7 +289,7 @@ " ctx.fillStyle = colorPalette[idx % colorPalette.length];\n", " ctx.fillRect(\n", " x(d.label),\n", - " y(d.value),\n", + " y(d.value),\n", " x.bandwidth(),\n", " height - margin.bottom - y(d.value),\n", " );\n", @@ -337,7 +349,10 @@ ], "source": [ "// Example invocation\n", - "await writeDocumentTool.invoke({content: \"Hello from LangGraph!\", file_name: \"hello.txt\"});" + "await writeDocumentTool.invoke({\n", + " content: \"Hello from LangGraph!\",\n", + " file_name: \"hello.txt\",\n", + "});" ] }, { @@ -358,7 +373,7 @@ } ], "source": [ - "await readDocumentTool.invoke({file_name: \"hello.txt\"});" + "await readDocumentTool.invoke({ file_name: \"hello.txt\" });" ] }, { @@ -386,7 +401,12 @@ } ], "source": [ - "await chartTool.invoke({data: [{label: \"People who like graphs\", value: 5000}, {label: \"People who like LangGraph\", value: 10000}]});" + "await chartTool.invoke({\n", + " data: [{ label: \"People who like graphs\", value: 5000 }, {\n", + " label: \"People who like LangGraph\",\n", + " value: 10000,\n", + " }],\n", + "});" ] }, { @@ -396,12 +416,14 @@ "source": [ "## Helper Utilities\n", "\n", - "We are going to create a few utility functions to make it more concise when we want to:\n", + "We are going to create a few utility functions to make it more concise when we\n", + "want to:\n", "\n", "1. Create a worker agent.\n", "2. Create a supervisor for the sub-graph.\n", "\n", - "These will simplify the graph compositional code at the end for us so it's easier to see what's going on." + "These will simplify the graph compositional code at the end for us so it's\n", + "easier to see what's going on." ] }, { @@ -413,36 +435,37 @@ "source": [ "import { AgentExecutor, createOpenAIToolsAgent } from \"langchain/agents\";\n", "import { HumanMessage } from \"@langchain/core/messages\";\n", - "import { ChatPromptTemplate, MessagesPlaceholder } from \"@langchain/core/prompts\";\n", + "import {\n", + " ChatPromptTemplate,\n", + " MessagesPlaceholder,\n", + "} from \"@langchain/core/prompts\";\n", "import { JsonOutputToolsParser } from \"langchain/output_parsers\";\n", "import { ChatOpenAI } from \"@langchain/openai\";\n", "import { Runnable } from \"@langchain/core/runnables\";\n", "import { Tool } from \"@langchain/core/tools\";\n", "\n", "async function createAgent(\n", - " llm: ChatOpenAI, \n", - " tools: Tool[], \n", - " systemPrompt: string\n", + " llm: ChatOpenAI,\n", + " tools: Tool[],\n", + " systemPrompt: string,\n", "): Promise {\n", - " const combinedPrompt = (\n", - " systemPrompt + \n", - " \"\\nWork autonomously according to your specialty, using the tools available to you.\" + \n", - " \" Do not ask for clarification.\" +\n", - " \" Your other team members (and other teams) will collaborate with you with their own specialties.\" +\n", - " \" You are chosen for a reason! 
You are one of the following team members: {team_members}.\"\n", - " );\n", + " const combinedPrompt = systemPrompt +\n", + " \"\\nWork autonomously according to your specialty, using the tools available to you.\" +\n", + " \" Do not ask for clarification.\" +\n", + " \" Your other team members (and other teams) will collaborate with you with their own specialties.\" +\n", + " \" You are chosen for a reason! You are one of the following team members: {team_members}.\";\n", " const toolNames = tools.map((t) => t.name).join(\", \");\n", " const prompt = await ChatPromptTemplate.fromMessages([\n", " [\"system\", combinedPrompt],\n", " new MessagesPlaceholder(\"messages\"),\n", " new MessagesPlaceholder(\"agent_scratchpad\"),\n", " [\n", - " \"system\", \n", + " \"system\",\n", " [\n", " \"Supervisor instructions: {instructions}\\n\" +\n", - " `Remember, you individually can only use these tools: ${toolNames}`+\n", - " \"\\n\\nEnd if you have already completed the requested task. Communicate the work completed.\" \n", - " ].join(\"\\n\")\n", + " `Remember, you individually can only use these tools: ${toolNames}` +\n", + " \"\\n\\nEnd if you have already completed the requested task. Communicate the work completed.\",\n", + " ].join(\"\\n\"),\n", " ],\n", " ]);\n", " const agent = await createOpenAIToolsAgent({ llm, tools, prompt });\n", @@ -450,20 +473,20 @@ "}\n", "\n", "async function runAgentNode(\n", - " { state, agent, name }: { state: any, agent: Runnable, name: string }\n", + " { state, agent, name }: { state: any; agent: Runnable; name: string },\n", ") {\n", " const result = await agent.invoke(state);\n", " return {\n", " messages: [\n", " new HumanMessage({ content: result.output, name }),\n", - " ]\n", + " ],\n", " };\n", "}\n", "\n", "async function createTeamSupervisor(\n", " llm: ChatOpenAI,\n", - " systemPrompt: string, \n", - " members: string[]\n", + " systemPrompt: string,\n", + " members: string[],\n", "): Promise {\n", " const options = [\"FINISH\", ...members];\n", " const functionDef = {\n", @@ -486,7 +509,8 @@ " instructions: {\n", " title: \"Instructions\",\n", " type: \"string\",\n", - " description: \"The specific instructions of the sub-task the next role should accomplish.\",\n", + " description:\n", + " \"The specific instructions of the sub-task the next role should accomplish.\",\n", " },\n", " },\n", " required: [\"reasoning\", \"next\", \"instructions\"],\n", @@ -503,13 +527,16 @@ " \"system\",\n", " \"Given the conversation above, who should act next? Or should we FINISH? Select one of: {options}\",\n", " ],\n", - " ])\n", - " prompt = await prompt.partial({ options: options.join(\", \"), team_members: members.join(\", \") });\n", + " ]);\n", + " prompt = await prompt.partial({\n", + " options: options.join(\", \"),\n", + " team_members: members.join(\", \"),\n", + " });\n", "\n", " const supervisor = prompt\n", " .pipe(llm.bind({\n", " tools: [toolDef],\n", - " tool_choice: {\"type\": \"function\", \"function\": {\"name\": \"route\"}}\n", + " tool_choice: { \"type\": \"function\", \"function\": { \"name\": \"route\" } },\n", " }))\n", " .pipe(new JsonOutputToolsParser())\n", " // select the first one\n", @@ -517,8 +544,8 @@ " next: x[0].args.next,\n", " instructions: x[0].args.instructions,\n", " }));\n", - " \n", - " return supervisor; \n", + "\n", + " return supervisor;\n", "}" ] }, @@ -533,7 +560,10 @@ "\n", "### Research Team\n", "\n", - "The research team will have a search agent and a web scraping \"research_agent\" as the two worker nodes. 
Let's create those, as well as the team supervisor. (Note: If you are running deno in a jupyter notebook, the web scraper won't work out of the box. We have commented out this code to accomodate this challenge)" + "The research team will have a search agent and a web scraping \"research_agent\"\n", + "as the two worker nodes. Let's create those, as well as the team supervisor.\n", + "(Note: If you are running Deno in a Jupyter notebook, the web scraper won't work\n", + "out of the box. We have commented out this code to accommodate this challenge)" ] }, { @@ -551,7 +581,7 @@ " messages: BaseMessage[];\n", " team_members: string[];\n", " next: string;\n", - " instructions: string\n", + " instructions: string;\n", "}\n", "\n", "// This defines the agent state for the research team\n", @@ -574,13 +604,13 @@ " },\n", "};\n", "\n", - "const llm = new ChatOpenAI({modelName: \"gpt-4-1106-preview\"});\n", + "const llm = new ChatOpenAI({ modelName: \"gpt-4-1106-preview\" });\n", "\n", "// Assuming createAgent and createTeamSupervisor are the same functions you defined earlier\n", "const searchAgent = await createAgent(\n", " llm,\n", " [tavilyTool],\n", - " \"You are a research assistant who can search for up-to-date info using the tavily search engine.\"\n", + " \"You are a research assistant who can search for up-to-date info using the Tavily search engine.\",\n", ");\n", "const searchNode = (state: ResearchTeamState) => {\n", " return runAgentNode({ state, agent: searchAgent, name: \"Search\" });\n", @@ -601,10 +631,10 @@ " \" following workers: {team_members}. Given the following user request,\" +\n", " \" respond with the worker to act next. Each worker will perform a\" +\n", " \" task and respond with their results and status. When finished,\" +\n", - " \" respond with FINISH.\\n\\n\" + \n", + " \" respond with FINISH.\\n\\n\" +\n", " \" Select strategically to minimize the number of steps taken.\",\n", - " [\"Search\"] // , \"Web Scraper\"]\n", - ");\n" + " [\"Search\"], // , \"Web Scraper\"]\n", + ");" ] }, { @@ -612,7 +642,9 @@ "id": "b01c6ee8-a461-4081-8a97-a3a06ec0f994", "metadata": {}, "source": [ - "Now that we've created the necessary components, defining their interactions is easy. Add the nodes to the team graph, and define the edges, which determine the transition criteria." + "Now that we've created the necessary components, defining their interactions is\n", + "easy. Add the nodes to the team graph, and define the edges, which determine the\n", + "transition criteria."
] }, { @@ -622,7 +654,7 @@ "metadata": {}, "outputs": [], "source": [ - "import { StateGraph, END } from \"@langchain/langgraph\";\n", + "import { END, StateGraph } from \"@langchain/langgraph\";\n", "\n", "const researchGraph = new StateGraph({\n", " channels: researchTeamState,\n", @@ -635,11 +667,11 @@ "researchGraph.addEdge(\"Search\", \"supervisor\");\n", "researchGraph.addConditionalEdges(\n", " \"supervisor\",\n", - " ((x) => x.next),\n", + " (x) => x.next,\n", " {\n", " Search: \"Search\",\n", - " FINISH: END\n", - " }\n", + " FINISH: END,\n", + " },\n", ");\n", "\n", "researchGraph.setEntryPoint(\"supervisor\");\n", @@ -651,7 +683,8 @@ "id": "a75b4355-f130-40b3-b6dd-5f36296c6e81", "metadata": {}, "source": [ - "Since each team is itself a complete computational graph, you can directly query it like so:" + "Since each team is itself a complete computational graph, you can directly query\n", + "it like so:" ] }, { @@ -760,13 +793,15 @@ ], "source": [ "const streamResults = researchChain.stream(\n", - " { messages: [new HumanMessage(\"What's the price of a big mac in Argentina?\")] },\n", + " {\n", + " messages: [new HumanMessage(\"What's the price of a big mac in Argentina?\")],\n", + " },\n", " { recursionLimit: 100 },\n", - ")\n", + ");\n", "for await (const output of await streamResults) {\n", - " if (!output?.__end__){\n", + " if (!output?.__end__) {\n", " console.log(output);\n", - " console.log('----');\n", + " console.log(\"----\");\n", " }\n", "}" ] @@ -776,7 +811,9 @@ "id": "1e41222e", "metadata": {}, "source": [ - "You can [click here](https://smith.langchain.com/public/6d9bf294-960d-407f-81fe-2dc1838add4f/r) to see a LangSmith trace of the above run." + "You can\n", + "[click here](https://smith.langchain.com/public/6d9bf294-960d-407f-81fe-2dc1838add4f/r)\n", + "to see a LangSmith trace of the above run." ] }, { @@ -786,11 +823,15 @@ "source": [ "### Document Writing Team\n", "\n", - "Create the document writing team below using a similar approach. This time, we will give each agent access to different file-writing tools.\n", + "Create the document writing team below using a similar approach. This time, we\n", + "will give each agent access to different file-writing tools.\n", "\n", - "Note that we are giving file-system access to our agent here, which is not safe in all cases. \n", + "Note that we are giving file-system access to our agent here, which is not safe\n", + "in all cases.\n", "\n", - "For the doc writing team, each agent will be writing to the same workspace. We don't want them to waste time checking which files are available, so we will force a call to a \"prelude\" function before an agent is invoked to populate the\n", + "For the doc writing team, each agent will be writing to the same workspace. We\n", + "don't want them to waste time checking which files are available, so we will\n", + "force a call to a \"prelude\" function before an agent is invoked to populate the\n", "prompt template with the current directory's contents." ] }, @@ -820,14 +861,13 @@ " } catch (error) {\n", " console.error(error);\n", " }\n", - " const filesList =\n", - " writtenFiles.length > 0\n", - " ? \"\\nBelow are files your team has written to the directory:\\n\" +\n", - " writtenFiles.map((f) => ` - ${f}`).join(\"\\n\")\n", - " : \"No files written.\";\n", + " const filesList = writtenFiles.length > 0\n", + " ? 
\"\\nBelow are files your team has written to the directory:\\n\" +\n", + " writtenFiles.map((f) => ` - ${f}`).join(\"\\n\")\n", + " : \"No files written.\";\n", " return { ...state, current_files: filesList };\n", " },\n", - "});\n" + "});" ] }, { @@ -835,8 +875,8 @@ "id": "c82a6ed1-c334-48b5-8122-aa7f1e6f53ad", "metadata": {}, "source": [ - "The doc writing state then is similar to that of the research team. We will add the additional `current_files` state variable\n", - "to reflect the shared workspace." + "The doc writing state then is similar to that of the research team. We will add\n", + "the additional `current_files` state variable to reflect the shared workspace." ] }, { @@ -878,9 +918,8 @@ " instructions: {\n", " value: null,\n", " default: () => \"Resolve the user's request.\",\n", - " }\n", - "};\n", - "\n" + " },\n", + "};" ] }, { @@ -889,6 +928,7 @@ "metadata": {}, "source": [ "The team will be comprised of three agents:\n", + "\n", "- A doc writing agent\n", "- A note taking agent\n", "- A chart generating agent\n", @@ -903,55 +943,58 @@ "metadata": {}, "outputs": [], "source": [ - "import { RunnableConfig } from \"@langchain/core/runnables\"\n", + "import { RunnableConfig } from \"@langchain/core/runnables\";\n", "\n", - "const llm = new ChatOpenAI({modelName: \"gpt-4-1106-preview\"});\n", + "const llm = new ChatOpenAI({ modelName: \"gpt-4-1106-preview\" });\n", "\n", "const docWriterAgent = await createAgent(\n", " llm,\n", " [writeDocumentTool, editDocumentTool, readDocumentTool],\n", - " \"You are an expert writing a research document.\\nBelow are files currently in your directory:\\n{current_files}\"\n", + " \"You are an expert writing a research document.\\nBelow are files currently in your directory:\\n{current_files}\",\n", ");\n", "const contextAwareDocWriterAgent = prelude.pipe(docWriterAgent);\n", "const docWritingNode = (\n", " state: DocWritingState,\n", - " config: RunnableConfig\n", - ") => runAgentNode({\n", - " state,\n", - " agent: contextAwareDocWriterAgent,\n", - " name: \"DocWriter\",\n", - "});\n", + " config: RunnableConfig,\n", + ") =>\n", + " runAgentNode({\n", + " state,\n", + " agent: contextAwareDocWriterAgent,\n", + " name: \"DocWriter\",\n", + " });\n", "\n", "const noteTakingAgent = await createAgent(\n", " llm,\n", " [createOutlineTool, readDocumentTool],\n", " \"You are an expert senior researcher tasked with writing a paper outline and\" +\n", - " \" taking notes to craft a perfect paper.{current_files}\"\n", + " \" taking notes to craft a perfect paper.{current_files}\",\n", ");\n", "const contextAwareNoteTakingAgent = prelude.pipe(noteTakingAgent);\n", "const noteTakingNode = (\n", - " state: DocWritingState\n", - ") => runAgentNode({\n", - " state,\n", - " agent: contextAwareNoteTakingAgent,\n", - " name: \"NoteTaker\",\n", - "});\n", + " state: DocWritingState,\n", + ") =>\n", + " runAgentNode({\n", + " state,\n", + " agent: contextAwareNoteTakingAgent,\n", + " name: \"NoteTaker\",\n", + " });\n", "\n", "const chartGeneratingAgent = await createAgent(\n", " llm,\n", " [readDocumentTool, chartTool],\n", " \"You are a data viz expert tasked with generating charts for a research project.\" +\n", - " \"{current_files}\"\n", + " \"{current_files}\",\n", ");\n", "const contextAwareChartGeneratingAgent = prelude.pipe(chartGeneratingAgent);\n", "const chartGeneratingNode = async (\n", " state: DocWritingState,\n", - " config: RunnableConfig\n", - ") => runAgentNode({\n", - " state,\n", - " agent: contextAwareChartGeneratingAgent,\n", - " name: 
\"ChartGenerator\",\n", - "});\n", + " config: RunnableConfig,\n", + ") =>\n", + " runAgentNode({\n", + " state,\n", + " agent: contextAwareChartGeneratingAgent,\n", + " name: \"ChartGenerator\",\n", + " });\n", "\n", "const docTeamMembers = [\"DocWriter\", \"NoteTaker\", \"ChartGenerator\"];\n", "const docWritingSupervisor = await createTeamSupervisor(\n", @@ -960,10 +1003,10 @@ " \" following workers: {team_members}. Given the following user request,\" +\n", " \" respond with the worker to act next. Each worker will perform a\" +\n", " \" task and respond with their results and status. When finished,\" +\n", - " \" respond with FINISH.\\n\\n\" + \n", + " \" respond with FINISH.\\n\\n\" +\n", " \" Select strategically to minimize the number of steps taken.\",\n", - " docTeamMembers\n", - ");\n" + " docTeamMembers,\n", + ");" ] }, { @@ -971,8 +1014,9 @@ "id": "aee2cd9b-29aa-458e-903d-4e49179e5d59", "metadata": {}, "source": [ - "With the objects themselves created, we can form the graph. Start by creating the \"nodes\", which will do the actual work,\n", - "then define the edges to control how the program will progress." + "With the objects themselves created, we can form the graph. Start by creating\n", + "the \"nodes\", which will do the actual work, then define the edges to control how\n", + "the program will progress." ] }, { @@ -985,7 +1029,7 @@ "// Create the graph here:\n", "// Note that we have unrolled the loop for the sake of this doc\n", "const authoringGraph = new StateGraph({\n", - " channels: docWritingState\n", + " channels: docWritingState,\n", "});\n", "\n", "authoringGraph.addNode(\"Doc Writer\", docWritingNode);\n", @@ -1006,8 +1050,8 @@ " \"DocWriter\": \"Doc Writer\",\n", " \"NoteTaker\": \"Note Taker\",\n", " \"ChartGenerator\": \"Chart Generator\",\n", - " \"FINISH\": END\n", - " }\n", + " \"FINISH\": END,\n", + " },\n", ");\n", "\n", "authoringGraph.setEntryPoint(\"supervisor\");\n", @@ -1016,9 +1060,9 @@ " ({ messages }) => {\n", " return {\n", " messages: messages,\n", - " team_members: [\"Doc Writer\", \"Note Taker\", \"Chart Generator\"]\n", + " team_members: [\"Doc Writer\", \"Note Taker\", \"Chart Generator\"],\n", " };\n", - " }\n", + " },\n", ");\n", "const authoringChain = enterAuthoringChain.pipe(authoringGraph.compile());" ] @@ -1290,17 +1334,22 @@ ], "source": [ "const resultStream = authoringChain.stream(\n", - " { messages: [new HumanMessage(\"Write a limerick and make a bar chart of the characters used.\")] },\n", - " { recursionLimit: 100 }\n", + " {\n", + " messages: [\n", + " new HumanMessage(\n", + " \"Write a limerick and make a bar chart of the characters used.\",\n", + " ),\n", + " ],\n", + " },\n", + " { recursionLimit: 100 },\n", ");\n", "\n", "for await (const step of await resultStream) {\n", - " if (!step?.__end__){\n", + " if (!step?.__end__) {\n", " console.log(step);\n", " console.log(\"---\");\n", " }\n", - "}\n", - " " + "}" ] }, { @@ -1308,7 +1357,9 @@ "id": "9dd93a49", "metadata": {}, "source": [ - "You can [click here](https://smith.langchain.com/public/ee1549c6-0095-4806-9259-ead0946503f5/r) to see a representative LangSmith trace of the above run." + "You can\n", + "[click here](https://smith.langchain.com/public/ee1549c6-0095-4806-9259-ead0946503f5/r)\n", + "to see a representative LangSmith trace of the above run." ] }, { @@ -1318,9 +1369,12 @@ "source": [ "## Add Layers\n", "\n", - "In this design, we are enforcing a top-down planning policy. 
We've created two graphs already, but we have to decide how to route work between the two.\n", + "In this design, we are enforcing a top-down planning policy. We've created two\n", + "graphs already, but we have to decide how to route work between the two.\n", "\n", - "We'll create a _third_ graph to orchestrate the previous two, and add some connectors to define how this top-level state is shared between the different graphs." + "We'll create a _third_ graph to orchestrate the previous two, and add some\n", + "connectors to define how this top-level state is shared between the different\n", + "graphs." ] }, { @@ -1349,20 +1403,20 @@ " \" following teams: {team_members}. Given the following user request,\" +\n", " \" respond with the worker to act next. Each worker will perform a\" +\n", " \" task and respond with their results and status. When finished,\" +\n", - " \" respond with FINISH.\\n\\n\" + \n", + " \" respond with FINISH.\\n\\n\" +\n", " \" Select strategically to minimize the number of steps taken.\",\n", " [\"Research team\", \"Paper writing team\"],\n", ");\n", "\n", "const getMessages = RunnableLambda.from((state: State) => {\n", - " return { messages: state.messages }\n", + " return { messages: state.messages };\n", "});\n", "\n", "const joinGraph = RunnableLambda.from((response: any) => {\n", " return {\n", - " messages: [response.messages[response.messages.length - 1]]\n", + " messages: [response.messages[response.messages.length - 1]],\n", " };\n", - "});\n" + "});" ] }, { @@ -1387,12 +1441,18 @@ " default: () => [],\n", " },\n", " next: \"Research team\",\n", - " instructions: \"Solve the user's request.\"\n", - " }\n", + " instructions: \"Solve the user's request.\",\n", + " },\n", "});\n", "\n", - "superGraph.addNode(\"Research team\", getMessages.pipe(researchChain).pipe(joinGraph));\n", - "superGraph.addNode(\"Paper writing team\", getMessages.pipe(authoringChain).pipe(joinGraph));\n", + "superGraph.addNode(\n", + " \"Research team\",\n", + " getMessages.pipe(researchChain).pipe(joinGraph),\n", + ");\n", + "superGraph.addNode(\n", + " \"Paper writing team\",\n", + " getMessages.pipe(authoringChain).pipe(joinGraph),\n", + ");\n", "superGraph.addNode(\"supervisor\", supervisorNode);\n", "\n", "superGraph.addEdge(\"Research team\", \"supervisor\");\n", @@ -1403,8 +1463,8 @@ " {\n", " \"Paper writing team\": \"Paper writing team\",\n", " \"Research team\": \"Research team\",\n", - " \"FINISH\": END\n", - " }\n", + " \"FINISH\": END,\n", + " },\n", ");\n", "\n", "superGraph.setEntryPoint(\"supervisor\");\n", @@ -1541,11 +1601,11 @@ " {\n", " messages: [\n", " new HumanMessage(\n", - " \"Look up a current event, write a poem about it, then plot a bar chart of the distribution of words therein.\"\n", - " )\n", + " \"Look up a current event, write a poem about it, then plot a bar chart of the distribution of words therein.\",\n", + " ),\n", " ],\n", " },\n", - " { recursionLimit: 150 }\n", + " { recursionLimit: 150 },\n", ");\n", "\n", "for await (const step of await resultStream) {\n", @@ -1561,7 +1621,9 @@ "id": "0129279f", "metadata": {}, "source": [ - "As before, you can [click here](https://smith.langchain.com/public/067e1ef5-f48a-4f44-8750-c5ef5da4fdb4/r) to see a LangSmith run of the above graph." + "As before, you can\n", + "[click here](https://smith.langchain.com/public/067e1ef5-f48a-4f44-8750-c5ef5da4fdb4/r)\n", + "to see a LangSmith run of the above graph." 
] } ], diff --git a/examples/multi_agent/multi_agent_collaboration.ipynb b/examples/multi_agent/multi_agent_collaboration.ipynb index 58438512..8ff86c80 100644 --- a/examples/multi_agent/multi_agent_collaboration.ipynb +++ b/examples/multi_agent/multi_agent_collaboration.ipynb @@ -7,17 +7,26 @@ "source": [ "# Basic Multi-agent Collaboration\n", "\n", - "A single agent can usually operate effectively using a handful of tools within a single domain, but even using powerful models like `gpt-4`, it can be less effective at using many tools. \n", + "A single agent can usually operate effectively using a handful of tools within a\n", + "single domain, but even using powerful models like `gpt-4`, it can be less\n", + "effective at using many tools.\n", "\n", - "One way to approach complicated tasks is through a \"divide-and-conquer\" approach: create an specialized agent for each task or domain and route tasks to the correct \"expert\".\n", + "One way to approach complicated tasks is through a \"divide-and-conquer\"\n", + "approach: create a specialized agent for each task or domain and route tasks to\n", + "the correct \"expert\".\n", "\n", - "This notebook (inspired by the paper [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://arxiv.org/abs/2308.08155), by Wu, et. al.) shows one way to do this using LangGraph.\n", + "This notebook (inspired by the paper\n", + "[AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://arxiv.org/abs/2308.08155),\n", + "by Wu, et al.) shows one way to do this using LangGraph.\n", "\n", "The resulting graph will look something like the following diagram:\n", "\n", "![](./img/simple_multi_agent_diagram.png)\n", "\n", - "Before we get started, a quick note: this and other multi-agent notebooks are designed to show _how_ you can implement certain design patterns in LangGraph. If the pattern suits your needs, we recommend combining it with some of the other fundamental patterns described elsewhere in the docs for best performance." + "Before we get started, a quick note: this and other multi-agent notebooks are\n", + "designed to show _how_ you can implement certain design patterns in LangGraph.\n", + "If the pattern suits your needs, we recommend combining it with some of the\n", + "other fundamental patterns described elsewhere in the docs for best performance." ] }, { @@ -64,7 +73,8 @@ "source": [ "## Helper Utilities\n", "\n", - "The following helper functions will help create agents. These agents will then be nodes in the graph. \n", + "The following helper functions will help create agents. These agents will then\n", + "be nodes in the graph.\n", "\n", "You can skip ahead if you just want to see what the graph looks like." ] }, { @@ -104,24 +114,24 @@ " [\n", " \"system\",\n", " \"You are a helpful AI assistant, collaborating with other assistants.\" +\n", - " \" Use the provided tools to progress towards answering the question.\" +\n", - " \" If you are unable to fully answer, that's OK, another assistant with different tools \" +\n", - " \" will help where you left off. 
Execute what you can to make progress.\" +\n", - " \" If you or any of the other assistants have the final answer or deliverable,\" +\n", - " \" prefix your response with FINAL ANSWER so the team knows to stop.\" +\n", - " \" You have access to the following tools: {tool_names}.\\n{system_message}\",\n", + " \" Use the provided tools to progress towards answering the question.\" +\n", + " \" If you are unable to fully answer, that's OK, another assistant with different tools \" +\n", + " \" will help where you left off. Execute what you can to make progress.\" +\n", + " \" If you or any of the other assistants have the final answer or deliverable,\" +\n", + " \" prefix your response with FINAL ANSWER so the team knows to stop.\" +\n", + " \" You have access to the following tools: {tool_names}.\\n{system_message}\",\n", " ],\n", " new MessagesPlaceholder(\"messages\"),\n", " ]);\n", " prompt = await prompt.partial({\n", " system_message: systemMessage,\n", " tool_names: toolNames,\n", - " })\n", + " });\n", "\n", " return prompt.pipe(llm.bind({ tools: formattedTools }));\n", "}\n", "\n", - "const isToolMessage = (message) => !!message?.additional_kwargs?.tool_calls;\n" + "const isToolMessage = (message) => !!message?.additional_kwargs?.tool_calls;" ] }, { @@ -133,7 +143,8 @@ "\n", "These tools will be used by our worker agents to answer our questions.\n", "\n", - "We will create a chart tool (using d3.js), and the LangChain TavilySearchResults tool for web search functionality." + "We will create a chart tool (using d3.js), and the LangChain TavilySearchResults\n", + "tool for web search functionality." ] }, { @@ -198,7 +209,7 @@ " ctx.fillStyle = colorPalette[idx % colorPalette.length];\n", " ctx.fillRect(\n", " x(d.label),\n", - " y(d.value),\n", + " y(d.value),\n", " x.bandwidth(),\n", " height - margin.bottom - y(d.value),\n", " );\n", @@ -238,7 +249,7 @@ " },\n", "});\n", "\n", - "const tavilyTool = new TavilySearchResults();\n" + "const tavilyTool = new TavilySearchResults();" ] }, { @@ -248,7 +259,8 @@ "source": [ "## Create graph\n", "\n", - "Now that we've defined our tools and made some helper functions, will create the individual agents below and tell them how to talk to each other using LangGraph." + "Now that we've defined our tools and made some helper functions, we will create\n", + "the individual agents below and tell them how to talk to each other using\n", + "LangGraph." ] }, { @@ -258,9 +270,11 @@ "source": [ "### Define Agent Nodes\n", "\n", - "In LangGraph, nodes represent functions that perform the work. In our example, we will have \"agent\" nodes and a \"callTool\" node.\n", + "In LangGraph, nodes represent functions that perform the work. In our example,\n", + "we will have \"agent\" nodes and a \"callTool\" node.\n", "\n", - "The input for every node is the graph's state. In our case, the state will have a list of messages as input, as well as the name of the previous node.\n", + "The input for every node is the graph's state. In our case, the state will have\n", + "a list of messages as input, as well as the name of the previous node.\n", "\n", "First, let's define the nodes for the agents.\n",
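+ "\n",
+ "As a rough, hypothetical sketch (the names below are illustrative, not the\n",
+ "notebook's actual definitions), an agent node is just an async function that\n",
+ "takes the current state and returns the keys it wants to update:\n",
+ "\n",
+ "```typescript\n",
+ "import { BaseMessage } from \"@langchain/core/messages\";\n",
+ "\n",
+ "interface ExampleState {\n",
+ "  messages: BaseMessage[];\n",
+ "  sender: string;\n",
+ "}\n",
+ "\n",
+ "// `agent` can be any runnable-like object that maps { messages } to a message.\n",
+ "async function exampleAgentNode(\n",
+ "  state: ExampleState,\n",
+ "  agent: { invoke: (input: { messages: BaseMessage[] }) => Promise<BaseMessage> },\n",
+ "  name: string,\n",
+ ") {\n",
+ "  const result = await agent.invoke({ messages: state.messages });\n",
+ "  // Return only the keys to update; LangGraph merges them into the shared state.\n",
+ "  return { messages: [result], sender: name };\n",
+ "}\n",
+ "```"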
] @@ -364,12 +378,12 @@ "const researchResults = await researchNode(\n", " {\n", " messages: [\n", - " new HumanMessage(\"Research the US primaries in 2024\")\n", - " ]\n", - " }\n", + " new HumanMessage(\"Research the US primaries in 2024\"),\n", + " ],\n", + " },\n", ");\n", "\n", - "researchResults" + "researchResults;" ] }, { @@ -478,7 +492,8 @@ "source": [ "### Define Edge Logic\n", "\n", - "We can define some of the edge logic that is needed to decide what to do based on results of the agents" + "We can define some of the edge logic that is needed to decide what to do based\n", + "on the results of the agents." ] }, { @@ -496,12 +511,15 @@ " // The previous agent is invoking a tool\n", " return \"call_tool\";\n", " }\n", - " if (typeof lastMessage.content === 'string' && lastMessage.content.includes(\"FINAL ANSWER\")) {\n", + " if (\n", + " typeof lastMessage.content === \"string\" &&\n", + " lastMessage.content.includes(\"FINAL ANSWER\")\n", + " ) {\n", " // Any agent decided the work is done\n", " return \"end\";\n", " }\n", " return \"continue\";\n", - "}\n" + "}" ] }, { @@ -511,7 +529,8 @@ "source": [ "### Define State\n", "\n", - "We first define the state of the graph. This will just a list of messages, along with a key to track the most recent sender" + "We first define the state of the graph. This will just be a list of messages,\n", + "along with a key to track the most recent sender." ] }, { @@ -558,7 +577,7 @@ "metadata": {}, "outputs": [], "source": [ - "import { StateGraph, END } from \"@langchain/langgraph\";\n", + "import { END, StateGraph } from \"@langchain/langgraph\";\n", "\n", "// 1. Create the graph\n", "const workflow = new StateGraph({\n", @@ -577,8 +596,8 @@ " router,\n", " {\n", " // We will transition to the other agent\n", - " continue: \"ChartGenerator\", \n", - " call_tool: \"call_tool\", \n", + " continue: \"ChartGenerator\",\n", + " call_tool: \"call_tool\",\n", " end: END,\n", " },\n", ");\n", @@ -618,7 +637,8 @@ "source": [ "## Invoke\n", "\n", - "With the graph created, you can invoke it! Let's have it chart some stats for us." + "With the graph created, you can invoke it! Let's have it chart some stats for\n", + "us." ] }, { @@ -839,19 +859,18 @@ " {\n", " messages: [\n", " new HumanMessage({\n", - " content:\n", - " \"Generate a bar chart of the US gdp over the past 3 years.\",\n", + " content: \"Generate a bar chart of the US gdp over the past 3 years.\",\n", " }),\n", " ],\n", " },\n", - " { recursionLimit: 150 }\n", + " { recursionLimit: 150 },\n", ");\n", "for await (const output of await streamResults) {\n", " if (!output?.__end__) {\n", " console.log(output);\n", " console.log(\"----\");\n", " }\n", - "}\n" + "}" ] }, { @@ -859,7 +878,8 @@ "id": "da52f960", "metadata": {}, "source": [ - "[Click here](https://smith.langchain.com/public/57e9b34a-0765-415d-949a-d5ea77c6cdc8/r) to see a LangSmith trace of the above run." + "[Click here](https://smith.langchain.com/public/57e9b34a-0765-415d-949a-d5ea77c6cdc8/r)\n", + "to see a LangSmith trace of the above run." ] } ], diff --git a/examples/plan-and-execute/plan-and-execute.ipynb b/examples/plan-and-execute/plan-and-execute.ipynb index 84592ad6..6042b1d6 100644 --- a/examples/plan-and-execute/plan-and-execute.ipynb +++ b/examples/plan-and-execute/plan-and-execute.ipynb @@ -6,16 +6,21 @@ "source": [ "# Plan-and-Execute\n", "\n", - "This notebook shows how to create a \"plan-and-execute\" style agent. 
This is heavily inspired by the [Plan-and-Solve](https://arxiv.org/abs/2305.04091) paper as well as the [Baby-AGI](https://github.com/yoheinakajima/babyagi) project.\n", + "This notebook shows how to create a \"plan-and-execute\" style agent. This is\n", + "heavily inspired by the [Plan-and-Solve](https://arxiv.org/abs/2305.04091) paper\n", + "as well as the [Baby-AGI](https://github.com/yoheinakajima/babyagi) project.\n", "\n", - "The core idea is to first come up with a multi-step plan, and then go through that plan one item at a time.\n", - "After accomplishing a particular task, you can then revisit the plan and modify as appropriate.\n", + "The core idea is to first come up with a multi-step plan, and then go through\n", + "that plan one item at a time. After accomplishing a particular task, you can\n", + "then revisit the plan and modify as appropriate.\n", "\n", - "This compares to a typical [ReAct](https://arxiv.org/abs/2210.03629) style agent where you think one step at a time.\n", - "The advantages of this \"plan-and-execute\" style agent are:\n", + "This compares to a typical [ReAct](https://arxiv.org/abs/2210.03629) style agent\n", + "where you think one step at a time. The advantages of this \"plan-and-execute\"\n", + "style agent are:\n", "\n", "1. Explicit long term planning (which even really strong LLMs can struggle with)\n", - "2. Ability to use smaller/weaker models for the execution step, only using larger/better models for the planning step" + "2. Ability to use smaller/weaker models for the execution step, only using\n", + " larger/better models for the planning step" ] }, { @@ -35,7 +40,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Next, we need to set API keys for OpenAI (the LLM we will use) and Tavily (the search tool we will use)" + "Next, we need to set API keys for OpenAI (the LLM we will use) and Tavily (the\n", + "search tool we will use)" ] }, { @@ -52,7 +58,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Optionally, we can set API key for LangSmith tracing, which will give us best-in-class observability." + "Optionally, we can set an API key for LangSmith tracing, which will give us\n", + "best-in-class observability." ] }, { @@ -92,7 +99,11 @@ "source": [ "## Define Tools\n", "\n", - "We will first define the tools we want to use. For this simple example, we will use a built-in search tool via Tavily. However, it is really easy to create your own tools - see documentation [here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to do that." + "We will first define the tools we want to use. For this simple example, we will\n", + "use a built-in search tool via Tavily. However, it is really easy to create your\n", + "own tools - see documentation\n", + "[here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to do\n", + "that." ] }, { @@ -112,8 +123,9 @@ "source": [ "## Define our Execution Agent\n", "\n", - "Now we will create the execution agent we want to use to execute tasks. \n", - "Note that for this example, we will be using the same execution agent for each task, but this doesn't HAVE to be the case." + "Now we will create the execution agent we want to use to execute tasks. Note\n", + "that for this example, we will be using the same execution agent for each task,\n", + "but this doesn't HAVE to be the case.\n",
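+ "\n",
+ "For example (a hypothetical sketch, not code used later in this notebook), you\n",
+ "could route each task to a different executor based on a simple heuristic:\n",
+ "\n",
+ "```typescript\n",
+ "// Stand-in executors; in practice these would be AgentExecutor instances\n",
+ "// built with different models or tools.\n",
+ "const cheapExecutor = {\n",
+ "  invoke: async ({ input }: { input: string }) => `cheap model handled: ${input}`,\n",
+ "};\n",
+ "const strongExecutor = {\n",
+ "  invoke: async ({ input }: { input: string }) => `strong model handled: ${input}`,\n",
+ "};\n",
+ "\n",
+ "// Naive routing rule: send longer, more open-ended tasks to the stronger model.\n",
+ "async function executeTask(task: string) {\n",
+ "  const executor = task.length > 80 ? strongExecutor : cheapExecutor;\n",
+ "  return executor.invoke({ input: task });\n",
+ "}\n",
+ "```"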
] }, { @@ -127,9 +139,11 @@ "import { ChatOpenAI } from \"@langchain/openai\";\n", "import { createOpenAIFunctionsAgent } from \"langchain/agents\";\n", "// Get the prompt to use - you can modify this!\n", - "const prompt = await pull(\"hwchase17/openai-functions-agent\");\n", + "const prompt = await pull(\n", + " \"hwchase17/openai-functions-agent\",\n", + ");\n", "// Choose the LLM that will drive the agent\n", - "const llm = new ChatOpenAI({ modelName: \"gpt-4-0125-preview\" })\n", + "const llm = new ChatOpenAI({ modelName: \"gpt-4-0125-preview\" });\n", "// Construct the OpenAI Functions agent\n", "const agentRunnable = await createOpenAIFunctionsAgent({\n", " llm,\n", @@ -199,11 +213,14 @@ "\n", "Let's now start by defining the state to track for this agent.\n", "\n", - "First, we will need to track the current plan. Let's represent that as a list of strings.\n", + "First, we will need to track the current plan. Let's represent that as a list of\n", + "strings.\n", "\n", - "Next, we should track previously executed steps. Let's represent that as a list of tuples (these tuples will contain the step and then the result)\n", + "Next, we should track previously executed steps. Let's represent that as a list\n", + "of tuples (these tuples will contain the step and then the result).\n", "\n", - "Finally, we need to have some state to represent the final response as well as the original input." + "Finally, we need to have some state to represent the final response as well as\n", + "the original input." ] }, { @@ -226,8 +243,8 @@ " },\n", " response: {\n", " value: null,\n", - " }\n", - "}" + " },\n", + "};" ] }, { @@ -236,7 +253,8 @@ "source": [ "## Planning Step\n", "\n", - "Let's now think about creating the planning step. This will use function calling to create a plan." + "Let's now think about creating the planning step. This will use function calling\n", + "to create a plan." ] }, { @@ -249,13 +267,15 @@ "import { zodToJsonSchema } from \"zod-to-json-schema\";\n", "\n", "const plan = zodToJsonSchema(z.object({\n", - " steps: z.array(z.string()).describe(\"different steps to follow, should be in sorted order\")\n", + " steps: z.array(z.string()).describe(\n", + " \"different steps to follow, should be in sorted order\",\n", + " ),\n", "}));\n", "const planFunction = {\n", " name: \"plan\",\n", " description: \"This tool is used to plan the steps to follow\",\n", - " parameters: plan\n", - "}" + " parameters: plan,\n", + "};" ] }, { @@ -268,16 +288,18 @@ "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "import { JsonOutputFunctionsParser } from \"langchain/output_parsers\";\n", "\n", - "const plannerPrompt = ChatPromptTemplate.fromTemplate(`For the given objective, come up with a simple step by step plan. \\\n", + "const plannerPrompt = ChatPromptTemplate.fromTemplate(\n", + " `For the given objective, come up with a simple step by step plan. \\\n", "This plan should involve individual tasks, that if executed correctly will yield the correct answer. Do not add any superfluous steps. \\\n", "The result of the final step should be the final answer. 
Make sure that each step has all the information needed - do not skip steps.\n", "\n", - "{objective}`);\n", + "{objective}`,\n", + ");\n", "const model = new ChatOpenAI({\n", - " modelName: \"gpt-4-0125-preview\"\n", + " modelName: \"gpt-4-0125-preview\",\n", "}).bind({\n", " functions: [planFunction],\n", - " function_call: planFunction\n", + " function_call: planFunction,\n", "});\n", "const parserSingle = new JsonOutputFunctionsParser({ argsOnly: true });\n", "const planner = plannerPrompt.pipe(model).pipe(parserSingle);" @@ -318,7 +340,8 @@ "source": [ "## Re-Plan Step\n", "\n", - "Now, let's create a step that re-does the plan based on the result of the previous step." + "Now, let's create a step that re-does the plan based on the result of the\n", + "previous step." ] }, { @@ -331,14 +354,15 @@ "import { JsonOutputFunctionsParser } from \"langchain/output_parsers\";\n", "\n", "const response = zodToJsonSchema(z.object({\n", - " response: z.string().describe(\"Response to user.\")\n", + " response: z.string().describe(\"Response to user.\"),\n", "}));\n", "const responseFunction = {\n", " name: \"response\",\n", " description: \"Response to user.\",\n", - " parameters: response\n", - "}\n", - "const replannerPrompt = ChatPromptTemplate.fromTemplate(`For the given objective, come up with a simple step by step plan.\n", + " parameters: response,\n", + "};\n", + "const replannerPrompt = ChatPromptTemplate.fromTemplate(\n", + " `For the given objective, come up with a simple step by step plan.\n", "This plan should involve individual tasks, that if executed correctly will yield the correct answer. Do not add any superfluous steps.\n", "The result of the final step should be the final answer. Make sure that each step has all the information needed - do not skip steps.\n", "\n", @@ -353,16 +377,17 @@ "\n", "Update your plan accordingly. If no more steps are needed and you can return to the user, then respond with that and use the 'response' function.\n", "Otherwise, fill out the plan.\n", - "Only add steps to the plan that still NEED to be done. Do not return previously done steps as part of the plan.`);\n", + "Only add steps to the plan that still NEED to be done. 
Do not return previously done steps as part of the plan.`,\n", + ");\n", "const parser = new JsonOutputFunctionsParser();\n", "const replanner = createOpenAIFnRunnable({\n", " functions: [planFunction, responseFunction],\n", " outputParser: parser,\n", " llm: new ChatOpenAI({\n", - " modelName: \"gpt-4-0125-preview\"\n", + " modelName: \"gpt-4-0125-preview\",\n", " }),\n", - " prompt: replannerPrompt\n", - "});\n" + " prompt: replannerPrompt,\n", + "});" ] }, { @@ -385,24 +410,30 @@ " plan: Array;\n", " pastSteps: Array;\n", " response: string | null;\n", - "}\n", + "};\n", "\n", - "async function executeStep(state: PlanExecuteState): Promise> {\n", + "async function executeStep(\n", + " state: PlanExecuteState,\n", + "): Promise> {\n", " const task = state.input;\n", " const agentResponse = await agentExecutor.invoke({ input: task });\n", " return { pastSteps: [task, agentResponse.agentOutcome.returnValues.output] };\n", "}\n", "\n", - "async function planStep(state: PlanExecuteState): Promise> {\n", + "async function planStep(\n", + " state: PlanExecuteState,\n", + "): Promise> {\n", " const plan = await planner.invoke({ objective: state.input });\n", - " return { plan: plan.steps }\n", + " return { plan: plan.steps };\n", "}\n", "\n", - "async function replanStep(state: PlanExecuteState): Promise> {\n", + "async function replanStep(\n", + " state: PlanExecuteState,\n", + "): Promise> {\n", " const output = await replanner.invoke({\n", " input: state.input,\n", " plan: state.plan ? state.plan.join(\"\\n\") : \"\",\n", - " pastSteps: state.pastSteps.join(\"\\n\")\n", + " pastSteps: state.pastSteps.join(\"\\n\"),\n", " });\n", " if (\"response\" in output) {\n", " return { response: output.response };\n", @@ -416,7 +447,7 @@ " return \"true\";\n", " }\n", " return \"false\";\n", - "}\n" + "}" ] }, { @@ -454,8 +485,8 @@ " shouldEnd,\n", " {\n", " \"true\": END,\n", - " \"false\": \"planner\"\n", - " }\n", + " \"false\": \"planner\",\n", + " },\n", ");\n", "\n", "// Finally, we compile it!\n", @@ -517,7 +548,9 @@ ], "source": [ "const config = { recursionLimit: 50 };\n", - "const inputs = { input: \"what is the hometown of the 2024 Australia open winner?\" };\n", + "const inputs = {\n", + " input: \"what is the hometown of the 2024 Australia open winner?\",\n", + "};\n", "\n", "for await (const event of await app.stream(inputs, config)) {\n", " console.log(event);\n", diff --git a/examples/rag/langgraph_agentic_rag.ipynb b/examples/rag/langgraph_agentic_rag.ipynb index 30ff12b2..7ca495fc 100644 --- a/examples/rag/langgraph_agentic_rag.ipynb +++ b/examples/rag/langgraph_agentic_rag.ipynb @@ -6,7 +6,9 @@ "source": [ "# LangGraph Retrieval Agent\n", "\n", - "We can implement [Retrieval Agents](https://js.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents) in [LangGraph](https://js.langchain.com/docs/langgraph)." + "We can implement\n", + "[Retrieval Agents](https://js.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents)\n", + "in [LangGraph](https://js.langchain.com/docs/langgraph)." ] }, { @@ -17,7 +19,8 @@ "\n", "### Load env vars\n", "\n", - "Add a `.env` variable in the root of the `./examples` folder with your variables." + "Add a `.env` file in the root of the `./examples` folder with your environment\n", + "variables.\n",
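+ "\n",
+ "For example (the key names here assume the providers used in this notebook;\n",
+ "adjust them to your setup):\n",
+ "\n",
+ "```bash\n",
+ "# examples/.env\n",
+ "OPENAI_API_KEY=\"your-api-key\"\n",
+ "# Optional, for LangSmith tracing:\n",
+ "LANGCHAIN_API_KEY=\"your-api-key\"\n",
+ "LANGCHAIN_TRACING_V2=\"true\"\n",
+ "```"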
] }, { @@ -101,7 +104,7 @@ "import { CheerioWebBaseLoader } from \"langchain/document_loaders/web/cheerio\";\n", "import { RecursiveCharacterTextSplitter } from \"langchain/text_splitter\";\n", "import { MemoryVectorStore } from \"langchain/vectorstores/memory\";\n", - "import { OpenAIEmbeddings } from \"@langchain/openai\"; \n", + "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "\n", "const urls = [\n", " \"https://lilianweng.github.io/posts/2023-06-23-agent/\",\n", @@ -109,14 +112,22 @@ " \"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/\",\n", "];\n", "\n", - "const docs = await Promise.all(urls.map((url) => new CheerioWebBaseLoader(url).load()));\n", + "const docs = await Promise.all(\n", + " urls.map((url) => new CheerioWebBaseLoader(url).load()),\n", + ");\n", "const docsList = docs.flat();\n", "\n", - "const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 50 });\n", + "const textSplitter = new RecursiveCharacterTextSplitter({\n", + " chunkSize: 500,\n", + " chunkOverlap: 50,\n", + "});\n", "const docSplits = await textSplitter.splitDocuments(docsList);\n", "\n", "// Add to vectorDB\n", - "const vectorStore = await MemoryVectorStore.fromDocuments(docSplits, new OpenAIEmbeddings());\n", + "const vectorStore = await MemoryVectorStore.fromDocuments(\n", + " docSplits,\n", + " new OpenAIEmbeddings(),\n", + ");\n", "\n", "const retriever = vectorStore.asRetriever();" ] @@ -134,8 +145,9 @@ " retriever,\n", " {\n", " name: \"retrieve_blog_posts\",\n", - " description: \"Search and return information about Lilian Weng blog posts on LLM agents, prompt engineering, and adversarial attacks on LLMs.\",\n", - " }\n", + " description:\n", + " \"Search and return information about Lilian Weng blog posts on LLM agents, prompt engineering, and adversarial attacks on LLMs.\",\n", + " },\n", ");\n", "const tools = [tool];\n", "\n", @@ -149,10 +161,11 @@ "metadata": {}, "source": [ "## Agent state\n", - " \n", + "\n", "We will define a graph.\n", "\n", - "You may pass a custom `state` object to the graph, or use a simple list of `messages`.\n", + "You may pass a custom `state` object to the graph, or use a simple list of\n", + "`messages`.\n", "\n", "Our state will be a list of `messages`.\n", "\n", @@ -170,7 +183,7 @@ "source": [ "## Nodes and Edges\n", "\n", - "Each node will - \n", + "Each node will -\n", "\n", "1/ Either be a function or a runnable.\n", "\n", @@ -200,7 +213,11 @@ "import { pull } from \"langchain/hub\";\n", "import { zodToJsonSchema } from \"zod-to-json-schema\";\n", "import { z } from \"zod\";\n", - "import { BaseMessage, FunctionMessage, HumanMessage } from \"@langchain/core/messages\";\n", + "import {\n", + " BaseMessage,\n", + " FunctionMessage,\n", + " HumanMessage,\n", + "} from \"@langchain/core/messages\";\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "import { ChatOpenAI } from \"@langchain/openai\";\n", "import { convertToOpenAIFunction } from \"@langchain/core/utils/function_calling\";\n", @@ -222,7 +239,7 @@ " }\n", " console.log(\"---DECISION: RETRIEVE---\");\n", " return \"continue\";\n", - "};\n", + "}\n", "\n", "/**\n", " * Determines whether the Agent should continue based on the relevance of retrieved documents.\n", @@ -246,10 +263,11 @@ " name: \"give_relevance_score\",\n", " description: \"Give a relevance score to the retrieved documents.\",\n", " parameters: output,\n", - " }\n", - " }\n", + " },\n", + " };\n", "\n", - " const prompt = ChatPromptTemplate.fromTemplate(`You 
are a grader assessing relevance of retrieved docs to a user question.\n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `You are a grader assessing relevance of retrieved docs to a user question.\n", " Here are the retrieved docs:\n", " \\n ------- \\n\n", " {context} \n", @@ -258,7 +276,8 @@ " If the content of the docs are relevant to the users question, score them as relevant.\n", " Give a binary score 'yes' or 'no' score to indicate whether the docs are relevant to the question.\n", " Yes: The docs are relevant to the question.\n", - " No: The docs are not relevant to the question.`);\n", + " No: The docs are not relevant to the question.`,\n", + " );\n", "\n", " const model = new ChatOpenAI({\n", " modelName: \"gpt-4-0125-preview\",\n", @@ -282,19 +301,19 @@ "\n", "/**\n", " * Check the relevance of the previous LLM tool call.\n", - " * \n", + " *\n", " * @param {Array} state - The current state of the agent, including all messages.\n", " * @returns {string} - A directive to either \"yes\" or \"no\" based on the relevance of the documents.\n", " */\n", "function checkRelevance(state: Array) {\n", " console.log(\"---CHECK RELEVANCE---\");\n", " const lastMessage = state[state.length - 1];\n", - " const toolCalls = lastMessage.additional_kwargs.tool_calls\n", + " const toolCalls = lastMessage.additional_kwargs.tool_calls;\n", " if (!toolCalls) {\n", " throw new Error(\"Last message was not a function message\");\n", " }\n", " const parsedArgs = JSON.parse(toolCalls[0].function.arguments);\n", - " \n", + "\n", " if (parsedArgs.binaryScore === \"yes\") {\n", " console.log(\"---DECISION: DOCS RELEVANT---\");\n", " return \"yes\";\n", @@ -327,7 +346,7 @@ " const response = await model.invoke(state);\n", " // We can return just the response because it will be appended to the state.\n", " return [response];\n", - "};\n", + "}\n", "\n", "/**\n", " * Executes a tool based on the last message's function call.\n", @@ -345,7 +364,9 @@ " const lastMessage = state[state.length - 1];\n", " const action = {\n", " tool: lastMessage.additional_kwargs.function_call?.name ?? \"\",\n", - " toolInput: JSON.parse(lastMessage.additional_kwargs.function_call?.arguments ?? \"{}\"),\n", + " toolInput: JSON.parse(\n", + " lastMessage.additional_kwargs.function_call?.arguments ?? \"{}\",\n", + " ),\n", " };\n", " // We call the tool_executor and get back a response.\n", " const response = await toolExecutor.invoke(action);\n", @@ -366,12 +387,14 @@ "async function rewrite(state: Array) {\n", " console.log(\"---TRANSFORM QUERY---\");\n", " const question = state[0].content as string;\n", - " const prompt = ChatPromptTemplate.fromTemplate(`Look at the input and try to reason about the underlying semantic intent / meaning. \\n \n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `Look at the input and try to reason about the underlying semantic intent / meaning. 
\\n \n", " Here is the initial question:\n", " \\n ------- \\n\n", " {question} \n", " \\n ------- \\n\n", - " Formulate an improved question:`);\n", + " Formulate an improved question:`,\n", + " );\n", "\n", " // Grader\n", " const model = new ChatOpenAI({\n", @@ -421,10 +444,10 @@ "source": [ "## Graph\n", "\n", - "* Start with an agent, `callModel`\n", - "* Agent make a decision to call a function\n", - "* If so, then `action` to call tool (retriever)\n", - "* Then call agent with the tool output added to messages (`state`)" + "- Start with an agent, `callModel`\n", + "- Agent make a decision to call a function\n", + "- If so, then `action` to call tool (retriever)\n", + "- Then call agent with the tool output added to messages (`state`)" ] }, { @@ -466,10 +489,10 @@ " // Call tool node\n", " continue: \"retrieve\",\n", " end: END,\n", - " }\n", + " },\n", ");\n", "\n", - "workflow.addEdge(\"retrieve\", \"gradeDocuments\")\n", + "workflow.addEdge(\"retrieve\", \"gradeDocuments\");\n", "\n", "// Edges taken after the `action` node is called.\n", "workflow.addConditionalEdges(\n", @@ -479,8 +502,8 @@ " {\n", " // Call tool node\n", " yes: \"generate\",\n", - " no: \"rewrite\" , // placeholder\n", - " }\n", + " no: \"rewrite\", // placeholder\n", + " },\n", ");\n", "\n", "workflow.addEdge(\"generate\", END);\n", @@ -611,7 +634,11 @@ "source": [ "import { HumanMessage } from \"@langchain/core/messages\";\n", "\n", - "const inputs = [new HumanMessage(\"What are the types of agent memory based on Lilian Weng's blog post?\")];\n", + "const inputs = [\n", + " new HumanMessage(\n", + " \"What are the types of agent memory based on Lilian Weng's blog post?\",\n", + " ),\n", + "];\n", "let finalState;\n", "for await (const output of await app.stream(inputs)) {\n", " for (const [key, value] of Object.entries(output)) {\n", diff --git a/examples/rag/langgraph_crag.ipynb b/examples/rag/langgraph_crag.ipynb index 26ad689c..97872b67 100644 --- a/examples/rag/langgraph_crag.ipynb +++ b/examples/rag/langgraph_crag.ipynb @@ -6,11 +6,14 @@ "source": [ "# Corrective RAG (CRAG)\n", "\n", - "Self-reflection can enhance RAG, enabling correction of poor quality retrieval or generations.\n", + "Self-reflection can enhance RAG, enabling correction of poor quality retrieval\n", + "or generations.\n", "\n", - "Several recent papers focus on this theme, but implementing the ideas can be tricky.\n", + "Several recent papers focus on this theme, but implementing the ideas can be\n", + "tricky.\n", "\n", - "Here we show how to implement ideas from the `Corrective RAG (CRAG)` paper [here](https://arxiv.org/pdf/2401.15884.pdf) using LangGraph.\n", + "Here we show how to implement ideas from the `Corrective RAG (CRAG)` paper\n", + "[here](https://arxiv.org/pdf/2401.15884.pdf) using LangGraph.\n", "\n", "## Dependencies\n", "\n", @@ -72,28 +75,32 @@ "source": [ "## CRAG Detail\n", "\n", - "Corrective-RAG (CRAG) is a recent paper that introduces an interesting approach for self-reflective RAG. \n", + "Corrective-RAG (CRAG) is a recent paper that introduces an interesting approach\n", + "for self-reflective RAG.\n", "\n", "The framework grades retrieved documents relative to the question:\n", "\n", "1. 
Correct documents -\n",
     "\n",
-    "* If at least one document exceeds the threshold for relevance, then it proceeds to generation\n",
-    "* Before generation, it performns knowledge refinement\n",
-    "* This paritions the document into \"knowledge strips\"\n",
-    "* It grades each strip, and filters our irrelevant ones \n",
+    "- If at least one document exceeds the threshold for relevance, then it proceeds\n",
+    "  to generation\n",
+    "- Before generation, it performs knowledge refinement\n",
+    "- This partitions the document into \"knowledge strips\"\n",
+    "- It grades each strip, and filters out irrelevant ones\n",
     "\n",
     "2. Ambiguous or incorrect documents -\n",
     "\n",
-    "* If all documents fall below the relevance threshold or if the grader is unsure, then the framework seeks an additional datasource\n",
-    "* It will use web search to supplement retrieval\n",
-    "* The diagrams in the paper also suggest that query re-writing is used here \n",
+    "- If all documents fall below the relevance threshold or if the grader is\n",
+    "  unsure, then the framework seeks an additional datasource\n",
+    "- It will use web search to supplement retrieval\n",
+    "- The diagrams in the paper also suggest that query re-writing is used here\n",
     "\n",
     "![image.png](attachment:image.png)\n",
     "\n",
     "---\n",
     "\n",
-    "Let's implement some of these ideas from scratch using [LangGraph](https://js.langchain.com/docs/langgraph)."
+    "Let's implement some of these ideas from scratch using\n",
+    "[LangGraph](https://js.langchain.com/docs/langgraph)."
   ]
  },
  {
@@ -101,7 +108,7 @@
    "metadata": {},
    "source": [
     "## Retriever\n",
-    " \n",
+    "\n",
     "Let's index 3 blog posts."
   ]
  },
@@ -156,7 +163,7 @@
     "import { CheerioWebBaseLoader } from \"langchain/document_loaders/web/cheerio\";\n",
     "import { RecursiveCharacterTextSplitter } from \"langchain/text_splitter\";\n",
     "import { MemoryVectorStore } from \"langchain/vectorstores/memory\";\n",
-    "import { OpenAIEmbeddings } from \"@langchain/openai\"; \n",
+    "import { OpenAIEmbeddings } from \"@langchain/openai\";\n",
     "\n",
     "const urls = [\n",
     "  \"https://lilianweng.github.io/posts/2023-06-23-agent/\",\n",
@@ -164,14 +171,22 @@
     "  \"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/\",\n",
     "];\n",
     "\n",
-    "const docs = await Promise.all(urls.map((url) => new CheerioWebBaseLoader(url).load()));\n",
+    "const docs = await Promise.all(\n",
+    "  urls.map((url) => new CheerioWebBaseLoader(url).load()),\n",
+    ");\n",
     "const docsList = docs.flat();\n",
     "\n",
-    "const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 250, chunkOverlap: 0 });\n",
+    "const textSplitter = new RecursiveCharacterTextSplitter({\n",
+    "  chunkSize: 250,\n",
+    "  chunkOverlap: 0,\n",
+    "});\n",
     "const docSplits = await textSplitter.splitDocuments(docsList);\n",
     "\n",
     "// Add to vectorDB\n",
-    "const vectorStore = await MemoryVectorStore.fromDocuments(docSplits, new OpenAIEmbeddings());\n",
+    "const vectorStore = await MemoryVectorStore.fromDocuments(\n",
+    "  docSplits,\n",
+    "  new OpenAIEmbeddings(),\n",
+    ");\n",
     "const retriever = vectorStore.asRetriever();"
   ]
  },
@@ -180,7 +195,7 @@
    "metadata": {},
    "source": [
     "## State\n",
-    " \n",
+    "\n",
     "We will define a graph.\n",
     "\n",
     "Our state will be an `object`.\n",
@@ -202,14 +217,14 @@
     " * An object where each key is a string.\n",
     " */\n",
     "  keys: Record<string, any>;\n",
-    "}\n",
+    "};\n",
     "\n",
     "const graphState = {\n",
     "  keys: {\n",
     "    value: null,\n",
     "    default: () => ({}),\n",
-    "  }\n",
-    "}"
+    "  },\n",
+    "};"
   ]
  },
  {
@@ -229,10 +244,14 @@
     "\n",
     "We can make some 
simplifications from the paper:\n", "\n", - "* Let's skip the knowledge refinement phase as a first pass. This can be added back as a node, if desired. \n", - "* If *any* document is irrelevant, let's opt to supplement retrieval with web search. \n", - "* We'll use [Tavily Search](https://js.langchain.com/docs/integrations/tools/tavily_search) for web search.\n", - "* Let's use query re-writing to optimize the query for web search.\n", + "- Let's skip the knowledge refinement phase as a first pass. This can be added\n", + " back as a node, if desired.\n", + "- If _any_ document is irrelevant, let's opt to supplement retrieval with web\n", + " search.\n", + "- We'll use\n", + " [Tavily Search](https://js.langchain.com/docs/integrations/tools/tavily_search)\n", + " for web search.\n", + "- Let's use query re-writing to optimize the query for web search.\n", "\n", "Here is our graph flow:\n", "\n", @@ -247,7 +266,7 @@ "source": [ "import { TavilySearchResults } from \"@langchain/community/tools/tavily_search\";\n", "import { StructuredTool } from \"@langchain/core/tools\";\n", - "import { type DocumentInterface, Document } from \"@langchain/core/documents\";\n", + "import { Document, type DocumentInterface } from \"@langchain/core/documents\";\n", "import { z } from \"zod\";\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "import { pull } from \"langchain/hub\";\n", @@ -262,8 +281,9 @@ "});\n", "class Grade extends StructuredTool {\n", " name = \"grade\";\n", - " description = \"Grade the relevance of the retrieved documents to the question. Either 'yes' or 'no'.\";\n", - " schema = zodScore\n", + " description =\n", + " \"Grade the relevance of the retrieved documents to the question. Either 'yes' or 'no'.\";\n", + " schema = zodScore;\n", " async _call(input: z.infer) {\n", " return JSON.stringify(input);\n", " }\n", @@ -272,7 +292,7 @@ "\n", "/**\n", " * Retrieve documents\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -289,13 +309,13 @@ " keys: {\n", " documents,\n", " question,\n", - " }\n", - " }\n", - "};\n", + " },\n", + " };\n", + "}\n", "\n", "/**\n", " * Generate answer\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -305,7 +325,7 @@ " const stateObject = state.keys;\n", " const documents = stateObject.documents;\n", " const question = stateObject.question;\n", - " \n", + "\n", " // Pull in the prompt\n", " const prompt = await pull(\"rlm/rag-prompt\");\n", "\n", @@ -320,20 +340,23 @@ "\n", " const formattedDocs = documents.map((doc) => doc.pageContent).join(\"\\n\\n\");\n", "\n", - " const generation = await ragChain.invoke({ context: formattedDocs, question });\n", + " const generation = await ragChain.invoke({\n", + " context: formattedDocs,\n", + " question,\n", + " });\n", "\n", " return {\n", " keys: {\n", " documents,\n", " question,\n", " generation,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Determines whether the retrieved documents are relevant to the question.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns 
{Promise} The new state object.\n", @@ -357,7 +380,8 @@ " tool_choice: gradeToolOai,\n", " });\n", "\n", - " const prompt = ChatPromptTemplate.fromTemplate(`You are a grader assessing relevance of a retrieved document to a user question.\n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `You are a grader assessing relevance of a retrieved document to a user question.\n", " Here is the retrieved document:\n", " \n", " {context}\n", @@ -365,7 +389,8 @@ " Here is the user question: {question}\n", "\n", " If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant.\n", - " Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.`);\n", + " Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.`,\n", + " );\n", "\n", " // Chain\n", " const chain = prompt.pipe(llmWithTool).pipe(parser);\n", @@ -375,10 +400,10 @@ " const grade = await chain.invoke({ context: doc.pageContent, question });\n", " const { args } = grade[0];\n", " if (args.binaryScore === \"yes\") {\n", - " console.log(\"---GRADE: DOCUMENT RELEVANT---\")\n", + " console.log(\"---GRADE: DOCUMENT RELEVANT---\");\n", " filteredDocs.push(doc);\n", " } else {\n", - " console.log(\"---GRADE: DOCUMENT NOT RELEVANT---\")\n", + " console.log(\"---GRADE: DOCUMENT NOT RELEVANT---\");\n", " }\n", " }\n", "\n", @@ -386,13 +411,13 @@ " keys: {\n", " documents: filteredDocs,\n", " question,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Transform the query to produce a better question.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -404,13 +429,15 @@ " const documents = stateObject.documents;\n", "\n", " // Pull in the prompt\n", - " const prompt = ChatPromptTemplate.fromTemplate(`You are generating a question that is well optimized for semantic search retrieval.\n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `You are generating a question that is well optimized for semantic search retrieval.\n", " Look at the input and try to reason about the underlying sematic intent / meaning.\n", " Here is the initial question:\n", " \\n ------- \\n\n", " {question} \n", " \\n ------- \\n\n", - " Formulate an improved question: `)\n", + " Formulate an improved question: `,\n", + " );\n", "\n", " // Grader\n", " const model = new ChatOpenAI({\n", @@ -420,20 +447,20 @@ " });\n", "\n", " // Prompt\n", - " const chain = prompt.pipe(model).pipe(new StringOutputParser())\n", + " const chain = prompt.pipe(model).pipe(new StringOutputParser());\n", " const betterQuestion = await chain.invoke({ question });\n", "\n", " return {\n", " keys: {\n", " question: betterQuestion,\n", " documents,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Web search based on the re-phrased question using Tavily API.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -453,13 +480,13 @@ " keys: {\n", " question,\n", " documents: newDocuments,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Determines whether to generate an answer, or re-generate a question.\n", - " * \n", + " *\n", " * @param 
{GraphState} state The current state of the graph.\n", " * @returns {\"transformQuery\" | \"generate\"} Next node to call\n", " */\n", @@ -477,7 +504,7 @@ " // We have relevant documents, so generate answer\n", " console.log(\"---DECISION: GENERATE---\");\n", " return \"generate\";\n", - "};" + "}" ] }, { @@ -495,7 +522,7 @@ "metadata": {}, "outputs": [], "source": [ - "import { StateGraph, END } from \"@langchain/langgraph\";\n", + "import { END, StateGraph } from \"@langchain/langgraph\";\n", "\n", "const workflow = new StateGraph({\n", " channels: graphState,\n", @@ -517,7 +544,7 @@ " {\n", " transformQuery: \"transformQuery\",\n", " generate: \"generate\",\n", - " }\n", + " },\n", ");\n", "workflow.addEdge(\"transformQuery\", \"webSearch\");\n", "workflow.addEdge(\"webSearch\", \"generate\");\n", @@ -624,7 +651,9 @@ } ], "source": [ - "const inputs = { keys: { question: \"Explain how the different types of agent memory work.\" }};\n", + "const inputs = {\n", + " keys: { question: \"Explain how the different types of agent memory work.\" },\n", + "};\n", "const config = { recursionLimit: 50 };\n", "let finalGeneration;\n", "for await (const output of await app.stream(inputs, config)) {\n", diff --git a/examples/rag/langgraph_crag_mistral.ipynb b/examples/rag/langgraph_crag_mistral.ipynb index 63326e22..c6b300cf 100644 --- a/examples/rag/langgraph_crag_mistral.ipynb +++ b/examples/rag/langgraph_crag_mistral.ipynb @@ -11,25 +11,32 @@ "source": [ "# Corrective RAG\n", "\n", - "Self-reflection can enhance RAG, enabling correction of poor quality retrieval or generations.\n", + "Self-reflection can enhance RAG, enabling correction of poor quality retrieval\n", + "or generations.\n", "\n", - "Several recent papers focus on this theme, but implementing the ideas can be tricky.\n", + "Several recent papers focus on this theme, but implementing the ideas can be\n", + "tricky.\n", "\n", - "Here we show how to implement self-reflective RAG using `Mistral` and `LangGraph`.\n", + "Here we show how to implement self-reflective RAG using `Mistral` and\n", + "`LangGraph`.\n", "\n", - "We'll focus on ideas from one paper, `Corrective RAG (CRAG)` [here](https://arxiv.org/pdf/2401.15884.pdf).\n", + "We'll focus on ideas from one paper, `Corrective RAG (CRAG)`\n", + "[here](https://arxiv.org/pdf/2401.15884.pdf).\n", "\n", "![image.png](attachment:image.png)\n", "\n", - "### Running Locally \n", + "### Running Locally\n", "\n", - "If you want to run this locally (e.g., on your laptop), use [Ollama](https://ollama.ai/library/mistral/tags):\n", + "If you want to run this locally (e.g., on your laptop), use\n", + "[Ollama](https://ollama.ai/library/mistral/tags):\n", + "\n", + "- Download [Ollama app](https://ollama.ai/).\n", + "- Download a `Mistral` model e.g., `ollama pull mistral:7b-instruct`, from\n", + " various Mistral versions [here](https://ollama.ai/library/mistral) and Mixtral\n", + " versions [here](https://ollama.ai/library/mixtral) available.\n", + "- Download LLaMA2 `ollama pull llama2:latest` to use Ollama embeddings.\n", + "- Set flags indicating we will run locally and the Mistral model downloaded:\n", "\n", - "* Download [Ollama app](https://ollama.ai/).\n", - "* Download a `Mistral` model e.g., `ollama pull mistral:7b-instruct`, from various Mistral versions [here](https://ollama.ai/library/mistral) and Mixtral versions [here](https://ollama.ai/library/mixtral) available.\n", - "* Download LLaMA2 `ollama pull llama2:latest` to use Ollama embeddings.\n", - "* Set flags indicating we will run 
locally and the Mistral model downloaded:\n", - " \n", "```typescript\n", "const runLocal = true;\n", "const localLlm = \"mistral\";\n", @@ -46,9 +53,13 @@ "\n", "Add a `.env` variable in the root of the repo with your variables.\n", "\n", - "* Set `TOGETHER_AI_API_KEY` (optional if you don't want to run the chat model locally via Ollama). You can create an account [here](https://www.together.ai/).\n", - "* Set `MISTRAL_API_KEY` (optional if you don't want to run embeddings locally via Ollama) for Mistral AI embeddings.\n", - "* Set `TAVILY_API_KEY` to enable web search [here](https://app.tavily.com/sign-in).\n" + "- Set `TOGETHER_AI_API_KEY` (optional if you don't want to run the chat model\n", + " locally via Ollama). You can create an account\n", + " [here](https://www.together.ai/).\n", + "- Set `MISTRAL_API_KEY` (optional if you don't want to run embeddings locally\n", + " via Ollama) for Mistral AI embeddings.\n", + "- Set `TAVILY_API_KEY` to enable web search\n", + " [here](https://app.tavily.com/sign-in)." ] }, { @@ -86,9 +97,10 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Tracing \n", + "### Tracing\n", "\n", - "* Optionally, use [LangSmith](https://docs.smith.langchain.com/) for tracing (shown at bottom) by setting: \n", + "- Optionally, use [LangSmith](https://docs.smith.langchain.com/) for tracing\n", + " (shown at bottom) by setting:\n", "\n", "```bash\n", "export LANGCHAIN_TRACING_V2=true\n", @@ -104,7 +116,7 @@ "outputs": [], "source": [ "const runLocal = true;\n", - "const localLlm = \"mistral\"" + "const localLlm = \"mistral\";" ] }, { @@ -113,13 +125,18 @@ "source": [ "## Indexing\n", "\n", - "First, let's index a popular blog post on agents. \n", + "First, let's index a popular blog post on agents.\n", "\n", - "We can use [Mistral embeddings](https://js.langchain.com/docs/integrations/text_embedding/mistralai).\n", + "We can use\n", + "[Mistral embeddings](https://js.langchain.com/docs/integrations/text_embedding/mistralai).\n", "\n", - "For local embeddings, we can use [Ollama](https://js.langchain.com/docs/integrations/text_embedding/ollama). You'll need to run `ollama pull nomic-embed-text` to pull the embeddings model locally.\n", + "For local embeddings, we can use\n", + "[Ollama](https://js.langchain.com/docs/integrations/text_embedding/ollama).\n", + "You'll need to run `ollama pull nomic-embed-text` to pull the embeddings model\n", + "locally.\n", "\n", - "We'll use a local demo vectorstore, but you can swap in [your preferred production-ready choice](https://js.langchain.com/docs/integrations/vectorstores)." + "We'll use a local demo vectorstore, but you can swap in\n", + "[your preferred production-ready choice](https://js.langchain.com/docs/integrations/vectorstores)." 
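+    "\n",
+    "For example, swapping the demo store for a disk-backed one is a small change\n",
+    "(a sketch, assuming the `hnswlib-node` integration is installed):\n",
+    "\n",
+    "```typescript\n",
+    "import { HNSWLib } from \"@langchain/community/vectorstores/hnswlib\";\n",
+    "\n",
+    "// Same `fromDocuments` interface as the MemoryVectorStore used below.\n",
+    "const vectorStore = await HNSWLib.fromDocuments(docSplits, embeddings);\n",
+    "const retriever = vectorStore.asRetriever();\n",
+    "```\n",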
   ]
  },
  {
@@ -181,7 +198,10 @@
     "const loader = new CheerioWebBaseLoader(url);\n",
     "const docs = await loader.load();\n",
     "\n",
-    "const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 100 });\n",
+    "const textSplitter = new RecursiveCharacterTextSplitter({\n",
+    "  chunkSize: 500,\n",
+    "  chunkOverlap: 100,\n",
+    "});\n",
     "const docSplits = await textSplitter.splitDocuments(docs);\n",
     "\n",
     "let embeddings;\n",
@@ -193,7 +213,10 @@
     "}\n",
     "\n",
     "// Add to vectorDB\n",
-    "const vectorStore = await MemoryVectorStore.fromDocuments(docSplits, embeddings);\n",
+    "const vectorStore = await MemoryVectorStore.fromDocuments(\n",
+    "  docSplits,\n",
+    "  embeddings,\n",
+    ");\n",
     "const retriever = vectorStore.asRetriever();"
   ]
  },
@@ -208,27 +231,33 @@
    "source": [
     "## Corrective RAG\n",
     "\n",
-    "Let's implement self-reflective RAG with some ideas from the CRAG (Corrective RAG) [paper](https://arxiv.org/pdf/2401.15884.pdf):\n",
+    "Let's implement self-reflective RAG with some ideas from the CRAG (Corrective\n",
+    "RAG) [paper](https://arxiv.org/pdf/2401.15884.pdf):\n",
     "\n",
-    "* Grade documents for relevance relative to the question.\n",
-    "* If any are irrelevant, then we will supplement the context used for generation with web search.\n",
-    "* For web search, we will re-phrase the question and use Tavily API.\n",
-    "* We will then pass retrieved documents and web results to an LLM for final answer generation.\n",
+    "- Grade documents for relevance relative to the question.\n",
+    "- If any are irrelevant, then we will supplement the context used for generation\n",
+    "  with web search.\n",
+    "- For web search, we will re-phrase the question and use the Tavily API.\n",
+    "- We will then pass retrieved documents and web results to an LLM for final\n",
+    "  answer generation.\n",
     "\n",
     "Here is a schematic of our graph in more detail:\n",
     "\n",
     "![image.png](attachment:image.png)\n",
     "\n",
-    "We will implement this using [LangGraph](https://js.langchain.com/docs/langgraph): \n",
+    "We will implement this using\n",
+    "[LangGraph](https://js.langchain.com/docs/langgraph):\n",
     "\n",
-    "* See video [here](https://www.youtube.com/watch?ref=blog.langchain.dev&v=pbAd8O1Lvm4&feature=youtu.be)\n",
-    "* See blog post [here](https://blog.langchain.dev/agentic-rag-with-langgraph/)\n",
+    "- See video\n",
+    "  [here](https://www.youtube.com/watch?ref=blog.langchain.dev&v=pbAd8O1Lvm4&feature=youtu.be)\n",
+    "- See blog post [here](https://blog.langchain.dev/agentic-rag-with-langgraph/)\n",
     "\n",
     "---\n",
     "\n",
     "### State\n",
     "\n",
-    "Every node in our graph will modify `state`, which is dict that contains values (`question`, `documents`, etc) relevant to RAG."
+    "Every node in our graph will modify `state`, which is a dict that contains\n",
+    "values (`question`, `documents`, etc.) relevant to RAG.\n",
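+    "\n",
+    "For instance, partway through a run the state might look like this\n",
+    "(illustrative values only):\n",
+    "\n",
+    "```typescript\n",
+    "const exampleState = {\n",
+    "  keys: {\n",
+    "    question: \"Explain how the different types of agent memory work.\",\n",
+    "    documents: [], // DocumentInterface[] filled in by the retrieve node\n",
+    "    local: true, // whether to run the models locally via Ollama\n",
+    "  },\n",
+    "};\n",
+    "```"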
] }, { @@ -245,14 +274,14 @@ " * An object where each key is a string.\n", " */\n", " keys: Record;\n", - "}\n", + "};\n", "\n", "const graphState = {\n", " keys: {\n", " value: null,\n", " default: () => ({}),\n", - " }\n", - "}" + " },\n", + "};" ] }, { @@ -275,7 +304,7 @@ "outputs": [], "source": [ "import { TavilySearchResults } from \"@langchain/community/tools/tavily_search\";\n", - "import { DocumentInterface, Document } from \"@langchain/core/documents\";\n", + "import { Document, DocumentInterface } from \"@langchain/core/documents\";\n", "import { z } from \"zod\";\n", "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", "import { pull } from \"langchain/hub\";\n", @@ -288,7 +317,7 @@ "\n", "/**\n", " * Retrieve documents\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -303,13 +332,13 @@ " ...stateObject,\n", " documents,\n", " question,\n", - " }\n", - " }\n", - "};\n", + " },\n", + " };\n", + "}\n", "\n", "/**\n", " * Generate answer\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -320,7 +349,7 @@ " const documents = stateObject.documents;\n", " const question = stateObject.question;\n", " const local = stateObject.local;\n", - " \n", + "\n", " // Pull in the prompt\n", " const prompt = await pull(\"rlm/rag-prompt\");\n", "\n", @@ -339,19 +368,22 @@ "\n", " const formattedDocs = documents.map((doc) => doc.pageContent).join(\"\\n\\n\");\n", "\n", - " const generation = await ragChain.invoke({ context: formattedDocs, question });\n", + " const generation = await ragChain.invoke({\n", + " context: formattedDocs,\n", + " question,\n", + " });\n", "\n", " return {\n", " keys: {\n", " ...stateObject,\n", " generation,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Determines whether the retrieved documents are relevant to the question.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -368,11 +400,13 @@ " llm = new ChatOllama({\n", " model: localLlm,\n", " temperature: 0,\n", - " format: \"json\"\n", + " format: \"json\",\n", " });\n", " } else {\n", " const zodScore = z.object({\n", - " binaryScore: z.enum([\"yes\", \"no\"]).describe(\"Relevance score 'yes' or 'no'\"),\n", + " binaryScore: z.enum([\"yes\", \"no\"]).describe(\n", + " \"Relevance score 'yes' or 'no'\",\n", + " ),\n", " });\n", " llm = new ChatTogetherAI({\n", " modelName: \"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n", @@ -385,16 +419,19 @@ " });\n", " }\n", "\n", - " const prompt = ChatPromptTemplate.fromTemplate(`You are a grader assessing relevance of a retrieved document to a user question. \\n \n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `You are a grader assessing relevance of a retrieved document to a user question. \\n \n", " Here is the retrieved document: \\n\\n {context} \\n\\n\n", " Here is the user question: {question} \\n\n", " If the document contains keywords related to the user question, grade it as relevant. \\n\n", " It does not need to be a stringent test. 
The goal is to filter out erroneous retrievals. \\n\n",
     "    Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question. \\n\n",
     "    Use the 'grade' tool to provide the score.\n",
-    "    Instructions: {formatInstructions}`);\n",
+    "    Instructions: {formatInstructions}`,\n",
+    "  );\n",
     "\n",
-    "  const formatInstructions = \"Respond with a valid JSON object containing a single key 'binaryScore' with a value of 'yes' or 'no'.\";\n",
+    "  const formatInstructions =\n",
+    "    \"Respond with a valid JSON object containing a single key 'binaryScore' with a value of 'yes' or 'no'.\";\n",
     "\n",
     "  const filteredDocs: Array<DocumentInterface> = [];\n",
     "  let runWebSearch = \"No\";\n",
@@ -415,8 +452,9 @@
     "  });\n",
     "  for await (const item of stream) {\n",
     "    finalRes += (item as BaseMessageChunk).content;\n",
-    "    const prevCharCodeAt = finalRes.length > 1 && finalRes.charCodeAt(finalRes.length - 2);\n",
-    "    const charCodeAt = finalRes.charCodeAt(finalRes.length - 1)\n",
+    "    const prevCharCodeAt = finalRes.length > 1 &&\n",
+    "      finalRes.charCodeAt(finalRes.length - 2);\n",
+    "    const charCodeAt = finalRes.charCodeAt(finalRes.length - 1);\n",
     "    if (prevCharCodeAt === 9 && charCodeAt === 9) {\n",
     "      controller.abort();\n",
     "    }\n",
@@ -441,13 +479,13 @@
     "      ...stateObject,\n",
     "      documents: filteredDocs,\n",
     "      runWebSearch,\n",
-    "    }\n",
-    "  }\n",
+    "    },\n",
+    "  };\n",
     "}\n",
     "\n",
     "/**\n",
     " * Transform the query to produce a better question.\n",
-    " * \n",
+    " *\n",
     " * @param {GraphState} state The current state of the graph.\n",
     " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n",
     " * @returns {Promise<GraphState>} The new state object.\n",
@@ -459,13 +497,15 @@
     "  const local = stateObject.local;\n",
     "\n",
     "  // Pull in the prompt\n",
-    "  const prompt = ChatPromptTemplate.fromTemplate(`You are generating questions that is well optimized for semantic search retrieval. \\n \n",
+    "  const prompt = ChatPromptTemplate.fromTemplate(\n",
+    "    `You are generating a question that is well optimized for semantic search retrieval. \\n \n",
     "    Look at the input and try to reason about the underlying sematic intent / meaning. 
\\n \n", " Here is the initial question:\n", " \\n ------- \\n\n", " {question} \n", " \\n ------- \\n\n", - " Provide an improved question without any preamble, only respond with the updated question: `)\n", + " Provide an improved question without any preamble, only respond with the updated question: `,\n", + " );\n", "\n", " // Grader\n", " let llm;\n", @@ -479,20 +519,20 @@ " }\n", "\n", " // Prompt\n", - " const chain = prompt.pipe(llm).pipe(new StringOutputParser())\n", + " const chain = prompt.pipe(llm).pipe(new StringOutputParser());\n", " const betterQuestion = await chain.invoke({ question });\n", "\n", " return {\n", " keys: {\n", " ...stateObject,\n", " question: betterQuestion,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Web search based on the re-phrased question using Tavily API.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -512,13 +552,13 @@ " keys: {\n", " ...stateObject,\n", " documents: newDocs,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Determines whether to generate an answer or re-generate a question for web search.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @returns {Promise} The new state object.\n", " */\n", @@ -536,7 +576,7 @@ " // We have relevant documents, so generate answer\n", " console.log(\"---DECISION: GENERATE---\");\n", " return \"generate\";\n", - "};" + "}" ] }, { @@ -554,10 +594,10 @@ "metadata": {}, "outputs": [], "source": [ - "import { StateGraph, END } from \"@langchain/langgraph\";\n", + "import { END, StateGraph } from \"@langchain/langgraph\";\n", "\n", "const workflow = new StateGraph({\n", - " channels: graphState\n", + " channels: graphState,\n", "});\n", "\n", "// Define the nodes\n", @@ -576,7 +616,7 @@ " {\n", " transformQuery: \"transformQuery\",\n", " generate: \"generate\",\n", - " }\n", + " },\n", ");\n", "workflow.addEdge(\"transformQuery\", \"webSearch\");\n", "workflow.addEdge(\"webSearch\", \"generate\");\n", @@ -696,7 +736,7 @@ " keys: {\n", " question: \"Explain how the different types of agent memory work.\",\n", " local: runLocal,\n", - " }\n", + " },\n", "};\n", "const config = { recursionLimit: 50 };\n", "let finalGeneration;\n", diff --git a/examples/rag/langgraph_self_rag.ipynb b/examples/rag/langgraph_self_rag.ipynb index bbc04b52..ad269ccb 100644 --- a/examples/rag/langgraph_self_rag.ipynb +++ b/examples/rag/langgraph_self_rag.ipynb @@ -11,11 +11,14 @@ "source": [ "# Self-RAG\n", "\n", - "Self-reflection can enhance RAG, enabling correction of poor quality retrieval or generations.\n", + "Self-reflection can enhance RAG, enabling correction of poor quality retrieval\n", + "or generations.\n", "\n", - "Several recent papers focus on this theme, but implementing the ideas can be tricky.\n", + "Several recent papers focus on this theme, but implementing the ideas can be\n", + "tricky.\n", "\n", - "Here we show how to implement ideas from the `Self RAG` paper [here](https://arxiv.org/abs/2310.11511) using LangGraph.\n", + "Here we show how to implement ideas from the `Self RAG` paper\n", + "[here](https://arxiv.org/abs/2310.11511) using LangGraph.\n", "\n", "## Dependencies\n", "\n", @@ -23,38 +26,43 @@ "\n", "## Self-RAG Detail\n", "\n", - "Self-RAG is a recent paper that introduces an interesting approach for self-reflective RAG. 
\n", + "Self-RAG is a recent paper that introduces an interesting approach for\n", + "self-reflective RAG.\n", "\n", - "The framework trains an LLM (e.g., LLaMA2-7b or 13b) to generate tokens that govern the RAG process in a few ways:\n", + "The framework trains an LLM (e.g., LLaMA2-7b or 13b) to generate tokens that\n", + "govern the RAG process in a few ways:\n", "\n", "1. Should I retrieve from retriever, `R` -\n", "\n", - "* Token: `Retrieve`\n", - "* Input: `x (question)` OR `x (question)`, `y (generation)`\n", - "* Decides when to retrieve `D` chunks with `R`\n", - "* Output: `yes, no, continue`\n", + "- Token: `Retrieve`\n", + "- Input: `x (question)` OR `x (question)`, `y (generation)`\n", + "- Decides when to retrieve `D` chunks with `R`\n", + "- Output: `yes, no, continue`\n", "\n", "2. Are the retrieved passages `D` relevant to the question `x` -\n", "\n", - "* Token: `ISREL`\n", - "* * Input: (`x (question)`, `d (chunk)`) for `d` in `D`\n", - "* `d` provides useful information to solve `x`\n", - "* Output: `relevant, irrelevant`\n", + "- Token: `ISREL`\n", + "-\n", + " - Input: (`x (question)`, `d (chunk)`) for `d` in `D`\n", + "- `d` provides useful information to solve `x`\n", + "- Output: `relevant, irrelevant`\n", "\n", + "3. Are the LLM generation from each chunk in `D` is relevant to the chunk\n", + " (hallucinations, etc) -\n", "\n", - "3. Are the LLM generation from each chunk in `D` is relevant to the chunk (hallucinations, etc) -\n", + "- Token: `ISSUP`\n", + "- Input: `x (question)`, `d (chunk)`, `y (generation)` for `d` in `D`\n", + "- All of the verification-worthy statements in `y (generation)` are supported by\n", + " `d`\n", + "- Output: `{fully supported, partially supported, no support`\n", "\n", - "* Token: `ISSUP`\n", - "* Input: `x (question)`, `d (chunk)`, `y (generation)` for `d` in `D`\n", - "* All of the verification-worthy statements in `y (generation)` are supported by `d`\n", - "* Output: `{fully supported, partially supported, no support`\n", + "4. The LLM generation from each chunk in `D` is a useful response to\n", + " `x (question)` -\n", "\n", - "4. The LLM generation from each chunk in `D` is a useful response to `x (question)` -\n", - "\n", - "* Token: `ISUSE`\n", - "* Input: `x (question)`, `y (generation)` for `d` in `D`\n", - "* `y (generation)` is a useful response to `x (question)`.\n", - "* Output: `{5, 4, 3, 2, 1}`\n", + "- Token: `ISUSE`\n", + "- Input: `x (question)`, `y (generation)` for `d` in `D`\n", + "- `y (generation)` is a useful response to `x (question)`.\n", + "- Output: `{5, 4, 3, 2, 1}`\n", "\n", "We can represent this as a graph:\n", "\n", @@ -62,7 +70,8 @@ "\n", "---\n", "\n", - "Let's implement some of these ideas from scratch using [LangGraph](https://js.langchain.com/docs/langgraph)." + "Let's implement some of these ideas from scratch using\n", + "[LangGraph](https://js.langchain.com/docs/langgraph)." 
] }, { @@ -158,7 +167,7 @@ "import { CheerioWebBaseLoader } from \"langchain/document_loaders/web/cheerio\";\n", "import { RecursiveCharacterTextSplitter } from \"langchain/text_splitter\";\n", "import { MemoryVectorStore } from \"langchain/vectorstores/memory\";\n", - "import { OpenAIEmbeddings } from \"@langchain/openai\"; \n", + "import { OpenAIEmbeddings } from \"@langchain/openai\";\n", "\n", "const urls = [\n", " \"https://lilianweng.github.io/posts/2023-06-23-agent/\",\n", @@ -166,14 +175,22 @@ " \"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/\",\n", "];\n", "\n", - "const docs = await Promise.all(urls.map((url) => new CheerioWebBaseLoader(url).load()));\n", + "const docs = await Promise.all(\n", + " urls.map((url) => new CheerioWebBaseLoader(url).load()),\n", + ");\n", "const docsList = docs.flat();\n", "\n", - "const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 100, chunkOverlap: 50 });\n", + "const textSplitter = new RecursiveCharacterTextSplitter({\n", + " chunkSize: 100,\n", + " chunkOverlap: 50,\n", + "});\n", "const docSplits = await textSplitter.splitDocuments(docsList);\n", "\n", "// Add to vectorDB\n", - "const vectorStore = await MemoryVectorStore.fromDocuments(docSplits, new OpenAIEmbeddings());\n", + "const vectorStore = await MemoryVectorStore.fromDocuments(\n", + " docSplits,\n", + " new OpenAIEmbeddings(),\n", + ");\n", "const retriever = vectorStore.asRetriever();" ] }, @@ -182,7 +199,7 @@ "metadata": {}, "source": [ "## State\n", - " \n", + "\n", "We will define a graph.\n", "\n", "Our state will be an `object`.\n", @@ -204,14 +221,14 @@ " * An object where each key is a string.\n", " */\n", " keys: Record;\n", - "}\n", + "};\n", "\n", "const graphState = {\n", " keys: {\n", " value: null,\n", " default: () => ({}),\n", - " }\n", - "}" + " },\n", + "};" ] }, { @@ -259,8 +276,9 @@ "});\n", "class Grade extends StructuredTool {\n", " name = \"grade\";\n", - " description = \"Grade the relevance of the retrieved documents to the question. Either 'yes' or 'no'.\";\n", - " schema = zodScore\n", + " description =\n", + " \"Grade the relevance of the retrieved documents to the question. 
Either 'yes' or 'no'.\";\n", + " schema = zodScore;\n", " async _call(input: z.infer) {\n", " return JSON.stringify(input);\n", " }\n", @@ -269,7 +287,7 @@ "\n", "/**\n", " * Retrieve documents\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -285,13 +303,13 @@ " keys: {\n", " documents,\n", " question,\n", - " }\n", - " }\n", - "};\n", + " },\n", + " };\n", + "}\n", "\n", "/**\n", " * Generate answer\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -301,7 +319,7 @@ " const stateObject = state.keys;\n", " const documents = stateObject.documents;\n", " const question = stateObject.question;\n", - " \n", + "\n", " // Pull in the prompt\n", " const prompt = await pull(\"rlm/rag-prompt\");\n", "\n", @@ -316,20 +334,23 @@ "\n", " const formattedDocs = documents.map((doc) => doc.pageContent).join(\"\\n\\n\");\n", "\n", - " const generation = await ragChain.invoke({ context: formattedDocs, question });\n", + " const generation = await ragChain.invoke({\n", + " context: formattedDocs,\n", + " question,\n", + " });\n", "\n", " return {\n", " keys: {\n", " documents,\n", " question,\n", " generation,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Determines whether the retrieved documents are relevant to the question.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -353,7 +374,8 @@ " tool_choice: gradeToolOai,\n", " });\n", "\n", - " const prompt = ChatPromptTemplate.fromTemplate(`You are a grader assessing relevance of a retrieved document to a user question.\n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `You are a grader assessing relevance of a retrieved document to a user question.\n", " Here is the retrieved document:\n", " \n", " {context}\n", @@ -361,7 +383,8 @@ " Here is the user question: {question}\n", "\n", " If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant.\n", - " Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.`);\n", + " Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.`,\n", + " );\n", "\n", " // Chain\n", " const chain = prompt.pipe(llmWithTool).pipe(parser);\n", @@ -371,10 +394,10 @@ " const grade = await chain.invoke({ context: doc.pageContent, question });\n", " const { args } = grade[0];\n", " if (args.binaryScore === \"yes\") {\n", - " console.log(\"---GRADE: DOCUMENT RELEVANT---\")\n", + " console.log(\"---GRADE: DOCUMENT RELEVANT---\");\n", " filteredDocs.push(doc);\n", " } else {\n", - " console.log(\"---GRADE: DOCUMENT NOT RELEVANT---\")\n", + " console.log(\"---GRADE: DOCUMENT NOT RELEVANT---\");\n", " }\n", " }\n", "\n", @@ -382,13 +405,13 @@ " keys: {\n", " documents: filteredDocs,\n", " question,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Transform the query to produce a better question.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param 
{RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -400,13 +423,15 @@ " const documents = stateObject.documents;\n", "\n", " // Pull in the prompt\n", - " const prompt = ChatPromptTemplate.fromTemplate(`You are generating a question that is well optimized for semantic search retrieval.\n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `You are generating a question that is well optimized for semantic search retrieval.\n", " Look at the input and try to reason about the underlying sematic intent / meaning.\n", " Here is the initial question:\n", " \\n ------- \\n\n", " {question} \n", " \\n ------- \\n\n", - " Formulate an improved question: `)\n", + " Formulate an improved question: `,\n", + " );\n", "\n", " // Grader\n", " const model = new ChatOpenAI({\n", @@ -416,20 +441,20 @@ " });\n", "\n", " // Prompt\n", - " const chain = prompt.pipe(model).pipe(new StringOutputParser())\n", + " const chain = prompt.pipe(model).pipe(new StringOutputParser());\n", " const betterQuestion = await chain.invoke({ question });\n", "\n", " return {\n", " keys: {\n", " question: betterQuestion,\n", " documents,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Passthrough state for final grade.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @returns {Promise} The new state object.\n", " */\n", @@ -445,13 +470,13 @@ " documents,\n", " question,\n", " generation,\n", - " }\n", - " }\n", + " },\n", + " };\n", "}\n", "\n", "/**\n", " * Determines whether to generate an answer, or re-generate a question.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @returns {\"transformQuery\" | \"generate\"} Next node to call\n", " */\n", @@ -469,11 +494,11 @@ " // We have relevant documents, so generate answer\n", " console.log(\"---DECISION: GENERATE---\");\n", " return \"generate\";\n", - "};\n", + "}\n", "\n", "/**\n", " * Determines whether the generation is grounded in the document.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -495,13 +520,15 @@ " });\n", " const parser = new JsonOutputToolsParser();\n", "\n", - " const prompt = ChatPromptTemplate.fromTemplate(`You are a grader assessing whether an answer is grounded in / supported by a set of facts.\n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `You are a grader assessing whether an answer is grounded in / supported by a set of facts.\n", " Here are the facts:\n", " \\n ------- \\n\n", " {documents} \n", " \\n ------- \\n\n", " Here is the answer: {generation}\n", - " Give a binary score 'yes' or 'no' to indicate whether the answer is grounded in / supported by a set of facts.`);\n", + " Give a binary score 'yes' or 'no' to indicate whether the answer is grounded in / supported by a set of facts.`,\n", + " );\n", "\n", " const chain = prompt.pipe(llmWithTool).pipe(parser);\n", "\n", @@ -514,9 +541,9 @@ " keys: {\n", " ...stateObject,\n", " generationVDocumentsGrade: grade,\n", - " }\n", - " }\n", - "};\n", + " },\n", + " };\n", + "}\n", "\n", "function gradeGenerationVDocuments(state: GraphState) {\n", " console.log(\"---GRADE GENERATION vs DOCUMENTS---\");\n", @@ -528,13 +555,13 @@ " console.log(\"---DECISION: SUPPORTED, MOVE TO FINAL GRADE---\");\n", " return 
\"supported\";\n", " }\n", - " console.log(\"---DECISION: NOT SUPPORTED, GENERATE AGAIN---\")\n", + " console.log(\"---DECISION: NOT SUPPORTED, GENERATE AGAIN---\");\n", " return \"not supported\";\n", "}\n", "\n", "/**\n", " * Determines whether the generation addresses the question.\n", - " * \n", + " *\n", " * @param {GraphState} state The current state of the graph.\n", " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n", " * @returns {Promise} The new state object.\n", @@ -556,13 +583,15 @@ " });\n", " const parser = new JsonOutputToolsParser();\n", "\n", - " const prompt = ChatPromptTemplate.fromTemplate(`You are a grader assessing whether an answer is useful to resolve a question.\n", + " const prompt = ChatPromptTemplate.fromTemplate(\n", + " `You are a grader assessing whether an answer is useful to resolve a question.\n", " Here is the answer:\n", " \\n ------- \\n\n", " {generation} \n", " \\n ------- \\n\n", " Here is the question: {question}\n", - " Give a binary score 'yes' or 'no' to indicate whether the answer is useful to resolve a question.`);\n", + " Give a binary score 'yes' or 'no' to indicate whether the answer is useful to resolve a question.`,\n", + " );\n", "\n", " const chain = prompt.pipe(llmWithTool).pipe(parser);\n", "\n", @@ -573,9 +602,9 @@ " keys: {\n", " ...stateObject,\n", " generationVQuestionGrade: grade,\n", - " }\n", - " }\n", - "};\n", + " },\n", + " };\n", + "}\n", "\n", "function gradeGenerationVQuestion(state: GraphState) {\n", " console.log(\"---GRADE GENERATION vs QUESTION---\");\n", @@ -584,11 +613,11 @@ " const grade = stateObject.generationVQuestionGrade;\n", "\n", " if (grade === \"yes\") {\n", - " console.log(\"---DECISION: USEFUL---\")\n", - " return \"useful\"\n", + " console.log(\"---DECISION: USEFUL---\");\n", + " return \"useful\";\n", " }\n", - " console.log(\"---DECISION: NOT USEFUL---\")\n", - " return \"not useful\"\n", + " console.log(\"---DECISION: NOT USEFUL---\");\n", + " return \"not useful\";\n", "}" ] }, @@ -607,20 +636,26 @@ "metadata": {}, "outputs": [], "source": [ - "import { StateGraph, END } from \"@langchain/langgraph\";\n", + "import { END, StateGraph } from \"@langchain/langgraph\";\n", "\n", "const workflow = new StateGraph({\n", - " channels: graphState\n", + " channels: graphState,\n", "});\n", "\n", "// Define the nodes\n", "workflow.addNode(\"retrieve\", retrieve);\n", "workflow.addNode(\"gradeDocuments\", gradeDocuments);\n", - "workflow.addNode(\"generate\", generate)\n", - "workflow.addNode(\"generateGenerationVDocumentsGrade\", generateGenerationVDocumentsGrade);\n", + "workflow.addNode(\"generate\", generate);\n", + "workflow.addNode(\n", + " \"generateGenerationVDocumentsGrade\",\n", + " generateGenerationVDocumentsGrade,\n", + ");\n", "workflow.addNode(\"transformQuery\", transformQuery);\n", "workflow.addNode(\"prepareForFinalGrade\", prepareForFinalGrade);\n", - "workflow.addNode(\"generateGenerationVQuestionGrade\", generateGenerationVQuestionGrade);\n", + "workflow.addNode(\n", + " \"generateGenerationVQuestionGrade\",\n", + " generateGenerationVQuestionGrade,\n", + ");\n", "\n", "// Build graph\n", "workflow.setEntryPoint(\"retrieve\");\n", @@ -630,8 +665,8 @@ " decideToGenerate,\n", " {\n", " \"transformQuery\": \"transformQuery\",\n", - " \"generate\": \"generate\"\n", - " }\n", + " \"generate\": \"generate\",\n", + " },\n", ");\n", "workflow.addEdge(\"transformQuery\", \"retrieve\");\n", "workflow.addEdge(\"generate\", \"generateGenerationVDocumentsGrade\");\n", 
@@ -640,8 +675,8 @@ " gradeGenerationVDocuments,\n", " {\n", " \"supported\": \"prepareForFinalGrade\",\n", - " \"not supported\": \"generate\"\n", - " }\n", + " \"not supported\": \"generate\",\n", + " },\n", ");\n", "\n", "workflow.addEdge(\"prepareForFinalGrade\", \"generateGenerationVQuestionGrade\");\n", @@ -650,8 +685,8 @@ " gradeGenerationVQuestion,\n", " {\n", " \"useful\": END,\n", - " \"not useful\": \"transformQuery\"\n", - " }\n", + " \"not useful\": \"transformQuery\",\n", + " },\n", ");\n", "\n", "// Compile\n", @@ -755,7 +790,9 @@ } ], "source": [ - "const inputs = { keys: { question: \"Explain how the different types of agent memory work.\" }};\n", + "const inputs = {\n", + " keys: { question: \"Explain how the different types of agent memory work.\" },\n", + "};\n", "const config = { recursionLimit: 50 };\n", "let finalGeneration;\n", "for await (const output of await app.stream(inputs, config)) {\n", diff --git a/examples/reflection/reflection.ipynb b/examples/reflection/reflection.ipynb index 7162f169..80106e17 100644 --- a/examples/reflection/reflection.ipynb +++ b/examples/reflection/reflection.ipynb @@ -7,9 +7,10 @@ "source": [ "# Reflection\n", "\n", - "\n", - "In the context of LLM agent building, reflection refers to the process of prompting an LLM to observe its past steps (along with potential observations from tools/the environment) to assess the quality of the chosen actions.\n", - "This is then used downstream for things like re-planning, search, or evaluation.\n", + "In the context of LLM agent building, reflection refers to the process of\n", + "prompting an LLM to observe its past steps (along with potential observations\n", + "from tools/the environment) to assess the quality of the chosen actions. This is\n", + "then used downstream for things like re-planning, search, or evaluation.\n", "\n", "![Reflection](./img/reflection.png)\n", "\n", @@ -66,7 +67,8 @@ "source": [ "## Generate\n", "\n", - "For our example, we will create a \"5 paragraph essay\" generator. First, create the generator:" + "For our example, we will create a \"5 paragraph essay\" generator. 
First, create\n",
+    "the generator:"
   ]
  },
  {
@@ -77,22 +79,28 @@
    "outputs": [],
    "source": [
     "import { ChatFireworks } from \"@langchain/community/chat_models/fireworks\";\n",
-    "import { ChatPromptTemplate, MessagesPlaceholder } from \"@langchain/core/prompts\";\n",
+    "import {\n",
+    "  ChatPromptTemplate,\n",
+    "  MessagesPlaceholder,\n",
+    "} from \"@langchain/core/prompts\";\n",
     "\n",
     "const prompt = ChatPromptTemplate.fromMessages([\n",
-    "  [\"system\", `You are an essay assistant tasked with writing excellent 5-paragraph essays.\n",
+    "  [\n",
+    "    \"system\",\n",
+    "    `You are an essay assistant tasked with writing excellent 5-paragraph essays.\n",
     "Generate the best essay possible for the user's request.\n",
-    "If the user provides critique, respond with a revised version of your previous attempts.`],\n",
-    "  new MessagesPlaceholder(\"messages\")\n",
+    "If the user provides critique, respond with a revised version of your previous attempts.`,\n",
+    "  ],\n",
+    "  new MessagesPlaceholder(\"messages\"),\n",
     "]);\n",
     "const llm = new ChatFireworks({\n",
     "  modelName: \"accounts/fireworks/models/mixtral-8x7b-instruct\",\n",
     "  temperature: 0,\n",
     "  modelKwargs: {\n",
-    "    max_tokens: 32768\n",
-    "  }\n",
+    "    max_tokens: 32768,\n",
+    "  },\n",
     "});\n",
-    "const essayGenerationChain = prompt.pipe(llm)"
+    "const essayGenerationChain = prompt.pipe(llm);"
   ]
  },
  {
@@ -298,14 +306,17 @@
    }
   ],
   "source": [
-    "import { BaseMessage, AIMessage, HumanMessage } from \"@langchain/core/messages\";\n",
+    "import { AIMessage, BaseMessage, HumanMessage } from \"@langchain/core/messages\";\n",
     "\n",
     "let essay = \"\";\n",
     "const request = new HumanMessage({\n",
-    "  content: \"Write an essay on why the little prince is relevant in modern childhood\"\n",
+    "  content:\n",
+    "    \"Write an essay on why the little prince is relevant in modern childhood\",\n",
     "});\n",
     "\n",
-    "for await (const chunk of await essayGenerationChain.stream({ messages: [request] })) {\n",
+    "for await (\n",
+    "  const chunk of await essayGenerationChain.stream({ messages: [request] })\n",
+    ") {\n",
     "  console.log(chunk.content);\n",
     "  essay += chunk.content;\n",
     "}"
@@ -327,12 +338,15 @@
    "outputs": [],
    "source": [
     "const reflectionPrompt = ChatPromptTemplate.fromMessages([\n",
-    "  [\"system\", `You are a teacher grading an essay submission.\n",
+    "  [\n",
+    "    \"system\",\n",
+    "    `You are a teacher grading an essay submission.\n",
     "Generate critique and recommendations for the user's submission.\n",
-    "Provide detailed recommendations, including requests for length, depth, style, etc.`],\n",
+    "Provide detailed recommendations, including requests for length, depth, style, etc.`,\n",
+    "  ],\n",
     "  new MessagesPlaceholder(\"messages\"),\n",
-    "])\n",
-    "const reflect = reflectionPrompt.pipe(llm)"
+    "]);\n",
+    "const reflect = reflectionPrompt.pipe(llm);"
   ]
  },
  {
@@ -441,7 +455,11 @@
   "source": [
     "let reflection = \"\";\n",
     "\n",
-    "for await (const chunk of await reflect.stream({ messages: [request, new HumanMessage({ content: essay })] })) {\n",
+    "for await (\n",
+    "  const chunk of await reflect.stream({\n",
+    "    messages: [request, new HumanMessage({ content: essay })],\n",
+    "  })\n",
+    ") {\n",
    "  console.log(chunk.content);\n",
     "  reflection += chunk.content;\n",
     "}"
   ]
  },
  {
@@ -454,7 +472,9 @@
   "source": [
     "### Repeat\n",
     "\n",
-    "And... that's all there is too it! You can repeat in a loop for a fixed number of steps, or use an LLM (or other check) to decide when the finished product is good enough."
+    "And... that's all there is to it! 
 {
@@ -535,7 +555,7 @@
     "  messages: [\n",
     "    request,\n",
     "    new AIMessage({ content: essay }),\n",
-    "    new HumanMessage({ content: reflection })\n",
+    "    new HumanMessage({ content: reflection }),\n",
     "  ],\n",
     "});\n",
     "for await (const chunk of stream) {\n",
@@ -564,9 +584,9 @@
     "\n",
     "const generationNode = async (messages: BaseMessage[]) => {\n",
     "  return [\n",
-    "    await essayGenerationChain.invoke({ messages })\n",
+    "    await essayGenerationChain.invoke({ messages }),\n",
     "  ];\n",
-    "}\n",
+    "};\n",
     "\n",
     "const reflectionNode = async (messages: BaseMessage[]) => {\n",
     "  // Messages other than the first need their roles swapped for the reflection prompt\n",
@@ -575,11 +595,14 @@
     "    \"human\": AIMessage,\n",
     "  };\n",
     "  // First message is the original user request. We hold it the same for all nodes\n",
-    "  const translated = [messages[0], ...messages.slice(1).map((msg) => new clsMap[msg._getType()](msg.content))];\n",
+    "  const translated = [\n",
+    "    messages[0],\n",
+    "    ...messages.slice(1).map((msg) => new clsMap[msg._getType()](msg.content)),\n",
+    "  ];\n",
     "  const res = await reflect.invoke({ \"messages\": translated });\n",
     "  // We treat the output of this as human feedback for the generator\n",
     "  return [new HumanMessage(res.content)];\n",
-    "}\n",
+    "};\n",
     "\n",
     "// Define the graph\n",
     "const workflow = new MessageGraph();\n",
@@ -593,7 +616,7 @@
     "    return END;\n",
     "  }\n",
     "  return \"reflect\";\n",
-    "}\n",
+    "};\n",
     "\n",
     "workflow.addConditionalEdges(\"generate\", shouldContinue);\n",
     "workflow.addEdge(\"reflect\", \"generate\");\n",
@@ -651,8 +674,9 @@
     "\n",
     "const stream = await app.stream([\n",
     "  new HumanMessage({\n",
-    "    content: \"Generate an essay on the topicality of The Little Prince and its message in modern life\"\n",
-    "  })\n",
+    "    content:\n",
+    "      \"Generate an essay on the topicality of The Little Prince and its message in modern life\",\n",
+    "  }),\n",
     "]);\n",
     "\n",
     "for await (const event of stream) {\n",
@@ -661,7 +685,7 @@
     "    console.log(`Event: ${key}`);\n",
     "    // Uncomment to see the result of each step.\n",
     "    // console.log(value.map((msg) => msg.content).join(\"\\n\"));\n",
-    "    console.log(\"\\n------\\n\")\n",
+    "    console.log(\"\\n------\\n\");\n",
     "  }\n",
     "}"
   ]
@@ -902,7 +926,11 @@
  } ],
   "source": [
-    "console.log(finalRes[END].map((msg) => msg.content).join(\"\\n\\n\\n------------------\\n\\n\\n\"));"
+    "console.log(\n",
+    "  finalRes[END].map((msg) => msg.content).join(\n",
+    "    \"\\n\\n\\n------------------\\n\\n\\n\",\n",
+    "  ),\n",
+    ");"
   ]
  },
  {
diff --git a/examples/rewoo/rewoo.ipynb b/examples/rewoo/rewoo.ipynb
index fe0f1357..7b395708 100644
--- a/examples/rewoo/rewoo.ipynb
+++ b/examples/rewoo/rewoo.ipynb
@@ -6,11 +6,18 @@
   "source": [
     "# Reasoning without Observation\n",
     "\n",
-    "In [ReWOO](https://arxiv.org/abs/2305.18323), Xu, et. al, propose an agent that combines a multi-step planner and variable substitution for effective tool use. It was designed to improve on the ReACT-style agent architecture in the following ways:\n",
     "\n",
-    "1. Reduce token consumption and execution time by generating the full chain of tools used in a single pass. (_ReACT-style agent architecture requires many LLM calls with redundant prefixes (since the system prompt and previous steps are provided to the LLM for each reasoning step_)\n",
-    "2. Simplify the fine-tuning process. Since the planning data doesn't depend on the outputs of the tool, models can be fine-tuned without actually invoking the tools (in theory).\n",
-    "\n",
+    "In [ReWOO](https://arxiv.org/abs/2305.18323), Xu et al. propose an agent that\n",
+    "combines a multi-step planner and variable substitution for effective tool use.\n",
+    "It was designed to improve on the ReACT-style agent architecture in the\n",
+    "following ways:\n",
+    "\n",
+    "1. Reduce token consumption and execution time by generating the full chain of\n",
+    "   tools used in a single pass. (_ReACT-style agent architecture requires many\n",
+    "   LLM calls with redundant prefixes, since the system prompt and previous steps\n",
+    "   are provided to the LLM for each reasoning step._)\n",
+    "2. Simplify the fine-tuning process. Since the planning data doesn't depend on\n",
+    "   the outputs of the tool, models can be fine-tuned without actually invoking\n",
+    "   the tools (in theory).\n",
     "\n",
     "The following diagram outlines ReWOO's overall computation graph:\n",
     "\n",
@@ -19,6 +26,7 @@
     "ReWOO is made of 3 modules:\n",
     "\n",
     "1. 🧠**Planner**: Generate the plan in the following format:\n",
+    "\n",
     "```text\n",
     "Plan: \n",
     "#E1 = Tool[argument for tool]\n",
     "Plan: \n",
     "#E2 = Tool[argument for tool with #E1 variable substitution]\n",
     "...\n",
     "```\n",
+    "\n",
-    "3. **Worker**: executes the tool with the provided arguments.\n",
+    "2. **Worker**: executes the tool with the provided arguments.\n",
-    "4. 🧠**Solver**: generates the answer for the initial task based on the tool observations.\n",
+    "3. 🧠**Solver**: generates the answer for the initial task based on the tool\n",
+    "   observations.\n",
     "\n",
-    "The modules with a 🧠 emoji depend on an LLM call. Notice that we avoid redundant calls to the planner LLM by using variable substitution.\n",
+    "The modules with a 🧠 emoji depend on an LLM call. Notice that we avoid\n",
+    "redundant calls to the planner LLM by using variable substitution.\n",
     "\n",
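To make this concrete: for the question used at the end of this notebook ("what is the hometown of the winner of the 2023 australian open?"), the planner might emit a plan like the following. This is only an illustration (the exact plan text depends on the model), but it shows how a later step references an earlier observation through `#E1`:

```text
Plan: Find out who won the 2023 Australian Open.
#E1 = Google[winner of the 2023 Australian Open]
Plan: Find the hometown of that player.
#E2 = Google[hometown of #E1]
```

The planner is called once to produce the whole chain; the worker fills in `#E1` at execution time, so the planner never has to see intermediate results.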
+    "In this example, each module is represented by a LangGraph node. The end result\n",
+    "will leave a trace that looks\n",
+    "[like this one](https://smith.langchain.com/public/39dbdcf8-fbcc-4479-8e28-15377ca5e653/r).\n",
+    "Let's get started!\n",
     "\n",
     "## 0. Prerequisites\n",
     "\n",
-    "For this example, we will provide the agent with a Tavily search engine tool. You can get an API key [here](https://app.tavily.com/sign-in) or replace with a free tool option (e.g., [duck duck go search](https://python.langchain.com/docs/integrations/tools/ddg)).\n",
+    "For this example, we will provide the agent with a Tavily search engine tool.\n",
+    "You can get an API key [here](https://app.tavily.com/sign-in) or replace it\n",
+    "with a free tool option (e.g.,\n",
+    "[duck duck go search](https://python.langchain.com/docs/integrations/tools/ddg)).\n",
     "\n",
-    "For this notebook, you should add a `.env` file at the root of the repo with `TAVILY_API_KEY`:"
+    "For this notebook, you should add a `.env` file at the root of the repo with\n",
+    "`TAVILY_API_KEY`:"
   ]
  },
  {
@@ -75,9 +93,11 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "**Graph State**: In LangGraph, every node updates a shared graph state. The state is the input to any node whenever it is invoked.\n",
+    "**Graph State**: In LangGraph, every node updates a shared graph state. The\n",
+    "state is the input to any node whenever it is invoked.\n",
     "\n",
-    "Below, we will define a state object to contain the task, plan, steps, and other variables."
+    "Below, we will define a state object to contain the task, plan, steps, and other\n",
+    "variables."
   ]
  },
  {
@@ -92,7 +112,7 @@
     "  steps: Array<string[]>;\n",
     "  results: Record<string, any>;\n",
     "  result: string;\n",
-    "}\n",
+    "};\n",
     "\n",
     "const graphState = {\n",
     "  task: {\n",
@@ -110,8 +130,8 @@
     "  },\n",
     "  result: {\n",
     "    value: null,\n",
-    "  }\n",
-    "}"
+    "  },\n",
+    "};"
   ]
  },
  {
@@ -120,16 +140,19 @@
   "source": [
     "## 1. Planner\n",
     "\n",
-    "The planner prompts an LLM to generate a plan in the form of a task list. The arguments to each task are strings that may contain special variables (`#E{0-9}+`) that are used for variable substitution from other task results.\n",
-    "\n",
+    "The planner prompts an LLM to generate a plan in the form of a task list. The\n",
+    "arguments to each task are strings that may contain special variables\n",
+    "(`#E{0-9}+`) that are used for variable substitution from other task results.\n",
     "\n",
     "![ReWOO workflow](./img/rewoo-paper-workflow.png)\n",
     "\n",
     "Our example agent will have two tools:\n",
+    "\n",
     "1. Google - a search engine (in this case Tavily)\n",
     "2. LLM - an LLM call to reason about previous outputs.\n",
     "\n",
-    "The LLM tool receives less of the prompt context and so can be more token-efficient than the ReACT paradigm."
+    "The LLM tool receives less of the prompt context and so can be more\n",
+    "token-efficient than the ReACT paradigm."
   ]
  },
  {
@@ -142,7 +165,7 @@
     "\n",
     "const model = new ChatOpenAI({\n",
     "  temperature: 0,\n",
-    "})"
+    "});"
   ]
  },
  {
@@ -153,7 +176,8 @@
   "source": [
     "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
     "\n",
-    "const template = `For the following task, make plans that can solve the problem step by step. For each plan, indicate\n",
+    "const template =\n",
+    "  `For the following task, make plans that can solve the problem step by step. For each plan, indicate\n",
     "which external tool together with tool input to retrieve evidence. You can store the evidence into a\n",
     "variable #E that can be called by later tools. (Plan, #E1, Plan, #E2, Plan, ...)\n",
     "\n",
@@ -179,7 +203,7 @@
     "Task: {task}`;\n",
     "\n",
     "const promptTemplate = ChatPromptTemplate.fromMessages([\n",
-    "  [\"human\", template]\n",
+    "  [\"human\", template],\n",
     "]);\n",
     "\n",
     "const planner = promptTemplate.pipe(model);"
   ]
  },
@@ -232,8 +256,9 @@
   "source": [
     "#### Planner Node\n",
     "\n",
-    "To connect the planner to our graph, we will create a `getPlan` node that accepts the `ReWOO` state and returns with a state update for the\n",
-    "`steps` and `planString` fields."
+    "To connect the planner to our graph, we will create a `getPlan` node that\n",
+    "accepts the `ReWOO` state and returns a state update for the `steps` and\n",
+    "`planString` fields."
   ]
  },
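To see what `getPlan` stores, here is the hypothetical plan from earlier run through the `regexPattern` defined in the next cell. The plan text is illustrative; the tuple layout mirrors what gets pushed onto `state.steps`:

```typescript
const samplePlan = `Plan: Find out who won the 2023 Australian Open.
#E1 = Google[winner of the 2023 Australian Open]
Plan: Find the hometown of that player.
#E2 = Google[hometown of #E1]`;

// After the next cell defines `regexPattern`, each match yields
// [description, stepName, tool, toolInput, fullMatch], e.g.
// ["Find out who won the 2023 Australian Open.\n", "#E1", "Google",
//  "winner of the 2023 Australian Open", "Plan: Find out who won..."]
for (const match of samplePlan.matchAll(regexPattern)) {
  console.log([match[1], match[2], match[3], match[4], match[0]]);
}
```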
 {
@@ -242,7 +267,10 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "const regexPattern = new RegExp(\"Plan\\\\s*\\\\d*:\\\\s*([^#]+)\\\\s*(#E\\\\d+)\\\\s*=\\\\s*(\\\\w+)\\\\s*\\\\[([^\\\\]]+)\\\\]\", \"g\");\n",
+    "const regexPattern = new RegExp(\n",
+    "  \"Plan\\\\s*\\\\d*:\\\\s*([^#]+)\\\\s*(#E\\\\d+)\\\\s*=\\\\s*(\\\\w+)\\\\s*\\\\[([^\\\\]]+)\\\\]\",\n",
+    "  \"g\",\n",
+    ");\n",
     "\n",
     "/**\n",
     " * @param {GraphState} state The current state of the graph.\n",
@@ -258,15 +286,15 @@
     "  for (const match of matches) {\n",
     "    const item = [match[1], match[2], match[3], match[4], match[0]];\n",
     "    if (item.some((i) => i === undefined)) {\n",
-    "      throw new Error(\"Invalid match\")\n",
+    "      throw new Error(\"Invalid match\");\n",
     "    }\n",
     "    steps.push(item);\n",
     "  }\n",
     "  return {\n",
     "    steps,\n",
     "    planString: result.content,\n",
-    "  }\n",
-    "}"
+    "  };\n",
+    "};"
   ]
  },
  {
@@ -306,7 +334,7 @@
     "    return null;\n",
     "  }\n",
     "  return Object.entries(state.results).length + 1;\n",
-    "}\n",
+    "};\n",
     "\n",
     "const _parseResult = (input: unknown) => {\n",
     "  if (typeof input === \"string\") {\n",
     "    const parsedInput = JSON.parse(input);\n",
     "    if (Array.isArray(parsedInput)) {\n",
     "      return parsedInput.map(({ content }) => content).join(\"\\n\");\n",
     "    }\n",
     "  }\n",
-    "  \n",
+    "\n",
     "  if (input && typeof input === \"object\" && \"content\" in input) {\n",
     "    // If it's not a tool, we know it's an LLM result.\n",
     "    const { content } = input;\n",
     "    return content;\n",
     "  }\n",
     "  throw new Error(\"Invalid input received\");\n",
-    "}\n",
-    "\n",
+    "};\n",
     "\n",
     "/**\n",
     " * Worker node that executes the tools of a given plan.\n",
-    " * \n",
+    " *\n",
     " * @param {GraphState} state The current state of the graph.\n",
     " * @param {RunnableConfig | undefined} config The configuration object for tracing.\n",
     " */\n",
@@ -338,7 +365,7 @@
     "  if (_step === null) {\n",
     "    throw new Error(\"No current task found\");\n",
     "  }\n",
-    "  const [, stepName, tool,, toolInputTemplate] = state.steps[_step - 1];\n",
+    "  const [, stepName, tool, , toolInputTemplate] = state.steps[_step - 1];\n",
     "  let toolInput = toolInputTemplate;\n",
     "  const _results = state.results || {};\n",
     "  for (const [k, v] of Object.entries(_results)) {\n",
@@ -354,7 +381,7 @@
     "  }\n",
     "  _results[stepName] = JSON.stringify(_parseResult(result), null, 2);\n",
     "  return { results: _results };\n",
-    "}"
+    "};"
   ]
  },
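Continuing the hypothetical plan from above, this is roughly what one worker step does with the stored results (the values here are made up for illustration):

```typescript
// Suppose step 1 has already executed and its observation was stored:
const results: Record<string, any> = { "#E1": "Novak Djokovic" };

// For step 2, every stored result is substituted into the tool input...
let toolInput = "hometown of #E1";
for (const [k, v] of Object.entries(results)) {
  toolInput = toolInput.replace(k, v);
}
// toolInput is now "hometown of Novak Djokovic"; the worker then invokes the
// Google (Tavily) tool and stores the parsed output under "#E2".
```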
Use them with caution since long evidence might\n", "contain irrelevant information.\n", "\n", @@ -382,7 +411,8 @@ "directly with no extra words.\n", "\n", "Task: {task}\n", - "Response:`);\n", + "Response:`,\n", + ");\n", "\n", "/**\n", " * @param {GraphState} state The current state of the graph.\n", @@ -400,15 +430,15 @@ " }\n", " const model = new ChatOpenAI({\n", " temperature: 0,\n", - " modelName: \"gpt-4-0125-preview\"\n", + " modelName: \"gpt-4-0125-preview\",\n", " });\n", " const result = await solvePrompt.pipe(model).invoke(\n", - " { plan, task: state.task }\n", + " { plan, task: state.task },\n", " );\n", " return {\n", " result: result.content,\n", - " }\n", - "}" + " };\n", + "};" ] }, { @@ -417,7 +447,8 @@ "source": [ "## 4. Define Graph\n", "\n", - "Our graph defines the workflow. Each of the planner, tool executor, and solver modules are added as nodes." + "Our graph defines the workflow. Each of the planner, tool executor, and solver\n", + "modules are added as nodes." ] }, { @@ -435,7 +466,7 @@ " }\n", " // We are still executing tasks, loop back to the \"tool\" node\n", " return \"tool\";\n", - "}" + "};" ] }, { @@ -444,7 +475,7 @@ "metadata": {}, "outputs": [], "source": [ - "import { StateGraph, END } from \"@langchain/langgraph\";\n", + "import { END, StateGraph } from \"@langchain/langgraph\";\n", "\n", "const workflow = new StateGraph({\n", " channels: graphState,\n", @@ -707,7 +738,9 @@ ], "source": [ "let finalResult;\n", - "const stream = await app.stream({ task: \"what is the hometown of the winner of the 2023 australian open?\" });\n", + "const stream = await app.stream({\n", + " task: \"what is the hometown of the winner of the 2023 australian open?\",\n", + "});\n", "for await (const item of stream) {\n", " console.log(item);\n", " console.log(\"-----\");\n", @@ -745,10 +778,14 @@ "source": [ "## Conclusion\n", "\n", - "Congratulations on implementing ReWOO! Before you leave, I'll leave you with a couple limitations of the current implementation from the paper:\n", + "Congratulations on implementing ReWOO! Before you leave, I'll leave you with a\n", + "couple limitations of the current implementation from the paper:\n", "\n", - "1. If little context of the environment is available, the planner will be ineffective in its tool use. This can typically be ameliorated through few-shot prompting and/or fine-tuning.\n", - "2. The tasks are still executed in sequence, meaning the total execution time is impacted by _every_ tool call, not just the longest-running in a given step." + "1. If little context of the environment is available, the planner will be\n", + " ineffective in its tool use. This can typically be ameliorated through\n", + " few-shot prompting and/or fine-tuning.\n", + "2. The tasks are still executed in sequence, meaning the total execution time is\n", + " impacted by _every_ tool call, not just the longest-running in a given step." ] } ],