[BUG] - Tracing for Langchain functions agent fails. #3118

Open
LokeshShelva opened this issue May 6, 2024 · 4 comments · Fixed by #3067
Labels: bug (Something isn't working)

Comments

@LokeshShelva

Describe the bug
The tracing fails for a Langchain OpenAI functions agent. I have an OpenAI functions agent that uses Langchain tools as functions; the tracing fails after the agent makes a function call.

How To Reproduce the bug
Steps to reproduce the behavior; how frequently can you experience the bug: Always

  1. Create a flow with the following tool. This is an example flow created to reproduce the bug.
from promptflow import tool
from promptflow.tracing import start_trace
from langchain_openai import AzureChatOpenAI
from langchain.agents import (
    create_openai_functions_agent,
)
from langchain.agents.agent import (
    AgentExecutor,
    RunnableAgent,
)
from langchain.tools import Tool
from langchain_core.messages.system import SystemMessage
from langchain_core.messages.human import HumanMessage
from langchain_core.prompts.chat import ChatPromptTemplate, MessagesPlaceholder

start_trace()

def tool_func(nums: str) -> int:
    a, b = [int(x.strip()) for x in nums.split(",")]
    return a + b
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(num1: int, num2: int) -> str:
    tools = [
        Tool(
            name="adder",
            description="Adds two numbers. Input is comma seperated two numbers. Eg: 1, 2",
            func=tool_func 
        )
    ]
    llm = AzureChatOpenAI(
        model="gpt-4",
        azure_deployment="gpt-4",
        azure_endpoint="<endpoint>",
        api_key="<key>",
        api_version="<version>"
    )

    messages = [
        SystemMessage(content="You are an agent designed to use tools. Use the given tools to solve the problem."),
        HumanMessage(content=f"What is the sum of {num1} and {num2}?"),
        MessagesPlaceholder(variable_name="agent_scratchpad")
    ]

    prompt = ChatPromptTemplate.from_messages(messages=messages)

    agent = RunnableAgent(
        runnable=create_openai_functions_agent(llm, tools, prompt),
        return_keys_arg=["output"],
    )

    executor = AgentExecutor(
        name="Tools user",
        agent=agent,
        tools=tools,
        verbose=True
    )

    return executor.invoke({})
  2. Run the flow (for example, with the command shown in the sketch below).
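
For reference, a minimal sketch of how the flow can be run locally; the flow directory path is a placeholder, and it assumes the tool above is wired into the flow's DAG:

# From the flow directory, via the promptflow CLI:
#   pf flow test --flow <flow-directory>
# Or programmatically via the devkit client (sketch):
from promptflow.client import PFClient

pf_client = PFClient()
pf_client.test(flow="<flow-directory>")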

Expected behavior
The flow should run without any errors.

Actual behavior
The flow fails with the following error

Prompt flow service has started...
WARNING:root:'from promptflow import tool' is deprecated and will be removed in the future. Use 'from promptflow.core import tool' instead.
Prompt flow service has started...
Prompt flow service has started...
Prompt flow service has started...
2024-05-06 11:11:52 +0000    2702 execution.flow     INFO     Start executing nodes in thread pool mode.
2024-05-06 11:11:52 +0000    2702 execution.flow     INFO     Start to run 1 nodes with concurrency level 16.
2024-05-06 11:11:52 +0000    2702 execution.flow     INFO     Executing node llm. node run id: 799b20f3-b9cb-4485-9c4f-545e143446ac_llm_0
2024-05-06 11:11:52 +0000    2702 execution.flow     INFO     [llm in line 0 (index starts from 0)] stdout> 

> Entering new Tools user chain...
You can view the trace detail from the following URL:
http://localhost:23333/v1.0/ui/traces/?#collection=test-flow&uiTraceId=0x8dbd07b142006e206a1cca4ce537561f
2024-05-06 11:11:55 +0000    2702 execution.flow     INFO     [llm in line 0 (index starts from 0)] stdout> 
Invoking: `adder` with `1232324, 123142412`



2024-05-06 11:11:55 +0000    2702 execution.flow     INFO     [llm in line 0 (index starts from 0)] stdout> 
2024-05-06 11:11:55 +0000    2702 execution.flow     INFO     [llm in line 0 (index starts from 0)] stdout> 124374736
2024-05-06 11:11:55 +0000    2702 execution.flow     INFO     [llm in line 0 (index starts from 0)] stdout> 
WARNING:azure.monitor.opentelemetry.exporter.export._base:Retrying due to server request error: <urllib3.connection.HTTPSConnection object at 0x7fb6d9ba51f0>: Failed to resolve 'eastus-8.in.applicationinsights.azure.com' ([Errno -5] No address associated with hostname).
2024-05-06 11:11:57 +0000    2702 execution          ERROR    Node llm in line 0 failed. Exception: Execution failure in 'llm': (TypeError) expected string or buffer.
Traceback (most recent call last):
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner
    return f(**kwargs)
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped
    output = func(*args, **kwargs)
  File "/workspaces/abb.genix.generative.ai.poc/genix-copilot-test/genix-copilot-core-test/test-flow/llm.py", line 62, in my_python_tool
    return executor.invoke({})
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain/agents/agent.py", line 1432, in _call
    next_step_output = self._take_next_step(
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
    [
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain/agents/agent.py", line 1138, in <listcomp>
    [
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain/agents/agent.py", line 1166, in _iter_next_step
    output = self.agent.plan(
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain/agents/agent.py", line 397, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2875, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2862, in transform
    yield from self._transform_stream_with_config(
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1880, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2826, in _transform
    for output in final_pipeline:
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1283, in transform
    for chunk in input:
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 4722, in transform
    yield from self.bound.transform(
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1300, in transform
    yield from self.stream(final, config, **kwargs)
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 245, in stream
    raise e
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 225, in stream
    for chunk in self._stream(messages, stop=stop, **kwargs):
  File "/home/vscode/.local/lib/python3.9/site-packages/langchain_openai/chat_models/base.py", line 395, in _stream
    for chunk in self.client.create(messages=message_dicts, **params):
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/tracing/contracts/generator_proxy.py", line 34, in generate_from_proxy
    yield from proxy
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/tracing/contracts/generator_proxy.py", line 19, in __next__
    item = next(self._iterator)
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 198, in traced_generator
    enrich_span_with_llm_if_needed(span, original_span, inputs, generator_output)
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 178, in enrich_span_with_llm_if_needed
    token_collector.collect_openai_tokens_for_streaming(span, inputs, generator_output, parser.is_chat)
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 60, in collect_openai_tokens_for_streaming
    tokens = calculator.get_openai_metrics_for_chat_api(inputs, output)
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/tracing/_openai_utils.py", line 102, in get_openai_metrics_for_chat_api
    metrics["prompt_tokens"] = self._get_prompt_tokens_from_messages(
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/tracing/_openai_utils.py", line 137, in _get_prompt_tokens_from_messages
    prompt_tokens += len(enc.encode(value))
  File "/home/vscode/.local/lib/python3.9/site-packages/tiktoken/core.py", line 116, in encode
    if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool
    result = self._invoke_tool_inner(node, f, kwargs)
  File "/home/vscode/.local/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 206, in _invoke_tool_inner
    raise ToolExecutionError(node_name=node_name, module=module) from e
promptflow._core._errors.ToolExecutionError: Execution failure in 'llm': (TypeError) expected string or buffer
2024-05-06 11:11:57 +0000    2702 execution.flow     WARNING  Failed to calculate metrics due to exception: expected string or buffer.
2024-05-06 11:11:57 +0000    2702 execution.flow     ERROR    Flow execution has failed. Cancelling all running nodes: llm.
pf.flow.test failed with UserErrorException: TypeError: Execution failure in 'llm': (TypeError) expected string or buffer
WARNING:azure.monitor.opentelemetry.exporter.export._base:Retrying due to server request error: <urllib3.connection.HTTPSConnection object at 0x7fb6d9aaf790>: Failed to resolve 'eastus-8.in.applicationinsights.azure.com' ([Errno -5] No address associated with hostname).

Running Information (please complete the following information):

  • Promptflow Package Version using pf -v:
{
 "promptflow": "1.10.1",
 "promptflow-core": "1.10.1",
 "promptflow-devkit": "1.10.1",
 "promptflow-tracing": "1.10.1"
}
  • Operating System: Debian 12 (in dev container - image mcr.microsoft.com/devcontainers/python:3.9)
  • Python Version using python --version: Python 3.9.19

Additional context
This is what I was able to debug so far.

These are the messages passed to the get_openai_metrics_for_chat_api function after a function call:

[
  {"role": "system", "content": "You are an agent designed to use tools. Use the given tools to solve the problem."},
  {"role": "user", "content": "What is the sum of 1232324 and 123142412?"},
  {"role": "assistant", "content": None, "function_call": {...}},
  {"role": "function", "content": "124374736", "name": "adder"}
]

When the content field is None for the message that represents a function call, the call to tiktoken fails.
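
For illustration, a minimal sketch of the failure and of the kind of type guard that would avoid it; this is not promptflow's actual code, it only mirrors the prompt_tokens += len(enc.encode(value)) loop from the traceback above:

import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

# A function-call message as returned by the OpenAI API: "content" is None.
message = {"role": "assistant", "content": None, "function_call": {"name": "adder", "arguments": "1232324, 123142412"}}

prompt_tokens = 0
for value in message.values():
    # enc.encode(None) raises "TypeError: expected string or buffer",
    # which is the error in the traceback above. Guarding on the type
    # (an assumed fix, not the actual one) avoids the crash.
    if isinstance(value, str):
        prompt_tokens += len(enc.encode(value))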

@LokeshShelva LokeshShelva added the bug Something isn't working label May 6, 2024
@LokeshShelva LokeshShelva changed the title [BUG] - Tracing for Langchain fucntions agent fails. [BUG] - Tracing for Langchain functions agent fails. May 6, 2024
@lumoslnt
Contributor

lumoslnt commented May 7, 2024

Hello @LokeshShelva, which langchain and langchain-openai versions are you using?

@LokeshShelva
Author

Hello @lumoslnt, here are the versions I am using:

langchain - 0.1.13
langchain-community - 0.0.32
langchain-core - 0.1.41
langchain-openai - 0.0.6

@lumoslnt
Contributor

lumoslnt commented May 9, 2024

@LokeshShelva In the short term, we are adding a try-catch block around the current code segment so that any errors encountered during token calculation do not interrupt flow execution. This fix will be included in the upcoming release. In the longer term, we also plan to support calculating tokens for function calling. I will keep you updated on our progress.
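
For illustration, a hedged sketch of what such a guard around the token collection step could look like; the collect_openai_tokens_for_streaming and get_openai_metrics_for_chat_api names come from the traceback above, but the calculator lookup, the bookkeeping step, and the logging are assumptions rather than the actual fix in the linked PR:

import logging

def collect_openai_tokens_for_streaming(self, span, inputs, output, is_chat):
    # Sketch only: catch token-calculation errors (e.g. tiktoken receiving a
    # None content field) so they are logged instead of failing the flow run.
    try:
        calculator = self._get_calculator()  # hypothetical helper
        tokens = calculator.get_openai_metrics_for_chat_api(inputs, output)
        self._record_tokens(span, tokens)    # hypothetical bookkeeping step
    except Exception as e:
        logging.warning("Failed to calculate OpenAI tokens for streaming output: %s", e)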

@LokeshShelva
Copy link
Author

Thank you for looking into the issue.

@lumoslnt lumoslnt linked a pull request May 14, 2024 that will close this issue