When I use LangFlow to create an agent with initialize_agent:

Tools: Tool with PythonFunction
LLM: OpenAI

When I interact with the flow, it hints "Tool missing return direct". I don't want to set return_direct=True, because it causes wrong answers.

The agent is here (API key redacted):
````python
AgentExecutor(memory=None, callbacks=None, callback_manager=None, verbose=False, agent=ZeroShotAgent(llm_chain=LLMChain(memory=None, callbacks=None, callback_manager=None, verbose=False, prompt=PromptTemplate(input_variables=['input', 'agent_scratchpad'], output_parser=None, partial_variables={}, template='Answer the following questions as best you can. You have access to the following tools:\n\nproject_info_api: 如果信息中提供了pid,则使用该tool进行项目信息获取\nCalculator: Useful for when you need to answer questions about math.\nnothing_todo_api: 当你不知道怎么回答时\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [project_info_api, Calculator, nothing_todo_api]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', template_format='f-string', validate_template=True), llm=OpenAI(cache=None, verbose=False, callbacks=[], callback_manager=None, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.1, max_tokens=2048, top_p=1.0, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key='sk-***REDACTED***', openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False, allowed_special=set(), disallowed_special='all'), output_key='text'), output_parser=MRKLOutputParser(), allowed_tools=['project_info_api', 'Calculator', 'nothing_todo_api']), tools=[Tool(name='project_info_api', description='如果信息中提供了pid,则使用该tool进行项目信息获取', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, func=<function project_info_api at 0x163520280>, coroutine=<function project_info_api at 0x1635232e0>), Tool(name='Calculator', description='Useful for when you need to answer questions about math.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, func=<bound method Chain.run of LLMMathChain(memory=None, callbacks=None, callback_manager=None, verbose=False, llm_chain=LLMChain(memory=None, callbacks=None, callback_manager=None, verbose=False, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n', template_format='f-string', validate_template=True), llm=OpenAI(cache=None, verbose=False, callbacks=[], callback_manager=None, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.1, max_tokens=2048, top_p=1.0, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key='sk-***REDACTED***', openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False, allowed_special=set(), disallowed_special='all'), output_key='text'), llm=None, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n', template_format='f-string', validate_template=True), input_key='question', output_key='answer')>, coroutine=<bound method Chain.arun of LLMMathChain(memory=None, callbacks=None, callback_manager=None, verbose=False, llm_chain=LLMChain(memory=None, callbacks=None, callback_manager=None, verbose=False, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n', template_format='f-string', validate_template=True), llm=OpenAI(cache=None, verbose=False, callbacks=[], callback_manager=None, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.1, max_tokens=2048, top_p=1.0, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key='sk-***REDACTED***', openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False, allowed_special=set(), disallowed_special='all'), output_key='text'), llm=None, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n', template_format='f-string', validate_template=True), input_key='question', output_key='answer')>), Tool(name='nothing_todo_api', description='当你不知道怎么回答时', args_schema=None, return_direct=True, verbose=False, callbacks=None, callback_manager=None, func=<function nothing_todo_api at 0x163522dd0>, coroutine=<function nothing_todo_api at 0x163523eb0>)], return_intermediate_steps=True, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)
````
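For background, `return_direct` on a tool controls whether the tool's raw output is returned to the user as the final answer (`True`), or fed back to the LLM as an observation so it can compose a proper answer (`False`) — which is why forcing `return_direct=True` can surface unpolished tool output as a "wrong answer". A toy sketch of that dispatch, with illustrative names only (this is not LangChain's actual implementation):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    func: Callable[[str], str]
    return_direct: bool = False  # False: observation goes back to the LLM

def run_agent(tools, llm_step, question, max_iterations=15):
    """Toy executor. llm_step(question, scratchpad) returns either
    (tool_name, tool_input) or ('FINAL', answer)."""
    by_name = {t.name: t for t in tools}
    scratchpad = ""
    for _ in range(max_iterations):
        action, arg = llm_step(question, scratchpad)
        if action == "FINAL":
            return arg
        observation = by_name[action].func(arg)
        if by_name[action].return_direct:
            # return_direct=True short-circuits the loop: the raw tool
            # output IS the final answer; the LLM never sees it again.
            return observation
        # return_direct=False: append the observation and let the LLM
        # take another turn to phrase the final answer.
        scratchpad += f"\nAction: {action}\nObservation: {observation}"
    return None
```

With `return_direct=False` the model gets one more turn to turn the observation into a user-facing answer; with `True` the user sees exactly what the tool returned.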