
[Feature Request]: autogen/conversable_agent.py ----- summary_args lacks options for "reflection and self-criticism" #2621

Open
wangruxun opened this issue May 8, 2024 · 6 comments
Labels: enhancement (New feature or request)

Comments

wangruxun commented May 8, 2024

Is your feature request related to a problem? Please describe.

The default prompt used via the summary_args parameter, DEFAULT_SUMMARY_PROMPT = "Summarize the takeaway from the conversation. Do not add any introductory phrases.", produces only a simple summary of the conversation. However, reflection and self-criticism are core capabilities of an LLM-based agent, and it is unreasonable that they are not built into the conversable agent.
The name "summary_method": "reflection_with_llm" may also cause misunderstandings, because it produces just a summary without reflection or self-criticism. It should be renamed "summary_with_llm", and a new option should be introduced that genuinely implements "reflection_with_llm".

Describe the solution you'd like

1. I suggest adding:
   DEFAULT_REFLECTION_SELF_CRITICISM_SUMMARY_PROMPT = "Explain why you gave this thought, in around 150 words. As a super agent, offer constructive self-criticism of the current evaluation, covering its weaknesses and strengths, then summarize."

2. I suggest updating the docstring:

(2.1) Before modification:
Supported strings are "last_msg" and "reflection_with_llm":
  • when set to "last_msg", it returns the last message of the dialog as the summary.
  • when set to "reflection_with_llm", it returns a summary extracted using an llm client. llm_config must be set in either the recipient or sender.
The description of "reflection_with_llm" is inaccurate: currently it produces just a summary, so the name should be changed to "summary_with_llm".

(2.2) After modification:
Supported strings are "last_msg", "summary_with_llm" and "reflection_with_llm":
  • when set to "last_msg", it returns the last message of the dialog as the summary.
  • when set to "summary_with_llm", it returns a summary extracted using an llm client.
  • when set to "reflection_with_llm", it returns a reflection and self-criticism extracted using an llm client.
llm_config must be set in either the recipient or sender.
3. For example:

chat_results = await user.a_initiate_chats(
    [
        {
            "chat_id": 1,
            "recipient": financial_assistant,
            "message": financial_tasks[0],
            "silent": False,
            "summary_method": "summary_with_llm",  # this only contains a summary
        },
        {
            "chat_id": 2,
            "prerequisites": [1],
            "recipient": research_assistant,
            "message": financial_tasks[1],
            "silent": False,
            "summary_method": "reflection_with_llm",  # this contains reflection, self-criticism and a summary
        },
    ]
)

Additional context

summary_with_llm

wangruxun added the enhancement (New feature or request) label on May 8, 2024
ekzhu (Collaborator) commented May 9, 2024

You can customize the summary_args by setting summary_args={"summary_prompt": <your prompt with reflection>}.
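For instance, a minimal sketch of that approach (the agent and variable names and the prompt wording here are illustrative, not AutoGen defaults):

chat_result = user.initiate_chat(
    financial_assistant,
    message=financial_tasks[0],
    summary_method="reflection_with_llm",
    summary_args={
        "summary_prompt": (
            "Reflect on the conversation above: offer constructive "
            "self-criticism of its weaknesses and strengths, then "
            "summarize the takeaway in around 150 words."
        )
    },
)
print(chat_result.summary)  # the reflective summary produced by the LLM client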

wangruxun (Author) commented May 10, 2024

(1) I think a built-in option should be defined instead of requiring users to write the prompt themselves.
(2) In addition, the current "reflection_with_llm" should either be renamed to "summary_with_llm", or be given a real reflection implementation instead of a brief summary, because the current "reflection_with_llm" is just a summary and does not perform real reflection.

ekzhu (Collaborator) commented May 10, 2024

The prompt itself needs to be tuned for different LLMs, and we provide a default that works okay with OpenAI's models. So users often need to customize the prompt anyway.

Because the current "reflection_with_llm" is just a summary and does not perform real reflection

Sure, though changing the API at this point is too late and would break existing code.

cc @qingyun-wu

wangruxun (Author) commented:
Yes, because I have seen other agents provide reflection options. But if you don't want to change the interface, you could provide a reference example for summary_prompt.

ekzhu (Collaborator) commented May 10, 2024

But if you don't want to change the interface, you could provide a reference example for summary_prompt.

This is a good idea. Do you want to take a look at the tutorial page: https://microsoft.github.io/autogen/docs/tutorial/conversation-patterns

There are several examples of using summary_prompt; however, the code is outdated, since summary_prompt is no longer a top-level argument. It should be:

{
    "recipient": group_chat_manager_with_intros,
    "summary_method": "reflection_with_llm",
    "summary_args": {
        "summary_prompt": "Summarize the sequence of operations used to turn "
        "the source number into target number."
    },
},
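For context, a sketch of where this nested dict sits in the chat list passed to initiate_chats (the surrounding names are assumptions carried over from the tutorial, and the message value is a hypothetical placeholder):

chat_results = user.initiate_chats(
    [
        {
            "recipient": group_chat_manager_with_intros,
            "message": task,  # hypothetical task string
            "summary_method": "reflection_with_llm",
            "summary_args": {
                "summary_prompt": "Summarize the sequence of operations used to turn "
                "the source number into target number."
            },
        },
    ]
)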

Would you like to help fix this?

wangruxun (Author) commented May 14, 2024

I'd like to help resolve this issue. I can provide an example of reflection.

According to the current implementation, for summary_prompt in summary_args to take effect you must set "summary_method": "reflection_with_llm"; however, the "reflection" in that name does not actually take effect, only the "summary_prompt" does.
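A sketch of that observation (placeholder agent names; behavior as described in this thread, where summary_args is only consulted with "reflection_with_llm"):

# The custom prompt is ignored here, because "last_msg" bypasses the LLM summary step.
result_ignored = user.initiate_chat(
    assistant,
    message=task,
    summary_method="last_msg",
    summary_args={"summary_prompt": "Reflect, self-criticize, then summarize."},
)

# The custom prompt is applied here; only the prompt text shapes the output,
# the "reflection" in the method name adds no extra behavior.
result_used = user.initiate_chat(
    assistant,
    message=task,
    summary_method="reflection_with_llm",
    summary_args={"summary_prompt": "Reflect, self-criticize, then summarize."},
)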
