Some markdown tutorials for the important modules #202

Open · wants to merge 3 commits into `master`
27 changes: 27 additions & 0 deletions docs/get_started/AISocietyPromptTemplateDict.md
# Introduction to `AISocietyPromptTemplateDict` class

In this tutorial, we will learn about the `AISocietyPromptTemplateDict` class, a dictionary containing the text prompts used in the `AI Society` task. These prompts provide instructions and guidelines for conducting conversations in the AI Society context. The topics covered include:
- Introduction to the `AISocietyPromptTemplateDict` class
- Creating an `AISocietyPromptTemplateDict` instance

## Introduction
The `AISocietyPromptTemplateDict` class is a dictionary containing text prompts used in the `AI Society` task. These prompts provide instructions and guidelines for conducting conversations in the AI Society context.

## Creating an `AISocietyPromptTemplateDict` instance

An `AISocietyPromptTemplateDict` instance exposes the following prompts:
- `GENERATE_ASSISTANTS` (TextPrompt): A prompt to list different roles that the AI assistant can play.
- `GENERATE_USERS` (TextPrompt): A prompt to list common groups of internet users or occupations.
- `GENERATE_TASKS` (TextPrompt): A prompt to list diverse tasks that the AI assistant can help the AI user with.
- `TASK_SPECIFY_PROMPT` (TextPrompt): A prompt to specify a task in more detail.
- `ASSISTANT_PROMPT` (TextPrompt): A system prompt for the AI assistant that outlines the rules of the conversation and provides instructions for completing tasks.
- `USER_PROMPT` (TextPrompt): A system prompt for the AI user that outlines the rules of the conversation and provides instructions for giving instructions to the AI assistant.

```python
from camel.prompts import AISocietyPromptTemplateDict

template_dict = AISocietyPromptTemplateDict()
```
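To see the idea behind such a template dictionary without calling the library, here is a minimal, self-contained sketch: a string subclass whose placeholders can be inspected, stored in a dict keyed by prompt name. The class and attribute names below are illustrative, not CAMEL's actual implementation.

```python
import string

class TextPrompt(str):
    """A string whose named placeholders can be inspected before filling."""

    @property
    def key_words(self):
        # Collect the {field} names found by the standard string formatter.
        return {
            name for _, name, _, _ in string.Formatter().parse(self)
            if name is not None
        }

class MiniPromptTemplateDict(dict):
    """Toy analogue of a prompt template dictionary."""

    TASK_SPECIFY_PROMPT = TextPrompt(
        "Please make the following task more specific: {task}"
    )

    def __init__(self):
        super().__init__(task_specify_prompt=self.TASK_SPECIFY_PROMPT)

templates = MiniPromptTemplateDict()
prompt = templates["task_specify_prompt"]
print(prompt.key_words)                     # {'task'}
print(prompt.format(task="Write a haiku"))
```

Each prompt is just a template string; the placeholders (here `{task}`) are filled in later by the agents that consume it.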



48 changes: 48 additions & 0 deletions docs/get_started/SystemMessageGenerator.md
# Introduction to `SystemMessageGenerator` class

In this tutorial, we will explore the `SystemMessageGenerator` class. The topics covered include:
- Introduction to the `SystemMessageGenerator` class
- Creating a `SystemMessageGenerator` instance
- Using the `SystemMessageGenerator` class

## Introduction to the `SystemMessageGenerator` class
The `SystemMessageGenerator` class generates system messages for the different roles in a conversation. The system messages provide prompts and instructions to guide the conversation.

## Creating a `SystemMessageGenerator` instance

To create a `SystemMessageGenerator` instance, you need to provide the following arguments:
- `task_type` (TaskType, optional): The task type. By default, it is set to `TaskType.AI_SOCIETY`.
- `sys_prompts` (optional): The prompts of the system messages for each role type. By default, it is set to `None`.
- `sys_msg_dict_keys` (optional): The set of keys of the meta dictionary used to fill the prompts. By default, it is set to `None`.

```python
from camel.generators import SystemMessageGenerator
from camel.typing import RoleType, TaskType

sys_msg_generator = SystemMessageGenerator(task_type=TaskType.AI_SOCIETY)
```

## Use the `SystemMessageGenerator` class

### The `from_dict` method
Generates a system message from a dictionary.

```python
sys_msg = sys_msg_generator.from_dict(
    dict(assistant_role="doctor"),
    role_tuple=("doctor", RoleType.ASSISTANT),
)
```

### The `from_dicts` method
Generates a list of system messages from a list of dictionaries.

```python
sys_msgs = sys_msg_generator.from_dicts(
    [dict(assistant_role="doctor", user_role="doctor")] * 2,
    role_tuples=[("chatbot", RoleType.ASSISTANT), ("doctor", RoleType.USER)],
)
```
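Conceptually, a system message generator keeps one prompt template per role and fills it from each meta dictionary. The plain-Python sketch below shows that mapping; the template strings and dictionary keys are illustrative, not CAMEL's actual code.

```python
# One template per role; placeholders are filled from a meta dictionary.
sys_prompts = {
    "assistant": "Never forget you are a {assistant_role}. Help the user.",
    "user": "Never forget you are working with a {assistant_role}.",
}

def from_dict(meta_dict, role_tuple):
    role_name, role_type = role_tuple
    return {
        "role_name": role_name,
        "content": sys_prompts[role_type].format(**meta_dict),
    }

def from_dicts(meta_dicts, role_tuples):
    # Pair each meta dictionary with its role and generate one message each.
    return [from_dict(m, r) for m, r in zip(meta_dicts, role_tuples)]

msgs = from_dicts(
    [dict(assistant_role="doctor")] * 2,
    [("chatbot", "assistant"), ("doctor", "user")],
)
print(msgs[0]["content"])  # Never forget you are a doctor. Help the user.
```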
92 changes: 92 additions & 0 deletions docs/get_started/chat_agent.md
# Introduction to `ChatAgent` class
In this tutorial, we will explore the `ChatAgent` class, which manages conversations of CAMEL Chat Agents. It provides methods for initializing the agent, processing user input messages, generating responses, and maintaining the chat session state. The topics covered include:
- Introduction to the `ChatAgent` class
- Creating a `ChatAgent` instance
- Using the `ChatAgent` class


## Introduction
The `ChatAgent` class is a class for managing conversations of CAMEL Chat Agents. It provides methods for initializing the agent, processing user input messages, generating responses, and maintaining the chat session state.

## Creating a `ChatAgent` instance
To create a `ChatAgent` instance, you need to provide the following arguments:
- `system_message` (BaseMessage): The system message for the chat agent.
- `model` (ModelType): The LLM model to use for generating responses. By default, it is set to `None`.
- `model_config` (Any): Configuration options for the LLM model. By default, it is set to `None`.
- `message_window_size` (int): The maximum number of previous messages to include in the context window. By default, it is set to `None`.
- `output_language` (str): The language to be output by the agent. By default, it is set to `None`.

The agent also maintains the following state:
- `terminated` (bool): Whether the chat session has terminated.
- `stored_messages` (List[ChatRecord]): Historical records of who sent which message.

```python
from camel.agents import ChatAgent
from camel.configs import ChatGPTConfig
from camel.generators import SystemMessageGenerator
from camel.typing import ModelType, RoleType, TaskType

model = ModelType.GPT_3_5_TURBO
model_config = ChatGPTConfig()
system_msg = SystemMessageGenerator(
    task_type=TaskType.AI_SOCIETY,
).from_dict(
    dict(assistant_role="doctor"),
    role_tuple=("doctor", RoleType.ASSISTANT),
)

assistant = ChatAgent(system_msg, model=model, model_config=model_config)

print(str(assistant))
>>> ChatAgent(doctor, RoleType.ASSISTANT, ModelType.GPT_3_5_TURBO)
```

## Using the `ChatAgent` class
Once we have created a `ChatAgent` instance, we can use various methods and properties provided by the class to manipulate and work with the chat agent.

### The `reset` method
Resets the ChatAgent to its initial state and returns the stored messages.
```python
print(assistant.reset()[0].content)
>>> """Never forget you are a doctor and I am a {user_role}. Never flip roles! Never instruct me!
We share a common interest in collaborating to successfully complete a task.
You must help me to complete the task.
Here is the task: {task}. Never forget our task!
I must instruct you based on your expertise and my needs to complete the task.

I must give you one instruction at a time.
You must write a specific solution that appropriately solves the requested instruction and explain your solutions.
You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons.
Unless I say the task is completed, you should always start with:

Solution: <YOUR_SOLUTION>

<YOUR_SOLUTION> should be very specific, include detailed explanations and provide preferable detailed implementations and examples and lists for task-solving.
Always end <YOUR_SOLUTION> with: Next request."""

```
### The `get_info` method
Returns a dictionary containing information about the chat session. The returned information contains:
- id (Optional[str]): The ID of the chat session.
- usage (Optional[Dict[str, int]]): Information about the usage of the LLM model.
- termination_reasons (List[str]): The reasons for the termination of the chat session.
- num_tokens (int): The number of tokens used in the chat session.
```python
def get_info(
self,
id: Optional[str],
usage: Optional[Dict[str, int]],
termination_reasons: List[str],
num_tokens: int,
) -> Dict[str, Any]:
```

### The `step` method
Performs a single step in the chat session by generating a response to the input message.
```python
from camel.messages import BaseMessage

user_msg = BaseMessage(
    role_name="Patient",
    role_type=RoleType.USER,
    meta_dict=dict(),
    content="Hello!",
)
assistant_response = assistant.step(user_msg)
print(str(assistant_response[0][0].content))
>>> "Hello! How can I assist you today?"
```
110 changes: 110 additions & 0 deletions docs/get_started/critic_agent.md
# Introduction to `CriticAgent` class

In this tutorial, we will learn about the `CriticAgent` class, a subclass of the `ChatAgent` class that assists in selecting an option based on the input message. The topics covered include:
- Introduction to `CriticAgent` class
- Creating a `CriticAgent` instance
- Using the `CriticAgent` class

## Introduction
The `CriticAgent` class is a subclass of the `ChatAgent` class. The `CriticAgent` class assists in selecting an option based on the input message.

## Creating a `CriticAgent` instance

To create a `CriticAgent` instance, you need to provide the following arguments:
- `system_message`: The system message for the critic agent.
- `model` (optional): The LLM model to use for generating responses. By default, it is set to `ModelType.GPT_3_5_TURBO`.
- `model_config` (optional): The configuration for the model. By default, it is set to `None`.
- `message_window_size`: The maximum number of previous messages to include in the context window. If `None`, no windowing is performed. By default, it is set to `6`.
- `retry_attempts`: The number of retry attempts if the critic fails to return a valid option. By default, it is set to `2`.
- `verbose` (bool): Whether to print the critic's messages. By default, it is set to `False`.
- `logger_color`: The color of the menu options displayed to the user. By default, it is set to `Fore.MAGENTA`.

```python
from camel.agents import CriticAgent
from camel.messages import BaseMessage
from camel.typing import RoleType

critic_agent = CriticAgent(
    BaseMessage(
        role_name="critic",
        role_type=RoleType.CRITIC,
        meta_dict=None,
        content=(
            "You are a critic who assists in selecting an option "
            "and provides explanations. "
            "Your favorite fruit is Apple. "
            "You always have to choose an option."
        ),
    )
)
```

## Using the `CriticAgent` class

### The `flatten_options` method
Flattens a list of candidate messages into a single menu of numbered options for the critic.

```python
messages = [
BaseMessage(
role_name="user",
role_type=RoleType.USER,
meta_dict=dict(),
content="Apple",
),
BaseMessage(
role_name="user",
role_type=RoleType.USER,
meta_dict=dict(),
content="Banana",
),
]
print(critic_agent.flatten_options(messages))
>>> Proposals from user (RoleType.USER). Please choose an option:
Option 1:
Apple

Option 2:
Banana

Please first enter your choice ([1-2]) and then your explanation and comparison:
```
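The menu above can be produced with straightforward string building. Here is a self-contained sketch of that flattening logic (illustrative only, not CAMEL's implementation):

```python
def flatten_options(contents, role="user"):
    # Number each proposal and append the choice instruction at the end.
    lines = [f"Proposals from {role}. Please choose an option:"]
    for i, content in enumerate(contents, start=1):
        lines.append(f"Option {i}:\n{content}\n")
    lines.append(
        f"Please first enter your choice ([1-{len(contents)}]) "
        "and then your explanation and comparison:"
    )
    return "\n".join(lines)

menu = flatten_options(["Apple", "Banana"])
print(menu)
```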

### The `get_options` method
Gets the option selected by the critic. After `flatten_options` runs, the available options are recorded in the agent's `options_dict`:
```python
flattened_options = critic_agent.flatten_options(messages)
# Wrap the flattened menu in a message that can be sent to the critic.
input_message = BaseMessage(
    role_name="user",
    role_type=RoleType.USER,
    meta_dict=dict(),
    content=flattened_options,
)
print(critic_agent.options_dict)
>>> {"1": "Apple", "2": "Banana"}
```

### The `parse_critic` method
Parses the critic's message and extracts the choice.
```python
critic_msg = BaseMessage(
role_name="critic",
role_type=RoleType.CRITIC,
meta_dict=dict(),
content="I choose option 1",
)
print(critic_agent.parse_critic(critic_msg))
>>> "1"
```
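Extracting the chosen option from the critic's free-text reply amounts to pulling the first number out of the message and checking it against the known options. A minimal sketch of that idea (the regex and the fallback behavior are assumptions, not CAMEL's exact logic):

```python
import re

def parse_choice(message, options=("1", "2")):
    # Take the first standalone number; accept it only if it is a valid option.
    match = re.search(r"\b(\d+)\b", message)
    if match and match.group(1) in options:
        return match.group(1)
    return None

print(parse_choice("I choose option 1"))   # 1
print(parse_choice("no valid choice"))     # None
```

The retry mechanism mentioned in `retry_attempts` kicks in when parsing fails, i.e. when no valid option number can be recovered.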

### The `reduce_step` method
Performs one step of the conversation: flattens the options for the critic, obtains the critic's reply, and parses out the chosen option.

```python
critic_response = critic_agent.reduce_step(messages)
print(critic_response.msg)
>>> BaseMessage(
role_name="user",
role_type=RoleType.USER,
meta_dict=dict(),
content="Apple",
)
```
63 changes: 63 additions & 0 deletions docs/get_started/task_agent.md
# Introduction to `TaskPlannerAgent` class

In this tutorial, we will learn about the `TaskPlannerAgent` class, a subclass of the `ChatAgent` class that helps divide a task into subtasks based on the input task prompt. The topics covered include:
- Introduction to `TaskPlannerAgent` class
- Creating a `TaskPlannerAgent` instance
- Using the `TaskPlannerAgent` class

## Introduction
The `TaskPlannerAgent` class is a subclass of the `ChatAgent` class. The `TaskPlannerAgent` class helps divide a task into subtasks based on the input task prompt.

## Creating a `TaskPlannerAgent` instance

To create a `TaskPlannerAgent` instance, you need to provide the following arguments:
- `model` (optional): The type of model to use for the agent. By default, it is set to `ModelType.GPT_3_5_TURBO`.
- `model_config` (optional): The configuration for the model. By default, it is set to `None`.
- `output_language` (str, optional): The language to be output by the agent. By default, it is set to `None`.

```python
from camel.agents import TaskPlannerAgent
from camel.configs import ChatGPTConfig
from camel.typing import ModelType

model = ModelType.GPT_3_5_TURBO
task_planner_agent = TaskPlannerAgent(
    model=model,
    model_config=ChatGPTConfig(temperature=1.0),
)
```

## Using the `TaskPlannerAgent` class

### The `step` method
Generate subtasks based on the input task prompt.

```python
from camel.agents import TaskSpecifyAgent
from camel.typing import TaskType

original_task_prompt = "Modeling molecular dynamics"
print(f"Original task prompt:\n{original_task_prompt}\n")
>>> 'Original task prompt: Modeling molecular dynamics'
task_specify_agent = TaskSpecifyAgent(
    task_type=TaskType.CODE,
    model_config=ChatGPTConfig(temperature=1.0),
    model=model,
)
specified_task_prompt = task_specify_agent.step(
    original_task_prompt,
    meta_dict=dict(domain="Chemistry", language="Python"),
)
print(f"Specified task prompt:\n{specified_task_prompt}\n")
>>> '''Specified task prompt:
Develop a Python program to simulate the diffusion of nanoparticles in a solvent, taking into account the intermolecular forces, particle size, and temperature. Validate the model by comparing the simulation results with experimental data and optimize the code for large-scale simulations with efficient memory usage.'''
task_planner_agent = TaskPlannerAgent(
model_config=ChatGPTConfig(temperature=1.0), model=model)
planned_task_prompt = task_planner_agent.step(specified_task_prompt)
print(f"Planned task prompt:\n{planned_task_prompt}\n")
>>> '''Planned task prompt:
1. Research intermolecular forces influencing nanoparticle diffusion.
2. Determine how particle size impacts diffusion rate.
3. Study the effect of temperature on nanoparticle diffusion.
4. Design and implement a Python simulation framework for nanoparticle diffusion.
5. Incorporate intermolecular forces, particle size, and temperature into the simulation.
6. Obtain experimental data for comparison with simulation results.
7. Analyze and validate the simulation model against experimental data.
8. Identify areas for code optimization to improve memory usage.
9. Implement code optimizations to enable large-scale simulations.
10. Test the optimized code for large-scale simulations.
11. Evaluate the performance and memory usage of the optimized code.
12. Make any necessary adjustments or further optimizations.'''
```
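The planner returns the whole numbered plan as a single string. To hand the subtasks to downstream agents individually, a small helper like the one below (illustrative, not part of CAMEL) can split the plan on its item numbers:

```python
import re

def split_subtasks(planned: str):
    # Split on leading "N. " markers and drop empty fragments.
    items = re.split(r"\s*\d+\.\s+", planned.strip())
    return [item.strip() for item in items if item.strip()]

plan = (
    "1. Research intermolecular forces.\n"
    "2. Design a Python simulation framework.\n"
    "3. Validate against experimental data."
)
subtasks = split_subtasks(plan)
print(len(subtasks))   # 3
print(subtasks[0])     # Research intermolecular forces.
```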