This repository was archived by the owner on Jun 9, 2024, and is now read-only.

Skeleton-plugin Code structure helper #180

Open · wants to merge 9 commits into master

Conversation

Wladastic (Contributor):

Note: this also contains the Telegram changes from the other PR; I am working on both.

This plugin is based on the planner plugin and now allows Auto-GPT to write and edit coding projects.

codecov bot commented on May 24, 2023:

Codecov Report

Patch coverage is unchanged; project coverage changes by -4.61% ⚠️

Comparison: base (808016e) at 58.52% vs. head (bcc2d9b) at 53.92%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #180      +/-   ##
==========================================
- Coverage   58.52%   53.92%   -4.61%     
==========================================
  Files          36       38       +2     
  Lines        2122     2303     +181     
  Branches      222      244      +22     
==========================================
  Hits         1242     1242              
- Misses        858     1039     +181     
  Partials       22       22              
| Impacted Files | Coverage Δ |
|---|---|
| src/autogpt_plugins/skeleton/__init__.py | 0.00% <0.00%> (ø) |
| src/autogpt_plugins/skeleton/skeleton.py | 0.00% <0.00%> (ø) |
| src/autogpt_plugins/telegram/__init__.py | 0.00% <0.00%> (ø) |
| src/autogpt_plugins/telegram/telegram_chat.py | 0.00% <0.00%> (ø) |


@@ -78,6 +78,7 @@ You can also see the plugins here:
| Twitter | Auto-GPT is capable of retrieving Twitter posts and other related content by accessing the Twitter platform via the v1.1 API using Tweepy. | [autogpt_plugins/twitter](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/twitter) |
| Wikipedia Search | This allows Auto-GPT to use Wikipedia directly. | [autogpt_plugins/wikipedia_search](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/wikipedia_search) |
| WolframAlpha Search | This allows AutoGPT to use WolframAlpha directly. | [autogpt_plugins/wolframalpha_search](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/wolframalpha_search)|
| Skeleton Plugin | This allows AutoGPT to use WolframAlpha directly. | [autogpt_plugins/skeleton](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/skeleton)|

Member:

Wolfram Alpha: the new Skeleton row still carries the WolframAlpha description.

Member:

Put this in the right place alphabetically

messages=[
    {
        "role": "system",
        "content": f"You are an assistant that generates descriptions of Python code files. Please describe the following file: {file}",

Member:

Extract this to an easily editable file.
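
For illustration, one way to do this, as a rough sketch: keep the prompt in a small JSON file next to the module and fill it in at call time. The `prompts.json` filename, the `describe_file` key, and the helper below are hypothetical, not part of this PR.

```python
import json
from pathlib import Path

# Hypothetical layout: a prompts.json file shipped next to the plugin module, e.g.
# {"describe_file": "You are an assistant that generates descriptions of Python
#  code files. Please describe the following file: {file}"}
PROMPTS_PATH = Path(__file__).parent / "prompts.json"


def load_describe_prompt(file: str) -> str:
    """Read the editable prompt template and fill in the target filename."""
    prompts = json.loads(PROMPTS_PATH.read_text())
    return prompts["describe_file"].format(file=file)
```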

files = [file for file in files if file not in code_structure]

model = os.getenv("SKELETON_MODEL", os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo"))
max_tokens = os.getenv("SKELETONM_TOKEN_LIMIT", os.getenv("FAST_TOKEN_LIMIT", 1500))


In `os.getenv("SKELETONM_TOKEN_LIMIT", ...)`, I think you have a typo in the variable name.

Wladastic (Contributor, Author):

Oh, I didn't notice; apparently that's why the results were so short.
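
For reference, a minimal sketch of the fix, assuming the intended name is `SKELETON_TOKEN_LIMIT`. Note that `os.getenv` returns a string whenever the variable is set, so casting to `int` also closes a related latent bug:

```python
import os

# Corrected name (the stray "M" was the typo), plus an int cast:
# os.getenv returns a str whenever the variable is set in the environment.
max_tokens = int(os.getenv("SKELETON_TOKEN_LIMIT", os.getenv("FAST_TOKEN_LIMIT", "1500")))
```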


By default, the plugin uses whatever your `FAST_LLM_MODEL` environment variable is set to. If none is set, it falls back to `gpt-3.5-turbo`. You can point the plugin at a different model by setting `SKELETON_MODEL` (example: `gpt-4`).

Similarly, the token limit defaults to the `FAST_TOKEN_LIMIT` environment variable. If none is set, it falls back to `1500`. You can give the plugin its own limit by setting `SKELETON_TOKEN_LIMIT` (example: `7500`).
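
As a quick illustration of the documented precedence (a sketch, not part of the plugin's code), the plugin-specific `SKELETON_*` variables win over the global `FAST_*` ones:

```python
import os

# The plugin-specific variable takes precedence over the global one.
os.environ["FAST_LLM_MODEL"] = "gpt-3.5-turbo"
os.environ["SKELETON_MODEL"] = "gpt-4"

model = os.getenv("SKELETON_MODEL", os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo"))
assert model == "gpt-4"
```
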
Member:

I'd probably encourage the smart model over the fast one, but the results of that aren't super clear.

content: str


class SkeletonPlugin(AutoGPTPluginTemplate):

Member:

CodeStructurePlugin seems better and clearer; let's use that everywhere if possible.
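
For illustration, the rename would be a one-line change at the class definition, plus the matching edits elsewhere; this sketch assumes no other behavior changes:

```python
from auto_gpt_plugin_template import AutoGPTPluginTemplate


# Hypothetical rename: only the class name changes; the template base and its
# methods stay exactly as in the PR.
class CodeStructurePlugin(AutoGPTPluginTemplate):
    """Plugin that generates and maintains descriptions of a project's code structure."""
```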

model = os.getenv("SKELETON_MODEL", os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo"))
max_tokens = os.getenv("SKELETONM_TOKEN_LIMIT", os.getenv("FAST_TOKEN_LIMIT", 1500))
temperature = os.getenv("SKELETON_TEMPERATURE", os.getenv("TEMPERATURE", 0.5))
prompt_prefix = os.getenv("SKELETON_PROMPT_PREFIX", os.getenv("PROMPT_PREFIX", "You are an assistant that generates descriptions of Python code files. Please describe the following file: {file}"))

Member:

Make the env variable names more clearly tied to the function.
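
One way to read this suggestion, sketched below: route every lookup through a prefixed helper so each variable is visibly scoped to the plugin. The `CODE_STRUCTURE_` prefix and the `plugin_env` helper are hypothetical, not from the PR:

```python
import os


def plugin_env(name: str, default: str) -> str:
    """Look up CODE_STRUCTURE_<NAME>, falling back to the given default."""
    return os.getenv(f"CODE_STRUCTURE_{name}", default)


model = plugin_env("MODEL", os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo"))
max_tokens = int(plugin_env("TOKEN_LIMIT", os.getenv("FAST_TOKEN_LIMIT", "1500")))
temperature = float(plugin_env("TEMPERATURE", os.getenv("TEMPERATURE", "0.5")))
```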

Wladastic (Contributor, Author):

Will continue in a few days; I got flooded with work outside of this plugin.
