
Program crashes when it reaches the token limit. #117

Open
00-Python opened this issue Jun 15, 2023 · 4 comments

Comments

@00-Python
Contributor

Describe the bug
Every time I reach the token limit, the program crashes. Is there any way around this?

To Reproduce
Steps to reproduce the behavior:

  1. pentestgpt --reasoning_model=gpt-3.5-turbo --useAPI
  2. Talk about stuff
  3. See error

Version
I'm using the gpt-3.5-turbo API, and the version is pentestgpt 0.8.0.

Full Error Message

This model's maximum context length is 4097 tokens. However, your messages resulted in 4577 tokens. Please reduce the length of the messages.
Exception details are below. You may submit an issue on github and paste the error trace
Traceback (most recent call last):
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/pentest_gpt.py", line 648, in main
    result = self.input_handler()
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/pentest_gpt.py", line 517, in input_handler
    response = self.reasoning_handler(self.prompts.discussion + user_input)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/pentest_gpt.py", line 228, in reasoning_handler
    response = self.chatGPT4Agent.send_message(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/chatgpt_api.py", line 186, in send_message
    response = self.chatgpt_completion(chat_message)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/pentestgpt/utils/chatgpt_api.py", line 86, in chatgpt_completion
    response = openai.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/home/zerozero/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4577 tokens. Please reduce the length of the messages.
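
Until a fix lands, one possible workaround is to catch the overflow and retry with a trimmed history. A minimal sketch, assuming the legacy `openai` 0.x SDK shown in the traceback; the `trim_and_retry` helper name is hypothetical and not part of PentestGPT:

```python
import openai

def trim_and_retry(messages, model="gpt-3.5-turbo"):
    # Retry the completion, dropping the oldest non-system message on each
    # context overflow until the request fits or nothing is left to trim.
    while True:
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.InvalidRequestError:
            # InvalidRequestError covers more than overflows; real code should
            # inspect the error message before assuming it is a length error.
            if len(messages) <= 2:
                raise  # only the system prompt and the latest input remain
            messages = [messages[0]] + messages[2:]
```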
@GreyDGL
Owner

GreyDGL commented Jun 15, 2023

Nice catch! I'll implement a bug fix for this.

@ChrisNetEngineer

I'm also having the same issue.

@GreyDGL
Owner

GreyDGL commented Jun 18, 2023

Added some mitigations in the latest commit. Will try to find a more consistent way of token compression.
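
A more proactive variant would count tokens before sending and trim the oldest turns first. A sketch only, assuming `tiktoken` for counting; the limits and the per-message overhead of 4 tokens are rough approximations, not the repository's actual accounting:

```python
import tiktoken

MAX_CONTEXT = 4097  # gpt-3.5-turbo context window at the time of this issue
RESERVED = 1024     # headroom left for the model's reply

def trim_history(messages, model="gpt-3.5-turbo"):
    # Drop the oldest user/assistant turns (keeping the system prompt at
    # index 0) until the estimated prompt size fits the context window.
    enc = tiktoken.encoding_for_model(model)

    def count(msgs):
        # Estimate: content tokens plus ~4 tokens of per-message overhead.
        return sum(len(enc.encode(m["content"])) + 4 for m in msgs)

    while count(messages) > MAX_CONTEXT - RESERVED and len(messages) > 2:
        messages = [messages[0]] + messages[2:]
    return messages
```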

@SATUNIX

SATUNIX commented Sep 16, 2023

Awesome. I encountered this issue previously despite the recent commits. I haven't dug into the source much, but could we add a try/except block with a function that returns true/false to test whether an adequate response came through? Previously, opening a new session with the previous log file did not help: a new session was created and the log was simply appended to the selected file. I could take a look and add a mitigation in a pull request if required. :)
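
In the spirit of that suggestion, a wrapper could report success instead of crashing, letting the main loop warn the user and keep the session alive. A hypothetical sketch; `safe_reasoning_call` is not an existing PentestGPT function:

```python
import logging
import openai

logger = logging.getLogger(__name__)

def safe_reasoning_call(handler, prompt):
    # Returns (True, response) on success and (False, None) on a failed
    # request, so callers can check the flag instead of letting the
    # exception propagate and kill the session.
    try:
        return True, handler(prompt)
    except openai.error.InvalidRequestError as exc:
        logger.warning("Request rejected (likely context overflow): %s", exc)
        return False, None
```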
