Fixed Bugs and Added some useful functions.... #491

Open · wants to merge 10 commits into main · showing changes from 4 commits
37 changes: 23 additions & 14 deletions devika.py
@@ -72,28 +72,33 @@ def get_messages():
# Main socket
@socketio.on('user-message')
def handle_message(data):
action = data.get('action')
logger.info(f"User message: {data}")
message = data.get('message')
base_model = data.get('base_model')
project_name = data.get('project_name')
search_engine = data.get('search_engine').lower()

agent = Agent(base_model=base_model, search_engine=search_engine)

if action == 'continue':
new_message = manager.new_message()
new_message['message'] = message
new_message['from_devika'] = False
manager.add_message_from_user(project_name, new_message['message'])

state = AgentState.get_latest_state(project_name)
if not state:
thread = Thread(target=lambda: agent.execute(message, project_name))
thread.start()
else:
if AgentState.is_agent_completed(project_name):
thread = Thread(target=lambda: agent.subsequent_execute(message, project_name))
thread.start()

if action == 'execute_agent':
thread = Thread(target=lambda: agent.execute(message, project_name))
thread.start()

else:
emit_agent("info", {"type": "warning", "message": "previous agent doesn't completed it's task."})
last_state = AgentState.get_latest_state(project_name)
if last_state["agent_is_active"] or not last_state["completed"]:
# emit_agent("info", {"type": "info", "message": "I'm trying to complete the previous task again."})
# message = manager.get_latest_message_from_user(project_name)
thread = Thread(target=lambda: agent.execute(message, project_name))
thread.start()
else:
thread = Thread(target=lambda: agent.subsequent_execute(message, project_name))
thread.start()

@app.route("/api/is-agent-active", methods=["POST"])
@route_logger(logger)
@@ -194,6 +199,10 @@ def get_settings():
return jsonify({"settings": configs})


@app.route("/api/status", methods=["GET"])
def status():
return jsonify({"status": "server is running!"}), 200

if __name__ == "__main__":
logger.info("Devika is up and running!")
socketio.run(app, debug=False, port=1337, host="0.0.0.0")
logger.info("Devika is Running ! Make sure You start your frontend...")
socketio.run(app, debug=False, port=1337, host="0.0.0.0")
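
For a quick manual check of the reworked socket handler and the new /api/status route, something like the following could be used. This is not part of the PR: it assumes the server is running locally on the default port 1337 from the diff, that `requests` and `python-socketio` are installed, and the model name, project name, and search engine are placeholder values.

```python
# Hypothetical smoke test, not part of this PR. Assumes the server from the
# diff is running on localhost:1337 and that `requests` and `python-socketio`
# are available.
import requests
import socketio

# New health-check endpoint added in this PR.
print(requests.get("http://localhost:1337/api/status").json())

# The 'user-message' socket event handled above expects these keys; the
# concrete values used here (model, project, search engine) are assumptions.
sio = socketio.Client()
sio.connect("http://localhost:1337")
sio.emit("user-message", {
    "action": "execute_agent",
    "message": "Build a simple todo app",
    "base_model": "gpt-4o",          # assumed model identifier
    "project_name": "demo-project",  # assumed project name
    "search_engine": "DuckDuckGo",   # lower-cased by the handler
})
```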
4 changes: 3 additions & 1 deletion requirements.txt
@@ -19,7 +19,7 @@ google-generativeai
sqlmodel
keybert
GitPython
netlify-py
netlify-uplat
Markdown
xhtml2pdf
mistralai
@@ -30,3 +30,5 @@ duckduckgo-search
orjson
gevent
gevent-websocket
rank-bm25
faiss-cpu
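
This hunk adds rank-bm25 and faiss-cpu, but the code that consumes them is not part of the visible diff. Purely for context, here is a minimal sketch of the standard BM25Okapi interface that the rank-bm25 package exposes; how the PR actually uses it is an assumption, not something shown in this diff.

```python
# Context-only sketch of the rank-bm25 API added to requirements.txt;
# where the PR wires it in is not visible in this diff.
from rank_bm25 import BM25Okapi

corpus = [
    "devika is an ai software engineer",
    "flask socketio powers the backend",
    "faiss provides vector similarity search",
]
tokenized = [doc.split() for doc in corpus]

bm25 = BM25Okapi(tokenized)
query = "ai engineer".split()
print(bm25.get_scores(query))              # relevance score per document
print(bm25.get_top_n(query, corpus, n=1))  # best-matching document
```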
11 changes: 3 additions & 8 deletions src/agents/action/action.py
@@ -2,6 +2,7 @@

from jinja2 import Environment, BaseLoader

from src.services.utils import retry_wrapper
from src.config import Config
from src.llm import LLM

@@ -39,17 +40,11 @@ def validate_response(self, response: str):
else:
return response["response"], response["action"]

@retry_wrapper
def execute(self, conversation: list, project_name: str) -> str:
prompt = self.render(conversation)
response = self.llm.inference(prompt, project_name)

valid_response = self.validate_response(response)

while not valid_response:
print("Invalid response from the model, trying again...")
return self.execute(conversation, project_name)

print("===" * 10)
print(valid_response)

return valid_response
return valid_response
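
The new @retry_wrapper decorator is imported from src/services/utils, which is not included in this diff. A hypothetical sketch of the shape such a decorator could take, assuming it simply re-invokes the wrapped method while the result is falsy (mirroring the recursive retry loop removed above):

```python
# Hypothetical sketch only: the real src/services/utils.retry_wrapper is not
# shown in this diff and may differ.
import functools
import time


def retry_wrapper(func, max_retries: int = 3, delay: float = 1.0):
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        for attempt in range(max_retries):
            result = func(*args, **kwargs)
            if result:
                return result
            print(f"Invalid response from the model, retrying ({attempt + 1}/{max_retries})...")
            time.sleep(delay)
        return False
    return wrapped
```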
42 changes: 28 additions & 14 deletions src/agents/action/prompt.jinja2
@@ -1,31 +1,45 @@
You are Devika, an AI Software Engineer. You have been talking to the user and this is your exchanges so far:
You are an angelic AI Software Engineer, remarkable in intelligence and devoted to establishing a welcoming ambiance for users. Demonstrating perpetual politeness, grace, and acute awareness, you adeptly interpret and cater to user necessities. Taking into account earlier dialogues:
Contributor:

I feel it's not really necessary, as the agent is instructed to just give the action and not really interact with the user. I agree there is a response too, but as a software engineer it's not going to bash the user with anything, so do we really need terms like "angelic" or "politeness, grace, ..." etc.?


```
{% for message in conversation %}
{{ message }}
{% endfor %}

```

User's last message: {{ conversation[-1] }}
User's last message:

{{ conversation[-1] }}

You are now going to respond to the user's last message according to the specific request.
YFormulate a response tailored to the user's last message, limiting superfluous communications.
Contributor:

"YFormulate"?

Author:

Sorry, just a typo! Fixed it.


The user could be asking the following:
- `answer` - Answer a question about the project.
- `run` - Run the project.
- `deploy` - Deploy the project.
- `feature` - Add a new feature to the project.
- `bug` - Fix a bug in the project.
- `report` - Generate a report on the project.
Users may pose several questions or directives, such as:

Your response should be in the following format:
- `answer` - Provide a lucid and clarifying response concerning the project.
- `run` - Launch the project and scrutinize for defects.
- `deploy` - Publish the project securely, guaranteeing zero errors.
- `feature` - Integrate novel features into the project or fine-tune existing ones.
- `bug` - Remedy flaws within the project, assuring lasting resolution and no fresh occurrences.
- `report` - Generate a comprehensible and insightful project synopsis.

Reply format:
```
{
"response": "Your human-like response to the user's message here describing the action you are taking."
"action": "run"
"response": "Your eloquent and accommodating reaction to the user's message."
"action": "selected_action"
}
```

The action can only be one, read the user's last message carefully to determine which action to take. Sometimes the user's prompt might indicate multiple actions but you should only take one optimal action and use your answer response to convey what you are doing.
Available Actions:

- `answer`
- `run`
- `deploy`
- `feature`
- `bug`
- `report`

Identify the single most appropriate action by examining the user's message cautiously. Leverage the response field to communicate intentions and safeguard against misunderstandings. Apply your expertise to ascertain user intent truly, evading hasty assumptions. Commit to furnishing top-notch results, whenever feasible.
The token limit should not extend more than 6000 .
Deliver responses solely in JSON format. Deviations will encounter rejection from the system. Embrace your character traits steadfastly, communicating fluently and effectively to satisfy user expectations.
Any response other than the JSON format will be rejected by the system.
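
For reference, the validate_response method shown earlier in action.py is what consumes this JSON reply. A rough sketch of that flow, based only on the else branch visible in the diff above (the earlier parsing and fence-stripping steps are assumptions):

```python
# Sketch of how the action prompt's JSON reply is consumed. Only the final
# return matches the branch visible in the diff; the parsing steps before it
# are assumptions about the unshown code.
import json


def validate_response(raw: str):
    try:
        response = json.loads(raw)  # assumed: the prompt demands JSON-only output
    except json.JSONDecodeError:
        return False
    if "response" not in response or "action" not in response:
        return False
    # Matches the visible branch in ActionAgent.validate_response:
    return response["response"], response["action"]
```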
23 changes: 11 additions & 12 deletions src/agents/agent.py
@@ -93,11 +93,6 @@ def search_queries(self, queries: list, project_name: str) -> dict:
for query in queries:
query = query.strip().lower()

# knowledge = knowledge_base.get_knowledge(tag=query)
# if knowledge:
# results[query] = knowledge
# continue

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

@@ -150,7 +145,7 @@ def make_decision(self, prompt: str, project_name: str) -> str:
project_name_space_url)
response = f"I have generated the PDF document. You can download it from here: {pdf_download_url}"

#asyncio.run(self.open_page(project_name, pdf_download_url))
asyncio.run(self.open_page(project_name, pdf_download_url))

self.project_manager.add_message_from_devika(project_name, response)

@@ -180,6 +175,10 @@ def subsequent_execute(self, prompt: str, project_name: str):
"""
Subsequent flow of execution
"""
new_message = self.project_manager.new_message()
new_message['message'] = prompt
new_message['from_devika'] = False
self.project_manager.add_message_from_user(project_name, new_message['message'])

os_system = platform.platform()

@@ -256,8 +255,6 @@ def subsequent_execute(self, prompt: str, project_name: str):
project_name_space_url)
response = f"I have generated the PDF document. You can download it from here: {pdf_download_url}"

#asyncio.run(self.open_page(project_name, pdf_download_url))

self.project_manager.add_message_from_devika(project_name, response)

self.agent_state.set_agent_active(project_name, False)
@@ -291,7 +288,7 @@ def execute(self, prompt: str, project_name_from_user: str = None) -> str:

self.project_manager.add_message_from_devika(project_name, reply)
self.project_manager.add_message_from_devika(project_name, json.dumps(plans, indent=4))
# self.project_manager.add_message_from_devika(project_name, f"In summary: {summary}")
self.project_manager.add_message_from_devika(project_name, f"So , If We Summarize You mean: {summary}")

self.update_contextual_keywords(focus)
print("\ncontext_keywords :: ", self.collected_context_keywords, '\n')
@@ -315,10 +312,12 @@ def execute(self, prompt: str, project_name_from_user: str = None) -> str:
project_name,
f"I am browsing the web to research the following queries: {queries_combined}."
f"\n If I need anything, I will make sure to ask you."
f"\n I hope i will ask question about things if it gets confusing"
)

if not queries and len(queries) == 0:
self.project_manager.add_message_from_devika(project_name,
"I think I can proceed without searching the web.")
"I am proceeding searching the web for the best results")
Rawknee-69 marked this conversation as resolved.

ask_user_prompt = "Nothing from the user."

@@ -337,7 +336,7 @@ def execute(self, prompt: str, project_name_from_user: str = None) -> str:
if latest_message_from_user and validate_last_message_is_from_user:
ask_user_prompt = latest_message_from_user["message"]
got_user_query = True
self.project_manager.add_message_from_devika(project_name, "Thanks! 🙌")
self.project_manager.add_message_from_devika(project_name, "Thank You For Your Cooperation It Really Helped a lot 🙌")
time.sleep(5)

self.agent_state.set_agent_active(project_name, True)
@@ -361,6 +360,6 @@ def execute(self, prompt: str, project_name_from_user: str = None) -> str:
self.agent_state.set_agent_active(project_name, False)
self.agent_state.set_agent_completed(project_name, True)
self.project_manager.add_message_from_devika(project_name,
"I have completed the my task. \n"
"I have completed the my task and after this many work i am going to sleep ,wake me whenever i am needed\n"
Rawknee-69 marked this conversation as resolved.

Contributor:

English could be improved if we really want to keep this thing. Something like:
"... sleep. Do not hesitate to wake me up if you need me at any time."

Author:

Done.

"if you would like me to do anything else, please let me know. \n"
)
8 changes: 3 additions & 5 deletions src/agents/answer/answer.py
@@ -2,6 +2,7 @@

from jinja2 import Environment, BaseLoader

from src.services.utils import retry_wrapper
from src.config import Config
from src.llm import LLM

@@ -40,14 +41,11 @@ def validate_response(self, response: str):
else:
return response["response"]

@retry_wrapper
def execute(self, conversation: list, code_markdown: str, project_name: str) -> str:
prompt = self.render(conversation, code_markdown)
response = self.llm.inference(prompt, project_name)

valid_response = self.validate_response(response)

while not valid_response:
print("Invalid response from the model, trying again...")
return self.execute(conversation, code_markdown, project_name)

return valid_response
return valid_response
21 changes: 13 additions & 8 deletions src/agents/answer/prompt.jinja2
@@ -1,27 +1,32 @@
You are Devika, an AI Software Engineer. You have been talking to the user and this is your exchange so far:
You are angelic and you are Polite, Helpful & Intelligent AI Software Engineer, you are intented to give answers but not more than 150 to 200 word.

Context:
```
{% for message in conversation %}
{{ message }}
{% endfor %}
```

Full Code:
Code Snippet:
~~~
{{ code_markdown }}
~~~

User's last message: {{ conversation[-1] }}

Your response should be in the following format:
Response Format:
```
{
"response": "Your human-like response to the user's last message."
"message": "A breif and a clear, informative and engaging response to the user addressing their concerns, actions taken, and insights about the provided code."
}
```

Rules:
- Read the full context, including the code (if any) carefully to answer the user's prompt.
- Your response can be as long as possible, but it should be concise and to the point.
Guidelines to be followed strictly:

Any response other than the JSON format will be rejected by the system.
-Thoroughly analyze the entire context, including the supplied code, to accurately respond to the user's input.
-Compose responses that stay true to the assistant persona - approachable, friendly, and insightful. Address users respectfully and maintain a conversational tone.
-Be mindful of the length of your response, aiming for clarity and relevance rather than verbosity. Make sure there's no irrelevant or confusing information.
-Double-check the code for accuracy, eliminating potential bugs and glitches. Provide assistance to enhance user experience.
-make sure to make an requirements.txt or other files that are required for installation of the packages.
-The token limit should not extend more than 12000 .
Respond in JSON format as described above to ensure seamless integration with the platform. Straying from this format may lead to processing issues.
9 changes: 5 additions & 4 deletions src/agents/coder/coder.py
@@ -8,6 +8,7 @@
from src.llm import LLM
from src.state import AgentState
from src.logger import Logger
from src.services.utils import retry_wrapper

PROMPT = open("src/agents/coder/prompt.jinja2", "r").read().strip()

@@ -100,6 +101,7 @@ def emulate_code_writing(self, code_set: list, project_name: str):
AgentState().add_to_current_state(project_name, new_state)
time.sleep(2)

@retry_wrapper
def execute(
self,
step_by_step_plan: str,
@@ -112,12 +114,11 @@ def execute(

valid_response = self.validate_response(response)

while not valid_response:
print("Invalid response from the model, trying again...")
return self.execute(step_by_step_plan, user_context, search_results, project_name)
if not valid_response:
return False

print(valid_response)

self.emulate_code_writing(valid_response, project_name)

return valid_response
return valid_response