Custom commands need to be written twice. #1272
Comments
@eladamittai I notice that there is a colon rather than a quote in your model name. Assuming that's not what's preventing the config from loading, the first thing I would check is the prompt logs: https://docs.continue.dev/troubleshooting#llm-prompt-logs. This will tell us whether the instructions are being left out entirely, or whether the problem is something else. I would also recommend removing your system prompt. This is a detail particular to DeepSeek models: they automatically get a system prompt very similar to yours, and using a different one might be causing the strange behavior.
@sestinj hey, thanks for the quick reply! 🙏
Got it! I think this is related to a fix I made just yesterday. I believe it should be available in the latest pre-release, 0.9.130 |
@sestinj great! I'll check it out. Thanks!
It works! |
Before submitting your bug report
Relevant environment info
Description
I've written a few slash commands, such as /logs and /ut, but none of them work: the model just describes the code instead. In the same chat, the first time I use a command I receive a code explanation, and only when I send it again does it work. The same prompt and model work fine in the JetBrains extension.
To reproduce
Here is the config:
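The config itself was not captured in this snapshot. For reference, a minimal custom-command entry in Continue's `config.json` looks roughly like the sketch below; the command name, description, and prompt text here are hypothetical placeholders, not the reporter's actual values:

```json
{
  "customCommands": [
    {
      "name": "ut",
      "description": "Write unit tests for the highlighted code",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code, covering edge cases."
    }
  ]
}
```

With an entry like this, typing /ut in the chat input should expand into the configured prompt rather than a generic code explanation.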
Log output
No response