Issue with Fusing Models - Output is Bad #757
Comments
Try increasing the scale/alpha factor in LoRA; it may help.
I'll try this in a few days! Thank you!
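For context on the scale/alpha suggestion above: in standard LoRA, fusing folds the adapter into the base weights as W' = W + (alpha / r) · B · A, so alpha directly controls how strongly the adapter contributes. A minimal pure-Python sketch (all names illustrative, not mlx_lm internals):

```python
# Minimal sketch of how the LoRA scale/alpha factor enters fused weights:
# W_fused = W + (alpha / rank) * B @ A. Pure Python, no mlx required.
# All names here are illustrative, not actual mlx_lm internals.

def matmul(a, b):
    # Plain-Python matrix multiply for small lists-of-lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def fuse_lora(W, A, B, alpha, rank):
    scale = alpha / rank  # larger alpha -> stronger adapter influence
    delta = matmul(B, A)  # low-rank update, shape (out, in)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy 2x2 example with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # shape (out, rank)
A = [[0.5, 0.5]]     # shape (rank, in)
print(fuse_lora(W, A, B, alpha=2.0, rank=1))
# -> [[2.0, 1.0], [2.0, 3.0]]
```

With alpha=2.0 the rank-1 update is doubled before being added to W; if fused outputs look too weak (or nonsensical because the adapter barely registers), this is the knob the comment above is pointing at.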
Hi,
When I train my model using the code below (note: this was all done in a Jupyter notebook) and then load it with the `load` function, passing the adapter path, I get nicely generated outputs that solve annoyingly verbose math problems, like so (I called my model LlaMATH).
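The training code referred to above was not captured in this page. For reference, a hedged sketch of the typical mlx_lm LoRA training invocation (model name, data path, and iteration count are placeholders; flag names vary across mlx_lm versions — older releases such as 0.12.x save a single `adapters.npz` and use `--adapter-file`, newer ones use `--adapter-path`):

```shell
# Hedged sketch, not the author's notebook code: LoRA-train with the
# mlx_lm CLI. All values below are placeholders for illustration.
python -m mlx_lm.lora \
    --model mistralai/Mistral-7B-v0.1 \
    --train \
    --data ./data \
    --iters 600
```

On the loading side, `mlx_lm.load` accepts an adapter argument (named `adapter_file` or `adapter_path` depending on the release) so the base model and adapters can be combined at generation time without fusing.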
However, when I fuse the model using the code below and then load the fused model, either from Hugging Face or locally, the outputs are bad.
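The fusing code referred to above was also not captured. A hedged sketch of the usual mlx_lm fuse invocation (flag names are assumptions and differ between versions):

```shell
# Hedged sketch, not the author's code: fold LoRA adapters into the base
# weights and write a standalone model. Paths and model name are
# placeholders; older mlx_lm releases expect --adapter-file adapters.npz.
python -m mlx_lm.fuse \
    --model mistralai/Mistral-7B-v0.1 \
    --adapter-file adapters.npz \
    --save-path ./fused_model
```

One commonly reported cause of degraded fused outputs is a mismatch between the base model used for fusing and the one the adapters were trained against (e.g. a quantized vs. unquantized checkpoint), so it is worth confirming both steps point at the exact same model.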
Using the fused model
This gives me some nonsense. Any help would be great. I am using Python 3.11 and `mlx_lm` version 0.12.1. Thank you, I appreciate any advice or help! :D