Model request: new Mistral 7B with 32K context #349
I realized that perhaps my request sounds odd, since Web-LLM is presumably designed to let you add any new model yourself. Could I theoretically get Mistral with 32K context running on my own? The issue for me is that I'm a n00b and don't have a lot of knowledge in this area. I did try, but so far I've resorted to integrating Web-LLM into a project by copying the online chat example and hacking the hell out of that. Which means I'm limited to the models available in the demo. So my request to add the model effectively boils down to: could the model become available in the online demo? :-)
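For reference, adding a model yourself roughly amounts to pointing web-llm at MLC-compiled weights through a custom app config rather than relying on the demo's built-in list. Below is a minimal sketch, assuming web-llm's `CreateMLCEngine` API; the `my-org` weights URL, the `.wasm` model-lib URL, and the model ID are placeholders, not real published artifacts:

```ts
import { CreateMLCEngine, prebuiltAppConfig } from "@mlc-ai/web-llm";

async function main() {
  // Extend the prebuilt model list with a custom record.
  // URLs and IDs here are hypothetical placeholders: you would point
  // them at your own MLC-compiled weights and model library.
  const appConfig = {
    ...prebuiltAppConfig,
    model_list: [
      ...prebuiltAppConfig.model_list,
      {
        model: "https://huggingface.co/my-org/Mistral-7B-Instruct-v0.2-q4f16_1-MLC",
        model_id: "Mistral-7B-Instruct-v0.2-q4f16_1-MLC",
        model_lib: "https://example.com/Mistral-7B-Instruct-v0.2-q4f16_1-webgpu.wasm",
      },
    ],
  };

  // Download (and cache) the weights, then initialize the WebGPU runtime.
  const engine = await CreateMLCEngine("Mistral-7B-Instruct-v0.2-q4f16_1-MLC", {
    appConfig,
  });

  // OpenAI-style chat completion against the loaded model.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Summarize this document: ..." }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

Note that the exact field names in the model record have changed across web-llm versions, so it's worth checking the `ModelRecord` type in the version you depend on.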
Recently, a new version of Mistral 7B was announced with a much larger 32K context window:
https://www.reddit.com/r/LocalLLaMA/comments/1blzrfp/new_mistral_model_announced_7b_with_32k_context/
This model could greatly improve the ability to create good, coherent summaries of documents. I hope it can become part of the "Web-LLM suite".