Do you plan to make it work with local GPU LLMs like quantized WizardLM? #34

Open
juangea opened this issue May 15, 2023 · 4 comments

juangea commented May 15, 2023

That's the question: I can't use OpenAI, and I would love to run BabyAGI on the GPU of my local computer with models like WizardLM or GPT4-x-Vicuna, both quantized.

Do you plan to make a local version of this?

Thanks for this!

miurla (Owner) commented May 16, 2023

BabyAGI and Local LLMs do seem to be a good match! I'd love to support it.

I've seen an open-source project called react-llm.
https://github.com/r2d4/react-llm

I have limited knowledge about Local LLMs, but would implementing this help you achieve your goals? I would appreciate it if you could let me know.

juangea (Author) commented May 17, 2023

It looks interesting, but I'm not sure how performant it would be. Also, in that case the GPU executing the AI would be the local one; if I want to share the AI with other computers on my network, I'm not sure how that would work.

Out of curiosity, why not integrate it with Oobabooga as an extension?
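For what it's worth, Oobabooga's text-generation-webui can expose an OpenAI-compatible API through its extension system, so one possible approach is to keep the existing OpenAI-style calls and just point them at the local server. A minimal sketch of what that might look like is below; the base URL, port, and model name are assumptions and depend entirely on the local setup:

```ts
// Minimal sketch: talk to a local model through an OpenAI-compatible endpoint
// (e.g. text-generation-webui's API extension) instead of api.openai.com.
// Assumptions: the local server listens on http://localhost:5000/v1 and a
// quantized WizardLM model is loaded; adjust both for your setup.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:5000/v1", // local server, not OpenAI
  apiKey: "sk-local",                  // dummy key; a local server typically ignores it
});

async function runTask(objective: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "wizardlm-13b-gptq", // whatever model is loaded locally (assumption)
    messages: [
      { role: "system", content: "You are a task-driven autonomous agent." },
      { role: "user", content: objective },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}

runTask("Create a task list for researching local LLMs").then(console.log);
```

Because the request shape is the same, only the base URL changes, and pointing that URL at another machine would also cover the "share the AI with other computers on my network" case.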

miurla (Owner) commented May 17, 2023

> if I want to share the AI with some computers in my network I'm not sure how that could work.

I see, so there is such a use case.
It may be difficult to cover such cases from the outset.

orophix commented Jun 7, 2023

There are GPU-enabled options.
Paying "Open"AI for every query is not economically feasible for non-corporations.
LocalAI is a non-GPU option that would allow this feature to be added, but running LLMs on a CPU is like playing Counter-Strike on dial-up. So don't bother with non-GPU implementations.
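For reference, LocalAI (like most local inference servers) exposes the same OpenAI-style REST interface, so a plain HTTP call also works. A rough sketch, assuming LocalAI is running on its default port 8080 and a model named "wizardlm" has been configured (both are assumptions):

```ts
// Rough sketch: call a LocalAI instance over its OpenAI-compatible REST API.
// Assumptions: LocalAI listens on http://localhost:8080 (its default) and a
// model named "wizardlm" is configured; adjust both for your setup.
async function chat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "wizardlm",
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });
  if (!res.ok) throw new Error(`LocalAI request failed: ${res.status}`);
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? "";
}

chat("List three next tasks for the objective: research local LLMs").then(console.log);
```

A GPU-backed server that speaks the same API could be swapped in behind the same URL without further code changes.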
