My personal AI assistant, built on LangChain, GPT4All, and other open-source frameworks
- Locally run GPT via GPT4All and LangChain
- An easy-to-use terminal UI driven by config files (no GUI or web app)
- Customizable chatbots
- Long-term vector memory (via LangChain)
- Internet access
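The long-term vector memory boils down to: embed each piece of text as a vector, store the vectors, and later recall the stored texts whose vectors are closest to the query. Here is a toy stdlib-only sketch of that idea. The real project uses LangChain's vector stores and a proper embedding model; `VectorMemory`, `toy_embed`, and the sample texts below are made up purely for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class VectorMemory:
    """Toy long-term memory: store (embedding, text) pairs, recall the nearest."""

    def __init__(self, embed):
        self.embed = embed   # function: str -> list[float]
        self.items = []      # list of (embedding, text) pairs

    def remember(self, text):
        self.items.append((self.embed(text), text))

    def recall(self, query, k=1):
        q = self.embed(query)
        ranked = sorted(self.items,
                        key=lambda item: cosine_similarity(q, item[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

# Toy embedding: a 26-dim letter-frequency vector.
# A real setup would use a learned embedding model instead.
def toy_embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

memory = VectorMemory(toy_embed)
memory.remember("the user likes hiking")
memory.remember("the capital of France is Paris")
print(memory.recall("what is the capital of France?"))
```

Even with the crude letter-frequency embedding, the query recalls the France fact rather than the hiking one, which is the whole trick: relevance falls out of vector distance, not keyword matching.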
Note: This is really just an experiment and me poking around with this stuff for fun!
This seems to fill a niche that a few quick Google searches didn't turn up an existing implementation of: an internet-connected local GPT chatbot with vector-database memory. That doesn't mean someone else hasn't already done it, and I also have almost no idea what I'm doing.
This is just a hobby project :)
I really don't have time to work on this right now; maybe I'll get back to it someday.
Models are not provided in this repository because I did not create them (obviously).
Here are the steps that worked for me to prepare a model:

- Download a model (like gpt4all-lora-quantized.bin)
- Download tokenizer.model from here: https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer.model
- Convert the model:

  ```
  pyllamacpp-convert-gpt4all gpt4all-lora-quantized.bin tokenizer.model gpt4all-model.bin
  ```
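If you'd rather script the conversion step above from Python, something like the following works. `conversion_command` is a hypothetical helper name, and it assumes `pyllamacpp` is installed so the `pyllamacpp-convert-gpt4all` entry point is on your PATH:

```python
import shutil
import subprocess

# Hypothetical helper: build the conversion command from the steps above.
def conversion_command(model_path, tokenizer_path, out_path):
    return ["pyllamacpp-convert-gpt4all", model_path, tokenizer_path, out_path]

cmd = conversion_command("gpt4all-lora-quantized.bin",
                         "tokenizer.model",
                         "gpt4all-model.bin")
print(" ".join(cmd))

# Only invoke the converter if it is actually installed.
if shutil.which(cmd[0]):
    subprocess.run(cmd, check=True)
```

This is just a thin wrapper around the shell one-liner; the paths are the same example filenames used above, so adjust them to wherever you saved the downloads.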