localllm
Here are 21 public repositories matching this topic...
Reads your local files and answers your queries.
Updated Feb 14, 2024 - Python
Local AI Open Orca For Dummies is a user-friendly guide to running Large Language Models locally. Simplify your AI journey with easy-to-follow instructions and minimal setup. Perfect for developers tired of complex processes!
Updated Mar 5, 2024 - Python
A generalized information-seeking agent system with Large Language Models (LLMs).
Updated Apr 19, 2024 - Python
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
Updated Apr 19, 2024 - Python
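KVQuant's premise is that compressing the attention KV cache lets far longer contexts fit in memory. As a rough illustration of the underlying idea only — not KVQuant's actual per-channel, non-uniform scheme — uniform low-bit quantization of a cached vector can be sketched as:

```python
def quantize(vec, bits=4):
    """Uniformly quantize a list of floats to signed integers of the
    given bit width. Toy sketch: real KV-cache quantizers use
    per-channel, non-uniform, outlier-aware schemes."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = max(abs(v) for v in vec) / qmax or 1.0
    q = [round(v / scale) for v in vec]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

# Example: a (made-up) cached key vector
keys = [0.12, -0.9, 0.4, 0.05]
q, s = quantize(keys, bits=4)
approx = dequantize(q, s)
```

Each stored value shrinks from a 16- or 32-bit float to a few bits plus one shared scale, at the cost of a reconstruction error bounded by half the scale.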
ScrAIbe Assistant is designed to leverage Whisper for precise audio processing and local LLMs via Ollama for efficient summarization. This tool is perfect for tasks such as taking notes from team meetings or lectures, offering a secure environment where no data—be it text, audio, or otherwise—leaves your local machine.
Updated Apr 21, 2024 - Python
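The summarization step in a pipeline like this ultimately calls Ollama's local REST API (`POST /api/generate`). A hedged sketch of just that step — the model name, prompt wording, and transcript are assumptions for illustration, not ScrAIbe Assistant's actual code:

```python
def summarization_payload(transcript: str, model: str = "llama3") -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    Sketch only: the model name and prompt are illustrative
    assumptions, not the project's real pipeline.
    """
    prompt = ("Summarize the following meeting transcript "
              "as concise bullet points:\n\n" + transcript)
    return {"model": model, "prompt": prompt, "stream": False}

payload = summarization_payload("We agreed to ship the beta on Friday.")

# Sending it requires a running local Ollama server, so it is left
# commented out here:
# import json, urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# summary = urllib.request.urlopen(req)
```

Because the model runs behind `localhost`, the transcript never leaves the machine, which is the privacy property the description emphasizes.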
This project builds a local retrieval-augmented generation (RAG) pipeline from scratch, connects it to a local LLM, and deploys it as a chatbot via Gradio.
Updated Apr 25, 2024 - Jupyter Notebook
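The retrieval half of such a pipeline can be sketched with nothing more than bag-of-words cosine similarity; the documents and prompt template below are invented for illustration, and real pipelines typically use embedding models instead:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

# Invented example corpus and query
docs = [
    "Ollama serves local LLMs over a REST API.",
    "Gradio builds quick web UIs for Python functions.",
]
question = "how do I serve a local llm"
context = retrieve(question, docs)[0]
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

The assembled `prompt` is what a RAG pipeline would then pass to the local LLM, grounding its answer in the retrieved context.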
An experimental frontend and CLI for a local web-search assistant built on ollama and llama.cpp, with a focus on being extremely lightweight and easy to run. The goal is to provide something along the lines of a minimalist Perplexity.
Updated Apr 26, 2024 - Python
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
Updated May 2, 2024 - Python
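The "dense-and-sparse" idea is to keep a handful of outlier weights in full precision (the sparse part) and quantize the well-behaved remainder (the dense part). Below is a toy sketch of that decomposition only, not SqueezeLLM's actual sensitivity-aware, non-uniform method:

```python
def dense_and_sparse(weights, bits=3, outlier_frac=0.05):
    """Split weights into full-precision outliers (sparse) plus a
    uniformly quantized remainder (dense). Illustrative only."""
    n_out = max(1, int(len(weights) * outlier_frac))
    # The largest-magnitude weights become the sparse part
    order = sorted(range(len(weights)),
                   key=lambda i: abs(weights[i]), reverse=True)
    sparse = {i: weights[i] for i in order[:n_out]}
    dense = [0.0 if i in sparse else w for i, w in enumerate(weights)]
    qmax = 2 ** (bits - 1) - 1
    scale = (max(abs(w) for w in dense) / qmax) or 1.0
    q = [round(w / scale) for w in dense]
    return q, scale, sparse

def reconstruct(q, scale, sparse):
    out = [v * scale for v in q]
    for i, w in sparse.items():
        out[i] = w
    return out

# Invented weights with one obvious outlier (5.0)
weights = [0.1, -0.2, 5.0, 0.05, -0.15]
q, scale, sparse = dense_and_sparse(weights, bits=3, outlier_frac=0.2)
approx = reconstruct(q, scale, sparse)
```

Removing the outlier shrinks the quantization scale for everything else, so the dense part is represented much more accurately than if the whole tensor shared one range.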
Run GGUF LLM models in the latest version of TextGen-webui.
Updated May 9, 2024 - Jupyter Notebook
MVP of an idea that uses multiple LLMs to simulate and play D&D (supports local LLMs via Ollama as well as the together.ai API).
Updated May 17, 2024 - Python