Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Updated May 21, 2024 - Python
LLM (Large Language Model) fine-tuning
Fully-featured, beautiful web interface for Ollama LLMs - built with NextJS. Deploy with a single click.
A library for making RepE control vectors
Collecting data for building Lucknow's first LLM
Chrome Extension to Summarize or Chat with Web Pages/Local Documents Using locally running LLMs. Keep all of your data and conversations private. 🔐
AubAI brings you on-device gen-AI capabilities, including offline text generation and more, directly within your app.
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Finetune mistral-7b-instruct for sentence embeddings
Turnkey self-hosted offline transcription and diarization service with LLM summary
AI-driven Threat modeling-as-a-Code (TaaC-AI)
Instruct LLMs for flat and nested NER. Fine-tuning Llama and Mistral models for instruction named entity recognition. (Instruction NER)
An LLM interface (chat bot) implemented in pure Rust using HuggingFace/Candle over Axum Websockets, an SQLite Database, and a Leptos (Wasm) frontend packaged with Tauri!
Small and Efficient Mathematical Reasoning LLMs
A Gradio web UI for Large Language Models. Supports LoRA/QLoRA fine-tuning, RAG (retrieval-augmented generation), and chat
Chat With RTX Python API
Build LLM-powered robots in your garage with MachinaScript For Robots!
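Several of the projects above fine-tune or prompt Mistral-7B-Instruct. As a point of reference, a minimal sketch of its chat prompt template (assuming the `[INST]`-delimited format used by the v0.1/v0.2 instruct checkpoints) might look like:

```python
def build_mistral_prompt(turns):
    """Format (user, assistant) turns into the [INST]-style chat template
    used by Mistral-7B-Instruct. The final turn may have no assistant
    reply yet (pass None) when asking the model to generate one."""
    prompt = "<s>"
    for user, assistant in turns:
        # Each user message is wrapped in [INST] ... [/INST]
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            # Completed assistant replies are closed with </s>
            prompt += f" {assistant}</s>"
    return prompt

print(build_mistral_prompt([
    ("What is RAG?", "Retrieval-augmented generation."),
    ("Give an example.", None),
]))
```

In practice, `tokenizer.apply_chat_template` from Hugging Face `transformers` handles this formatting for you; the sketch above only illustrates the string layout the model expects.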