llama
Here are 863 public repositories matching this topic...
Langchain-Chatchat (formerly langchain-ChatGLM): a local knowledge-base question-answering app built with Langchain and LLMs such as ChatGLM.
Updated Apr 28, 2024 - Python
Unify Efficient Fine-Tuning of 100+ LLMs
Updated Apr 28, 2024 - Python
🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.
Updated Apr 28, 2024 - C++
A high-throughput and memory-efficient inference and serving engine for LLMs
Updated Apr 28, 2024 - Python
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment.
Updated Apr 24, 2024 - Python
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Updated Apr 27, 2024 - Python
👑 Easy-to-use and powerful NLP and LLM library with a 🤗 Awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including 🗂 Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, and 💌 Sentiment Analysis.
Updated Apr 28, 2024 - Python
Llama Chinese community. A Llama 3 online demo and fine-tuned models are now available, with the latest Llama 3 learning resources collected in real time; all code has been updated for Llama 3. Building the best Chinese Llama model, fully open source and commercially usable.
Updated Apr 24, 2024 - Python
Low-code framework for building custom LLMs, neural networks, and other AI models
Updated Apr 25, 2024 - Python
Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPUs. Supports default and custom datasets for applications such as summarization and Q&A, and a number of candidate inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama 3 for WhatsApp & Messenger.
Updated Apr 28, 2024 - Jupyter Notebook
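The PEFT methods this entry refers to center on low-rank adapters such as LoRA: the pretrained weight matrix stays frozen and only a small pair of matrices is trained. A minimal plain-Python sketch of that idea (the dimensions, values, and `lora_forward` helper are illustrative assumptions, not code from the repository):

```python
def matvec(m, v):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    # y = W x + (alpha / r) * B (A x); W is frozen, only A and B are trained.
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Frozen 2x2 weight with a rank-1 adapter: A projects down (1x2), B back up (2x1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]
B = [[0.5], [0.5]]
x = [2.0, 4.0]
print(lora_forward(W, A, B, x))  # base [2.0, 4.0] plus 0.5 * 6.0 each -> [5.0, 7.0]
```

The appeal for fine-tuning is that for a d×d weight, the adapter adds only 2·d·r trainable parameters, which is why rank r is kept small relative to d.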
Run any open-source LLM, such as Llama 2 or Mistral, as an OpenAI-compatible API endpoint in the cloud.
Updated Apr 24, 2024 - Python
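"OpenAI-compatible" means the server speaks the OpenAI chat-completions wire format, so any OpenAI client can simply be pointed at it. A minimal sketch of the request body such an endpoint expects (the local URL and model name are illustrative assumptions; the JSON schema is the standard chat-completions shape):

```python
import json

# Hypothetical local server URL; real deployments expose their own address.
BASE_URL = "http://localhost:3000/v1/chat/completions"

payload = {
    "model": "llama2",  # whichever model the server is actually serving
    "messages": [
        {"role": "user", "content": "Summarize LoRA in one sentence."},
    ],
    "temperature": 0.7,
}

body = json.dumps(payload)
# POST `body` to BASE_URL with Content-Type: application/json, e.g. via
# urllib.request, or use the official openai client with base_url set to
# the local server.
print(body)
```

Because only the base URL changes, code written against the hosted OpenAI API can usually switch to a self-hosted model without other modifications.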
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
Updated Apr 27, 2024 - Python
Fine-tune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory.
Updated Apr 26, 2024 - Python