LlamaIndex is a data framework for your LLM applications
Updated May 11, 2024 - Python
Zep: Long-Term Memory for AI Assistants.
Large Language Models (LLMs) tutorials & sample scripts, ft. langchain, openai, llamaindex, gpt, chromadb & pinecone
Open source guides/codes for mastering deep learning to deploying deep learning in production in PyTorch, Python, Apptainer, and more.
🚀 Introducing 🐪 CAMEL: a game-changing role-playing approach for LLMs and auto-agents like BabyAGI & AutoGPT! Watch two agents 🤝 collaborate and solve tasks together, unlocking endless possibilities in #ConversationalAI, 🎮 gaming, 📚 education, and more! 🔥
RestAI is an open-source AIaaS (AI as a Service) platform built on top of LlamaIndex, Ollama, and HF Pipelines. It supports any public LLM supported by LlamaIndex and any local LLM supported by Ollama, with precise embeddings usage and tuning.
Starter App to Build Your Own App to Query Doc Collections with Large Language Models (LLMs) using LlamaIndex, Langchain, OpenAI and more (MIT Licensed)
LLPhant - A comprehensive PHP Generative AI Framework using OpenAI GPT-4. Inspired by LangChain
Learn to build and deploy AI apps.
LangStream. Event-Driven Developer Platform for Building and Running LLM AI Apps. Powered by Kubernetes and Kafka.
A collection of personally developed projects contributing to the advancement of Artificial General Intelligence (AGI)
This repo showcases how to run a model locally and offline, free of OpenAI dependencies.
Timescale Vector Cookbook. A collection of recipes to build applications with LLMs using PostgreSQL and Timescale Vector.
ChatGPT API Usage using LangChain, LlamaIndex, Guardrails, AutoGPT and more
An AI-powered equity research analyst demo that uses Large Language Models to analyze 10-K filings of well-known NYSE-listed companies.
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Designed for offline use, this RAG application template is based on Andrej Baranovskij's tutorials. It offers a starting point for building your own local RAG pipeline, independent of online APIs and cloud-based LLM services like OpenAI.
A local LlamaIndex RAG app to help researchers quickly navigate research papers
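The local RAG repos above all share the same retrieval core: embed the query, rank stored documents by similarity, and stuff the top hits into the LLM prompt as context. A minimal standard-library-only sketch of that retrieval step is below; the bag-of-words "embedding" is a stand-in for the real embedding models (and the `retrieve` helper a stand-in for the LlamaIndex retriever APIs) that these projects actually use.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': lowercase alphanumeric term counts.
    Real pipelines use a dense embedding model instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, top_k=2):
    """Rank documents by similarity to the query; in a full RAG pipeline
    the top hits become context in the LLM prompt."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "LlamaIndex loads documents and builds a vector index.",
    "Ollama runs local LLMs such as Mistral 7B offline.",
    "PostgreSQL stores relational data.",
]
print(retrieve("run a local LLM offline", docs, top_k=1))
```

Swapping `embed` for a real embedding model and storing the vectors in a vector database (Chroma, Pinecone, Timescale Vector, and so on, as in the repos above) turns this sketch into the usual production layout.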