LlamaIndex is a data framework for your LLM applications
Unify Efficient Fine-Tuning of 100+ LLMs
Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
Magick is a cutting-edge toolkit for a new kind of AI builder. Make Magick with us!
Implementations of various ML tasks on the Kaggle platform with GPUs.
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
Train Llama 2 & 3 on the SQuAD v2 task as an example of how to specialize a generalized (foundation) model.
Fine-tuning a transformer model (DistilBERT) for text classification using the transformers Trainer API
WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
OneTrainer is a one-stop solution for all your stable diffusion training needs.
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
A PyTorch Lightning extension that accelerates and enhances foundation model experimentation with flexible fine-tuning schedules.
Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory
This repo contains a curated list of channels and sources for learning about LLMs
Open source data anonymization and synthetic data orchestration for developers. Create high fidelity synthetic data and sync it across your environments.
Stable Diffusion Fine-Tuning techniques overview.
Structure-aware adapter fine-tuning for PLMs, with high training speed and strong performance.
Distributed ML Training and Fine-Tuning on Kubernetes
Library for handling atomistic graph datasets, focusing on transformer-based implementations. It provides utilities for training various models, experimenting with different pre-training tasks, and a suite of pre-trained models with Hugging Face integrations.
Notebooks to fine-tune `xlm-roberta-base` and `bert-small-amharic` models on an Amharic text classification dataset using the transformers library