This repository contains one of my cool projects, which I created during my college's MINeD hackathon.
Updated May 25, 2024 - Python
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
This project uses LLMs to generate music from text by understanding prompts, creating lyrics, determining genre, and composing melodies. It harnesses LLM capabilities to create songs based on text inputs through a multi-step approach.
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Fine-tuning of the Flan-T5 LLM for text classification
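For context on the fine-tuning setup described above: T5-family models such as Flan-T5 treat every task as text-to-text, so classification is usually cast as mapping a prompted input string to a label string. A minimal, dependency-free sketch of that preprocessing step (the task prefix and label names here are illustrative assumptions, not taken from any specific repository):

```python
# Cast a classification example to the text-to-text format used by
# T5-family models: input -> prompted string, label -> target text.
# The "classify sentiment:" prefix and the label names are assumptions
# for illustration only.

LABELS = {0: "negative", 1: "positive"}

def to_text2text(text, label_id):
    """Map a (text, label_id) pair to (input_text, target_text)."""
    return f"classify sentiment: {text}", LABELS[label_id]

src, tgt = to_text2text("The plot was thin but the acting was great.", 1)
# src == "classify sentiment: The plot was thin but the acting was great."
# tgt == "positive"
```

The resulting string pairs would then be tokenized and passed to a seq2seq trainer; the model learns to generate the label text rather than predict a class index.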
LLM projects
The official fork of THoR Chain-of-Thought framework, enhanced and adapted for Emotion Cause Analysis (ECAC-2024)
Built a dialogue summarization model using open-source LLMs such as FLAN-T5, fine-tuned on the DialogSum Hugging Face dataset
Demonstration of LLM techniques such as prompt engineering, full fine-tuning, and PEFT (LoRA)
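As background on the PEFT (LoRA) technique mentioned above: instead of updating a full weight matrix W, LoRA trains two small low-rank matrices A and B and adapts the layer as W + (alpha / r) * B @ A. A minimal, dependency-free sketch of that merge step (all names, shapes, and values here are illustrative):

```python
# Minimal sketch of the LoRA idea: the frozen base weight W (d_out x d_in)
# is adapted as W + (alpha / r) * B @ A, where A is (r x d_in) and
# B is (d_out x r). Pure-Python matrices (lists of rows) for illustration.

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_merge(W, A, B, alpha, r):
    """Return the merged weight W + (alpha / r) * (B @ A)."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Toy example: d_out = d_in = 2, rank r = 1.
W = [[1.0, 0.0],
     [0.0, 1.0]]      # frozen base weight
A = [[1.0, 2.0]]      # trainable, r x d_in
B = [[0.5], [0.0]]    # trainable, d_out x r
merged = lora_merge(W, A, B, alpha=2.0, r=1)
# merged == [[2.0, 2.0], [0.0, 1.0]]
```

Because only A and B are trained (r is much smaller than the matrix dimensions in practice), the number of trainable parameters drops sharply, which is what makes LoRA "parameter efficient."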
MTP-FlanT5-SBERT-Model-for-NewsQA-and-Teacher-Student-Model
Performing prompt engineering on a dialogue summarization task using Flan-T5 and the dialogsum dataset: exploring how different prompts affect the model's output, and comparing zero-shot and few-shot inference.
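The zero-shot vs. few-shot comparison described above comes down to prompt construction: a zero-shot prompt contains only the instruction and the dialogue, while a few-shot prompt prepends worked examples. A small sketch of such a prompt builder (the template wording and the toy dialogues are illustrative assumptions, not taken from dialogsum):

```python
# Build summarization prompts in the instruction style commonly used
# with FLAN-T5. With no examples the prompt is zero-shot; passing
# (dialogue, summary) pairs makes it few-shot. Template wording is
# an illustrative assumption.

def build_prompt(dialogue, examples=()):
    """Return a summarization prompt; `examples` makes it few-shot."""
    parts = []
    for ex_dialogue, ex_summary in examples:
        parts.append(f"Dialogue:\n{ex_dialogue}\n\nSummary:\n{ex_summary}\n")
    parts.append(f"Dialogue:\n{dialogue}\n\nSummary:\n")
    return "\n".join(parts)

zero_shot = build_prompt("#Person1#: Hi, how are you?\n#Person2#: Fine, thanks.")
few_shot = build_prompt(
    "#Person1#: Hi, how are you?\n#Person2#: Fine, thanks.",
    examples=[("#Person1#: See you later.\n#Person2#: Bye.",
               "Two people say goodbye.")],
)
```

The model's generation is then conditioned on whichever prompt is passed in; comparing outputs across the two variants is the experiment the project describes.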
[Preprint] Learning to Filter Context for Retrieval-Augmented Generation
This repository contains notebook files on Large Language Models (LLMs), covering topics such as fine-tuning, prompt engineering, PEFT (Parameter-Efficient Fine-Tuning), and PPO (Proximal Policy Optimization).
This repository contains exercises on core tasks that modern generative AI concepts are built on. In particular, it focuses on three coding tasks involving Large Language Models; further details are given in the README.md file.
Rethinking Negative Instances for Generative Named Entity Recognition
NLU_NLG Winter Semester
Dialogue Summary LLM - FLAN-T5: an implementation of the Flan-T5 LLM to summarize dialogues. Prompt engineering, fine-tuning with PEFT, and fine-tuning with RL (PPO) are explored in this project.
The LLM-based medical chatbot, powered by the Llama-2-7b-chat-hf model from Meta and implemented within the Langchain framework, offers personalized healthcare support.
This repository provides a T5 training setup with which users can train a model based on any T5 model version.