Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
LightSeq: A High Performance Library for Sequence Processing and Generation
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Self-contained Machine Learning and Natural Language Processing library in Go
Build and train state-of-the-art natural language processing models using BERT
Code for the AAAI 2022 paper "Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification"
Multilingual/multi-domain datasets, models, and a Python library for question generation.
Cybertron: the home planet of the Transformers in Go
Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper
MinT: Minimal Transformer Library and Tutorials
NAACL 2021 - Progressive Generation of Long Text
A tool to automatically summarize documents abstractively using the BART or PreSumm machine learning models.
Code for the EMNLP 2021 paper "Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization"
The first large-scale natural language generation benchmark for Indonesian, Sundanese, and Javanese. We provide multiple downstream tasks, pre-trained IndoGPT and IndoBART models, and starter code! (EMNLP 2021)
Abstractive text summarization by fine-tuning seq2seq models.
An English-to-Cantonese machine translation model
Source code and dataset for "Call for Customized Conversation: Customized Conversation Grounding Persona and Knowledge"