Awesome multi-modal large language paper/project, collections of popular training strategies, e.g., PEFT, LoRA.
Updated Mar 31, 2024
Master Thesis for M.Sc. Business Education - Pre-Trained Denoising Autoencoders Long Short-Term Memory Networks as probabilistic Models for Estimation of Distribution Genetic Programming
[NeurIPS 2023] Rewrite Caption Semantics: Bridging Semantic Gaps for Language-Supervised Semantic Segmentation
Using SqueezeNet to classify video frames coming from a webcam or a smartphone camera
The official GitHub page for the survey paper "Self-Supervised learning for Videos: A survey"
Pre-training of Deep Bidirectional Transformers for Language Understanding
A description of the transfer program Ad FDND -> Ba CMD. NB: private until the examination board gives its approval!
Code for the ICLR 2021 Paper "In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness"
PyTorch code for Finding in NAACL 2022 paper "Probing the Role of Positional Information in Vision-Language Models".
Deep reference priors (ICML22)
Pre-Training and Fine-Tuning transformer models using PyTorch and the Hugging Face Transformers library. Whether you're delving into pre-training with custom datasets or fine-tuning for specific classification tasks, these notebooks offer explanations and code for implementation.
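The fine-tuning workflow such notebooks cover typically freezes a pretrained backbone and trains a new task head. A minimal sketch of that pattern in plain PyTorch, using a toy encoder as a hypothetical stand-in for a pretrained Hugging Face transformer:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained encoder; in practice this
# would be a Hugging Face model such as BERT loaded from a checkpoint.
class TinyEncoder(nn.Module):
    def __init__(self, in_dim=8, hidden=16):
        super().__init__()
        self.body = nn.Linear(in_dim, hidden)

    def forward(self, x):
        return torch.relu(self.body(x))

class Classifier(nn.Module):
    def __init__(self, encoder, hidden=16, num_classes=2):
        super().__init__()
        self.encoder = encoder        # "pretrained" part
        self.head = nn.Linear(hidden, num_classes)  # new task head

    def forward(self, x):
        return self.head(self.encoder(x))

torch.manual_seed(0)
model = Classifier(TinyEncoder())

# Freeze the encoder so only the classification head is updated,
# a common lightweight fine-tuning strategy.
for p in model.encoder.parameters():
    p.requires_grad = False

opt = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch in place of a real dataset.
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

With a real checkpoint, the same loop applies after swapping `TinyEncoder` for `AutoModel.from_pretrained(...)`; unfreezing the encoder with a smaller learning rate gives full fine-tuning instead.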
Efficient Network Traffic Classification via Pre-training Unidirectional Mamba
Source codes and datasets for paper "Zero-1-to-3: Domain-level Zero-shot Cognitive Diagnosis via One Batch of Early-bird Students towards Three Diagnostic Objectives" (AAAI 2024)
Paper list of sign language, including sign language recognition (SLR), sign language translation (SLT), and other interesting work. Quick-start your awesome work with us!! 🤟🤟🤟
Code for "On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models"
This project is dataset and model checkpoints for the paper "Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora".
Maximize Efficiency, Elevate Accuracy: Slash GPU Hours by Half with Efficient Pre-training!
Methodology to pre-train and evaluate an LLM for the Portuguese language
This repo is the code of paper "Model-Aware Contrastive Learning: Towards Escaping Dilemmas" (ICML'2023).
Is ID embedding necessary for multimodal recommender system?