Easy-Adapter

Code for AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

License: MIT | Built with Hugging Face Transformers

arXiv: https://arxiv.org/abs/2205.00305 (Findings of NAACL 2022)

This code demonstrates how to fine-tune a BERT model from the Hugging Face Transformers library using adapters.
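
The core idea of AdapterBias is a token-dependent representation shift: each adapter learns a single shift vector that is scaled per token by a learned scalar weight. Below is a minimal PyTorch sketch of that idea (illustrative only; the module and variable names are assumptions, not the repository's exact code):

```python
import torch
import torch.nn as nn

class AdapterBiasShift(nn.Module):
    """Sketch of an AdapterBias-style token-dependent representation shift."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Task-specific shift vector, shared by all tokens.
        self.v = nn.Parameter(torch.zeros(hidden_size))
        # Linear layer producing one scalar weight per token.
        self.alpha = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        weights = self.alpha(hidden_states)        # (batch, seq_len, 1)
        return hidden_states + weights * self.v    # per-token scaling of one shared vector
```

During fine-tuning, the pre-trained BERT weights are typically frozen and only the adapter parameters (plus the task head) are updated, which is what makes the approach parameter-efficient.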

Fine-tuning with adapters on the GLUE benchmark

bash run_glue_adapter.sh

Fine-tuning with adapters on the IMDb task

python run_imdb.py

If you use this code in your research, please cite the following papers:

@article{fu2022adapterbias,
  title={AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks},
  author={Fu, Chin-Lun and Chen, Zih-Ching and Lee, Yun-Ru and Lee, Hung-yi},
  journal={arXiv preprint arXiv:2205.00305},
  year={2022}
}

@inproceedings{chen2023exploring,
  title={Exploring efficient-tuning methods in self-supervised speech models},
  author={Chen, Zih-Ching and Fu, Chin-Lun and Liu, Chih-Ying and Li, Shang-Wen Daniel and Lee, Hung-yi},
  booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
  pages={1120--1127},
  year={2023},
  organization={IEEE}
}

This repository provides a practical example of using adapters to fine-tune a BERT model, and the code can be adapted to other pre-trained models and NLP tasks; see the sketch below for one common pattern.
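
When adapting the approach to another Hugging Face model, a common pattern is to freeze the pre-trained backbone and train only the adapter and classifier parameters. A hedged sketch (the model name and the point where adapters are attached are illustrative assumptions, not the repository's API):

```python
from transformers import AutoModelForSequenceClassification

# Load any encoder-style model; "bert-base-uncased" is just an example.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze the pre-trained backbone so only adapter/classifier weights train.
for param in model.base_model.parameters():
    param.requires_grad = False

# Insert AdapterBiasShift modules into each transformer layer here (see the
# sketch above), then pass only the trainable parameters to the optimizer.
trainable_params = [p for p in model.parameters() if p.requires_grad]
```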
