
Humor detection

Paper | Slides | Datasets | Huggingface

This repository contains the code for the article "You Told Me That Joke Twice: A Systematic Investigation of Transferability and Robustness of Humor Detection Models".

Updates

  • Update README.md 21/12/2023
  • Update HuggingFace links 18/12/2023
  • Fixing links 15/12/2023

Data

Access to data and processing functions is available through our library hri_tools.
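
A minimal sketch of what loading a dataset through hri_tools might look like is shown below; the import path, class name, dataset identifier, and methods are assumptions for illustration, not the verified hri_tools API, so please check the library's documentation for the actual names.

# Hypothetical sketch only: the names below are assumptions, not the
# confirmed hri_tools API; consult the hri_tools documentation.
from hri_tools import HumorDataset  # assumed entry point

dataset = HumorDataset(name="one_liners")  # assumed dataset identifier
dataset.load()                             # assumed method that downloads/loads the data
print(dataset.get_train())                 # assumed accessor for the train split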

Models

Our models are available on HuggingFace; we have published all 50 trained models from the paper. If you need a quick solution for humor classification, see the usage example below.

Usage example:

from transformers import RobertaTokenizerFast
from transformers import RobertaForSequenceClassification
from transformers import TextClassificationPipeline

# Load the fine-tuned humor classifier and the matching RoBERTa tokenizer
model = RobertaForSequenceClassification.from_pretrained("Humor-Research/humor-detection-comb-23")
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", max_length=512, truncation=True)

# Build a classification pipeline that truncates inputs longer than 512 tokens
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, max_length=512, truncation=True)

# Prints a list with one {"label": ..., "score": ...} dict per input text
print(pipe(["That joke is so funny"]))
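
The pipeline returns one dictionary with label and score fields per input. A short sketch for turning that output into a boolean humor flag follows; the assumption that LABEL_1 denotes the humorous class comes from the default label naming and should be verified against the model card.

# Interpreting the pipeline output; assumes LABEL_1 means "humorous",
# which should be verified against the model's config / model card.
texts = ["That joke is so funny", "The quarterly report is attached"]
for text, result in zip(texts, pipe(texts)):
    is_humorous = result["label"] == "LABEL_1"  # assumed label mapping
    print(f"{text!r}: humorous={is_humorous}, score={result['score']:.3f}")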

Citation

Please cite our article as follows:

@inproceedings{baranov-etal-2023-told,
    title = "You Told Me That Joke Twice: A Systematic Investigation of Transferability and Robustness of Humor Detection Models",
    author = "Baranov, Alexander  and
      Kniazhevsky, Vladimir  and
      Braslavski, Pavel",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.845",
    doi = "10.18653/v1/2023.emnlp-main.845",
    pages = "13701--13715",
}