Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations

arXiv | Download (coming soon) | Website

📰 News

  • 30/08/2023: The preprint of our paper is now available on arXiv.

📚 Introduction

AffectVisDial is a large-scale dataset of 50K visually grounded, 10-turn dialogs, each paired with a concluding emotion attribution and a dialog-informed textual explanation of that emotion.
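
To make the annotation structure concrete, here is a minimal loading sketch. The file name and all field names (`image_id`, `dialog`, `emotion`, `explanation`) are illustrative assumptions, not the released schema; consult the actual data files for the real keys.

```python
import json

# Hypothetical loading sketch -- the file name and keys below are
# assumptions for illustration, not the official AffectVisDial schema.
with open("affectvisdial_train.json") as f:
    data = json.load(f)

entry = data[0]
print(entry["image_id"])                  # image the dialog is grounded in
for turn in entry["dialog"]:              # the 10 question-answer turns
    print(turn["question"], "->", turn["answer"])
print(entry["emotion"])                   # concluding emotion attribution
print(entry["explanation"])               # dialog-informed textual explanation
```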


📊 Baselines

We provide baseline models for the explanation generation task (a minimal usage sketch follows the list):

  • GenLM: BERT- and BART-based models [3, 4]
  • NLX-GPT: NLX-GPT based model [1]
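
For a rough sense of the seq2seq framing behind the GenLM baselines, the sketch below flattens a dialog into text and decodes an explanation with an off-the-shelf BART checkpoint from Hugging Face transformers. The generic pretrained model stands in for the paper's fine-tuned baselines, and the dialog-flattening format is an assumption.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Generic pretrained BART stands in for the paper's fine-tuned GenLM
# baseline; the flattened-dialog input format below is an assumption.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

dialog = (
    "Q: what is happening in the image? A: two dogs are playing in the snow. "
    "Q: do they look friendly? A: yes, their tails are wagging."
)
inputs = tokenizer(dialog, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```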

Citation

If you use our dataset, please cite the following reference:

@article{haydarov2023affective,
  title={Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations},
  author={Haydarov, Kilichbek and Shen, Xiaoqian and Madasu, Avinash and Salem, Mahmoud and Li, Li-Jia and Elsayed, Gamaleldin and Elhoseiny, Mohamed},
  journal={arXiv preprint arXiv:2308.16349},
  year={2023}
}

References

  1. [Sammani et al., 2022] - NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks.
  2. [Li et al., 2022] - BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation.
  3. [Lewis et al., 2019] - BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension.
  4. [Devlin et al., 2018] - BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
