Code Repository for the INTERSPEECH'24 Paper - Exploring Multilingual Unseen Speaker Emotion Recognition: Leveraging Co-Attention Cues in Multitask Learning
Official repo for "Multi-Corpus Emotion Recognition Method based on Cross-Modal Gated Attention Fusion" (INTERSPEECH 2024)
Code for our INTERSPEECH 2024 paper "Comparing ASR Systems in the Context of Speech Disfluencies"
AnKaS: Development and Analysis of the Database of Livvi-Karelian Speech Annotations (INTERSPEECH 2024)
Code from the paper "Towards Speech-to-Pictograms Translation" (INTERSPEECH 2024)