AAhmadS/DeepLearning_Project


Multimodal Sentiment Analysis

Sharif University of Technology
EE Dept.
Deep Learning Course
Dr. E. Fatemizadeh

Participants:

Ali Abbasi
Nima Kelidari
Amir Ahmad Shafiee

Description:
With the rise of data science methods in almost every important and practical aspect of everyday life comes the growing importance of cross-modal learning methods, and this is the main idea behind this project. Please see the phase-specific README files for detailed information on each phase.

Supplementary information:

Main models: BERT-base uncased, VGG Face
Dataset: CLIP dataset
Modes: Text, Image
Final model: Transformers
Desc: During the project, CV methods, NLP methods, and a combination of both were implemented and examined: first with a transformer-based model, and then using weakly supervised learning.
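The fusion step described above can be sketched in a minimal form: text features (e.g. from BERT-base, hidden size 768) and image features (e.g. from a VGG-Face-style encoder) are projected into a shared space and combined by a small transformer encoder before classification. The class name, dimensions, and layer counts below are illustrative assumptions, not the project's actual implementation.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Illustrative sketch: fuse text and image embeddings with a transformer encoder."""
    def __init__(self, text_dim=768, img_dim=512, d_model=256, num_classes=3):
        super().__init__()
        # Project each modality into a shared space (dims are assumptions:
        # 768 matches BERT-base hidden size, 512 a VGG-style image descriptor)
        self.text_proj = nn.Linear(text_dim, d_model)
        self.img_proj = nn.Linear(img_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, text_emb, img_emb):
        # Stack the two modality vectors into a length-2 token sequence
        # so self-attention can mix information across modalities
        tokens = torch.stack(
            [self.text_proj(text_emb), self.img_proj(img_emb)], dim=1
        )
        fused = self.fusion(tokens).mean(dim=1)  # pool over the two modality tokens
        return self.classifier(fused)

model = MultimodalFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 3])
```

In practice the frozen or fine-tuned unimodal encoders would supply `text_emb` and `img_emb`; here random tensors stand in for them.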

Tags:

  • Computer Vision
  • NLP
  • BERT
  • Sentiment Analysis
  • Multimodal
  • Transformers
  • Weakly Supervised Learning
  • Representation Learning

About

MSCTD is a tool designed for multi-modal sentiment analysis and time dynamics exploration in image-text conversations.


Languages

  • Jupyter Notebook 100.0%