
DictABSA: a dictionary-knowledge (entity description information) enhanced aspect-based sentiment analysis (ABSA) implementation


albert-jin/DictionaryFused-E2E-ABSA


Dictionary Enhanced E2E-ABSA

Aspect-Based Sentiment Analysis, implemented in PyTorch.

The key idea is to feed the dictionary's descriptive information about entities to the model as additional guidance, improving its sentiment analysis performance.

The experimental datasets include SemEval 2014/2015/2016, acl-14-short-data, and Twitter comments.

Source-data repository (continuously maintained by the author): DataSource, which contains the code for extracting term definitions and other descriptive information from the Oxford Dictionary.
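
For orientation, here is a minimal sketch of how such extracted descriptions could be consumed, assuming they are stored as a {term: gloss} JSON file; the file name and function names are hypothetical, and the real extraction logic lives in the DataSource repository.

```python
# Hypothetical sketch: load extracted dictionary descriptions and look up an
# aspect term's gloss. Assumes a JSON file of the form
# {"battery": "a device that stores electrical energy", ...};
# this is NOT the DataSource repository's actual code.
import json

def load_glosses(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def gloss_for(term: str, glosses: dict) -> str:
    # Fall back to the term itself when no description was extracted.
    return glosses.get(term.lower(), term)

glosses = load_glosses("oxford_glosses.json")  # hypothetical file name
print(gloss_for("battery", glosses))
```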

A PyTorch implementation of dictionary-knowledge-enhanced fine-grained sentiment analysis.

Compared with LSTM, BERT can retain much longer text, so it is better suited to the long knowledge descriptions appended to the input (for an LSTM, these instead crowd out the earlier user-review information), which is why knowledge injection performs poorly in the LSTM-based experiments.

Experiment-1

Run many_model_absa.ipynb on Colab to obtain the following results:

Before Adding Dict-Knowledge (metric: Accuracy)

| Models & Dataset | Twitter | SemEval 2014 | SemEval 2015 | SemEval 2016 | acl2014data |
| --- | --- | --- | --- | --- | --- |
| lstm | 0.7417 | 0.6798 | 0.8293 | 0.8220 | 0.6560 |
| td_lstm | 0.7226 | 0.7203 | 0.8354 | 0.8155 | 0.6846 |
| tc_lstm | 0.7194 | 0.7042 | 0.8481 | 0.8252 | 0.7019 |
| atae_lstm | 0.7164 | 0.7052 | 0.8415 | 0.8305 | 0.7104 |
| ian | 0.7232 | 0.7021 | 0.8293 | 0.8390 | 0.6944 |
| memnet | 0.7518 | 0.6991 | 0.8659 | 0.8305 | 0.6928 |
| cabasc | 0.7032 | 0.7074 | 0.8354 | 0.7670 | 0.6327 |
| *BERT-based* | | | | | |
| bert_spc | 0.7607 | 0.7825 | 0.9046 | 0.8729 | 0.7088 |
| lcf_bert | 0.7661 | 0.8085 | 0.9024 | 0.8898 | 0.7216 |

After Adding Dict-Knowledge (**Concat**) (metric: Accuracy)

| Models & Dataset | Twitter | SemEval 2014 | SemEval 2015 | SemEval 2016 | acl2014data |
| --- | --- | --- | --- | --- | --- |
| lstm | 0.6848 | 0.6770 | 0.8293 | 0.7288 | 0.6256 |
| td_lstm | 0.6848 | 0.7138 | 0.8354 | 0.8058 | 0.6976 |
| tc_lstm | 0.7078 | 0.6913 | 0.8354 | 0.8252 | 0.7180 |
| atae_lstm | 0.7203 | 0.7082 | 0.8537 | 0.8475 | 0.7280 |
| ian | 0.7351 | 0.6383 | 0.8293 | 0.7288 | 0.5408 |
| memnet | 0.7542 | 0.7082 | 0.8537 | 0.8305 | 0.7136 |
| cabasc | 0.7259 | 0.6977 | 0.8354 | 0.8447 | 0.6419 |
| *BERT-based* | | | | | |
| bert_spc | 0.7796 | 0.8550 | 0.9390 | 0.9237 | 0.7504 |
| lcf_bert | 0.7852 | 0.8237 | 0.9263 | 0.9153 | 0.7488 |

After Adding Dict-Knowledge (**INSERT**) (metric: Accuracy)

| Models & Dataset | Twitter | SemEval 2014 | SemEval 2015 | SemEval 2016 | acl2014data |
| --- | --- | --- | --- | --- | --- |
| lstm | 0.7232 | 0.7112 | 0.8293 | 0.7288 | 0.6976 |
| td_lstm | 0.7232 | 0.7106 | 0.8354 | 0.8155 | 0.6531 |
| tc_lstm | 0.7078 | 0.6817 | 0.8354 | 0.7767 | 0.6531 |
| atae_lstm | 0.7255 | 0.6960 | 0.8415 | 0.8136 | 0.7248 |
| ian | 0.7136 | 0.6535 | 0.8293 | 0.7288 | 0.5408 |
| memnet | 0.7422 | 0.6991 | 0.8537 | 0.7712 | 0.7120 |
| cabasc | 0.7199 | 0.6752 | 0.8354 | 0.7379 | 0.6252 |
| *BERT-based* | | | | | |
| bert_spc | 0.7885 | 0.8393 | 0.9294 | 0.9153 | 0.7568 |
| lcf_bert | 0.7852 | 0.8419 | 0.9512 | 0.9278 | 0.7644 |

Comparing the metrics across the three tables above, BERT benefits more from knowledge enhancement than the LSTM variants, likely because LSTMs cannot capture such long-range dependencies.
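
The Concat and INSERT captions above refer to where the dictionary gloss is injected into the model input. Below is a minimal sketch of the two strategies, assuming Concat appends the gloss after the whole review while INSERT splices it in right after the aspect mention; the templates and function names are illustrative, not the repository's exact code.

```python
def fuse_concat(review: str, aspect: str, gloss: str) -> str:
    # Concat: append the knowledge after the full review text.
    return f"{review} {aspect}: {gloss}"

def fuse_insert(review: str, aspect: str, gloss: str) -> str:
    # INSERT: splice the knowledge in directly after the aspect mention.
    return review.replace(aspect, f"{aspect} ({gloss})", 1)

review = "The battery dies after an hour."
gloss = "a device that stores electrical energy"
print(fuse_concat(review, "battery", gloss))
print(fuse_insert(review, "battery", gloss))
```

This also makes the LSTM failure mode concrete: with Concat, the long gloss pushes the original review ever further back in the recurrent state, whereas BERT attends over the whole augmented sequence at once.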

Experiment-2

Run deberta_abas.ipynb on Colab to obtain the following results:

Before Adding Dict-Knowledge (metric: Best Metric & ACC & F1)

| DeBERTa & Dataset | Twitter | SemEval 2014 | SemEval 2015 | SemEval 2016 | acl2014data |
| --- | --- | --- | --- | --- | --- |
| Best Metric | 0.68487 | 0.7798 | 0.79898 | 0.88482 | 0.73911 |
| ACC | 0.75118 | 0.8187 | 0.88463 | 0.9158 | 0.7408 |
| F1 | 0.62385 | 0.74088 | 0.70332 | 0.8105 | 0.7374 |

After Adding Dict-Knowledge (**Concat**) (metric: Best Metric & ACC & F1)

| DeBERTa & Dataset | Twitter | SemEval 2014 | SemEval 2015 | SemEval 2016 | acl2014data |
| --- | --- | --- | --- | --- | --- |
| Best Metric | 0.69265 | 0.83354 | 0.80329 | 0.8937 | 0.7857 |
| ACC | 0.75829 | 0.86102 | 0.90243 | 0.94915 | 0.7872 |
| F1 | 0.637308 | 0.80607 | 0.70415 | 0.8383 | 0.7842 |

After Adding Dict-Knowledge (**INSERT**) (metric: Best Metric & ACC & F1)

| DeBERTa & Dataset | Twitter | SemEval 2014 | SemEval 2015 | SemEval 2016 | acl2014data |
| --- | --- | --- | --- | --- | --- |
| Best Metric | 0.7118 | 0.83204 | 0.79206 | 0.8764 | 0.79226 |
| ACC | 0.7701 | 0.8610 | 0.914634 | 0.9322 | 0.77014 |
| F1 | 0.6535 | 0.8030 | 0.71828 | 0.8206 | 0.7635 |

Comparing the metrics across the tables above shows how knowledge enhancement affects DeBERTa.
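
As a rough picture of what the notebook evaluates, here is a hedged sketch of scoring one knowledge-augmented example with a DeBERTa classifier via Hugging Face transformers; the checkpoint name and the 3-way polarity head are assumptions, not the notebook's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint and label count (negative / neutral / positive).
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-base", num_labels=3)

# Review as the first segment, aspect plus dictionary gloss as the second.
inputs = tokenizer(
    "The battery dies after an hour.",
    "battery: a device that stores electrical energy",
    return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```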

If this work helps you, please cite it. Thanks!

```bibtex
@article{JIN2023103260,
  title   = {Back to common sense: Oxford dictionary descriptive knowledge augmentation for aspect-based sentiment analysis},
  journal = {Information Processing & Management},
  volume  = {60},
  number  = {3},
  pages   = {103260},
  year    = {2023},
  issn    = {0306-4573},
  doi     = {10.1016/j.ipm.2022.103260},
  url     = {https://www.sciencedirect.com/science/article/pii/S0306457322003612},
  author  = {Weiqiang Jin and Biao Zhao and Liwen Zhang and Chenxing Liu and Hang Yu},
}
```

The following is an overview of the original upstream repository.

ABSA-PyTorch

Aspect Based Sentiment Analysis, PyTorch Implementations.




Requirement

  • pytorch >= 0.4.0
  • numpy >= 1.13.3
  • sklearn
  • python 3.6 / 3.7
  • transformers

To install requirements, run pip install -r requirements.txt.

Usage

Training

```sh
python train.py --model_name bert_spc --dataset restaurant
```

Inference

  • Refer to infer_example.py for both non-BERT-based models and BERT-based models; a minimal sketch of the sentence-pair input these models consume is shown below.
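
For orientation only, this hedged sketch shows the [CLS] review [SEP] aspect [SEP] sentence-pair encoding that bert_spc-style models consume; infer_example.py remains the authoritative reference for the repository's actual preprocessing.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Review as segment A, aspect term as segment B.
encoded = tokenizer("The battery dies after an hour.", "battery")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# -> ['[CLS]', 'the', 'battery', 'dies', ..., '[SEP]', 'battery', '[SEP]']
```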

Tips

  • For non-BERT-based models, the training procedure is not very stable.
  • BERT-based models are more sensitive to hyperparameters (especially the learning rate) on small datasets, see this issue; a conservative setup is sketched after this list.
  • Fine-tuning on the specific task is necessary for releasing the true power of BERT.
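
Illustrative only: a conservative fine-tuning setup of the kind the learning-rate tip refers to. 2e-5 is a common choice for BERT on small datasets; the model class and the repository's actual defaults may differ.

```python
import torch
from transformers import BertForSequenceClassification

# Assumed 3-way polarity head; not the repository's exact model class.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
```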

Framework

For flexible training/inference and aspect term extraction, try PyABSA, which includes all the models in this repository.

Reviews / Surveys

Qiu, Xipeng, et al. "Pre-trained Models for Natural Language Processing: A Survey." arXiv preprint arXiv:2003.08271 (2020). [pdf]

Zhang, Lei, Shuai Wang, and Bing Liu. "Deep Learning for Sentiment Analysis: A Survey." arXiv preprint arXiv:1801.07883 (2018). [pdf]

Young, Tom, et al. "Recent trends in deep learning based natural language processing." arXiv preprint arXiv:1708.02709 (2017). [pdf]

BERT-based models

BERT-ADA (official)

Rietzler, Alexander, et al. "Adapt or get left behind: Domain adaptation through bert language model finetuning for aspect-target sentiment classification." arXiv preprint arXiv:1908.11860 (2019). [pdf]

BERT-PT (official)

Xu, Hu, et al. "Bert post-training for review reading comprehension and aspect-based sentiment analysis." arXiv preprint arXiv:1904.02232 (2019). [pdf]

ABSA-BERT-pair (official)

Sun, Chi, Luyao Huang, and Xipeng Qiu. "Utilizing bert for aspect-based sentiment analysis via constructing auxiliary sentence." arXiv preprint arXiv:1903.09588 (2019). [pdf]

LCF-BERT (lcf_bert.py) (official)

Zeng, Biqing, Heng Yang, et al. "LCF: A Local Context Focus Mechanism for Aspect-Based Sentiment Classification." Applied Sciences 9 (2019): 3389. [pdf]

AEN-BERT (aen.py)

Song, Youwei, et al. "Attentional Encoder Network for Targeted Sentiment Classification." arXiv preprint arXiv:1902.09314 (2019). [pdf]

BERT for Sentence Pair Classification (bert_spc.py)

Devlin, Jacob, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018). [pdf]

Non-BERT-based models

ASGCN (asgcn.py) (official)

Zhang, Chen, et al. "Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks." Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. 2019. [pdf]

MGAN (mgan.py)

Fan, Feifan, et al. "Multi-grained Attention Network for Aspect-Level Sentiment Classification." Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018. [pdf]

AOA (aoa.py)

Huang, Binxuan, et al. "Aspect Level Sentiment Classification with Attention-over-Attention Neural Networks." arXiv preprint arXiv:1804.06536 (2018). [pdf]

TNet (tnet_lf.py) (official)

Li, Xin, et al. "Transformation Networks for Target-Oriented Sentiment Classification." arXiv preprint arXiv:1805.01086 (2018). [pdf]

Cabasc (cabasc.py)

Liu, Qiao, et al. "Content Attention Model for Aspect Based Sentiment Analysis." Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2018.

RAM (ram.py)

Chen, Peng, et al. "Recurrent Attention Network on Memory for Aspect Sentiment Analysis." Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017. [pdf]

MemNet (memnet.py) (official)

Tang, Duyu, Bing Qin, and Ting Liu. "Aspect Level Sentiment Classification with Deep Memory Network." Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2016: 214-224. [pdf]

IAN (ian.py)

Ma, Dehong, et al. "Interactive Attention Networks for Aspect-Level Sentiment Classification." arXiv preprint arXiv:1709.00893 (2017). [pdf]

ATAE-LSTM (atae_lstm.py)

Wang, Yequan, Minlie Huang, and Li Zhao. "Attention-based lstm for aspect-level sentiment classification." Proceedings of the 2016 conference on empirical methods in natural language processing. 2016.

TD-LSTM (td_lstm.py) (official)

Tang, Duyu, et al. "Effective LSTMs for Target-Dependent Sentiment Classification." Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. 2016. [pdf]

LSTM (lstm.py)

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780. [pdf]

Note on running with RTX 30-series GPUs

If you are running on an RTX 30-series GPU, there may be compatibility issues between the installed/required versions of torch and CUDA. In that case, try using requirements_rtx30.txt instead of requirements.txt.

Contributors

Thanks goes to these wonderful people:


Alberto Paz 💻 · jiangtao 💻 · WhereIsMyHead 💻 · songyouwei 💻 · YangHeng 💻 · rmarcacini 💻 · Yikai Zhang 💻 · Alexey Naiden 💻 · hbeybutyan 💻 · Pradeesh 💻

This project follows the all-contributors specification. Contributions of any kind welcome!

Licence

MIT
