ICTMCG/LLM-for-misinformation-research

A curated paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs.

Methods for Detection and Verification

As an Information/Feature Provider, Data Generator, and Analyzer

An LLM can serve as a (sometimes unreliable) knowledge provider, an experienced expert in specific areas, and a relatively cheap data generator (compared with collecting data from the real world). For example, an LLM can be a good analyzer of social commonsense and conventions.

  • Cheap-fake Detection with LLM using Prompt Engineering [paper]
  • Faking Fake News for Real Fake News Detection: Propaganda-Loaded Training Data Generation [paper]
  • Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection [paper]
  • Detecting Misinformation with LLM-Predicted Credibility Signals and Weak Supervision [paper]
  • FakeGPT: Fake News Generation, Explanation and Detection of Large Language Model [paper]
  • Fighting Fire with Fire: The Dual Role of LLMs in Crafting and Detecting Elusive Disinformation [paper]
  • Language Models Hallucinate, but May Excel at Fact Verification [paper]
  • Clean-label Poisoning Attack against Fake News Detection Models [paper]
  • Rumor Detection on Social Media with Crowd Intelligence and ChatGPT-Assisted Networks [paper]
  • LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback [paper]
  • FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs [paper]
  • Can Large Language Models Detect Rumors on Social Media? [paper]
  • TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection [paper]
  • DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection [paper]
  • Enhancing large language model capabilities for rumor detection with Knowledge-Powered Prompting [paper]
  • An Implicit Semantic Enhanced Fine-Grained Fake News Detection Method Based on Large Language Model [paper]
  • RumorLLM: A Rumor Large Language Model-Based Fake-News-Detection Data-Augmentation Approach [paper]
  • Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom [paper]
  • Message Injection Attack on Rumor Detection under the Black-Box Evasion Setting Using Large Language Model [paper]
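The "LLM as feature provider" pattern used by several of the papers above (e.g. the credibility-signals and weak-supervision line of work) can be sketched roughly as follows. This is a minimal illustration, not any paper's actual method: `query_llm` is a hypothetical stand-in for a real model call, the signal prompts are invented, and the aggregation rule is a naive vote.

```python
# Sketch: prompt an LLM for weak credibility signals, then aggregate
# them into a label. `query_llm` is a placeholder for a real LLM API.

SIGNAL_PROMPTS = {
    "sensationalism": "Does the article use sensationalist language? Answer yes or no.\n\n{article}",
    "source_cited": "Does the article cite verifiable sources? Answer yes or no.\n\n{article}",
    "emotional_tone": "Is the tone strongly emotional rather than factual? Answer yes or no.\n\n{article}",
}

def query_llm(prompt: str) -> str:
    """Canned keyword-based reply standing in for an actual LLM call."""
    return "yes" if "shocking" in prompt.lower() else "no"

def credibility_signals(article: str) -> dict:
    """Ask the LLM one weak-supervision question per signal."""
    return {name: query_llm(template.format(article=article)) == "yes"
            for name, template in SIGNAL_PROMPTS.items()}

def weak_label(signals: dict) -> str:
    """Naive aggregation: flag the article when risk signals dominate."""
    risk = (signals["sensationalism"]
            + signals["emotional_tone"]
            + (not signals["source_cited"]))
    return "likely-fake" if risk >= 2 else "likely-real"
```

In practice the boolean signals would feed a trained weak-supervision model or a small classifier rather than a hand-written vote.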

As a Tool User

Let an LLM act as an agent with access to external tools such as search engines, deepfake detectors, etc.

  • Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models [paper]
  • FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios [paper]
  • FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking [paper]
  • Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models [paper]
  • Language Models Hallucinate, but May Excel at Fact Verification [paper]
  • Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method [paper]
  • Evidence-based Interpretable Open-domain Fact-checking with Large Language Models [paper]
  • TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection [paper]
  • LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation [paper]
  • Can Large Language Models Detect Misinformation in Scientific News Reporting? [paper]
  • The Perils and Promises of Fact-Checking with Large Language Models [paper]
  • SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection [paper]
  • Re-Search for The Truth: Multi-round Retrieval-augmented Large Language Models are Strong Fake News Detectors [paper]
  • MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation [paper]
  • Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM [paper]
  • TrumorGPT: Query Optimization and Semantic Reasoning over Networks for Automated Fact-Checking [paper]
  • Large Language Model Agent for Fake News Detection [paper]
  • Argumentative Large Language Models for Explainable and Contestable Decision-Making [paper]
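The tool-use loop common to the retrieval-augmented methods above can be sketched as a bounded decide/act cycle. Everything here is a hypothetical stand-in: `search` is a toy retrieval tool over a tiny in-memory corpus, and `llm_decide` is a hard-coded policy in place of a real model.

```python
# Sketch of the "LLM as tool user" loop: the model decides which tool to
# call (here, a toy search engine), reads the evidence, then issues a verdict.
from typing import Optional

EVIDENCE_DB = {
    "moon landing": "NASA's Apollo 11 landed humans on the Moon in 1969.",
}

def search(query: str) -> str:
    """Toy retrieval tool standing in for a real search engine."""
    for key, doc in EVIDENCE_DB.items():
        if key in query.lower():
            return doc
    return "No evidence found."

def llm_decide(claim: str, evidence: Optional[str]) -> str:
    """Placeholder policy: first request a search, then judge with evidence."""
    if evidence is None:
        return f"CALL search: {claim}"
    return "SUPPORTED" if "Apollo 11" in evidence else "NOT ENOUGH INFO"

def verify(claim: str) -> str:
    """Run the agent loop for a bounded number of tool calls."""
    evidence = None
    for _ in range(3):
        action = llm_decide(claim, evidence)
        if action.startswith("CALL search:"):
            evidence = search(action.removeprefix("CALL search:"))
        else:
            return action
    return "NOT ENOUGH INFO"
```

Real systems replace the hard-coded policy with model-generated tool calls and may chain several different tools (retrievers, deepfake detectors, claim matchers) before deciding.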

As a Decision Maker/Explainer

An LLM can directly output the final prediction and, optionally, an explanation.

  • Large Language Models Can Rate News Outlet Credibility [paper]
  • Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and GPT-4 [paper]
  • Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models [paper]
  • News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking [paper]
  • Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models [paper]
  • Language Models Hallucinate, but May Excel at Fact Verification [paper]
  • FakeGPT: Fake News Generation, Explanation and Detection of Large Language Model [paper]
  • Can Large Language Models Understand Content and Propagation for Misinformation Detection: An Empirical Study [paper]
  • Are Large Language Models Good Fact Checkers: A Preliminary Study [paper]
  • A Revisit of Fake News Dataset with Augmented Fact-checking by ChatGPT [paper]
  • Can Large Language Models Detect Rumors on Social Media? [paper]
  • FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs [paper]
  • DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection [paper]
  • Assessing the Reasoning Abilities of ChatGPT in the Context of Claim Verification [paper]
  • LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation [paper]
  • SoMeLVLM: A Large Vision Language Model for Social Media Processing [paper] [project]
  • Can Large Language Models Detect Misinformation in Scientific News Reporting? [paper]
  • The Perils and Promises of Fact-Checking with Large Language Models [paper]
  • Potential of Large Language Models as Tools Against Medical Disinformation [paper]
  • FakeNewsGPT4: Advancing Multimodal Fake News Detection through Knowledge-Augmented LVLMs [paper]
  • SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection [paper]
  • Multimodal Large Language Models to Support Real-World Fact-Checking [paper]
  • MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation [paper]
  • An Implicit Semantic Enhanced Fine-Grained Fake News Detection Method Based on Large Language Model [paper]
  • Explaining Misinformation Detection Using Large Language Models [paper]
  • Rumour Evaluation with Very Large Language Models [paper]
  • Argumentative Large Language Models for Explainable and Contestable Decision-Making [paper]
  • Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines [paper]
  • Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models [paper]
  • Mining the Explainability and Generalization: Fact Verification Based on Self-Instruction [paper]
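The decision-maker/explainer pattern is typically a single structured prompt whose reply is parsed into a verdict and a rationale. A minimal sketch, with `query_llm` and the output format both being illustrative assumptions rather than any paper's protocol:

```python
# Sketch of the "LLM as decision maker/explainer" pattern: one prompt asks
# for a verdict plus a rationale, and the structured reply is parsed.

PROMPT = (
    "Classify the following news item as REAL or FAKE, then explain why.\n"
    "Reply in the form 'VERDICT: <REAL|FAKE>' on one line and\n"
    "'EXPLANATION: <reason>' on the next.\n\n{item}"
)

def query_llm(prompt: str) -> str:
    """Canned reply standing in for a real LLM call."""
    return "VERDICT: FAKE\nEXPLANATION: The claim contradicts official records."

def detect(item: str) -> tuple:
    """Return (verdict, explanation) parsed from the LLM reply."""
    reply = query_llm(PROMPT.format(item=item))
    verdict = explanation = ""
    for line in reply.splitlines():
        if line.startswith("VERDICT:"):
            verdict = line.removeprefix("VERDICT:").strip()
        elif line.startswith("EXPLANATION:"):
            explanation = line.removeprefix("EXPLANATION:").strip()
    return verdict, explanation
```

Asking for a fixed reply format keeps parsing trivial; real pipelines often add retries or JSON-constrained decoding because models do not always honor the format.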

Check-worthy Claim Detection

  • Are Large Language Models Good Fact Checkers: A Preliminary Study [paper]
  • Claim Check-Worthiness Detection: How Well do LLMs Grasp Annotation Guidelines? [paper]

Post-hoc Explanation Generation

  • Are Large Language Models Good Fact Checkers: A Preliminary Study [paper]
  • JustiLM: Few-shot Justification Generation for Explainable Fact-Checking of Real-world Claims [paper]
  • Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate [paper]

Other Tasks

  • [Fake News Propagation Simulation] From Skepticism to Acceptance: Simulating the Attitude Dynamics Toward Fake News [paper]
  • [Misinformation Correction] Correcting Misinformation on Social Media with A Large Language Model [paper]

Resources

  • Combating Misinformation in the Age of LLMs: Opportunities and Challenges: A survey of the opportunities (can we utilize LLMs to combat misinformation?) and challenges (how do we combat LLM-generated misinformation?) in the age of LLMs.
  • ARG: A Chinese & English fake news detection dataset augmented with rationales generated by GPT-3.5-Turbo.
  • MM-Soc: A benchmark for multimodal language models on social media platforms, including a misinformation detection task.
