
can i use this to detect fake news? #1

Open
ralyodio opened this issue Jul 27, 2023 · 1 comment

@ralyodio
I'm looking for a way to detect fake news articles.

@EthanC111
Collaborator

Hi @ralyodio! Thanks for your interest in our work! The short answer is yes, but let me add some context to give a clearer picture.

FacTool is designed to fact-check a given claim against a set of evidence. For knowledge-based claims (which news articles are mostly composed of), the evidence is collected from search-engine results. So if the retrieved evidence is self-contradictory, or if the claim doesn't match it, FacTool should ideally classify the claim as false.

However, since FacTool's reasoning is powered by GPT-4 (or GPT-3.5, though I'd suggest GPT-4 for its stronger reasoning capability), not an actual human being, the fact-checking process can still make mistakes due to reasoning errors by the model.

But this is not the end of the story! FacTool is designed to be explainable, so it is worth looking into the reasoning behind each claim. If you call our API, you will get a response like the following:

{
  "average_claim_level_factuality": avg_claim_level_factuality,
  "average_response_level_factuality": avg_response_level_factuality,
  "detailed_information": [
    {
      "prompt": prompt_1,
      "response": response_1,
      "category": "kbqa",
      "claims": [claim_11, claim_12, ..., claim_1n],
      "queries": [[query_111, query_112], [query_121, query_122], ..., [query_1n1, query_1n2]],
      "evidences": [[evidences_11], [evidences_12], ..., [evidences_1n]],
      "claim_level_factuality": [
        {"claim": claim_11, "reasoning": reasoning_11, "error": error_11, "correction": correction_11, "factuality": factuality_11},
        {"claim": claim_12, "reasoning": reasoning_12, "error": error_12, "correction": correction_12, "factuality": factuality_12},
        ...,
        {"claim": claim_1n, "reasoning": reasoning_1n, "error": error_1n, "correction": correction_1n, "factuality": factuality_1n}
      ],
      "response_level_factuality": factuality_1
    },
    {
      "prompt": prompt_2,
      "response": response_2,
      "category": "kbqa",
      "claims": [claim_21, claim_22, ..., claim_2n],
      "queries": [[query_211, query_212], [query_221, query_222], ..., [query_2n1, query_2n2]],
      "evidences": [[evidences_21], [evidences_22], ..., [evidences_2n]],
      "claim_level_factuality": [
        {"claim": claim_21, "reasoning": reasoning_21, "error": error_21, "correction": correction_21, "factuality": factuality_21},
        {"claim": claim_22, "reasoning": reasoning_22, "error": error_22, "correction": correction_22, "factuality": factuality_22},
        ...,
        {"claim": claim_2n, "reasoning": reasoning_2n, "error": error_2n, "correction": correction_2n, "factuality": factuality_2n}
      ],
      "response_level_factuality": factuality_2
    },
    ...
  ]
}

You could look into the reasoning field of each claim_level_factuality entry to gain insight into GPT-4's decision process.
You could also look into evidences to see which evidence was retrieved.
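To make that concrete, here is a minimal sketch of how you might walk a response with the shape shown above and surface the reasoning behind every claim judged non-factual. The response below is a mock I constructed to match the schema; the field names (`detailed_information`, `claim_level_factuality`, `factuality`, `reasoning`) come from the example, but the values are invented for illustration.

```python
# Mock FacTool-style response with the same shape as the schema above.
# In real use, this dict would come from the FacTool API.
mock_response = {
    "average_claim_level_factuality": 0.5,
    "average_response_level_factuality": 0.0,
    "detailed_information": [
        {
            "prompt": "Who wrote the article, and when?",
            "response": "The article was written by Jane Doe in 2019.",
            "category": "kbqa",
            "claims": ["The article was written by Jane Doe.",
                       "The article was written in 2019."],
            "evidences": [["evidence snippet A"], ["evidence snippet B"]],
            "claim_level_factuality": [
                {"claim": "The article was written by Jane Doe.",
                 "reasoning": "Search results confirm the byline.",
                 "error": None, "correction": None, "factuality": True},
                {"claim": "The article was written in 2019.",
                 "reasoning": "Retrieved evidence dates the article to 2021.",
                 "error": "Wrong year", "correction": "2021",
                 "factuality": False},
            ],
            "response_level_factuality": False,
        }
    ],
}

def false_claims(response):
    """Collect (claim, reasoning) pairs for every claim judged non-factual."""
    flagged = []
    for item in response["detailed_information"]:
        for verdict in item["claim_level_factuality"]:
            if not verdict["factuality"]:
                flagged.append((verdict["claim"], verdict["reasoning"]))
    return flagged

for claim, reasoning in false_claims(mock_response):
    print(f"FALSE: {claim}\n  why: {reasoning}")
```

For a fake-news workflow, you could route any article whose response contains flagged claims to human review, along with the model's reasoning and the retrieved evidence.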

Looking forward to your feedback! Feel free to ask more questions; I'll be happy to answer them!
