
🌟 GPT-Protecter

🌟 Description

This project aims to provide a solution for quantitatively estimating the damage caused by "hallucinations" in large language models. It implements an insurance mechanism that affords redress for that damage, helping users address this phenomenon effectively.

💡 Features

  • Quantitative estimation of hallucination-induced damage
  • Compensation for hallucination-induced damage
  • Easy integration with large language models
  • User-friendly interface for managing insurance claims

🚀 Quick Start

Please follow the steps below to use it.

Step 1: Call the interface (https://mock.apifox.com/m1/3993944-0-default/get_public_key) to obtain the public key.
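
A minimal Python sketch of this step, assuming the endpoint returns the key as the raw response body (the README does not document the response shape):

```python
import requests

KEY_URL = "https://mock.apifox.com/m1/3993944-0-default/get_public_key"

def get_public_key() -> bytes:
    """Fetch the public key from the mock endpoint."""
    resp = requests.get(KEY_URL)
    resp.raise_for_status()
    return resp.content  # assumed to be the key itself, e.g. PEM-encoded
```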

Step 2: Encrypt your content with RSA using the public key, then keep one significant bit out of every 2 bits. Don't worry about data security: no one can recover your real content from the result, so your data remains secure.
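
A minimal sketch of this step, assuming the key arrives PEM-encoded and using RSA-OAEP from the `cryptography` package (the README does not specify the exact RSA scheme or how the sampled bits should be serialized):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def encrypt_and_sample(content: str, public_key_pem: bytes) -> str:
    """Encrypt content with RSA, then keep one bit out of every 2 bits."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    # Note: a single RSA-OAEP block only holds a short message; longer
    # content would need chunking, which this README does not describe.
    ciphertext = public_key.encrypt(
        content.encode("utf-8"),
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    # Take every other bit of the ciphertext ("a significant bit every 2 bits"),
    # then pack the sampled bits into a hex string for transport.
    bits = "".join(f"{byte:08b}" for byte in ciphertext)
    return hex(int(bits[::2], 2))
```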

Step 3: Use the result of Step 2 as the HTTP body when calling the interface (https://mock.apifox.com/m1/3993944-0-default/purchase_insurance_for_content) to insure the content generated by GPT.
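
A minimal sketch of this step, assuming the sampled ciphertext is sent as a plain-text body (the endpoint's exact request schema is not documented here):

```python
import requests

PURCHASE_URL = "https://mock.apifox.com/m1/3993944-0-default/purchase_insurance_for_content"

def purchase_insurance(sampled_ciphertext: str) -> dict:
    """Insure the content by posting the Step 2 result as the HTTP body."""
    resp = requests.post(PURCHASE_URL, data=sampled_ciphertext)
    resp.raise_for_status()
    return resp.json()
```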

Step 4: Call the interface (https://mock.apifox.com/m1/3993944-0-default/claimant) to claim compensation.
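
A minimal sketch of the claim call; the `policy_id` parameter is a hypothetical placeholder, since the endpoint's request schema is not documented here:

```python
import requests

CLAIM_URL = "https://mock.apifox.com/m1/3993944-0-default/claimant"

def claim_compensation(policy_id: str) -> dict:
    """File a claim against a previously purchased policy (field name assumed)."""
    resp = requests.post(CLAIM_URL, json={"policy_id": policy_id})
    resp.raise_for_status()
    return resp.json()
```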

💡 How Does the Insurance Mechanism Work?

Our insurance mechanism is designed to identify and measure the impact of hallucinations in language models. It uses advanced algorithms to analyze the output of the models, detect hallucinations, and estimate their potential damage.
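
A purely illustrative sketch of that pipeline; the names and scoring logic below are hypothetical placeholders, not the project's actual algorithms:

```python
from dataclasses import dataclass

@dataclass
class DamageEstimate:
    hallucination_spans: list[str]  # output spans judged to be hallucinated
    estimated_damage: float         # estimated monetary damage

def assess_output(model_output: str, detect, estimate) -> DamageEstimate:
    """Analyze model output, detect hallucinations, and estimate damage.

    `detect` and `estimate` stand in for the "advanced algorithms"
    mentioned above, which are not specified in this README.
    """
    spans = detect(model_output)
    return DamageEstimate(spans, sum(estimate(s) for s in spans))
```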

🤝 Why Contribute to This Project?

By contributing to this project, you can help improve the reliability of large language models and make a positive impact in addressing the hallucination phenomenon. Your contributions can help develop more effective insurance mechanisms and compensation systems.

License

This project is licensed under the MIT License. See the license file for more information.

Contact Information

For any questions or feedback, please contact us at [email protected].
