Skip to content
View AnthenaMatrix's full-sized avatar
💭
Brainstorming 🧠
💭
Brainstorming 🧠
Block or Report

Block or report AnthenaMatrix

Block user

Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.

You must be logged in to block users.

Please don't include any personal information such as legal names or email addresses. Maximum 100 characters, markdown supported. This note will be visible to only you.
Report abuse

Contact GitHub support about this user’s behavior. Learn more about reporting abuse.

Report abuse
AnthenaMatrix/README.md

Anthena Matrix

Securing the Future of AI

Anthena Matrix is on a mission to ensure the safety and integrity of AI systems. We believe in bringing security to the forefront of AI development, safeguarding against potential vulnerabilities, and promoting responsible AI innovation.

About Us

At Anthena Matrix, we are committed to advancing the field of AI security through open-source collaboration and community-driven initiatives. Our goal is to empower developers, researchers, and organizations to build secure and trustworthy AI systems.

Website: https://anthenamatrix.com

Projects

Here are some of the open-source projects we've released on GitHub:

  • Prompt Injection Testing Tool: The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks. This tool utilizes the OpenAI GPT-3.5 model to generate responses to system-user prompt pairs and outputs the results to a CSV file for analysis.

  • ASCII Art Prompt Injection: ASCII Art Prompt Injection is a novel approach to hacking AI assistants using ASCII art. This project leverages the distracting nature of ASCII art to bypass security measures and inject prompts into large language models, such as GPT-4, leading them to provide unintended or harmful responses.

  • AI Audio Data Poisoning: AI Audio Data Poisoning is a Python script that demonstrates how to add adversarial noise to audio data. This technique, known as audio data poisoning, involves injecting imperceptible noise into audio files to manipulate the behavior of AI systems trained on this data.

  • AI Image Data Poisoning: AI Image Data Poisoning is a Python script that demonstrates how to add imperceptible perturbations to images, known as adversarial noise, which can disrupt the training process of AI models. This technique aims to protect artists' images by introducing subtle modifications that hinder the performance of AI algorithms without significantly altering the appearance of the images to human observers.

  • Image Prompt Injection: Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system for various applications.

  • AI Vulnerability Assessment Framework: The AI Vulnerability Assessment Framework is an open-source checklist designed to guide users through the process of assessing the vulnerability of artificial intelligence (AI) systems to various threats and attacks.

  • AI Prompt Injection List: AI Prompt Injection List is a curated collection of prompts designed for testing AI or Large Language Models (LLMs) for prompt injection vulnerabilities. This list aims to provide a comprehensive set of prompts for security testing purposes.

  • Website Prompt Injection: Website Prompt Injection is a concept that allows for the injection of prompts into an AI system via a website's interaction. This technique exploits the interaction between users, websites, and AI systems to assess security vulnerabilities.

Support Anthena Matrix

If you find our work valuable and would like to support Anthena Matrix, you can contribute to our efforts by donating cryptocurrency:

  • Bitcoin: bc1qxvvtgz0vf3n2cuxt0suvf39jleegpt9wawxazn
  • Ethereum: 0xE73E90779B3e8F6D65306B40E02878f437408b4E
  • BNB: 0xE73E90779B3e8F6D65306B40E02878f437408b4E
  • Dogecoin: D827LpfJu9pcVc3Kky82sTrNnsE7pLGqeV
  • Solana: AJtGEJvoVoS2eeqeHQvf7usRs2nSQM1yLtBSdKp1KBY5

Disclaimer

Anthena Matrix projects are provided for educational and research purposes only. We do not take any responsibility for the misuse or unintended consequences of using our projects. Users are encouraged to use them responsibly and in compliance with applicable laws and regulations.

Pinned

  1. Website-Prompt-Injection Website-Prompt-Injection Public

    Website Prompt Injection is a concept that allows for the injection of prompts into an AI system via a website's. This technique exploits the interaction between users, websites, and AI systems to …

    HTML 32 6

  2. Image-Prompt-Injection Image-Prompt-Injection Public

    Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system fo…

    Python 18 12

  3. Prompt-Injection-Testing-Tool Prompt-Injection-Testing-Tool Public

    The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attack…

    Python 20 4

  4. Many-Shot-Jailbreaking Many-Shot-Jailbreaking Public

    Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and oth…

    13