Skip to content

Prompts Methods to find the vulnerabilities in Generative Models

Notifications You must be signed in to change notification settings

promptslab/LLM-Prompt-Vulnerabilities

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

9 Commits
 
 

Repository files navigation

LLM & Prompt Vulnerabilities

Finding and documentating vulnerabilities in Generative Models based on prompt-engineering

Name Description proof
Prompt In the Middle (PITM)? Injecting prompt to access other's output [Proof]
Nested Prompt Attack (Need a better name :D) While Providing nested prompts, the model ignores the initial instructions [Proof]