
KnowledgeVIS


Visually compare fill-in-the-blank LLM prompts to uncover learned biases and associations!

🤔🧠📊🌈🫂

The KnowledgeVIS System

What is KnowledgeVIS?

Large language models (LLMs) such as BERT and GPT-3 have seen significant improvements in performance on natural language tasks, enabling them to help people answer questions, generate essays, summarize long articles, and more. Yet understanding what these models have learned and why they work is still an open challenge. For natural language processing (NLP) researchers and engineers who increasingly train and deploy LLMs as "black boxes" for generating text, exploring how behaviors learned during training manifest in downstream tasks can help them improve model development, e.g., by surfacing harmful stereotypes.

KnowledgeVIS is a human-in-the-loop visual analytics system for comparing fill-in-the-blank prompts to uncover associations from learned text representations. KnowledgeVIS helps developers create effective sets of prompts, probe multiple types of relationships between words, test for different associations that have been learned, and find insights across several sets of predictions for any BERT-based language model.
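The kind of comparison KnowledgeVIS supports can be sketched in a few lines. In this toy example the prompts, words, and probabilities are invented for illustration (the real system obtains top-k predictions from a BERT-style model's masked-token distribution); it finds which predictions are shared across prompts and which are unique to one:

```python
# Toy illustration of comparing fill-in-the-blank predictions across prompts.
# The words and probabilities below are made up; KnowledgeVIS gets real ones
# from a masked language model.
top_k = {
    "The nurse said [MASK] was tired.": {"she": 0.41, "he": 0.12, "everyone": 0.05},
    "The doctor said [MASK] was tired.": {"he": 0.38, "she": 0.09, "surgery": 0.06},
}

# Words predicted for every prompt vs. words unique to a single prompt.
shared = set.intersection(*(set(preds) for preds in top_k.values()))
unique = {prompt: set(preds) - shared for prompt, preds in top_k.items()}

print(sorted(shared))
for prompt, words in unique.items():
    print(prompt, sorted(words))
```

Differences between the unique sets (here, "everyone" vs. "surgery") are exactly the kind of learned association the visualizations surface.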

  1. First, we designed an intuitive visual interface that structures the query process to encourage both creativity and rapid prompt generation and testing.
  2. Then, to reduce the complexity of the prompt prediction space, we developed a novel clustering technique that groups predictions by semantic similarity.
  3. Finally, we provided several expressive and interactive text visualizations to promote exploration and discovery of insights at multiple levels of data abstraction: a heat map; a set view inspired by parallel tag clouds; and a scatterplot with dust-and-magnet positioning of axes.

Collectively, these visualizations help the user identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts.
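To give a feel for step 2 above, here is a simplified sketch of grouping predictions by semantic similarity. The embedding vectors are made up and the greedy cosine-threshold grouping is an illustration only; the paper's actual clustering technique may differ:

```python
import math

# Toy word embeddings (invented for illustration); KnowledgeVIS would derive
# real vectors from the language model.
embeddings = {
    "happy": [0.9, 0.1, 0.0],
    "glad":  [0.85, 0.15, 0.05],
    "sad":   [-0.8, 0.2, 0.1],
    "angry": [-0.75, 0.3, 0.05],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def cluster(words, threshold=0.9):
    """Greedily add each word to the first cluster whose seed word is
    sufficiently similar; otherwise start a new cluster."""
    clusters = []
    for w in words:
        for c in clusters:
            if cosine(embeddings[w], embeddings[c[0]]) >= threshold:
                c.append(w)
                break
        else:
            clusters.append([w])
    return clusters

print(cluster(list(embeddings)))
```

With these toy vectors, "happy"/"glad" and "sad"/"angry" land in separate clusters, turning a flat list of predictions into a handful of higher-level groups.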

This code accompanies the research paper:

KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts
Adam Coscia, Alex Endert
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2023 (to appear)
| 📖 Paper | ▶️ Live Demo | 🎞️ Demo Video | 🧑‍💻 Code |

Features

🌈 Rapid, creative and scalable "fill-in-the-blank" prompt generation interface.
📊 Automatically cluster semantically similar responses to reveal high-level patterns.
🔍 Visually explore and discover insights at multiple levels of data abstraction.

Demo Video

🎞️ Watch the demo video for a full tutorial here: https://youtu.be/hBX4rSUMr_I

Live Demo

🚀 For a live demo, visit: https://adamcoscia.com/papers/knowledgevis/demo/

Getting Started

🌱 You can test our visualizations on your own LLMs in just a few easy steps!

Clone the repository:

```shell
git clone git@github.com:AdamCoscia/KnowledgeVIS.git

# use --depth if you don't want to download the whole commit history
git clone --depth 1 git@github.com:AdamCoscia/KnowledgeVIS.git
```

Interface

  • A frontend vanilla HTML/CSS/JavaScript web app powered by D3.js and Semantic UI!
  • Additional details can be found in interface/README.md

Navigate to the interface folder:

```shell
cd interface
```

Start a local web server:

  • If you are running Windows:

```shell
py -3.9 -m http.server
```

  • If you are running MacOS / Linux:

```shell
python3.9 -m http.server
```

Navigate to localhost:8000. You should see KnowledgeVIS running in your browser :)

Server

  • A backend Python 3.9 Flask web app to run local LLM models downloaded from Hugging Face!
  • Additional details can be found in server/README.md

Navigate to the server folder:

```shell
cd server
```

Create a virtual environment:

  • If you are running Windows:

```shell
# Create a virtual environment
py -3.9 -m venv venv

# Activate the virtual environment
.\venv\Scripts\activate
```

  • If you are running MacOS / Linux:

```shell
# Create a virtual environment
python3.9 -m venv venv

# Activate the virtual environment
source venv/bin/activate
```

Install dependencies:

```shell
python -m pip install -r requirements.txt
```

Install PyTorch v2.0.x (instructions)

PyTorch is installed separately because some systems may support CUDA, which requires a different installation process and can significantly speed up the tool.

  1. First, check whether your GPU supports CUDA (link).
  2. Then, follow the installation instructions linked above for your platform, with or without CUDA support.
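Once PyTorch is installed, a quick standard check confirms whether CUDA is available (this is general PyTorch usage, not specific to KnowledgeVIS):

```python
import torch

# "cuda" means model inference will run on your GPU;
# "cpu" is the (slower) fallback.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"PyTorch {torch.__version__}, device: {device}")
```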

Then run the server:

```shell
python main.py
```

Credits

Led by Adam Coscia, KnowledgeVIS is the result of a collaboration between visualization experts in human-centered computing and interaction design at Georgia Tech. KnowledgeVIS is created by Adam Coscia and Alex Endert.

Citation

To learn more about KnowledgeVIS, please read our research paper (to appear in IEEE TVCG).

```bibtex
@article{Coscia:2023:KnowledgeVIS,
  author={Coscia, Adam and Endert, Alex},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  title={KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts},
  year={2023},
  volume={},
  number={},
  pages={1-13},
  doi={10.1109/TVCG.2023.3346713}
}
```

License

The software is available under the MIT License.

Contact

If you have any questions, feel free to open an issue or contact Adam Coscia.