Fast text toxicity classification model
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ PyTorch Lightning and 🤗 Transformers. For access to our API, please email us at [email protected].
Identify and classify toxic online comments
A web app to identify and delete toxic comments in a YouTube channel.
How to make multiple requests per second to Perspective API
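A minimal sketch of the idea above, using only the Python standard library and the public Perspective API `comments:analyze` endpoint. The throttling budget, function names, and `qps` parameter are illustrative assumptions; Perspective grants roughly 1 QPS by default, so sustained higher rates require requesting a quota increase.

```python
import json
import time
import urllib.request

# Perspective API "analyze" endpoint; the api_key argument is a placeholder.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text):
    """JSON body for a Perspective API analyze call scoring TOXICITY."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def score_comments(texts, api_key, qps=10):
    """Score comments while staying under a fixed queries-per-second budget."""
    interval = 1.0 / qps
    scores = []
    for text in texts:
        start = time.monotonic()
        req = urllib.request.Request(
            f"{API_URL}?key={api_key}",
            data=json.dumps(build_request(text)).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = json.load(resp)
        scores.append(
            body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
        )
        # Sleep out the remainder of this request's time slot so that
        # at most `qps` requests are issued per second.
        time.sleep(max(0.0, interval - (time.monotonic() - start)))
    return scores
```

For genuinely concurrent traffic the same pacing idea carries over to a thread pool or asyncio client, with the sleep replaced by a shared rate limiter.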
Repository of the paper "Supporting Online Toxicity Detection with Knowledge Graphs" (ICWSM 22).
Comparing Toxic Texts with Transformers
TOXIC-BOY-OP/MUSIC-BOTOP
WordPress plugin that encourages authors of toxic comments to re-phrase them to be more kind instead.
A deep learning project that classifies whether a comment is toxic or not.
Build a multi-headed model that’s capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate.
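One common baseline for this multi-label setup is a shared text encoding with an independent classifier "head" per toxicity type. The sketch below uses scikit-learn with tiny made-up example data; it is an assumption for illustration, not the challenge's reference implementation, which trains on the Jigsaw dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "threat", "insult"]

# Tiny made-up training set; real training uses the Jigsaw data.
texts = [
    "I will hurt you",
    "you are a complete idiot",
    "have a lovely day",
    "thanks for the helpful answer",
    "I know where you live and I will hurt you",
    "what an idiot, awful take",
]
y = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
    [0, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
]

# One logistic-regression head per label over a shared TF-IDF encoding.
model = make_pipeline(
    TfidfVectorizer(),
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)

preds = model.predict(["I will hurt you, idiot"])
print(dict(zip(LABELS, preds[0])))
```

Because each head is trained independently, a comment can receive several labels at once, which is exactly what the Jigsaw task requires.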
Profanity detection using fastText.
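fastText's supervised mode expects one example per line with each label prefixed by `__label__`. A small helper to write training data in that format follows; the file name and label names are illustrative.

```python
def to_fasttext_line(text, labels):
    """Format one example for fastText supervised training:
    '__label__<l1> __label__<l2> <text>' on a single line."""
    tags = " ".join(f"__label__{label}" for label in labels)
    # fastText treats newlines as example separators, so flatten them.
    clean = " ".join(text.split())
    return f"{tags} {clean}"

def write_training_file(path, examples):
    """examples: iterable of (text, [labels]) pairs."""
    with open(path, "w", encoding="utf-8") as f:
        for text, labels in examples:
            f.write(to_fasttext_line(text, labels) + "\n")

# With the file written, a model can be trained via the fasttext
# Python package (assumption: pip install fasttext):
#   import fasttext
#   model = fasttext.train_supervised("profanity.train")
```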
Telegram bot that detects toxic comments based on Perspective API
Natural Language Processing: Toxic Comments Detection and Classification
Identify toxicity in online comments
A project to implement toxic comment classification based on the Jigsaw/Conversational AI challenge.
A machine learning model to detect toxic comments on a Kaggle dataset.
Analyzing Jigsaw's toxic comments Kaggle challenge using fastai + pytorch
Convolutional Neural Networks for Toxicity Detection in Online Comments