Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ PyTorch Lightning and 🤗 Transformers. For access to our API, please email us at [email protected].
A web app to identify toxic comments in a YouTube channel and delete them.
AntiToxicBot is a bot that detects toxic users in a chat using Data Science and Machine Learning. The bot warns admins about toxic users, and admins can also allow the bot to ban them.
A supervised-learning-based tool to identify toxic code review comments
A revolutionary AI-powered platform to help you solve doubts instantly, make learning easy, and achieve academic success.
This repository contains the code for the paper: "DeToxy: A Large-Scale Multimodal Dataset for Toxicity Classification in Spoken Utterances"
Module for predicting the toxicity of messages in Russian and English
NLP deep learning model for multilingual toxicity detection in text 📚
Toxformer is an attempt at using transformers to predict the toxicity of molecules from their molecular structure using the T3DB database.
An AI to Scan for Toxic Tweets
Offensive Language Identification Dataset for Brazilian Portuguese.
Genshin Impact Twitter Toxicity Research
Fast text toxicity classification model
This repository contains code for the paper: Cisco at SemEval-2021 Task 5: What’s Toxic?: Leveraging Transformers for Multiple Toxic Span Extraction from Online Comments
Classifying users on social media, using text embeddings from OpenAI and others
A trained deep learning model that predicts different levels of comment toxicity, such as threats, obscenity, insults, and identity-based hate.
A REST API for detecting toxicity in a sentence, using TensorFlow.js in the backend to detect labels such as identity_attack, insult, obscene, severe_toxicity, sexual_explicit, and threat.
This repository contains all the code needed to complete my Bachelor's thesis on the detection of toxic comments.
Final project for the "Text Mining" course taught by Laura Alonso Alemany - FaMAF UNC, 2021.
Build a model to identify toxic statements and reduce bias in classification