
Cross-Lingual Zero-Shot Transfer Learning for Toxic Comments Detection

In this work we applied the multilingual zero-shot transfer concept to the task of toxic comment detection. This approach allows a model trained on a single-language dataset to work in an arbitrary language, even a low-resource one. We achieve this by using the embedding models XLM-RoBERTa and DistilBERT, which map text from any language into a shared vector space. We demonstrate that a classifier trained on the English "Toxic Comments" dataset by Jigsaw/Google can reach 75% accuracy on a manually created multilingual dataset covering 50 languages. Applications of our models include flagging toxic comments on multilingual social platforms. We share all the code and data for training and deployment in our GitHub repository, SyrexMinus/cross_lingual_nlp.

Authors:

  • Jayveersinh Raj
  • Makar Shevchenko
  • Nikolay Pavlenko

The project report can be accessed here.

Pipeline

The figure below illustrates a sample model pipeline. The pipeline consists of an embedder followed by a classifier. In place of the classifier, we tested a neural network, naive Bayes, and a decision tree.
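The embedder-followed-by-classifier structure can be sketched as a minimal scikit-learn pipeline. Note the assumptions: a `HashingVectorizer` stands in for the multilingual embedder (the real project uses XLM-RoBERTa or DistilBERT embeddings), logistic regression stands in for the tested classifier heads, and the toy data is invented for illustration.

```python
# Minimal sketch of the embedder -> classifier pipeline.
# HashingVectorizer is a placeholder for a multilingual embedder
# such as XLM-RoBERTa or DistilBERT; LogisticRegression is a
# placeholder for the classifier head (NN / naive Bayes / decision tree).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def build_pipeline():
    embedder = HashingVectorizer(n_features=2**12)  # text -> fixed-size vector
    classifier = LogisticRegression()               # vector -> toxic / non-toxic
    return make_pipeline(embedder, classifier)


# toy training data (hypothetical): 1 = toxic, 0 = non-toxic
texts = ["you are awful", "I hate you", "have a nice day", "great work, thanks"]
labels = [1, 1, 0, 0]

pipeline = build_pipeline()
pipeline.fit(texts, labels)
preds = pipeline.predict(["thanks a lot", "you are terrible"])
```

Because both stages implement the scikit-learn estimator interface, the placeholder embedder could later be swapped for a transformer-based one without changing the classifier code.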

[Figure: sample model pipeline]

Credit: Samuel Leonardo Gracio

Motivation

The idea behind a zero-shot multilingual model is to cover rare languages without additional training on them. The figure below shows the distribution of languages across videos on a video streaming service: minority languages are used far less often than English or French, so there is much less data for them, which makes training models in those languages difficult. The zero-shot technique, however, enables inference in such rare languages without any additional training data.

[Figure: language distribution of videos on Dailymotion]

Credit: Samuel Leonardo Gracio

Dataset

The dataset that we use, namely jigsaw-toxic-comment-classification, was taken from Kaggle and can be accessed through this link.

In the preprocessing step we merged all the toxicity classes into one super-class to deal with their sparsity. The intended application of our model (flagging toxic comments) does not require distinguishing between specific kinds of toxicity.
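The merge into a single super-class can be sketched with pandas. The fine-grained column names below are the ones used in the Kaggle Jigsaw dataset; the sample rows are invented for illustration, and the exact preprocessing in the project may differ.

```python
import pandas as pd

# Fine-grained toxicity labels from the Jigsaw Kaggle dataset
TOXICITY_COLS = ["toxic", "severe_toxic", "obscene",
                 "threat", "insult", "identity_hate"]

# toy rows standing in for the real train.csv
df = pd.DataFrame({
    "comment_text": ["hello there", "you are an idiot", "nice post"],
    "toxic":         [0, 1, 0],
    "severe_toxic":  [0, 0, 0],
    "obscene":       [0, 1, 0],
    "threat":        [0, 0, 0],
    "insult":        [0, 1, 0],
    "identity_hate": [0, 0, 0],
})

# super-class: a comment is toxic if ANY fine-grained label is set
df["is_toxic"] = df[TOXICITY_COLS].max(axis=1)
```

Collapsing the labels this way turns a sparse multi-label problem into a denser binary one, which is all the flagging use case needs.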

Tech stack

In this work we used the following tools and frameworks:

PyTorch, Python, Jupyter Notebook, NumPy, pandas, Matplotlib, seaborn, scikit-learn
