How well do hate speech, toxicity, abusive and offensive language classification models generalize across datasets?
A considerable body of research deals with the automatic identification of hate speech and related phenomena. However, cross-dataset model generalization remains a challenge. In this context, we address two central questions that are still open: (i) to what extent does the generalization depend on the model and on the composition and annotation of the training data in terms of different categories?, and (ii) do specific features of the datasets or models influence the generalization potential? To answer (i), we experiment with BERT, ALBERT, fastText, and SVM models trained on nine common public English datasets, whose class (or category) labels are standardized (and thus made comparable), in intra- and cross-dataset setups. The experiments show that generalization indeed varies from model to model and that some of the categories (e.g., ‘toxic’, ‘abusive’, or ‘offensive’) serve better as cross-dataset training categories than others (e.g., ‘hate speech’). To answer (ii), we use a Random Forest model for assessing the relevance of different model and dataset features during the prediction of the performance of 450 BERT, 450 ALBERT, 450 fastText, and 348 SVM binary abusive language classifiers (1,698 in total). We find that in order to generalize well, a model already needs to perform well in an intra-dataset scenario. Furthermore, we find that some other parameters are equally decisive for the success of the generalization, including, e.g., the training and target categories and the percentage of out-of-domain vocabulary.
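To make the intra- vs. cross-dataset comparison from question (i) concrete, here is a minimal sketch of the evaluation protocol using a TF-IDF + linear SVM pipeline (the simplest of the four model types mentioned above). The file names, column names, and split settings are illustrative placeholders, not the paper's actual datasets or configuration; the datasets are assumed to have already been mapped to a shared binary label scheme.

```python
# Minimal sketch of intra- vs. cross-dataset evaluation, assuming two
# hypothetical binary abusive-language datasets already standardized to a
# common {0: non-abusive, 1: abusive} label scheme.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def load_dataset(path):
    """Expect a CSV with 'text' and 'label' columns (labels already standardized)."""
    df = pd.read_csv(path)
    return df["text"].tolist(), df["label"].tolist()

# Hypothetical file names standing in for two of the nine public datasets.
train_texts, train_labels = load_dataset("dataset_a.csv")
target_texts, target_labels = load_dataset("dataset_b.csv")

# Hold out part of dataset A for the intra-dataset measurement.
X_tr, X_te, y_tr, y_te = train_test_split(
    train_texts, train_labels, test_size=0.2, random_state=42, stratify=train_labels
)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), LinearSVC())
model.fit(X_tr, y_tr)

# Intra-dataset: test split drawn from the same dataset as the training data.
intra_f1 = f1_score(y_te, model.predict(X_te), average="macro")
# Cross-dataset: the entire second dataset serves as the target test set.
cross_f1 = f1_score(target_labels, model.predict(target_texts), average="macro")

print(f"intra-dataset macro-F1: {intra_f1:.3f}")
print(f"cross-dataset macro-F1: {cross_f1:.3f}")
```

The same protocol carries over to the BERT, ALBERT, and fastText classifiers by swapping out the pipeline; the key point is that the cross-dataset score is computed on a dataset the model never saw during training.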
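For question (ii), the abstract describes a Random Forest model that assesses which model and dataset features are most relevant when predicting a classifier's performance. The sketch below illustrates that kind of meta-analysis on a hypothetical table of classifier runs; the file name and feature names (e.g. `oov_percentage`, `train_category`) are assumptions standing in for the features described in the abstract, not the paper's exact feature set.

```python
# Minimal sketch of the meta-analysis idea: each row of the (hypothetical)
# table describes one trained classifier, and the target is its
# cross-dataset F1 score.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

runs = pd.read_csv("classifier_runs.csv")  # hypothetical meta-dataset of runs

feature_cols = [
    "intra_dataset_f1",   # performance on the training dataset's own test split
    "oov_percentage",     # share of target-dataset vocabulary unseen in training
    "train_category",     # e.g. 'toxic', 'abusive', 'offensive', 'hate speech'
    "target_category",
    "model_type",         # BERT / ALBERT / fastText / SVM
    "train_size",
]
X = pd.get_dummies(runs[feature_cols])  # one-hot encode the categorical features
y = runs["cross_dataset_f1"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)

print("held-out R^2:", round(r2_score(y_te, rf.predict(X_te)), 3))
# Impurity-based importances give a rough ranking of which features matter most.
ranked = sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1])
for name, imp in ranked[:10]:
    print(f"{name:25s} {imp:.3f}")
```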
Fortuna, P., Soler, J., & Wanner, L. (2021). How well do hate speech, toxicity, abusive and offensive language classification models generalize across datasets? Information Processing & Management, 58, 102524.
https://www.sciencedirect.com/science/article/pii/S0306457321000339