Conference paper — Year: 2023

Hate speech and offensive language detection using an emotion-aware shared encoder

1 IP Paris - Institut Polytechnique de Paris (Route de Saclay, 91120 Palaiseau Cedex, France)
2 TSP - RS2M - Département Réseaux et Services Multimédia Mobiles (Télécom SudParis - 9 rue Charles Fourier - 91011 Évry cedex - France)
3 R3S-SAMOVAR - Réseaux, Systèmes, Services, Sécurité (TELECOM Sudparis, 9 rue Charles Fourier, 91011 Évry - France)

Abstract

The emergence of social media platforms has fundamentally altered how people communicate, and one consequence of these developments is an increase in abusive content online. Automatically detecting this content is therefore essential for filtering inappropriate material and reducing toxicity and violence on social media platforms. Existing work on hate speech and offensive language detection achieves promising results with pre-trained transformer models; however, it considers only the abusive-content features learned from annotated datasets. This paper presents a multi-task joint learning approach that incorporates external emotional features, extracted from other corpora, to deal with the imbalance and scarcity of labeled datasets. Our analysis uses two well-known Transformer-based models, BERT and mBERT, where the latter is used to address abusive content detection in multilingual scenarios. Our model jointly learns abusive content detection and emotional features by sharing representations through the transformers' shared encoder. This approach increases data efficiency, reduces overfitting via shared representations, and speeds up learning by leveraging auxiliary information. Our findings demonstrate that emotional knowledge helps to identify hate speech and offensive language more reliably across datasets. Our multi-task hate speech detection model achieved a 3% performance improvement over baseline models, whereas the gains of the multi-task models were not significant for the offensive language detection task. More interestingly, in both tasks, the multi-task models make fewer false positive errors than the single-task models.
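The architecture described above is a form of hard parameter sharing: one encoder feeds two task-specific classification heads (abusive content detection and emotion classification). The following is a minimal NumPy sketch of that idea only; the tiny dense "encoder", the dimensions, and all names are hypothetical stand-ins for the paper's BERT/mBERT encoder, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (stand-ins for BERT's 768-d hidden size)
d_model, n_hate, n_emotion = 16, 2, 6

# Shared encoder weights: both tasks read the same representation
W_shared = rng.normal(scale=0.1, size=(d_model, d_model))

# Task-specific heads: under hard parameter sharing, only these differ
W_hate = rng.normal(scale=0.1, size=(d_model, n_hate))        # abusive vs. not
W_emotion = rng.normal(scale=0.1, size=(d_model, n_emotion))  # e.g. 6 emotion classes

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, task):
    """Encode with the shared layer, then route to the task-specific head."""
    h = np.tanh(x @ W_shared)  # shared representation used by both tasks
    head = W_hate if task == "hate" else W_emotion
    return softmax(h @ head)

# Toy batch of 4 'sentence embeddings'; in the paper this would come from BERT
x = rng.normal(size=(4, d_model))
p_hate = forward(x, "hate")        # shape (4, 2)
p_emotion = forward(x, "emotion")  # shape (4, 6)
```

Because gradients from both task losses would flow into `W_shared`, the auxiliary emotion signal regularizes the shared representation, which is the mechanism behind the data efficiency and reduced overfitting claimed in the abstract.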

Main file
2302.08777 (1).pdf (399.58 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04003849 , version 1 (24-02-2023)

Identifiers

Cite

Khouloud Mnassri, Praboda Rajapaksha, Reza Farahbakhsh, Noel Crespi. Hate speech and offensive language detection using an emotion-aware shared encoder. IEEE International Conference on Communications (ICC), May 2023, Rome, Italy. ⟨10.1109/ICC45041.2023.10279690⟩. ⟨hal-04003849⟩
133 views
275 downloads

