Online Textual Hate Content Recognition using Fine-tuned Transformer Models


Sneha Chinivar, Roopa M S, Arunalatha J S, Venugopal K R

Abstract

The popularity, anonymity, and easy accessibility of social media have made it a convenient platform for spreading hate speech. Hate speech takes many forms, viz., racial, political, LGBTQ+, religious, gender-based, and nationality-based, overlapping and intersecting with numerous forms of persecution and discrimination, with severe harmful impacts on society. Addressing online hate speech has therefore become crucial to creating an inclusive and safe online environment. Several techniques have already been investigated for this problem and have obtained reasonable results. However, their contextual understanding remains weak, and they require large datasets to take full advantage of the model architecture, which makes the task complex. In this work, we explore the use of transformer-based pre-trained models, particularly Bidirectional Encoder Representations from Transformers (BERT) and Robustly Optimized BERT (RoBERTa), fine-tuning them to detect online hate speech efficiently. Our approach performed well, improving Accuracy and F1-score by 9.65 percent, and Precision and Recall by 10.28 and 8.96 percent, respectively, over state-of-the-art methods, using a subsampled dataset with limited resources and time.
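
The following is a minimal illustrative sketch of the fine-tuning setup the abstract describes, written against the Hugging Face transformers library; it is not the authors' code. The dataset wrapper, label convention (1 = hate, 0 = non-hate), batch size, learning rate, and epoch count are all assumptions for illustration.

    # Illustrative sketch of fine-tuning a pre-trained transformer for
    # binary hate speech classification. Hyperparameters and the label
    # convention are placeholders, not the paper's actual configuration.
    import torch
    from torch.utils.data import DataLoader, Dataset
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    class HateSpeechDataset(Dataset):
        """Wraps (text, label) pairs; label 1 = hate, 0 = non-hate (assumed)."""
        def __init__(self, texts, labels, tokenizer, max_len=128):
            # Tokenize all texts up front into padded, truncated tensors.
            self.enc = tokenizer(texts, truncation=True, padding="max_length",
                                 max_length=max_len, return_tensors="pt")
            self.labels = torch.tensor(labels)

        def __len__(self):
            return len(self.labels)

        def __getitem__(self, i):
            item = {k: v[i] for k, v in self.enc.items()}
            item["labels"] = self.labels[i]
            return item

    def fine_tune(texts, labels, model_name="bert-base-uncased", epochs=3):
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        # num_labels=2 attaches a fresh binary classification head.
        model = AutoModelForSequenceClassification.from_pretrained(
            model_name, num_labels=2)
        loader = DataLoader(HateSpeechDataset(texts, labels, tokenizer),
                            batch_size=16, shuffle=True)
        optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
        model.train()
        for _ in range(epochs):
            for batch in loader:
                optim.zero_grad()
                out = model(**batch)  # loss computed internally from "labels"
                out.loss.backward()
                optim.step()
        return model

Swapping model_name to "roberta-base" yields the RoBERTa variant also evaluated in the paper; the rest of the fine-tuning loop is unchanged.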

Article Details

How to Cite
Sneha Chinivar, et al. (2023). Online Textual Hate Content Recognition using Fine-tuned Transformer Models. International Journal on Recent and Innovation Trends in Computing and Communication, 11(9), 4767–4776. https://doi.org/10.17762/ijritcc.v11i9.10031