Abstract
The rapid growth of social media platforms has led to an increase in hate speech. This has prompted the development of effective detection mechanisms that aim to mitigate the potential hazards and threats it poses to society. BERT (Bidirectional Encoder Representations from Transformers) has produced cutting-edge results in this field. This review paper aims to identify and analyze the whole process of using the BERT model to tackle the challenges associated with the hate speech detection problem. The discussion begins by addressing the training datasets and the preprocessing methods involved. Subsequently, the use of the BERT model is explored, followed by an examination of the contributions made to address the issues encountered. Finally, the evaluation phase is discussed. The use of BERT involves two primary approaches. In the feature-based approach, BERT accepts textual input and generates its corresponding representation as output; this representation is then used as input to any classification model. The second approach involves fine-tuning BERT on labeled datasets and then employing it directly for classification. The controversial issues and open challenges that appeared at each stage are discussed. The results indicate that, in both approaches, BERT has shown its efficacy relative to other models under contention. However, greater attention and advancement are needed to effectively resolve the existing issues and constraints in the future.
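As an illustrative sketch only (not part of the reviewed paper), the two approaches described above can be outlined with the Hugging Face Transformers and scikit-learn libraries; the checkpoint name, example texts, labels, and classifier choice are assumptions chosen for demonstration.

# Minimal sketch of the two BERT usage approaches for hate speech detection.
# Assumes Hugging Face Transformers and scikit-learn; the checkpoint name,
# example texts, and labels are placeholders, not data from the paper.
import torch
from transformers import BertTokenizer, BertModel, BertForSequenceClassification
from sklearn.linear_model import LogisticRegression

texts = ["example post one", "example post two"]   # placeholder inputs
labels = [0, 1]                                     # placeholder labels: 0 = non-hate, 1 = hate

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Approach 1: feature-based -- BERT produces text representations that are
# fed to a separate classifier (here, logistic regression as an example).
encoder = BertModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    cls_embeddings = encoder(**encoded).last_hidden_state[:, 0, :]  # [CLS] vectors
feature_clf = LogisticRegression().fit(cls_embeddings.numpy(), labels)

# Approach 2: fine-tuning -- a classification head is attached to BERT, the
# whole model is trained on labeled data, then used directly for prediction.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
loss = model(**encoded, labels=torch.tensor(labels)).loss  # loss for one training step
loss.backward()                                            # gradients for an optimizer step

In the feature-based variant only the lightweight classifier is trained, whereas fine-tuning updates all BERT parameters end-to-end; this trade-off between cost and accuracy is one of the comparisons surveyed in the review.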
Recommended Citation
Aljawazeri, Jinan Ali and Jasim, Mahdi Nsaif (2024) "Addressing Challenges in Hate Speech Detection Using BERT-Based Models: A Review," Iraqi Journal for Computer Science and Mathematics: Vol. 5, Iss. 2, Article 4.
DOI: https://doi.org/10.52866/ijcsm.2024.05.02.001
Available at: https://ijcsm.researchcommons.org/ijcsm/vol5/iss2/4