Abstract

The rapid growth of social media platforms has led to an increase in hate speech, prompting the development of effective detection mechanisms that aim to mitigate the potential hazards and threats it poses to society. BERT (Bidirectional Encoder Representations from Transformers) has produced state-of-the-art results in this field. This review paper identifies and analyzes the whole process of using the BERT model to tackle the challenges associated with the hate speech detection problem. The discussion begins with the training datasets and the preprocessing methods involved. Subsequently, the use of the BERT model is explored, followed by an examination of the contributions made to address the issues encountered. Finally, the evaluation phase is discussed. The use of BERT involves two primary approaches. In the feature-based approach, BERT accepts textual input and generates its corresponding representation as output; this representation is then used as input for a separate classification model. The second approach fine-tunes BERT on labeled datasets and then employs it directly for classification. The controversial issues and open challenges that arose at each stage are discussed. The results indicate that in both approaches BERT has demonstrated its efficacy relative to competing models. However, greater attention and further advancement are needed to resolve the existing issues and constraints.
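The feature-based approach described above can be sketched as follows. This is a minimal illustration, not code from the reviewed works: the model checkpoint (`bert-base-uncased`), the logistic-regression classifier, and the toy texts and labels are all illustrative assumptions; any downstream classifier could consume the BERT representations.

```python
# Feature-based approach: BERT produces a fixed text representation,
# which a separate classifier (here, logistic regression) consumes.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

# Toy data for illustration only (0 = not hate speech, 1 = hate speech)
texts = ["example post one", "example post two"]
labels = [0, 1]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    # Use the [CLS] token's hidden state as a sentence-level feature vector
    features = bert(**enc).last_hidden_state[:, 0, :].numpy()

clf = LogisticRegression().fit(features, labels)
preds = clf.predict(features)
```

In contrast, the fine-tuning approach updates BERT's own weights on the labeled data (e.g., via a sequence-classification head) rather than freezing them and training only an external classifier.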
