Automatic Approach to Identify Patronizing and Condescending Language Towards Vulnerable Communities
Keywords:
Natural language processing (NLP), Bidirectional Encoder Representations from Transformers (BERT), prediction, patronizing words

Abstract
In this study, we introduce a newly annotated dataset intended to help NLP models identify and categorize language that is patronizing and condescending towards vulnerable communities (for example, homeless people, refugees, and poor families). While the use of such language in the general media has long been shown to have negative effects, it differs from other forms of harmful language in that it is frequently used unintentionally and with good intentions. We also investigate several research questions: how accurately do existing methods detect patronizing and condescending language, how does accuracy on a large dataset compare with the methods already in use, and can detection accuracy on large datasets be improved? We further believe that the often-subtle nature of patronizing and condescending language (PCL) presents an interesting technical challenge for the NLP community. Our analysis of the proposed dataset shows that identifying PCL is hard for standard NLP models, with language models such as BERT achieving the best results.
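As a rough illustration of the BERT-based approach the abstract describes, the sketch below shows how a pre-trained BERT model can be set up for binary PCL classification with the Hugging Face transformers library. This is not the authors' code; the model checkpoint, label convention (0 = not PCL, 1 = PCL), and example sentence are illustrative assumptions, and a real system would fine-tune the classification head on the annotated dataset first.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load a pre-trained BERT encoder with a 2-way classification head.
# The head is randomly initialized and must be fine-tuned on PCL data
# before its predictions are meaningful.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed labels: 0 = not PCL, 1 = PCL
)
model.eval()

# Hypothetical example text; the original dataset's texts are not shown here.
texts = ["These poor souls need our help to get back on their feet."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, 2)
predictions = logits.argmax(dim=-1)  # predicted class index per input text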
License
Copyright (c) 2023 UCP Journal of Engineering & Information Technology
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.