Blacks is to Anger as Whites is to Joy? Understanding Latent Affective Bias in Large Pre-trained Neural Language Models

Anoop K.1, Deepak P.2, Sahely Bhadra3, Manjary P. Gangan1, Lajish V. L.1
1Department of Computer Science, University of Calicut, India
2School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast, UK.
3 Department of Data Science, Indian Institute of Technology Palakkad, India

Abstract: Transformer based large Pre-trained Language Models (PLMs) have driven groundbreaking advances and highly significant performance improvements in deep learning based Natural Language Processing. The wide availability of unlabeled data within the human generated data deluge, together with self-supervised learning strategies, has accelerated the success of large PLMs in language generation, language understanding, etc. At the same time, latent historical bias/unfairness towards a particular gender, race, etc., encoded intentionally or unintentionally into these corpora, harms protected groups and calls into question the utility and efficacy of large PLMs in many real-world applications. In this paper, we present an extensive investigation towards understanding the existence of "Affective Bias" in large PLMs, i.e., biased associations of emotions such as anger, fear, joy, etc., with a particular gender, race, or religion, with respect to the downstream task of textual emotion detection. We begin with corpus level affective bias analysis, searching for imbalanced distributions of affective words within a domain in the large scale corpora used to pre-train and fine-tune PLMs. Later, to quantify affective bias in model predictions, we perform an extensive set of class-based and intensity-based evaluations using various bias evaluation corpora. Our results show the existence of statistically significant affective bias in PLM based emotion detection systems, indicating biased association of certain emotions with a particular gender, race, and religion.


📝 Paper: https://arxiv.org/abs/2301.09003
🌍 GitHub: https://github.com/anoopkdcs/affective_bias_in_plm
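
To give a concrete sense of the class-based and intensity-based evaluations described in the abstract, the sketch below probes a fine-tuned PLM emotion classifier with counterfactual sentences that differ only in a demographic term and compares the predicted emotion distributions. This is a minimal illustration, not the evaluation code released with the paper: the Hugging Face `transformers` pipeline, the example model checkpoint, the sentence template, and the group terms are all assumptions made for demonstration.

```python
# Minimal sketch of a counterfactual probe for affective bias (illustrative only;
# not the paper's evaluation code). Assumes the Hugging Face `transformers`
# library and a publicly available emotion classification model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # assumed example checkpoint
    top_k=None,  # return scores for every emotion class, not just the top one
)

# Hypothetical template and group terms; the paper uses dedicated bias
# evaluation corpora rather than a single hand-written sentence.
TEMPLATE = "The {group} person walked into the room and started talking."
GROUPS = ["Black", "White", "Asian", "Hispanic"]

for group in GROUPS:
    sentence = TEMPLATE.format(group=group)
    scores = classifier([sentence])[0]                 # list of {label, score} dicts
    top = max(scores, key=lambda s: s["score"])
    detail = " ".join(f"{s['label']}={s['score']:.3f}" for s in scores)
    print(f"{group:>8} -> top: {top['label']:<8} | {detail}")
```

Systematic differences in the predicted classes (class-based evaluation) or in the per-emotion scores (intensity-based evaluation) across such otherwise identical sentences are the kind of signal the paper quantifies with statistical significance tests.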


People

  1. Anoop K., University of Calicut, Kerala, India. (anoopk_dcs@uoc.ac.in)
  2. Deepak P., Queen’s University Belfast, Northern Ireland, UK. (deepaksp@acm.org)
  3. Sahely Bhadra, Indian Institute of Technology Palakkad, India.
  4. Manjary P. Gangan, University of Calicut, Kerala, India.
  5. Lajish V. L., University of Calicut, Kerala, India.

Other Related Work

Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias
Anoop K.1, Manjary P. Gangan1, Deepak P.2, Lajish V. L.1
1Department of Computer Science, University of Calicut, India
2School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast, Northern Ireland, UK.
The remarkable progress in Natural Language Processing (NLP) brought about by deep learning, particularly the recent advent of large pre-trained neural language models, has come under scrutiny as several studies report potential biases in NLP applications. Bias in NLP is found to originate from latent historical biases encoded by humans into textual data, which get perpetuated or even amplified by NLP algorithms. We present a survey to comprehend bias in large pre-trained language models, analyzing the stages at which it arises in these models and the various ways in which it can be quantified and mitigated. Considering the wide applicability of textual affective computing based downstream tasks in real-world systems such as business, health care, and education, we place special emphasis on investigating bias in the context of affect (emotion), i.e., Affective Bias, in large pre-trained language models. We summarize various bias evaluation corpora to aid future research and discuss challenges in research on bias in pre-trained language models. We believe that our attempt to draw a comprehensive view of bias in pre-trained language models, and especially the exploration of affective bias, will be highly beneficial to researchers interested in this evolving field.
📝 Paper: https://doi.org/10.1007/978-981-19-4453-6_2 || 🌍 GitHub || 📝 Pre-print