REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection

Anoop K.1, Deepak P.2, Manjary P. Gangan1, Savitha Sam Abraham3, Lajish V. L.1
1Department of Computer Science, University of Calicut, India.
2School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast, UK.
3School of Science and Technology, Örebro University, Sweden.


Abstract: Technological advancements in web platforms allow people to express and share emotions towards textual write-ups written and shared by others. This opens up two interesting directions for analysis: the emotion expressed by the writer and the emotion elicited in the readers. In this paper, we propose a novel approach for readers' emotion detection from short-text documents using a deep learning model called REDAffectiveLM. Across state-of-the-art NLP tasks, it is well understood that utilizing context-specific representations from transformer-based pre-trained language models helps achieve improved performance. Within this affective computing task, we explore how incorporating affective information can further enhance performance. Towards this, we leverage context-specific and affect enriched representations by using a transformer-based pre-trained language model in tandem with an affect enriched Bi-LSTM+Attention network. For empirical evaluation, we procure a new dataset, REN-20k, in addition to using RENh-4k and SemEval-2007. We evaluate REDAffectiveLM rigorously across these datasets against a wide range of state-of-the-art baselines, where our model consistently outperforms the baselines with statistically significant results. Our results establish that combining affect enriched representations with context-specific representations within a neural architecture can considerably enhance readers' emotion detection. Since the impact of affect enrichment specifically on readers' emotion detection is not well explored, we conduct a detailed analysis of the affect enriched Bi-LSTM+Attention using qualitative and quantitative model behavior evaluation techniques. We observe that, compared to conventional semantic embeddings, affect enriched embeddings increase the ability of the network to effectively identify and assign weightage to the key terms responsible for readers' emotion, thereby improving prediction.
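For readers who want a concrete picture of the two-branch design described in the abstract, the sketch below combines a pre-trained transformer encoder (context-specific representation) with a Bi-LSTM plus additive attention running over affect enriched word embeddings, and fuses the two representations for emotion prediction. This is a minimal illustrative sketch, not the paper's exact configuration: the checkpoint name (`bert-base-uncased`), hidden sizes, concatenation-based fusion, and the `ReaderEmotionModel` class itself are assumptions made for the example.

```python
# Minimal sketch (assumed configuration) of a transformer + affect enriched
# Bi-LSTM+Attention model for readers' emotion detection.
import torch
import torch.nn as nn
from transformers import AutoModel


class ReaderEmotionModel(nn.Module):
    def __init__(self, affect_embeddings, num_emotions,
                 transformer_name="bert-base-uncased", lstm_hidden=128):
        super().__init__()
        # Branch 1: context-specific representation from a pre-trained LM.
        self.transformer = AutoModel.from_pretrained(transformer_name)
        trf_dim = self.transformer.config.hidden_size

        # Branch 2: affect enriched embeddings -> Bi-LSTM -> additive attention.
        # `affect_embeddings` is assumed to be a (vocab_size, emb_dim) matrix.
        vocab_size, emb_dim = affect_embeddings.shape
        self.embedding = nn.Embedding.from_pretrained(
            torch.as_tensor(affect_embeddings, dtype=torch.float), freeze=False)
        self.bilstm = nn.LSTM(emb_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.attn_score = nn.Linear(2 * lstm_hidden, 1)

        # Fusion of both branches followed by emotion prediction.
        self.classifier = nn.Linear(trf_dim + 2 * lstm_hidden, num_emotions)

    def forward(self, trf_input_ids, trf_attention_mask, affect_token_ids):
        # [CLS]-style pooled context representation from the transformer.
        trf_out = self.transformer(input_ids=trf_input_ids,
                                   attention_mask=trf_attention_mask)
        context_vec = trf_out.last_hidden_state[:, 0, :]

        # Attention-weighted summary of the Bi-LSTM states.
        states, _ = self.bilstm(self.embedding(affect_token_ids))
        weights = torch.softmax(self.attn_score(states), dim=1)  # (B, T, 1)
        affect_vec = (weights * states).sum(dim=1)

        logits = self.classifier(torch.cat([context_vec, affect_vec], dim=-1))
        # Returning the weights lets one inspect which terms the affect
        # enriched branch emphasizes for a given document.
        return logits, weights.squeeze(-1)
```

Returning the attention weights in this way is one simple route to the kind of model behavior analysis mentioned above, i.e., checking whether the affect enriched branch assigns higher weightage to emotion-bearing key terms than a conventional semantic embedding would.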


📝 Paper (pre-print): https://arxiv.org/abs/2301.08995
🌍 GitHub: https://github.com/anoopkdcs/REDAffectiveLM
🌍 Dataset: https://dcs.uoc.ac.in/cida/resources/ren-20k.html


People

  1. Anoop K., University of Calicut, Kerala, India. (anoopk_dcs@uoc.ac.in)
  2. Deepak P., Queen’s University Belfast, Northern Ireland, UK. (deepaksp@acm.org)
  3. Manjary P. Gangan, University of Calicut, Kerala, India.
  4. Savitha Sam Abraham, School of Science and Technology, Örebro University, Sweden.
  5. Lajish V. L., University of Calicut, Kerala, India.

Other Related Work

Readers’ affect: predicting and understanding readers’ emotions with deep learning
Anoop K.1, Deepak P.2, Savitha Sam Abraham3, Manjary P. Gangan1, Lajish V. L.1
1Department of Computer Science, University of Calicut, India.
2School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast, Northern Ireland, UK.
3School of Science and Technology, Örebro University, Sweden.

The remarkable progress in Natural Language Processing (NLP) brought about by deep learning, particularly the recent advent of large pre-trained neural language models, has come under scrutiny as several studies have begun to discuss and report potential biases in NLP applications. Bias in NLP is found to originate from latent historical biases encoded by humans into textual data, which then get perpetuated or even amplified by NLP algorithms. We present a survey to comprehend bias in large pre-trained language models, analyze the stages at which it occurs in these models, and outline various ways in which these biases can be quantified and mitigated. Considering the wide applicability of textual affective computing-based downstream tasks in real-world systems such as business, health care, and education, we place special emphasis on investigating bias in the context of affect (emotion), i.e., affective bias, in large pre-trained language models. We present a summary of various bias evaluation corpora that can aid future research and discuss challenges in the research on bias in pre-trained language models. We believe that our attempt to draw a comprehensive view of bias in pre-trained language models, and especially the exploration of affective bias, will be highly beneficial to researchers interested in this evolving field.
📝 Paper: https://doi.org/10.1186/s40537-022-00614-2