Abstract
Background: Reddit comments are a valuable source of natural language data in which emotion plays a key role in human communication. However, emotion recognition is a difficult task that requires understanding both the context and the sentiment of a text. In this paper, we compare the effectiveness of four Recurrent Neural Network (RNN) models for classifying the emotions of Reddit comments.
Methods: We use a small dataset of 4,922 comments labeled with four emotions: approval, disapproval, love, and annoyance. We use pre-trained GloVe.840B.300d embeddings as the input representation for all models. The models we compare are SimpleRNN, Long Short-Term Memory (LSTM), bidirectional LSTM, and Gated Recurrent Unit (GRU). We experiment with different text preprocessing steps, such as removing stopwords and applying stemming, and excluding negation words from the stopword list, and we examine the effect of making the embedding layer trainable.
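To make the recurrent unit concrete, the following is a minimal NumPy sketch of a single GRU step, the cell type the abstract reports performing best. The gate formulation follows Cho et al. (2014); the 300-dimensional input matches the GloVe.840B.300d embeddings, while the hidden size, weights, and inputs here are toy random values for illustration, not the trained model from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU time step (Cho et al., 2014 gate formulation)."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return z * h + (1.0 - z) * h_tilde         # interpolate old/new state

# Toy dimensions: 300-d input (as with GloVe.840B.300d), 8-d hidden state.
rng = np.random.default_rng(0)
d_in, d_h = 300, 8
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_h, d_in), (d_h, d_h)] * 3]

h = np.zeros(d_h)
for _ in range(5):                             # run over a 5-token sequence
    x = rng.normal(size=d_in)                  # stand-in word embedding
    h = gru_step(x, h, params)

print(h.shape)
```

In a full classifier, the final hidden state `h` would feed a softmax layer over the four emotion classes; frameworks such as Keras implement this cell (with biases and batching) as a built-in `GRU` layer.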
Results: We find that GRU outperforms all other models, achieving an accuracy of 74%. Bidirectional LSTM and LSTM are close behind, while SimpleRNN performs the worst. We attribute the modest accuracy largely to sarcasm, irony, and linguistic complexity in the comments. We also find that making the embedding layer trainable improves the performance of LSTM but significantly increases computational cost and training time. We analyze examples of texts misclassified by GRU and identify the challenges and limitations of the dataset and the models.
Conclusion: In our study, GRU proved to be the best of the four RNN models compared for emotion classification of Reddit comments. We also discuss future research directions for improving emotion recognition on Reddit comments, and we provide an extensive discussion of the applications and methods behind each technique in the context of the paper.