Abstract
Objective: A key task in fine-grained opinion mining of product reviews is to extract product aspects and the corresponding opinions expressed by users. Previous work has demonstrated that precise modeling of opinion targets within their surrounding context can improve performance. However, how to effectively and efficiently learn hidden word semantics and better represent targets and context still needs further study. Recent years have seen a revival of the Long Short-Term Memory (LSTM) network, whose effectiveness has been demonstrated on a wide range of problems. However, LSTM-based approaches remain limited to sequential processing, since the model consumes its input one token at a time. As a result, they may perform poorly on user-generated texts, such as product reviews and tweets, whose syntactic structure is often imprecise.
Methods: In this paper, we propose an approach based on a constituency tree-LSTM (tree-structured Long Short-Term Memory) neural network. We compare our model with state-of-the-art baselines on the SemEval 2014 datasets.
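To make the model family concrete, the sketch below shows a single composition step of a binary constituency tree-LSTM, in the standard formulation of Tai et al. (2015), where two child constituents are merged into a parent node. This is a minimal illustrative sketch, not the paper's implementation: the weight names, hidden dimension, and random initialisation are assumptions made for the example.

```python
# Minimal sketch of one binary constituency Tree-LSTM composition step
# (Tai et al., 2015 style). Weight shapes and initialisation are illustrative.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_treelstm_node(h_left, c_left, h_right, c_right, params):
    """Compose two child constituents (h, c) into the parent's (h, c)."""
    x = np.concatenate([h_left, h_right])               # joint child state, shape [2d]
    i  = sigmoid(params["W_i"]  @ x + params["b_i"])    # input gate
    fl = sigmoid(params["W_fl"] @ x + params["b_f"])    # forget gate for left child
    fr = sigmoid(params["W_fr"] @ x + params["b_f"])    # forget gate for right child
    o  = sigmoid(params["W_o"]  @ x + params["b_o"])    # output gate
    u  = np.tanh(params["W_u"]  @ x + params["b_u"])    # candidate cell update
    c  = i * u + fl * c_left + fr * c_right             # parent memory cell
    h  = o * np.tanh(c)                                 # parent hidden state
    return h, c

# Toy usage: compose two leaf states of hidden dimension d = 4.
d = 4
rng = np.random.default_rng(0)
params = {name: rng.normal(scale=0.1, size=(d, 2 * d))
          for name in ("W_i", "W_fl", "W_fr", "W_o", "W_u")}
params.update({name: np.zeros(d) for name in ("b_i", "b_f", "b_o", "b_u")})
h, c = binary_treelstm_node(rng.normal(size=d), np.zeros(d),
                            rng.normal(size=d), np.zeros(d), params)
print(h.shape, c.shape)  # (4,) (4,)
```

Unlike a sequential LSTM, which reads tokens left to right, this composition is applied bottom-up over the constituency parse, so the hidden state of a phrase is built directly from the hidden states of its syntactic children.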
Results: Experimental results show that our models achieve competitive performance compared to various supervised LSTM architectures.
Conclusion: Our work contributes to the improvement of state-of-the-art aspect-level opinion mining methods and offers a new approach to supporting human decision-making based on opinion mining results.
Keywords: Opinion mining, sentiment analysis, LSTM, deep learning, word embedding, constituency tree-LSTM, neural network.