Athiwaratkun, B., & Stokes, J. W. (2017, March 5-9). Malware classification with LSTM and GRU language models and a character-level CNN. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA.
Behdenna, S., Barigou, F., & Belalem, G. (2018). Document level sentiment analysis: A survey. EAI Endorsed Transactions on Context-aware Systems and Applications, 4(13), e2. https://doi.org/10.4108/eai.14-3-2018.154339
Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. https://arxiv.org/abs/1810.04805
Dey, R., & Salem, F. M. (2017, August 6-9). Gate-variants of gated recurrent unit (GRU) neural networks. 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA.
Fu, R., Zhang, Z., & Li, L. (2016, November 11-13). Using LSTM and GRU neural network methods for traffic flow prediction. 2016 31st Youth Academic Annual Conference of Chinese Association of Automation (YAC).
Gao, Z., Feng, A., Song, X., & Wu, X. (2019). Target-dependent sentiment classification with BERT. IEEE Access, 7, 154290-154299.
Guggilla, C., Miller, T., & Gurevych, I. (2016). CNN- and LSTM-based claim classification in online user comments. Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers.
Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., & Levy, O. (2020). SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8, 64-77.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., & Soricut, R. (2019). ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey. Ain Shams Engineering Journal, 5(4), 1093-1113.
Meng, L. (2019). Attacking BERT: The power of the giant in the NLP world and transfer learning. Retrieved Oct 15 from https://leemeng.tw/attack_on_bert_transfer_learning_in_nlp.html
O'Reilly. (2006). What is Web 2.0. Retrieved 9/22 from https://www.oreilly.com/pub/a//web2/archive/what-is-web-20.html
Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. arXiv preprint cs/0205070. https://doi.org/10.48550/arXiv.cs/0205070
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
Rhanoui, M., Mikram, M., Yousfi, S., & Barzali, S. (2019). A CNN-BiLSTM model for document-level sentiment analysis. Machine Learning and Knowledge Extraction, 1(3), 832-847.
Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Sarzynska-Wawer, J., Wawer, A., Pawlak, A., Szymanowska, J., Stefaniak, I., Jarkiewicz, M., & Okruszek, L. (2021). Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research, 304, 114135.
Schouten, K., & Frasincar, F. (2015). Survey on aspect-level sentiment analysis. IEEE Transactions on Knowledge and Data Engineering, 28(3), 813-830. https://doi.org/10.1109/TKDE.2015.2485209
Sharma, A., & Dey, S. (2012). A document-level sentiment analysis approach using artificial neural network and sentiment lexicons. ACM SIGAPP Applied Computing Review, 12(4), 67-75.
Singh, V. K., Piryani, R., Uddin, A., & Waila, P. (2013, March 22-23). Sentiment analysis of movie reviews: A new feature-based heuristic for aspect-level sentiment classification. 2013 International Multi-Conference on Automation, Computing, Communication, Control and Compressed Sensing (iMac4s), Kottayam, India.
Teng, Z., Vo, D. T., & Zhang, Y. (2016). Context-sensitive lexicon features for neural sentiment analysis. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas.
Turney, P. D. (2002). Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. arXiv preprint cs/0212032.
Vanaja, S., & Belwal, M. (2018, July 11-12). Aspect-level sentiment analysis on e-commerce data. 2018 International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, J., Yu, L.-C., Lai, K. R., & Zhang, X. (2016). Dimensional sentiment analysis using a regional CNN-LSTM model. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany.
Zhang, L., Wang, S., & Liu, B. (2018). Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4), e1253.
Zhang, Z., Robinson, D., & Tepper, J. (2018, June 3). Detecting hate speech on Twitter using a convolution-GRU based deep neural network. The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings 15.