
Detailed Record

Author: 李維
Author (English): Wei Lee
Title (Chinese): 以BERT-GRU模型對旅宿業評論進行情感分析
Title (English): Analyzing tourism reviews with BERT-GRU models
Advisor: 陳林志
Advisor (English): Lin-Chih Chen
Committee members: 陳林志, 劉英和, 陳大仁
Committee members (English): Lin-Chih Chen, Ying-Ho Liu, Da-Ren Chen
Degree: Master's
Institution: National Dong Hwa University (國立東華大學)
Department: Department of Information Management (資訊管理學系)
Student ID: 611035107
Year of publication (ROC era): 112 (2023)
Graduating academic year: 111
Language: English
Number of pages: 43
Keywords (English): BERT, sentiment analysis, GRU
Natural language processing is a branch of artificial intelligence with wide-ranging applications. In 2018, Google released the BERT model, which surpassed many language models of the time and sparked intense discussion. In this paper, we propose combining BERT with a GRU network architecture to speed up training. Experiments confirm that the proposed model reduces training time while maintaining performance.
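The full text is embargoed, so the thesis's own implementation is not shown here. As a hedged illustration of the GRU component named in the abstract, the following minimal NumPy sketch implements a single GRU cell step using the standard gating equations (update gate z, reset gate r, candidate state h̃). All dimensions, parameter names, and initializations below are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU time step: returns the updated hidden state.

    z (update gate) controls how much of the previous state is kept;
    r (reset gate) controls how much past state feeds the candidate.
    """
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde              # interpolate old/new

# Toy dimensions: 4-dim input vectors (standing in for BERT token features),
# 3-dim hidden state. Shapes repeat in order (W, U, b) for z, r, and h̃.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]
h = np.zeros(d_h)
for t in range(5):  # run 5 steps over random "token" vectors
    h = gru_step(rng.normal(size=d_in), h, params)
print(h.shape)
```

Because the new state is a convex combination of the previous state and a tanh candidate, the hidden state stays bounded in [-1, 1]; the gating avoids the separate cell state of an LSTM, which is one reason GRU layers are often cheaper to train.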
I. Introduction 1
II. Related work 5
Sentiment Analysis 5
BERT 7
GRU 12
III. BERT-GRU Model 15
Web Crawler 17
NLP Process 17
Model Establishment 22
Model Evaluation 28
IV. Experimental results 30
V. Conclusion and future work 41
References 42
Athiwaratkun, B., & Stokes, J. W. (2017, March 5-9). Malware classification with LSTM and GRU language models and a character-level CNN. 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP), New Orleans, LA, USA.
Behdenna, S., Barigou, F., & Belalem, G. (2018). Document level sentiment analysis: a survey. EAI Endorsed Transactions on Context-aware Systems and Applications, 4(13), e2-e2. https://doi.org/10.4108/eai.14-3-2018.154339
Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. https://arxiv.org/abs/1810.04805
Dey, R., & Salem, F. M. (2017, August 6-9). Gate-variants of gated recurrent unit (GRU) neural networks. 2017 IEEE 60th international midwest symposium on circuits and systems (MWSCAS), Boston, MA, USA.
Fu, R., Zhang, Z., & Li, L. (2016, November 11-13). Using LSTM and GRU neural network methods for traffic flow prediction. 2016 31st Youth academic annual conference of Chinese association of automation (YAC).
Gao, Z., Feng, A., Song, X., & Wu, X. (2019). Target-dependent sentiment classification with BERT. IEEE Access, 7, 154290-154299.
Guggilla, C., Miller, T., & Gurevych, I. (2016). CNN- and LSTM-based claim classification in online user comments. Proceedings of COLING 2016, the 26th international conference on computational linguistics: technical papers.
Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., & Levy, O. (2020). SpanBERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the association for computational linguistics, 8, 64-77.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., & Soricut, R. (2019). Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey. Ain Shams Engineering Journal, 5(4), 1093-1113.
Meng, L. (2019). Attacking BERT: The power of the giant in the NLP world and transfer learning. Retrieved Oct 15 from https://leemeng.tw/attack_on_bert_transfer_learning_in_nlp.html
O’Reilly. (2006). What Is Web 2.0. Retrieved 9/22 from https://www.oreilly.com/pub/a//web2/archive/what-is-web-20.html
Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. arXiv preprint cs/0205070. https://doi.org/10.48550/arXiv.cs/0205070
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
Rhanoui, M., Mikram, M., Yousfi, S., & Barzali, S. (2019). A CNN-BiLSTM model for document-level sentiment analysis. Machine Learning and Knowledge Extraction, 1(3), 832-847.
Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Sarzynska-Wawer, J., Wawer, A., Pawlak, A., Szymanowska, J., Stefaniak, I., Jarkiewicz, M., & Okruszek, L. (2021). Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research, 304, 114135.
Schouten, K., & Frasincar, F. (2015). Survey on aspect-level sentiment analysis. IEEE Transactions on Knowledge and Data Engineering, 28(3), 813-830. https://doi.org/10.1109/TKDE.2015.2485209
Sharma, A., & Dey, S. (2012). A document-level sentiment analysis approach using artificial neural network and sentiment lexicons. ACM SIGAPP Applied Computing Review, 12(4), 67-75.
Singh, V. K., Piryani, R., Uddin, A., & Waila, P. (2013, March 22-23). Sentiment analysis of movie reviews: A new feature-based heuristic for aspect-level sentiment classification. 2013 International multi-conference on automation, computing, communication, control and compressed sensing (iMac4s), Kottayam, India.
Teng, Z., Vo, D. T., & Zhang, Y. (2016). Context-sensitive lexicon features for neural sentiment analysis. Proceedings of the 2016 conference on empirical methods in natural language processing, Austin, Texas.
Turney, P. D. (2002). Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. arXiv preprint cs/0212032.
Vanaja, S., & Belwal, M. (2018, July 11-12). Aspect-level sentiment analysis on e-commerce data. 2018 International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention Is All You Need. Advances in neural information processing systems, 30.
Wang, J., Yu, L.-C., Lai, K. R., & Zhang, X. (2016). Dimensional sentiment analysis using a regional CNN-LSTM model. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany.
Zhang, L., Wang, S., & Liu, B. (2018). Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4), e1253.
Zhang, Z., Robinson, D., & Tepper, J. (2018, June 3). Detecting hate speech on Twitter using a convolution-GRU based deep neural network. The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings 15.
(Full text available for external access after 2028-08-15)