T. Beysolow II, What Is Natural Language Processing?, pp. 1–12. Berkeley, CA: Apress, 2018.
X. Rong, “word2vec parameter learning explained,” arXiv preprint arXiv:1411.2738, 2014.
I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” arXiv preprint arXiv:1409.3215, 2014.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” arXiv preprint arXiv:1706.03762, 2017.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.
Facebook AI Research, “fastText.” https://fasttext.cc/. [Link verified 14-July-2021].
P. Clark, O. Etzioni, D. Khashabi, T. Khot, B. D. Mishra, K. Richardson, A. Sabharwal, C. Schoenick, O. Tafjord, N. Tandon, et al., “From ‘F’ to ‘A’ on the NY Regents science exams: An overview of the Aristo project,” arXiv preprint arXiv:1909.01958, 2019.
National Academy for Educational Research, “Nationwide Elementary and Secondary School Question Bank.” https://exam.naer.edu.tw/. [Link verified 07-July-2021].
M. Saad, S. Aslam, W. Yousaf, M. Sehnan, S. Anwar, and D. Rehman, “Student testing and monitoring system (STMS) using NLP,” International Journal of Modern Education & Computer Science, vol. 11, no. 9, 2019.
The Hugging Face Team, “Summary of the tokenizers.” https://huggingface.co/transformers/tokenizer_summary.html. [Link verified 07-July-2021].
S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
R. Sennrich, B. Haddow, and A. Birch, “Neural machine translation of rare words with subword units,” arXiv preprint arXiv:1508.07909, 2015.
M. Schuster and K. Nakajima, “Japanese and Korean voice search,” in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5149–5152, IEEE, 2012.
T. Kudo and J. Richardson, “SentencePiece: A simple and language-independent subword tokenizer and detokenizer for neural text processing,” arXiv preprint arXiv:1808.06226, 2018.
Google, “GitHub - google/sentencepiece: Unsupervised text tokenizer for neural network-based text generation.” https://github.com/google/sentencepiece. [Link verified 07-July-2021].
NLTK, “nltk.tokenize package — NLTK 3.6.2 documentation.” https://www.nltk.org/api/nltk.tokenize.html. [Link verified 07-July-2021].
fxsjy, “GitHub - fxsjy/jieba: Jieba Chinese word segmentation.” https://github.com/fxsjy/jieba. [Link verified 07-July-2021].
CKIP, “GitHub - ckiplab/ckiptagger: CKIP neural Chinese word segmentation, POS tagging, and NER.” https://github.com/ckiplab/ckiptagger. [Link verified 07-July-2021].
Droidtown, “Articut Chinese word segmentation and part-of-speech tagging service.” https://github.com/Droidtown/ArticutAPI. [Link verified 07-July-2021].
J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks, vol. 61, pp. 85–117, 2015.
K. Yao, T. Cohn, K. Vylomova, K. Duh, and C. Dyer, “Depth-gated recurrent neural networks,” arXiv preprint arXiv:1508.03790, vol. 9, 2015.
F. A. Gers, N. N. Schraudolph, and J. Schmidhuber, “Learning precise timing with LSTM recurrent networks,” Journal of Machine Learning Research, vol. 3, no. Aug, pp. 115–143, 2002.
D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.
CKIP, “ckiplab/bert-base-chinese · Hugging Face.” https://huggingface.co/ckiplab/bert-base-chinese. [Link verified 08-July-2021].
D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” arXiv preprint arXiv:1711.05101, 2017.
Google Research, “google-research/bert: TensorFlow code and pre-trained models for BERT.” https://github.com/google-research/bert. [Link verified 14-July-2021].
miso-belica, “GitHub - miso-belica/sumy: Module for automatic summarization of text documents and HTML pages.” https://github.com/miso-belica/sumy. [Link verified 09-July-2021].
NLTK, “Natural Language Toolkit — NLTK 3.6.2 documentation.” https://www.nltk.org/index.html. [Link verified 10-July-2021].
G. A. Miller et al., “WordNet | A lexical database for English.” https://wordnet.princeton.edu/. [Link verified 10-July-2021].
Facebook AI Research, “PyTorch.” https://pytorch.org/. [Link verified 11-July-2021].
Open-source software (contributors listed on its website), “SeleniumHQ Browser Automation.” https://www.selenium.dev/. [Link verified 11-July-2021].
L. Richardson et al., “Beautiful Soup: We called him Tortoise because he taught us.” https://www.crummy.com/software/BeautifulSoup/. [Link verified 11-July-2021].
K. Reitz et al., “Requests: HTTP for Humans™ — Requests 2.25.1 documentation.” https://docs.python-requests.org/en/master/. [Link verified 11-July-2021].
J. X. McKie, “PyMuPDF documentation — PyMuPDF 1.18.14 documentation.” https://pymupdf.readthedocs.io/en/latest/index.html. [Link verified 11-July-2021].
The Jupyter Team, “About Us.” https://jupyter.org/about. [Link verified 11-July-2021].
T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” arXiv preprint arXiv:1310.4546, 2013.