[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin. "Attention Is All You Need." arXiv preprint arXiv:1706.03762v5, 2017.
[2] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam Shazeer. "Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks." arXiv preprint arXiv:1506.03099v3, 2015.
[3] Ayodele A. Adebiyi, Aderemi O. Adewumi, Charles K. Ayo. "Stock Price Prediction Using the ARIMA Model." In Proceedings of the UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, 2014.
[4] Guokun Lai, Wei-Cheng Chang, Yiming Yang, Hanxiao Liu. "Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks." arXiv preprint arXiv:1703.07015v3, 2018.
[5] Neo Wu, Bradley Green, Xue Ben, Shawn O'Banion. "Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case." arXiv preprint arXiv:2001.08317v1, 2020.
[6] Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam. "A Time Series Is Worth 64 Words: Long-Term Forecasting with Transformers." arXiv preprint arXiv:2211.14730v2, 2023.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding." arXiv preprint arXiv:1810.04805v2, 2019.
[8] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. "Improving Language Understanding by Generative Pre-Training." OpenAI, June 2018.
[9] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. "Language Models are Unsupervised Multitask Learners." OpenAI, Feb. 2019.
[10] OpenAI. "Language Models are Few-Shot Learners." arXiv preprint arXiv:2005.14165v4, 2020.
[11] OpenAI. "GPT-4 Technical Report." arXiv preprint arXiv:2303.08774v3, 2023.
[12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. "An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale." In International Conference on Learning Representations (ICLR), 2021.
[13] Natasha Klingenbrunn. "Transformers for Time-series Forecasting." Feb. 2021.
[14] Meta Platforms. "Quarterly Report." Meta Platforms, Apr. 2023.
[15] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang. "Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting." arXiv preprint arXiv:2012.07436v3, 2021.
[16] George E. P. Box, Gwilym M. Jenkins. "Time Series Analysis: Forecasting and Control." 1976.
[17] Michael Phi. "Illustrated Guide to LSTM's and GRU's: A Step by Step Explanation." Towards Data Science, https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21, Sep. 2018 (accessed July 2023).
[18] Jay Alammar. "The Illustrated Transformer." GitHub, http://jalammar.github.io/illustrated-transformer/, June 2018 (accessed July 2023).