[1] World Health Organization. (2016). Ambient air pollution: A global assessment of exposure and burden of disease. Retrieved January 15, 2023, from https://apps.who.int/iris/bitstream/handle/10665/250141/9789241511353-eng.pdf
[2] Schraufnagel, D. E., Balmes, J. R., Cowl, C. T., De Matteis, S., Jung, S. H., Mortimer, K., ... & Wuebbles, D. J. (2019). Air pollution and noncommunicable diseases: A review by the Forum of International Respiratory Societies' Environmental Committee, Part 2: Air pollution and organ systems. Chest, 155(2), 417-426.
[3] Jocher, G., Chaurasia, A., & Qiu, J. (2023). YOLO by Ultralytics (Version 8.0.0) [Computer software]. https://github.com/ultralytics/ultralytics
[4] McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115-133.
[5] Voulodimos, A., Doulamis, N., Doulamis, A., & Protopapadakis, E. (2018). Deep learning for computer vision: A brief review. Computational Intelligence and Neuroscience, 2018.
[6] Young, T., Hazarika, D., Poria, S., & Cambria, E. (2018). Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, 13(3), 55-75.
[7] Najafabadi, M. M., Villanustre, F., Khoshgoftaar, T. M., Seliya, N., Wald, R., & Muharemagic, E. (2015). Deep learning applications and challenges in big data analytics. Journal of Big Data, 2(1), 1-21.
[8] Cunningham, P., Cord, M., & Delany, S. J. (2008). Supervised learning. In Machine Learning Techniques for Multimedia: Case Studies on Organization and Retrieval (pp. 21-49). Springer.
[9] Barlow, H. B. (1989). Unsupervised learning. Neural Computation, 1(3), 295-311.
[10] Zhu, X., & Goldberg, A. B. (2009). Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1), 1-130.
[11] Li, Y. (2017). Deep reinforcement learning: An overview. arXiv preprint arXiv:1701.07274.
[12] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489.
[13] Zeiler, M. D., & Fergus, R. (2014, September). Visualizing and understanding convolutional networks. In European Conference on Computer Vision (pp. 818-833). Springer, Cham.
[14] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28.
[15] Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580-587).
[16] Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1440-1448).
[17] Freepik. (2018). Free vector: Stem cell diagram on white background. Retrieved December 7, 2022, from https://www.freepik.com/free-vector/stem-cell-diagram-white-background_2480958.htm
[18] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
[19] Cole, M. R. (2018). Hands-On Neural Network Programming with C#: Add Powerful Neural Network Capabilities to Your C# Enterprise Applications. Packt Publishing Ltd.
[20] Hinton, G. E. (2007). Learning multiple layers of representation. Trends in Cognitive Sciences, 11, 428-434.
[21] Dertat, A. (2017). Applied deep learning - Part 1: Artificial neural networks. Retrieved December 8, 2022, from https://towardsdatascience.com/applied-deep-learning-part-1-artificial-neural-networks-d7834f67a4f6
[22] Weng, J., Ahuja, N., & Huang, T. S. (1992, June). Cresceptron: A self-organizing neural network which grows adaptively. In [Proceedings 1992] IJCNN International Joint Conference on Neural Networks (Vol. 1, pp. 576-581). IEEE.
[23] Ratan, P. (2021). Convolutional neural network architecture. Retrieved January 7, 2023, from https://www.analyticsvidhya.com/blog/2020/10/what-is-the-convolutional-neural-network-architecture/
[24] Dong, H., Zhang, L., & Zou, B. (2020). PolSAR image classification with lightweight 3D convolutional networks. Remote Sensing, 12(3), 396.
[25] Uijlings, J. R., Van De Sande, K. E., Gevers, T., & Smeulders, A. W. (2013). Selective search for object recognition. International Journal of Computer Vision, 104(2), 154-171.
[26] Terven, J., & Cordova-Esparza, D. (2023). A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond. arXiv preprint arXiv:2304.00501.
[27] Jin, J. (2019). Dice similarity coefficient vs. IoU. Retrieved December 8, 2022, from https://www.cnblogs.com/jins-note/p/11674770.html
[28] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
[29] Redmon, J., & Farhadi, A. (2017). YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7263-7271).
[30] Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
[31] Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
[32] Jocher, G., Changyu, L., Hogan, A., Lijun Yu 于力军, Changyu98, Rai, P., & Sullivan, T. (2020). ultralytics/yolov5: Initial Release (v1.0) [Computer software]. Zenodo. https://doi.org/10.5281/ZENODO.3908560
[33] Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., ... & Wei, X. (2022). YOLOv6: A single-stage object detection framework for industrial applications. arXiv preprint arXiv:2209.02976.
[34] Wang, C. Y., Bochkovskiy, A., & Liao, H. Y. M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696.
[35] Targ, S., Almeida, D., & Lyman, K. (2016). ResNet in ResNet: Generalizing residual architectures. arXiv preprint arXiv:1603.08029.
[36] Wang, C. Y., Liao, H. Y. M., Wu, Y. H., Chen, P. Y., Hsieh, J. W., & Yeh, I. H. (2020). CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 390-391).
[37] Rezatofighi, H., Tsoi, N., Gwak, J., Sadeghian, A., Reid, I., & Savarese, S. (2019). Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 658-666).
[38] Ding, X., Zhang, X., Ma, N., Han, J., Ding, G., & Sun, J. (2021). RepVGG: Making VGG-style ConvNets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13733-13742).
[39] Ge, Z., Liu, S., Wang, F., Li, Z., & Sun, J. (2021). YOLOX: Exceeding YOLO series in 2021. arXiv preprint arXiv:2107.08430.
[40] Zhang, Z., He, T., Zhang, H., Zhang, Z., Xie, J., & Li, M. (2019). Bag of freebies for training object detection neural networks. arXiv preprint arXiv:1902.04103.
[41] Wang, C. Y., Liao, H. Y. M., & Yeh, I. H. (2022). Designing network design strategies through gradient path analysis. arXiv preprint arXiv:2211.04800.
[42] MMYOLO Contributors. (2022). MMYOLO: OpenMMLab YOLO series toolbox and benchmark. Retrieved from
[43] Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2117-2125).
[44] Rawat, W., & Wang, Z. (2017). Deep convolutional neural networks for image classification: A comprehensive review. Neural Computation, 29(9), 2352-2449.
[45] Zhao, Z. Q., Zheng, P., Xu, S. T., & Wu, X. (2019). Object detection with deep learning: A review. IEEE Transactions on Neural Networks and Learning Systems, 30(11), 3212-3232.
[46] Hafiz, A. M., & Bhat, G. M. (2020). A survey on instance segmentation: State of the art. International Journal of Multimedia Information Retrieval, 9(3), 171-189.
[47] Luo, W., Xing, J., Milan, A., Zhang, X., Liu, W., & Kim, T. K. (2021). Multiple object tracking: A literature review. Artificial Intelligence, 293, 103448.
[48] Dang, Q., Yin, J., Wang, B., & Zheng, W. (2019). Deep learning based 2D human pose estimation: A survey. Tsinghua Science and Technology, 24(6), 663-676.
[49] Tian, Z., Shen, C., Chen, H., & He, T. (2019). FCOS: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 9627-9636).
[50] Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., ... & Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13 (pp. 740-755). Springer International Publishing.
[51] Mahto, P., Garg, P., Seth, P., & Panda, J. (2020). Refining YOLOv4 for vehicle detection. International Journal of Advanced Research in Engineering and Technology (IJARET), 11(5).
[52] Hsu, W. Y., & Lin, W. Y. (2021). Adaptive fusion of multi-scale YOLO for pedestrian detection. IEEE Access, 9, 110063-110073.
[53] Wu, T. H., Wang, T. W., & Liu, Y. Q. (2021, June). Real-time vehicle and distance detection based on improved YOLO v5 network. In 2021 3rd World Symposium on Artificial Intelligence (WSAI) (pp. 24-28). IEEE.
[54] Al-qaness, M. A., Abbasi, A. A., Fan, H., Ibrahim, R. A., Alsamhi, S. H., & Hawbani, A. (2021). An improved YOLO-based road traffic monitoring system. Computing, 103, 211-230.
[55] Liang, S., Wu, H., Zhen, L., Hua, Q., Garg, S., Kaddoum, G., ... & Yu, K. (2022). Edge YOLO: Real-time intelligent object detection system based on edge-cloud cooperation in autonomous vehicles. IEEE Transactions on Intelligent Transportation Systems, 23(12), 25345-25360.
[56] Lippi, M., Bonucci, N., Carpio, R. F., Contarini, M., Speranza, S., & Gasparri, A. (2021, June). A YOLO-based pest detection system for precision agriculture. In 2021 29th Mediterranean Conference on Control and Automation (MED) (pp. 342-347). IEEE.
[57] Tian, Y., Yang, G., Wang, Z., Wang, H., Li, E., & Liang, Z. (2019). Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Computers and Electronics in Agriculture, 157, 417-426.
[58] Hammami, M., Friboulet, D., & Kéchichian, R. (2020, October). Cycle GAN-based data augmentation for multi-organ detection in CT images via YOLO. In 2020 IEEE International Conference on Image Processing (ICIP) (pp. 390-393). IEEE.
[59] Montalbo, F. J. P. (2020). A computer-aided diagnosis of brain tumors using a fine-tuned YOLO-based model with transfer learning. KSII Transactions on Internet & Information Systems, 14(12).
[60] Ting, H. W., Chung, S. L., Chen, C. F., Chiu, H. Y., & Hsieh, Y. W. (2020). A drug identification model developed using deep learning technologies: Experience of a medical center in Taiwan. BMC Health Services Research, 20(1), 1-9.
[61] Degadwala, S., Vyas, D., Chakraborty, U., Dider, A. R., & Biswas, H. (2021, March). YOLO-v4 deep learning model for medical face mask detection. In 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS) (pp. 209-213). IEEE.
[62] Cai, W., Wang, C., Huang, H., & Wang, T. (2020, December). A real-time smoke detection model based on YOLO-SMOKE algorithm. In 2020 Cross Strait Radio Science & Wireless Technology Conference (CSRSWTC) (pp. 1-3). IEEE.
[63] Chaturvedi, S., Khanna, P., & Ojha, A. (2022). A survey on vision-based outdoor smoke detection techniques for environmental safety. ISPRS Journal of Photogrammetry and Remote Sensing, 185, 158-187.
[64] Frizzi, S., Kaabi, R., Bouchouicha, M., Ginoux, J. M., Moreau, E., & Fnaiech, F. (2016, October). Convolutional neural network for video fire and smoke detection. In IECON 2016 - 42nd Annual Conference of the IEEE Industrial Electronics Society (pp. 877-882). IEEE.
[65] Liu, Y., Qin, W., Liu, K., Zhang, F., & Xiao, Z. (2019). A dual convolution network using dark channel prior for image smoke classification. IEEE Access, 7, 60697-60706.
[66] Töreyin, B. U., Dedeoğlu, Y., & Cetin, A. E. (2005, September). Wavelet based real-time smoke detection in video. In 2005 13th European Signal Processing Conference (pp. 1-4). IEEE.
[67] Chen, T. H., Yin, Y. H., Huang, S. F., & Ye, Y. T. (2006, December). The smoke detection for early fire-alarming system base on video processing. In 2006 International Conference on Intelligent Information Hiding and Multimedia (pp. 427-430). IEEE.
[68] Kim, D., & Wang, Y. F. (2009, March). Smoke detection in video. In 2009 WRI World Congress on Computer Science and Information Engineering (Vol. 5, pp. 759-763). IEEE.
[69] Kaabi, R., Frizzi, S., Bouchouicha, M., Fnaiech, F., & Moreau, E. (2017, February). Video smoke detection review: State of the art of smoke detection in visible and IR range. In 2017 International Conference on Smart, Monitored and Controlled Cities (SM2C) (pp. 81-86). IEEE.
[70] Matlani, P., & Shrivastava, M. (2018). A survey on video smoke detection. In Information and Communication Technology for Sustainable Development (pp. 211-222). Springer, Singapore.
[71] Khan, S., Muhammad, K., Hussain, T., Del Ser, J., Cuzzolin, F., Bhattacharyya, S., ... & de Albuquerque, V. H. C. (2021). DeepSmoke: Deep learning model for smoke detection and segmentation in outdoor environments. Expert Systems with Applications, 182, 115125.
[72] Yuan, F., Dong, Z., Zhang, L., Xia, X., & Shi, J. (2022). Cubic-cross convolutional attention and count prior embedding for smoke segmentation. Pattern Recognition, 131, 108902.
[73] Wang, Z., Yin, D., & Ji, S. (2022). Real-time factory smoke detection based on two-stage relation-guided algorithm. Scientific Reports, 12(1), 1-15.
[74] 楊倩華 (2022). A study of applying YOLOv5 to a vehicle exhaust-smoke detection system (Unpublished master's thesis). Department of Information Management, National Dong Hwa University, Hualien, Taiwan.
[75] Seliya, N., Khoshgoftaar, T. M., & Van Hulse, J. (2009, November). A study on the relationships of classifier performance metrics. In 2009 21st IEEE International Conference on Tools with Artificial Intelligence (pp. 59-66). IEEE.