
Detailed Record

Author: 鄭仲珉
Author (English): Chung-Min Cheng
Title: 應用YOLO於煙囪排放煙霧與水蒸氣辨識之研究
Title (English): Research on the application of YOLO in the chimney smoke and steam detection
Advisor: 侯佳利
Advisor (English): Jia-Li Hou
Committee Members: 林耀堂
劉英和
Committee Members (English): Yao-Tang Lin
Ying-Ho Liu
Degree: Master's
Institution: National Dong Hwa University
Department: Department of Information Management
Student ID: 611135101
Publication Year (ROC era): 112 (2023)
Academic Year of Graduation: 111 (2022-2023)
Language: Chinese
Number of Pages: 69
Keywords: YOLO、煙霧辨識、水蒸氣辨識、煙囪排放、機器學習、深度學習
Keywords (English): YOLO, smoke detection, steam detection, chimney emission, machine learning, deep learning
Usage statistics:
  • Recommendations: 0
  • Views: 20
  • Downloads: 7
  • Bookmarks: 0
In recent years, as technology has advanced, the fields of machine learning and deep learning have developed rapidly. Because computers do not tire, stay highly focused, and respond quickly, machine learning and deep learning have been widely applied in computer vision, medical diagnosis, data analysis, and other fields, helping people make better decisions, spot problems earlier, and even complete tedious, repetitive workflows more quickly.
Under the Air Pollution Control Act, when industrial exhaust emissions cause air pollution that affects the surrounding living environment and public health, the competent authority may impose fines in accordance with the law. Judging whether smoke emissions are abnormal is precisely one of these tedious and tiring tasks. In addition, because traditional air pollution sensors are bulky and constrained by installation space, most can only be placed at government offices and schools, and their sensing range is limited, so they cannot effectively locate the source of abnormal smoke. As a result, the people responsible for judging smoke must review large amounts of surveillance footage of areas where smoke may appear, or rely on public complaints, before they can identify possible smoke locations. Because the extent of emitted smoke changes quickly with wind direction, lighting, and other factors, deciding whether an industrial emission is polluting smoke usually requires experienced inspectors or environmental auditors to first outline the visual characteristics of smoke so that new personnel have a basis for judgment, which makes this work demand a great deal of manpower and time.
After discussions with inspection personnel, this study found that smoke and steam emissions are easily confused in practice, yet steam is a gas whose emission is permitted by regulation, so inspections must distinguish between the two. This study uses surveillance cameras to monitor factory emissions and combines computer vision with machine learning to train a YOLO model to identify abnormal smoke emissions, allowing smoke detection to be carried out over a wide area and improving the workflow of the personnel involved.
In recent years, machine learning and deep learning have developed rapidly alongside advances in technology. Because computers do not tire, maintain high concentration, and respond quickly, they are widely used in fields such as computer vision, medical diagnosis, and data analysis, helping people find problems earlier, make better decisions, and improve efficiency.
According to the Air Pollution Control Act, if industrial chimney emissions contribute to air pollution and affect the surrounding environment, the competent authority can impose fines. Traditional air pollution sensors are bulky, require considerable installation space, and have a limited sensing range. Experienced inspectors are also needed to outline the visual features of smoke before it can be detected, so this work takes a great deal of manpower and time.
Therefore, we aim to streamline the smoke detection process with machine learning. We monitor factory smoke emissions with cameras to collect smoke samples and train a YOLO model to identify abnormal smoke and steam.
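
For illustration only, the minimal sketch below shows how a two-class smoke/steam detector could be fine-tuned and run with the Ultralytics YOLOv8 API used in this thesis. The dataset config smoke_steam.yaml, the image file name, and the hyperparameter values are hypothetical placeholders, not the thesis's actual settings (those are reported in Chapter 4).

# Minimal sketch (not the thesis's actual code): fine-tune and run a
# two-class smoke/steam detector with the Ultralytics YOLOv8 API.
# Assumed placeholders: "smoke_steam.yaml" (a dataset config listing
# train/val image folders and the classes "smoke" and "steam"),
# "chimney_frame.jpg", and the example hyperparameter values.
from ultralytics import YOLO

# Start from a pretrained YOLOv8-nano checkpoint and fine-tune it on the
# labeled smoke/steam images.
model = YOLO("yolov8n.pt")
model.train(data="smoke_steam.yaml", epochs=100, imgsz=640, batch=16)

# Run detection on a single surveillance frame and print each detection's
# class name and confidence score.
results = model.predict(source="chimney_frame.jpg", conf=0.25)
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]  # "smoke" or "steam"
    print(f"{cls_name}: confidence {float(box.conf):.2f}")

The nano checkpoint appears here only because it is the smallest and fastest variant; which YOLO version and which settings actually perform best on this task is the subject of Experiments 1 and 2 in Chapter 4.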
Abstract (Chinese) v
Abstract (English) vi
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Objectives 2
Chapter 2 Literature Review 3
2.1 Deep Learning 3
2.1.1 Artificial Neural Networks (ANN) 4
2.1.2 Convolutional Neural Networks (CNN) 6
2.1.3 Region-based Convolutional Neural Networks (R-CNN) 8
2.1.4 Fast R-CNN 9
2.1.5 Faster R-CNN 9
2.2 YOLO: You Only Look Once 11
2.2.1 YOLOv1 13
2.2.2 YOLOv2 13
2.2.3 YOLOv3 14
2.2.4 YOLOv4 16
2.2.5 YOLOv5 17
2.2.6 YOLOv6 17
2.2.7 YOLOv7 18
2.2.8 YOLOv8 20
2.2.9 Application Domains of YOLO 24
2.3 Related Research on Smoke Detection Techniques 24
Chapter 3 Research Methods 27
3.1 Research Process 27
3.2 Research Design 29
Chapter 4 Research Process and Results 31
4.1 Research Process 31
4.1.1 Data Preprocessing 31
4.1.2 Model Evaluation Methods 36
4.1.3 Experiment 1: Model Selection 37
4.1.4 Experiment 2: YOLOv8 Parameter Experiments 38
4.1.5 Experiment 3: Smoke and Steam Emission Detection 43
4.2 Research Results 49
Chapter 5 Conclusions and Future Work 51
References 52
(The full text will be available for external access after July 15, 2028.)