
Detailed Record

Author: 蘇柏綸 (Bo-Lun Su)
Title (Chinese): 基於不良天氣下,人工智慧目標檢測增強資料集訓練,準確率之優化探討與比較
Title (English): Toward Adverse Weather Conditions: Enhanced-Data Training for AI Object Detection, with a Discussion and Comparison of Accuracy Optimization
Advisor: 陳偉銘 (Wei-Ming Chen)
Committee Members: 簡暐哲 (Wei-Che Chien), 張耀中 (Yao-Chung Chang), 陳偉銘 (Wei-Ming Chen)
Degree: Master's
Institution: National Dong Hwa University (國立東華大學)
Department: Department of Information Management
Student ID: 611035106
Year of Publication (ROC): 112 (2023)
Graduation Academic Year: 111
Language: Chinese
Pages: 77
Keywords: object detection; deep learning; accuracy; simulated synthetic rain; augmentation dataset
Usage statistics:
  • Recommendations: 0
  • Hits: 14
  • Downloads: 21
  • Bookmarks: 0
Abstract:
Object detection is an important task in the field of artificial intelligence and computer vision. Under challenging adverse weather conditions, including rain, the accuracy of object detection methods can drop significantly. Because the accuracy and performance of a detector depend largely on the training data and on the algorithm's architecture, improvement must address both aspects. Most existing training datasets, however, lack scenes captured under such challenging adverse conditions. We therefore combine baseline data with simulated synthetic rain, Gaussian noise, salt-and-pepper noise, and other adverse-condition data to form an augmented dataset for model training, aiming to improve the accuracy and robustness of object detection: not only raising accuracy under adverse conditions but also preserving accuracy under good conditions. During the validation phase, we further adjust the experimental parameters and training methods. This thesis surveys advanced and emerging techniques for overcoming object detection in adverse environments, and validates our proposed training-data augmentation method against a real-rain dataset, comparing their accuracy.
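To make the augmentation step concrete, the sketch below applies the three corruptions named in the abstract (Gaussian noise, salt-and-pepper noise, and simulated rain) to an image array. This is a minimal NumPy-only illustration: the function names, the parameters (sigma, amount, num_drops, length), and the simple diagonal-streak rain model are assumptions for demonstration, not the thesis's actual rain-synthesis pipeline.

```python
import numpy as np

def add_gaussian_noise(img, sigma=15.0, seed=None):
    """Additive zero-mean Gaussian noise, clipped back to the 8-bit range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper_noise(img, amount=0.02, seed=None):
    """Flip a fraction `amount` of pixels to pure black (pepper) or white (salt)."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape[:2])          # one value per pixel location
    noisy[mask < amount / 2] = 0              # pepper
    noisy[mask > 1 - amount / 2] = 255        # salt
    return noisy

def add_simulated_rain(img, num_drops=300, length=12, seed=None):
    """Overlay slanted bright streaks as a crude simulated-rain layer."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    rain = img.astype(np.float64)
    for _ in range(num_drops):
        x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
        for t in range(length):
            yy, xx = y + t, x + t // 3        # slightly slanted streak
            if yy < h and xx < w:
                rain[yy, xx] = rain[yy, xx] * 0.7 + 255 * 0.3  # blend toward white
    return np.clip(rain, 0, 255).astype(np.uint8)
```

In an augmented-dataset workflow of the kind described above, each baseline image would be passed through one of these functions (with its original bounding-box labels kept unchanged, since the corruptions do not move objects) and the corrupted copies added to the training set.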
Abstract i
Acknowledgements iii
Table of Contents iv
List of Figures vi
List of Tables ix
Chapter 1: Introduction 1
1.1 Preface and Motivation 1
1.2 Literature Review 4
1.2.1 Principles of Deep Learning for Object Detection 4
1.2.2 Dataset Description 5
1.2.3 Model Evaluation and NMS 7
1.2.4 Loss Functions 9
1.2.5 YOLOv4 Architecture 11
1.2.6 Methods for Improving Detection Accuracy in Adverse Environments 14
1.2.7 YOLO Versions: Description and Comparison 17
Chapter 2: Related Work 21
2.1 Augmented-Dataset Object Detection Architecture 21
2.2 KITTI Dataset Classes in Detail 23
2.3 YOLO System Environment Setup 24
2.4 KITTI Annotation Format Conversion 24
2.5 Survey of Simulated Rain Synthesis Systems 26
2.6 Gaussian Noise (GN) and Salt-and-Pepper Noise (SPN) 29
2.7 Image Annotation 30
2.8 Collection and Annotation of a Real-Rain Dataset 31
2.9 YOLO Instance Pre-training 34
2.10 Pre-training and mAP Validation on the Simulated-Rain Image Dataset 36
Chapter 3: Methodology 39
3.1 Simulated Synthetic Rain: Model, Experimental Design, and Procedure 39
3.2 Training and Validation with Baseline, Gaussian Noise, Salt-and-Pepper Noise, and Simulated Rain 41
3.3 Augmented-Data Architecture and Preparation Procedure 44
3.4 Implementation, Training, and Validation of the Augmented Dataset 45
Chapter 4: Experimental Results 49
4.1 Evaluation and Comparison of Simulated Rain Effects 49
4.2 Effect of Simulated-Rain Parameters on Detection Performance 51
4.3 Accuracy Comparison of Weight Models Across Datasets (Simulated and Augmented) 52
Chapter 5: Discussion 59
5.1 Advantages and Problems of the Simulated-Rain and Augmented-Dataset Methods 59
5.2 Explanation and Analysis of Experimental Results 61
Chapter 6: Conclusion and Future Work 63
6.1 Summary of Research Contributions 63
6.2 Directions for Future Work and Improvement 63
References 65