Author: 陳禹仲
Author (English): Yu-Jung Chen
Thesis Title: 基於影像辨識技術之傷口資訊辨識系統:以輔助居家壓瘡傷口看護為例
Thesis Title (English): Wound information identification system based on image recognition technology: a case of assisting home pressure ulcer wound care
Advisor: 羅壽之
Advisor (English): Shou-Chih Lo
Committee Members: 李官陵, 羅淑芬
Committee Members (English): Guan-Ling Lee, Shu-Fen Lo
Degree: Master's
University: 國立東華大學 (National Dong Hwa University)
Department: 資訊工程學系 (Department of Computer Science and Information Engineering)
Student ID: 610621225
Year of Publication (ROC calendar): 108
Graduation Academic Year: 107
Language: Chinese
Number of Pages: 59
Keywords (Chinese): 深度學習、壓瘡傷口、影像辨識
Keywords (English): Deep learning, Pressure Ulcer, Image recognition
In home wound care, the day-to-day care of a patient's pressure ulcer wound is usually handled by the primary caregiver, while nursing staff visit only periodically to document the wound. With the rapid development of image recognition and deep learning technology, many recent medical studies have applied image recognition to such problems. This thesis therefore combines image recognition and deep learning to assist wound-recording personnel, shortening the time a patient's wound must remain exposed during documentation and providing useful wound information to support their analysis.
The purpose of this thesis is to design a Wound Information Identification System (WIIS) for open, connected chronic wounds. Once the wound is photographed together with the provided reference color patch, the system identifies wound information from the image, such as the wound area and the proportional distribution of wound colors, thereby assisting the person recording the wound and reducing their workload.
This thesis proposes two deep-learning-based wound contour detection models, WcpCNN (Wound contour positioning based on CNN) and WcpMLP (Wound contour positioning based on MLP). Compared with the traditional Canny detector, WcpMLP achieves the lowest wound area error. The experiments further show that when the camera lens is kept parallel to the wound, rotation introduces no error, whereas when the camera is tilted, the error caused by rotation grows as the tilt angle increases.
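The pipeline described in the abstract (calibrate real-world scale from the reference color patch photographed beside the wound, detect the wound contour, then compute the area and color proportions) can be illustrated with a short, self-contained example. This is only a minimal sketch under stated assumptions: it uses OpenCV and NumPy, a hypothetical 2 cm x 2 cm green reference patch, illustrative HSV thresholds, and the classical Canny baseline rather than the WcpCNN/WcpMLP models proposed in the thesis.

# Minimal sketch of the wound-measurement steps described in the abstract.
# Assumptions (not from the thesis): OpenCV 4.x + NumPy, a 2 cm x 2 cm green
# reference patch, illustrative HSV thresholds, and a Canny-based contour
# baseline in place of the proposed WcpCNN / WcpMLP models.
import cv2
import numpy as np

PATCH_AREA_CM2 = 4.0  # assumed physical area of the reference color patch

def largest_contour(mask):
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

def measure_wound(image_bgr):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

    # 1) Scale calibration: segment the reference patch and derive how many
    #    square centimeters one pixel covers.
    patch = largest_contour(cv2.inRange(hsv, (40, 80, 80), (80, 255, 255)))
    if patch is None:
        raise ValueError("reference patch not found")
    cm2_per_px = PATCH_AREA_CM2 / max(cv2.contourArea(patch), 1.0)

    # 2) Baseline contour detection: blur + Canny edges, morphological closing,
    #    then the largest external contour is taken as the wound boundary.
    gray = cv2.GaussianBlur(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    wound = largest_contour(edges)
    if wound is None:
        raise ValueError("wound contour not found")
    area_cm2 = cv2.contourArea(wound) * cm2_per_px

    # 3) Color proportions inside the wound region; the red / yellow / black
    #    classes use illustrative thresholds, not the thesis' actual rules.
    wound_mask = np.zeros(edges.shape, np.uint8)
    cv2.drawContours(wound_mask, [wound], -1, 255, thickness=cv2.FILLED)
    h, s, v = cv2.split(hsv)
    inside = wound_mask > 0
    black = inside & (v < 60)
    red = inside & ~black & ((h < 10) | (h > 170))
    yellow = inside & ~black & (h >= 20) & (h <= 35)
    total = max(int(inside.sum()), 1)
    ratios = {name: float(m.sum()) / total
              for name, m in (("red", red), ("yellow", yellow), ("black", black))}
    return area_cm2, ratios

In the thesis, step 2 is the part that WcpCNN and WcpMLP replace; the reported result is that WcpMLP yields the lowest wound area error relative to this Canny-style baseline.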
Acknowledgements
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1-1 Research Background and Motivation
1-2 Research Objectives
1-3 Thesis Outline
Chapter 2 Background
2-1 Introduction to Wounds
2-1-1 Pressure Ulcer Wounds
2-1-2 Qualitative Assessment of Pressure Ulcers
2-1-3 Quantitative Assessment of Pressure Ulcers
2-1-4 Introduction to Wound Measurement Methods
2-2 Artificial Intelligence
2-2-1 Machine Learning
2-2-2 Deep Learning
2-3 Image Recognition Techniques
2-3-1 Color Spaces
2-3-2 Image Preprocessing
2-3-3 Edge Detection
2-3-4 Contour Detection
2-4 Applications of Image Recognition in Medical Imaging
2-5 Applications of Image Processing to Wound Images
Chapter 3 The Wound Information Identification System (WIIS)
3-1 WIIS Design
3-2 Manual Wound Localization
3-3 Wound Information Identification Workflow
3-4 Wound Contour Detection Models
3-4-1 Canny-Based Wound Contour Detection Model
3-4-2 Deep-Learning-Based Wound Contour Detection Models
3-5 Actual Wound Area Identification
3-6 Wound Length and Width Measurement
3-7 Statistical Analysis of Wound Colors
Chapter 4 Wound Identification Performance Analysis
4-1 Experimental Environment
4-2 Wound Contour Detection Performance Analysis
4-3 Camera Angle Performance Analysis
4-4 WIIS Assistance for the BWAT
Chapter 5 Conclusions and Future Work
5-1 Conclusions
5-2 Future Work
Chapter 6 References
