
Detailed Record

Author: 潘宜婷
Author (English): Yi-Ting Pan
Title: 人工智慧物件偵測-應用於街景定位及方向判斷功能
Title (English): Artificial Intelligence Object Detection – Applied to Street View Positioning and Direction Judgment Functions
Advisor: 陳偉銘
Advisor (English): Wei-Ming Chen
Committee: 張耀中, 簡暐哲
Committee (English): Yao-Chung Chang, Wei-Che Chien
Degree: Master's
Institution: National Dong Hwa University
Department: Department of Information Management
Student ID: 611035101
Publication Year (ROC): 112
Graduation Academic Year: 111
Language: Chinese
Pages: 76
Keywords: 物件偵測、特徵擷取、街景資料集
Keywords (English): object detection, feature extraction, street view dataset
Abstract: Advances in technology have made navigation systems an essential tool for modern life. However, due to the characteristics of satellite positioning, GPS accuracy and stability remain problematic: signals can be blocked, causing inaccurate positioning, and the direction of travel cannot always be determined. This study proposes an object-detection comparison system that matches street-view images using object-detection techniques to assist GPS positioning. The system also resolves direction-judgment errors: by recognizing signboard objects and sorting them by distance, it confirms the current heading, which then feeds into subsequent route guidance. Its hardware requirements are modest, so it can be applied to various in-vehicle and mobile-phone navigation systems; with phone navigation in particular, a user can photograph the current street scene, compare it against stored street views, and quickly resolve being lost. Experiments show that the system effectively addresses GPS positioning problems: when GPS is inaccurate, it correctly proposes candidate positions and corrects the heading, providing more accurate guidance. The system is feasible and practical, and offers reference value for supplementing existing navigation systems. The main contributions of this study are: (1) applying object-detection techniques to street-view environments and building a street-view object detection and comparison system, raising the utility of street-view information; (2) constructing a positioning system that effectively assists navigation, resolves inaccurate positioning, provides accurate position and direction, and improves navigation availability in emergency situations; (3) using object detection to assist localization in space, suggesting a possible direction for improving the route accuracy of future autonomous robots and self-driving cars.
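The core matching idea described above can be sketched in a few lines. The location names, object labels, and similarity rule below are invented for illustration; the thesis's actual system uses a YOLO-family detector over a street-view dataset, while this sketch only shows how detected object labels from a query photo might be ranked against a pre-built database of per-location object sets.

```python
# Hypothetical sketch: rank stored street-view locations by how well their
# recorded signboard/object labels overlap with the objects detected in a
# query photo. Database contents and class names are illustrative only.

# Each entry: location id -> set of object labels previously detected there.
STREET_VIEW_DB = {
    "loc_A": {"7-eleven", "bus_stop", "pharmacy"},
    "loc_B": {"7-eleven", "parking_sign"},
    "loc_C": {"bus_stop", "noodle_shop", "pharmacy"},
}

def candidate_positions(query_labels, db, top_k=2):
    """Score each location by Jaccard similarity between the query photo's
    detected object labels and the location's stored label set, and return
    the top_k candidates as (location_id, score) pairs."""
    q = set(query_labels)
    scores = {}
    for loc, labels in db.items():
        union = q | labels
        scores[loc] = len(q & labels) / len(union) if union else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# A photo in which the detector found a 7-Eleven sign, a pharmacy sign,
# and a bus stop matches loc_A perfectly.
print(candidate_positions(["7-eleven", "pharmacy", "bus_stop"], STREET_VIEW_DB))
```

In a real pipeline the label sets would come from running the detector over geotagged street-view imagery, and the similarity score could additionally weight rarer, more distinctive signboards.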
Abstract (English): In recent years, with the advance of science and technology, technology products have become ubiquitous. People no longer need paper maps to travel; they can simply use a navigation system built on GPS positioning and mathematical calculation. A navigation system tells users where they are, where they want to go, and how to get there. It eliminates the time once spent consulting maps, and its positioning function solves the common problem of not knowing one's own location. Navigation systems are thus well suited to daily life; however, due to the characteristics of GPS, several problems remain. First, the GPS signal is often blocked by buildings, so when users move indoors or among tall structures, the reported position may be incorrect. Second, when a navigation system has just started, it can take a few minutes before it becomes usable. Third, when two positions are close together, or lie at the same point but at different heights, the system cannot distinguish them; for example, it cannot tell whether the user should drive onto or under a bridge. To address these problems, this study proposes a new method that applies object detection to street-view images to find fixed objects such as signboards and road signs, thereby confirming the accuracy of the position reported by GPS. Moreover, by comparing the detected objects and calculating the distances to them, the direction the user should travel can also be determined.
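The direction-judgment step described above can be illustrated with a toy calculation. The function, coordinates, and distances below are all hypothetical; the point is only the ordering rule: of two recognized landmarks whose positions along the street are known, the one estimated to be nearer the camera indicates which way the user is facing.

```python
# Hypothetical sketch of direction judgment by distance sorting: given two
# recognized signboards with known positions along a street (metres from one
# end) and their estimated distances from the camera, the user faces
# whichever end of the street the nearer signboard lies toward.

def heading_along_street(landmarks):
    """landmarks: list of (street_coord_m, estimated_distance_m) pairs.
    Sort by estimated distance; if the nearer landmark has the larger
    street coordinate, the user faces 'forward' (increasing coordinates),
    otherwise 'backward'."""
    nearer, farther = sorted(landmarks, key=lambda lm: lm[1])[:2]
    return "forward" if nearer[0] > farther[0] else "backward"

# A signboard 120 m along the street appears ~15 m away, while one at the
# 40 m mark appears ~95 m away, so the user must be facing up the street.
print(heading_along_street([(120.0, 15.0), (40.0, 95.0)]))  # prints "forward"
```

In practice the per-object distance estimates would come from the detector's bounding-box sizes or a depth cue, and the resolved heading would then be handed to the navigation system's route guidance.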