
Detail Display

Author: 林宛姿
Author (English): Wan-Zi Lin
Title: 手勢互動之360度影片式虛擬實境導覽
Title (English): 360-Degree Video-based Virtual Reality Tour Guide with Gesture Interaction
Advisor: 楊茂村
Advisor (English): Mau-Tsuen Yang
Committee Members: 王錫澤
陳文盛
Committee Members (English): Hsi-Tse Wang
Wen-Sheng Chen
Degree: Master's
Institution: 國立東華大學 (National Dong Hwa University)
Department: 資訊工程學系 (Department of Computer Science and Information Engineering)
Student ID: 610721213
Publication Year (ROC): 109 (2020)
Graduation Academic Year: 108
Language: Chinese
Pages: 46
Keywords: 虛擬實境, HTC VIVE Cosmos, Unity, 虛擬實境導覽, 手勢互動
Keywords (English): Virtual Reality, HTC VIVE Cosmos, Unity, VR Tour Guide, Hand Gesture Interaction
Usage statistics:
  • Recommendations: 0
  • Views: 26
  • Downloads: 6
  • Bookmarks: 0
Abstract: The immersive experience offered by virtual reality (VR) gives users the sense of actually being on site, a property that makes it well suited to environmental tour-guide applications. This study combines the Unity game engine with vision-based hand gesture recognition to develop a PC-based VR tour-guide system for the HTC VIVE Cosmos head-mounted display, allowing users to interact with the guide using only their hands in place of the VR controllers. The thesis describes the research methodology, system architecture, and development process in detail. Finally, a user study compares participants' gesture operation against controller operation and evaluates their experience of the system's features.
Abstract (English): The technology of virtual reality (VR) gives users an immersive experience, a feature well suited to environmental virtual tour guides. Combining the Unity game engine with vision-based hand gesture recognition, we develop a VR tour-guide app for Android-based devices as well as PC-based VR head-mounted displays (HMDs) such as the HTC VIVE Cosmos. Users can interact with the virtual environment with bare hands instead of using the VR controllers. We describe the research methodology, system architecture, and implementation details of the development process. Finally, user feedback and questionnaires are analyzed to compare the VR experience of the proposed hand gesture interaction with that of the conventional VR controllers.
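The thesis's actual detector and classifier pipeline is described in Chapters 3 and 4. As a rough illustration only of the kind of rule that vision-based static-gesture recognition can reduce to once hand keypoints are available, the following Python sketch labels an open palm versus a fist. The 21-keypoint MediaPipe-style landmark layout, the index constants, and the `classify_static` helper are illustrative assumptions, not the thesis's implementation.

```python
import math

# Illustrative static-gesture rule over 21 hand keypoints.
# Assumed layout (MediaPipe-style): index 0 = wrist,
# fingertips at 8/12/16/20, matching MCP knuckles at 5/9/13/17.
FINGERTIPS = [8, 12, 16, 20]
KNUCKLES = [5, 9, 13, 17]

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def classify_static(keypoints):
    """Label a hand as 'open' or 'fist' from 2D keypoints.

    A finger counts as extended when its tip lies farther from the
    wrist than its knuckle does; an open palm extends all four
    fingers, a fist extends none.
    """
    wrist = keypoints[0]
    extended = sum(
        dist(keypoints[tip], wrist) > dist(keypoints[mcp], wrist)
        for tip, mcp in zip(FINGERTIPS, KNUCKLES)
    )
    if extended == 4:
        return "open"
    if extended == 0:
        return "fist"
    return "unknown"
```

In a learned pipeline such as the one the thesis evaluates, a CNN would replace this hand-written rule, but the input (keypoints or image crops) and output (a gesture label per frame) have the same shape.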
Chapter 1  Introduction  1
1.1  Motivation  1
1.2  Objectives  2
1.3  Thesis Overview  2
Chapter 2  Literature Review  3
2.1  Head-Mounted Displays  3
2.2  Gesture Recognition  4
2.3  Keypoint Detection  6
2.4  Gesture Training  8
Chapter 3  Methodology  13
3.1  System Architecture  13
3.2  Guide Text  15
3.3  Panoramic Video  16
3.4  Map Data  17
3.5  Hand Model  18
3.6  Gesture Recognition  20
3.7  Button Control  23
Chapter 4  Gesture Training  25
4.1  Detector and Classifier  25
4.2  Training Data  27
4.3  Training Parameters  28
4.4  Recognition Pipeline  29
4.5  Static Gestures  30
4.6  Dynamic Gestures  31
Chapter 5  Experimental Results  33
5.1  Questionnaire Design  33
5.2  Test Procedure  33
5.3  Result Analysis  34
5.4  Questionnaire Feedback  36
Chapter 6  Conclusion  37
6.1  Evaluation and Improvement  37
6.2  Applications  37
6.3  Future Work  37
References  39
Appendix 1: System Usability Scale (SUS)  43
Appendix 2: Custom Questionnaire  45
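The System Usability Scale of Appendix 1 has a standard scoring rule: each of the ten 1-5 Likert responses maps to a 0-4 contribution (response minus 1 for odd-numbered, positively worded items; 5 minus response for even-numbered, negatively worded items), and the total is multiplied by 2.5 to give a 0-100 score. A minimal Python sketch of that rule (the function name is illustrative):

```python
def sus_score(responses):
    """Compute the SUS score from ten 1-5 Likert responses.

    Odd-numbered items (positive statements) contribute (response - 1);
    even-numbered items (negative statements) contribute (5 - response).
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```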

[1]A. B. Ray and S. Deb, "Smartphone Based Virtual Reality Systems in Classroom Teaching — A Study on the Effects of Learning Outcome," in 2016 IEEE Eighth International Conference on Technology for Education (T4E), 2-4 Dec. 2016, pp. 68-71.
[2]C. Malinchi, A. Ciupe, S. Meza, and B. Orza, "A Mobile Exploration Solution for Virtual Libraries in Higher Education," in 2017 IEEE 17th International Conference on Advanced Learning Technologies (ICALT), 3-7 July 2017, pp. 490-492.
[3]Microsoft, "HoloLens," Available:https://docs.microsoft.com/en-us/dynamics365/mixed-reality/guides/operator-gestures, 2020.
[4]Magic Leap, "CREATOR," Available:https://creator.magicleap.com/learn/guides/lumin-sdk-handtracking, 3 Dec. 2019.
[5]HTC, "VIVE Hand Tracking SDK Guide," Available:https://developer.viveport.com/documents/sdk/zh-tw/vivehandtracking_index.html, 3 Dec. 2019.
[6]J. Tompson, "NYU Hand Pose Dataset," Available:https://jonathantompson.github.io/NYU_Hand_Pose_Dataset.htm#overview, 2020.
[7]H. Joo, T. Simon, D. Xiang, Y. Raaj, and P. Y. Sheikh, "CMU Panoptic Dataset," Available:http://domedb.perception.cs.cmu.edu/index.html, 2020.
[8]Y. Zhang, "EgoGesture Dataset," Available:http://www.nlpr.ia.ac.cn/iva/yfzhang/datasets/egogesture.html, 2020.
[9]V. Jain, R. Perla, and R. Hebbalaguppe, "[POSTER] AirGestAR: Leveraging Deep Learning for Complex Hand Gestural Interaction with Frugal AR Devices," in 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), 2017, pp. 235-239.
[10]Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[11]A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012, pp. 1097-1105.
[12]K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in International Conference on Learning Representations (ICLR), 2015.
[13]C. Szegedy et al., "Going deeper with convolutions," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1-9.
[14]K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
[15]G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700-4708.
[16]A. G. Howard et al., "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[17]X. Zhang, X. Zhou, M. Lin, and J. Sun, "Shufflenet: An extremely efficient convolutional neural network for mobile devices," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 6848-6856.
[18]M. Tan and Q. V. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in International Conference on Machine Learning (ICML), 2019.
[19]N. Otberdout, L. Ballihi, and D. Aboutajdine, "Hand pose estimation based on deep learning depth map for hand gesture recognition," in 2017 Intelligent Systems and Computer Vision (ISCV), 2017, pp. 1-8.
[20]P. Molchanov, X. Yang, S. Gupta, K. Kim, S. Tyree, and J. Kautz, "Online Detection and Classification of Dynamic Hand Gestures with Recurrent 3D Convolutional Neural Networks," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4207-4215.
[21]T. Yamashita and T. Watasue, "Hand posture recognition based on bottom-up structured deep convolutional neural network with curriculum learning," in 2014 IEEE International Conference on Image Processing (ICIP), 2014, pp. 853-857.
[22]V. John, A. Boyali, S. Mita, M. Imanishi, and N. Sanma, "Deep Learning-Based Fast Hand Gesture Recognition Using Representative Frames," in 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016, pp. 1-8.
[23]G. Strezoski, D. Stojanovski, I. Dimitrovski, and G. Madjarov, "Hand gesture recognition using deep convolutional neural networks," in International Conference on ICT Innovations, Springer, 2016, pp. 49-58.
[24]T. Chaikhumpha and P. Chomphuwiset, "Real-time two hand gesture recognition with condensation and hidden Markov models," in 2018 International Workshop on Advanced Image Technology (IWAIT), 2018, pp. 1-4.
[25]Z. Lu, Z. Li-Shuang, S. Lei, and Z. Xue-Bo, "Dynamic hand gesture recognition using HMM-BPNN model," in 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), 2016, pp. 422-426.
[26]S. Chen, A. Hernawan, Y. Lee, and J. Wang, "Hand gesture recognition based on Bayesian sensing hidden Markov models and Bhattacharyya divergence," in 2017 IEEE International Conference on Image Processing (ICIP), 2017, pp. 3535-3539.
[27]N. H. Dardas and N. D. Georganas, "Real-Time Hand Gesture Detection and Recognition Using Bag-of-Features and Support Vector Machine Techniques," IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 11, pp. 3592-3607, 2011.
[28]M. P. Tarvekar, "Hand Gesture Recognition System for Touch-Less Car Interface Using Multiclass Support Vector Machine," in 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), 2018, pp. 1929-1932.
[29]Y. Ren and F. Zhang, "Hand Gesture Recognition Based on MEB-SVM," in 2009 International Conference on Embedded Software and Systems, 2009, pp. 344-349.
[30]S. Shiravandi, M. Rahmati, and F. Mahmoudi, "Hand gestures recognition using dynamic Bayesian networks," in 2013 3rd Joint Conference of AI & Robotics and 5th RoboCup Iran Open International Symposium, 2013, pp. 1-6.
[31]H. Suk, B. Sin, and S. Lee, "Robust modeling and recognition of hand gestures with dynamic Bayesian network," in 2008 19th International Conference on Pattern Recognition, 2008, pp. 1-4.
[32]W. A. Wang and T. Chun-Liang, "Dynamic hand gesture recognition using hierarchical dynamic Bayesian networks through low-level image processing," in 2008 International Conference on Machine Learning and Cybernetics, 2008, vol. 6, pp. 3247-3253.
[33]譚翔, "智慧型手機平台之全景影片式虛擬實境導覽" (Panoramic Video-based Virtual Reality Tour on a Smartphone Platform), master's thesis, National Dong Hwa University, http://etd.ndhu.edu.tw/cgi-bin/gs32/gsweb.cgi?o=dstdcdr&s=id=%22G0610521231%22.&searchmode=basic, 2018.
[34]Valve, "SteamVR Plugin," Available:https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647, 2020.
[35]O. Köpüklü, A. Gunduz, N. Kose, and G. Rigoll, "Real-time hand gesture detection and classification using convolutional neural networks," in 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), 2019, pp. 1-8.
[36]NVIDIA, "NVIDIA Dynamic Hand Gesture Dataset," Available:https://docs.google.com/forms/d/e/1FAIpQLSc7ZcohjasKVwKszhISAH7DHWi8ElounQd1oZwORkSFzrdKbg/viewform, 2020.
[37]J. Brooke, "SUS: A quick and dirty usability scale," Usability Evaluation in Industry, vol. 189, no. 194, pp. 4-7, 1996.
[38]R. Likert, "A technique for the measurement of attitudes," Archives of Psychology, New York: The Science Press, 1932.