
Detailed Record

Author: 王永慶
Author (English): Yung-Ching Wang
Title: 基於虛擬實境技術的逆向運動學化身與虛擬攝影棚
Title (English): Inverse Kinematics Avatar and Virtual Studio based on Virtual Reality Technology
Advisor: 楊茂村
Advisor (English): Mau-Tsuen Yang
Committee Members: 黃成永, 王錫澤
Committee Members (English): Cheng-Yong Huang, Hsi-Tse Wang
Degree: Master's
Institution: 國立東華大學 (National Dong Hwa University)
Department: 資訊工程學系 (Department of Computer Science and Information Engineering)
Student ID: 610421226
Year of Publication (ROC): 108 (2019)
Academic Year of Graduation: 108
Language: Chinese
Number of Pages: 46
Keywords: 虛擬實境, 化身, 逆向運動學, 動作捕捉, 虛擬攝影棚
Keywords (English): Virtual Reality, Avatar, Inverse Kinematics, Motion Capture, Virtual Studio
Abstract:
Unlike its earlier, stumbling road toward development and adoption, virtual reality (VR) technology has advanced rapidly in recent years and is now more vibrant than at any point in its history. Breakthroughs in manufacturing have made high-performance, highly immersive VR equipment affordable for ordinary consumers, and the resulting growth in adoption has attracted developers of all kinds to build a wide variety of VR applications. Because VR is intuitively understood through its "immersive" quality, however, most of these applications still fall into the categories of entertainment and simulation.
At the same time, online video and social media platforms have risen. In this ecosystem dominated by small outlets and independent creators, interacting with audiences through a virtual avatar has become a new trend, and creators who run avatar-based personas on limited budgets are eager for an affordable solution that presents an avatar more convincingly than 2D animation.
This thesis seeks to move beyond the usual categories of VR applications and to examine the feasibility of VR as a tool for media content production. The approach is to design a virtual studio system: the tracking principle of high-end consumer VR equipment, combined with an inverse kinematics package, provides full-body motion capture that drives a lifelike 3D virtual avatar, and a virtual camera then outputs the result for live streaming or video recording.
Abstract (English):
Recently, advances in virtual reality (VR) technology have made numerous VR applications feasible in the areas of entertainment and simulation. At the same time, online video platforms have become popular, and virtual avatars have been adopted by many well-known YouTubers. We explore the possibility of generating 3D animations of a virtual avatar from the movements of a real user. A few optical trackers are attached to the user's limbs for motion capture, and the 3D positions of additional joints in the human skeleton are then inferred through inverse kinematics. This automatic mapping from a real user to a virtual avatar can make online broadcasts from a virtual studio more interesting and appealing.
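The abstract describes the core mapping: a handful of tracked points on the user's body are pinned to the corresponding end effectors of a humanoid avatar, and an inverse kinematics solver infers the remaining joints (elbows, knees, and so on). As a minimal sketch of this idea in Unity C#, the script below drives an avatar's hand and foot goals from externally tracked transforms using Unity's built-in Animator IK; the thesis itself builds on the SteamVR Plugin and an IK package that may differ from this simplified example, and all field names here are illustrative assumptions rather than the thesis's actual code.

```csharp
using UnityEngine;

// Illustrative sketch (not the thesis implementation): pin a humanoid
// avatar's hands and feet to externally tracked transforms (e.g., from
// VR trackers) and let Unity's Animator IK solve the in-between joints.
[RequireComponent(typeof(Animator))]
public class TrackerDrivenIK : MonoBehaviour
{
    // Transforms assumed to be updated each frame by the VR tracking system.
    public Transform leftHandTracker;
    public Transform rightHandTracker;
    public Transform leftFootTracker;
    public Transform rightFootTracker;

    private Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    // Called by Unity on animator layers with "IK Pass" enabled.
    void OnAnimatorIK(int layerIndex)
    {
        ApplyGoal(AvatarIKGoal.LeftHand, leftHandTracker);
        ApplyGoal(AvatarIKGoal.RightHand, rightHandTracker);
        ApplyGoal(AvatarIKGoal.LeftFoot, leftFootTracker);
        ApplyGoal(AvatarIKGoal.RightFoot, rightFootTracker);
    }

    // Pin one IK goal to a tracker; the solver fills in the
    // intermediate joints (elbow or knee) automatically.
    void ApplyGoal(AvatarIKGoal goal, Transform tracker)
    {
        if (tracker == null) return;
        animator.SetIKPositionWeight(goal, 1.0f);
        animator.SetIKRotationWeight(goal, 1.0f);
        animator.SetIKPosition(goal, tracker.position);
        animator.SetIKRotation(goal, tracker.rotation);
    }
}
```

In a full setup the head and hip would also be constrained (from the HMD and an additional tracker), and the solved pose would be framed by a virtual camera whose output is streamed or recorded, for example via OBS [16].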
Acknowledgments i
Abstract (Chinese) iii
Abstract (English) v
Table of Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Objectives 2
Chapter 2 Literature Review 5
2.1 Virtual Reality 5
2.2 3D Platform: Unity 5
2.3 HTC Vive 5
2.4 Inverse Kinematics 7
2.5 Comparison of Related Work 8
Chapter 3 Methodology 11
3.1 System Architecture 11
3.2 Building the Virtual Studio 13
3.2.1 Setting Up the SteamVR Plugin 13
3.2.2 Importing the Avatar Model and Binding IK 17
3.2.3 Avatar Facial Expression Control and In-Studio Full-Body Mirror 19
3.2.4 Virtual Camera Output 24
3.3 User Interaction with the System 27
3.3.1 Features and Usage 27
3.3.2 Two Output Modes 30
Chapter 4 Experimental Results 35
4.1 Effectiveness Verification 35
4.2 Cost Comparison 39
Chapter 5 Conclusion 43
5.1 Evaluation of Results 43
5.2 Future Work 43
References 45
[1] Google, Cardboard, Available: https://vr.google.com/cardboard/, 25 Jun. 2019.
[2] Samsung, Gear VR, Available: https://www.samsung.com/global/galaxy/gear-vr/, 27 Nov. 2018.
[3] HTC, HTC Vive, Available: https://www.vive.com/, 5 Apr. 2019.
[4] Oculus VR, Oculus Rift, Available: https://www.oculus.com/rift/, 28 Mar. 2019.
[5] Ichikara Inc., Mito Tsukino, Available: https://nijisanji.ichikara.co.jp/member/mito-tsukino/, 4 Feb. 2019.
[6] Unity Technologies, Unity, Available: https://unity.com/, 8 Jun. 2019.
[7] D. Go, H. Hyung, D. Lee and H. U. Yoon, "Android Robot Motion Generation Based on Video-Recorded Human Demonstrations," 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, 2018, pp. 476-478.
[8] H. Cho, S. Jung and H. Jee, "Real-time interactive AR system for broadcasting," 2017 IEEE Virtual Reality (VR), Los Angeles, CA, 2017, pp. 353-354.
[9] M. Husinsky and F. Bruckner, "Virtual Stage: Interactive Puppeteering in Mixed Reality," 2018 IEEE 1st Workshop on Animation in Virtual and Augmented Environments (ANIVAE), Reutlingen, 2018, pp. 1-7.
[10] J. Daemen, P. Haufs-Brusberg and J. Herder, "Markerless actor tracking for virtual (TV) studio applications," 2013 International Joint Conference on Awareness Science and Technology & Ubi-Media Computing (iCAST 2013 & UMEDIA 2013), Aizu-Wakamatsu, 2013, pp. 790-796.
[11] T. He et al., "Immersive and collaborative Taichi motion learning in various VR environments," 2017 IEEE Virtual Reality (VR), Los Angeles, CA, 2017, pp. 307-308.
[12] J. C. P. Chan, H. Leung, J. K. T. Tang and T. Komura, "A Virtual Reality Dance Training System Using Motion Capture Technology," in IEEE Transactions on Learning Technologies, vol. 4, no. 2, pp. 187-195, April-June 2011.
[13] D. T. Han, S. P. Sargunam and E. D. Ragan, "Simulating anthropomorphic upper body actions in virtual reality using head and hand motion data," 2017 IEEE Virtual Reality (VR), Los Angeles, CA, 2017, pp. 387-388.
[14] A. Becher, C. Axenie and T. Grauschopf, "VIRTOOAIR: Virtual Reality TOOlbox for Avatar Intelligent Reconstruction," 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 2018, pp. 275-279.
[15] B. Bodenheimer, S. Creem-Regehr, J. Stefanucci, E. Shemetova and W. B. Thompson, "Prism aftereffects for throwing with a self-avatar in an immersive virtual environment," 2017 IEEE Virtual Reality (VR), Los Angeles, CA, 2017, pp. 141-147.
[16] OBS Project, Open Broadcaster Software, Available: https://obsproject.com/, 1 Sep. 2018.
[17] Valve Corporation, SteamVR, Available: https://store.steampowered.com/steamvr, 5 Apr. 2019.
[18] VRM Consortium, VRM, Available: https://vrm.dev/en/, 16 Apr. 2019.
[19] DWANGO Co., Ltd., Alicia Solid, Available: https://3d.nicovideo.jp/works/td32797, 16 Apr. 2019.
[20] Oculus VR, Oculus Lipsync Unity, Available: https://developer.oculus.com/downloads/package/oculus-lipsync-unity/, 30 Jun. 2019.
[21] Noitom, Perception Neuron PRO, Available: https://neuronmocap.com/content/perception-neuron-pro, 30 Jun. 2019.
[22] NaturalPoint, Inc., OptiTrack, Available: https://optitrack.com/systems/#motive-body/prime-41/12, 30 Jun. 2019.
 
 
 
 