CAMSENSE (2018, August 9). What is visual-spatial positioning? Retrieved from https://kknews.cc/zh-tw/tech/6or6vo3.html
HERESY (2011, January 8). Using FAAST to turn a Kinect into a Windows keyboard! Retrieved from https://kheresy.wordpress.com/2011/01/08/faast_input_key_via_kinect/
Rice Yang (2021, April 12). Quickly building hand tracking with MediaPipe. Retrieved from https://u9534056.medium.com/mediapipe-%E7%B0%A1%E5%96%AE%E6%98%93%E7%94%A8%E7%9A%84%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92%E6%8E%A8%E7%90%86%E6%A1%86%E6%9E%B6-4898eed9f839
VR領域答主 (2022, February 12). VR positioning technology: Outside-in versus inside-out tracking. Retrieved from https://zhuanlan.zhihu.com/p/243300575
李文恩 (2020, January 31). Gesture control evolves again: Glamos brings a virtual in-air touchscreen. Retrieved from https://www.techbang.com/posts/75902-gesture-control-reevolves-glamos-brings-virtual-air-touch-screen
李宗翰 (2022, February 27). From a computer in every home to a computer for every person: Intel expects continued growth in the PC market and reveals its PC processor platforms for the next three years. Retrieved from https://www.ithome.com.tw/news/149572
游陳叡 (2021). A gesture-assisted virtual touch system based on deep learning and MediaPipe (Master's thesis, National Chung Cheng University, Chiayi County, Taiwan). Retrieved from https://hdl.handle.net/11296/743479
萌繪 (2016, June 17). Manga drawing for beginners, part 16: The anatomy of drawing hands. Retrieved from https://www.moehui.com/8446.html
黃群翔 (2022). An IoT system combining Google MediaPipe for gesture-recognition control of smart home appliances (Master's thesis, Ming Chi University of Technology, New Taipei City, Taiwan). Retrieved from https://hdl.handle.net/11296/36ny6a
邱庭毅 (2021). UAV environment perception and hierarchical gesture control for human-robot collaborative task applications (Master's thesis, National Chengchi University, Taiwan). Retrieved from https://hdl.handle.net/11296/sh6eby
Alexander, Andrey, & Karina (2022, September). HaGRID Classification 512p 127k. Retrieved March 27, 2023, from https://www.kaggle.com/datasets/innominate817/hagrid-classification-512p-127k?resource=download
Google-research-datasets (n.d.). Objectron. Retrieved September 8, 2022, from https://github.com/google-research-datasets/Objectron/
International Journal of Man-Machine Studies (1969). Retrieved from https://dblp.org/db/journals/ijmms/ijmms15.html
Grishchenko, I., & Bazarevsky, V. (2020, December 10). MediaPipe Holistic – simultaneous face, hand and pose prediction, on device. Retrieved September 8, 2022, from https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html
Yong, M. G. (2019, December 10). Object detection and tracking using MediaPipe. Retrieved September 8, 2022, from https://developers.googleblog.com/2019/12/object-detection-and-tracking-using-mediapipe.html
Sohn, M.-K., Lee, S.-H., Kim, D.-J., & Kim, H. (2011). Hand Gesture Key Emulation Toolkit (HandGKET). Retrieved September 8, 2022, from https://sites.google.com/site/kinectapps/handgket
Wang, Z., Ye, G., & MediaPipe team (2020, April 22). MediaPipe KNIFT: Template-based feature matching. Retrieved from https://developers.googleblog.com/2020/04/mediapipe-knift-template-based-feature-matching.html
Ablavatski, A., Vakunov, A., Grishchenko, I., Raveendran, K., & Zhdanovich, M. (2020). Real-time pupil tracking from monocular video for digital puppetry. arXiv preprint arXiv:2006.11341.
Ahmadyan, A., Zhang, L., Ablavatski, A., Wei, J., & Grundmann, M. (2021). Objectron: A large scale dataset of object-centric videos in the wild with pose annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7822-7831).
Bazarevsky, V., Grishchenko, I., Raveendran, K., Zhu, T., Zhang, F., & Grundmann, M. (2020). BlazePose: On-device real-time body pose tracking. arXiv preprint arXiv:2006.10204.
Bazarevsky, V., Kartynnik, Y., Vakunov, A., Raveendran, K., & Grundmann, M. (2019). BlazeFace: Sub-millisecond neural face detection on mobile GPUs. arXiv preprint arXiv:1907.05047.
Cohen, P. R., & Oviatt, S. L. (1995). The role of voice input for human-machine communication. Proceedings of the National Academy of Sciences, 92(22), 9921-9927.
Hasan, H. S., & Kareem, S. A. (2012, November). Human computer interaction for vision based hand gesture recognition: A survey. In 2012 International Conference on Advanced Computer Science Applications and Technologies (ACSAT) (pp. 55-60). IEEE.
Hershey, J. R., Chen, Z., Le Roux, J., & Watanabe, S. (2016, March). Deep clustering: Discriminative embeddings for segmentation and separation. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 31-35). IEEE.
Jacob, R. J., & Karn, K. S. (2003). Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In The mind's eye (pp. 573-605). North-Holland.
Jaimes, A., & Sebe, N. (2007). Multimodal human-computer interaction: A survey. Computer Vision and Image Understanding, 108(1-2), 116-134.
Kartynnik, Y., Ablavatski, A., Grishchenko, I., & Grundmann, M. (2019). Real-time facial surface geometry from monocular video on mobile GPUs. arXiv preprint arXiv:1907.06724.
Kaushik, D., & Jain, R. (2014). Natural user interfaces: Trend in virtual interaction. arXiv preprint arXiv:1405.0101.
Lee, D. L., & You, W. S. (2018). Recognition of complex static hand gestures by using the wristband-based contour features. IET Image Processing, 12(1), 80-87.
Licklider, J. C. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, (1), 4-11.
Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2117-2125).
Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017). Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2980-2988).
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016, October). SSD: Single shot multibox detector. In European Conference on Computer Vision (pp. 21-37). Springer, Cham.
Majaranta, P., & Bulling, A. (2014). Eye tracking and eye-based human-computer interaction. Advances in Physiological Computing, 39-65.
Mantiuk, R., Kowalik, M., Nowosielski, A., & Bazyluk, B. (2012, January). Do-it-yourself eye tracker: Low-cost pupil-based eye tracker for computer graphics applications. In International Conference on Multimedia Modeling (pp. 115-125). Springer, Berlin, Heidelberg.
Mesbahi, S. C., Mahraz, M. A., Riffi, J., & Tairi, H. (2018, April). Hand gesture recognition based on convexity approach and background subtraction. In 2018 International Conference on Intelligent Systems and Computer Vision (ISCV) (pp. 1-5). IEEE.
Moscovich, T. (2007). Principles and applications of multi-touch interaction.
Neubeck, A., & Van Gool, L. (2006, August). Efficient non-maximum suppression. In 18th International Conference on Pattern Recognition (ICPR'06) (Vol. 3, pp. 850-855). IEEE.
Ogihara, A., Matsumoto, H., & Shiozaki, A. (2005, December). Hand region extraction by background subtraction with renewable background for hand gesture recognition. In 2006 International Symposium on Intelligent Signal Processing and Communications (pp. 227-230). IEEE.
Rautaray, S. S., & Agrawal, A. (2015). Vision based hand gesture recognition for human computer interaction: A survey. Artificial Intelligence Review, 43(1), 1-54.
Reddy, V. V., Dhyanchand, T., Krishna, G. V., & Maheshwaram, S. (2020, September). Virtual mouse control using colored finger tips and hand gesture recognition. In 2020 IEEE-HYDCON (pp. 1-5). IEEE.
Shackel, B. (1959). Ergonomics for a computer. Design, 120(1), 36-39.
Shajideen, S. M. S., & Preetha, V. H. (2018, December). Hand gestures-virtual mouse for human computer interaction. In 2018 International Conference on Smart Systems and Inventive Technology (ICSSIT) (pp. 543-546). IEEE.
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379-423.
Tan, P., Han, X., Zou, Y., Qu, X., Xue, J., Li, T., ... & Wang, Z. L. (2022). Self-powered gesture recognition wristband enabled by machine learning for full keyboard and multicommand input. Advanced Materials, 34(21), 2200793.
Tkachenka, A., Karpiak, G., Vakunov, A., Kartynnik, Y., Ablavatski, A., Bazarevsky, V., & Pisarchyk, S. (2019). Real-time hair segmentation and recoloring on mobile GPUs. arXiv preprint arXiv:1907.06740.
Wei, J., Ye, G., Mullen, T., Grundmann, M., Ahmadyan, A., & Hou, T. (2019). Instant motion tracking and its applications to augmented reality. arXiv preprint arXiv:1907.06796.
Wilk, M. P., Torres-Sanchez, J., Tedesco, S., & O'Flynn, B. (2018, August). Wearable human computer interface for control within immersive VAMR gaming environments using data glove and hand gestures. In 2018 IEEE Games, Entertainment, Media Conference (GEM) (pp. 1-9). IEEE.
Wolpaw, J. R., McFarland, D. J., Neat, G. W., & Forneris, C. A. (1991). An EEG-based brain-computer interface for cursor control. Electroencephalography and Clinical Neurophysiology, 78(3), 252-259.
Yao, Y., & Fu, Y. (2014). Contour model-based hand-gesture recognition using the Kinect sensor. IEEE Transactions on Circuits and Systems for Video Technology, 24(11), 1935-1944.
Zhang, F., Bazarevsky, V., Vakunov, A., Tkachenka, A., Sung, G., Chang, C. L., & Grundmann, M. (2020). MediaPipe Hands: On-device real-time hand tracking. arXiv preprint arXiv:2006.10214.
Zhao, S., Tan, W., Wu, C., Liu, C., & Wen, S. (2009, June). A novel interactive method of virtual reality system based on hand gesture recognition. In 2009 Chinese Control and Decision Conference (pp. 5879-5882). IEEE.