[1] LeCun, Y., Bottou, L., Bengio, Y., et al. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
[2] Li, G. (2015). 深度學習技術與應用 [Deep Learning Techniques and Applications]. Lecture materials, Institute of Software Engineering, Peking University, October 2015.
[3] Stanford CS231n course notes: Convolutional Neural Networks for Visual Recognition. Stanford University.
[4] Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32.
[5] Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and Regression Trees. Wadsworth.
[6] Chen, T. Y., Ting, P. W., Wu, M. Y., & Fu, L. C. (2017). Learning a deep network with spherical part model for 3D hand pose estimation. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2600-2605). IEEE.
[7] Sinha, A., Choi, C., & Ramani, K. (2016). DeepHand: Robust hand pose estimation by completing a matrix imputed with deep features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4150-4158).
[8] Oberweger, M., Wohlhart, P., & Lepetit, V. (2015). Hands deep in deep learning for hand pose estimation. arXiv preprint arXiv:1502.06807.
[9] Tompson, J., Stein, M., LeCun, Y., & Perlin, K. (2014). Real-time continuous pose recovery of human hands using convolutional networks. ACM Transactions on Graphics (ToG), 33(5), 169.
[10] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (pp. 91-99).
[11] Girshick, R. (2015). Fast R-CNN. arXiv preprint arXiv:1504.08083.
[12] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016, October). SSD: Single shot multibox detector. In European Conference on Computer Vision (pp. 21-37). Springer, Cham.
[13] Erhan, D., Szegedy, C., Toshev, A., & Anguelov, D. (2014). Scalable object detection using deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2147-2154).
[14] Fan, J., Xu, W., Wu, Y., & Gong, Y. (2010). Human tracking using convolutional neural networks. IEEE Transactions on Neural Networks, 21(10), 1610-1623.
[15] Molchanov, P., Gupta, S., Kim, K., & Kautz, J. (2015). Hand gesture recognition with 3D convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 1-7).
[16] Garg, A., Noyola, J., & Bagadia, S. (2016). Lip reading using CNN and LSTM. Technical report, CS231n project report, Stanford University.
[17] Chen, C. P., Chen, Y. T., Lee, P. H., Tsai, Y. P., & Lei, S. (2011). Real-time hand tracking on depth images. In 2011 IEEE Visual Communications and Image Processing (VCIP) (pp. 1-4). IEEE.
[18] Garg, P., Aggarwal, N., & Sofat, S. (2009). Vision based hand gesture recognition. World Academy of Science, Engineering and Technology, 49(1), 972-977.
[19] Park, S., Yu, S., Kim, J., Kim, S., & Lee, S. (2012). 3D hand tracking using Kalman filter in depth space. EURASIP Journal on Advances in Signal Processing, 2012(1), 36.
[20] Athitsos, V., & Sclaroff, S. (2003, June). Estimating 3D hand pose from a cluttered image. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Vol. 2, pp. II-432). IEEE.
[21] Yeo, H. S., Lee, B. G., & Lim, H. (2015). Hand tracking and gesture recognition system for human-computer interaction using low-cost hardware. Multimedia Tools and Applications, 74(8), 2687-2715.
[22] Wand, M., Koutník, J., & Schmidhuber, J. (2016, March). Lipreading with long short-term memory. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6115-6119). IEEE.
[23] Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123-140.
[24] Efron, B., & Tibshirani, R. J. (1994). An Introduction to the Bootstrap. CRC Press.
[25] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
[26] Sinha, A., Choi, C., & Ramani, K. (2016). DeepHand: Robust hand pose estimation by completing a matrix imputed with deep features, supplementary material. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.