
Detailed Record

Author: 游一宏
Author (English): Yi-Hong Yu
Thesis Title: 使用深度學習運算開發低功率數位微旋轉積體電路之演算法
Thesis Title (English): Developing Searching Algorithm for CORDIC with Deep Learning
Advisor: 劉耿銘
Advisor (English): Keng-Ming Liu
Committee Members: 方士豪, 何盈杰
Committee Members (English): Shih-Hau Fang, Ying-Chieh Ho
Degree: Master's
University: National Dong Hwa University
Department: Department of Electrical Engineering
Student ID: 610523002
Year of Publication (ROC calendar): 107 (2018)
Academic Year of Graduation: 107
Language: Chinese
Number of Pages: 38
Keywords: 數位座標旋轉器, 深度學習, 類神經網路
Keywords (English): CORDIC, deep learning, artificial neural network
Abstract: With the progress of information technology, the use of handheld and wearable devices has steadily grown, owing to their thin, light, portable form and their integration of many functions. Within such limited space, however, battery capacity must be traded off against other needs, so low-power hardware has to be chosen for these products. Wearable devices are expected to analyze signals, including image recognition and speech recognition, and Fourier frequency-domain analysis serves these tasks well.
The Coordinate Rotation Digital Computer (CORDIC) is an important low-cost, low-power tool for implementing Fourier frequency-domain analysis in hardware, but its angle set must first be computed by a search algorithm and pre-stored in memory. To reduce this algorithmic cost, this study trains an artificial neural network with deep learning to search the CORDIC angles, removing the large memory that conventional search algorithms require. In the near future, central processing units will commonly support deep learning, so searching the angle-set parameters by tuning a neural network will become far cheaper thanks to hardware support. Adjusting the network's complexity, including the number of layers and the number of neurons in each layer, can improve its learning performance. Based on the training results and tests, a five-layer neural network was chosen: one input unit, four hidden layers with 10, 12, 15, and 15 neurons in the first through fourth hidden layers, and two neurons in the output layer. Applied to the extended-angle CORDIC, this network achieves an average signal-to-noise ratio (SNR) of 64.44 dB, a maximum of 93.79 dB, and a minimum of 43.12 dB.
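To make the micro-rotation idea concrete, the sketch below shows the classic rotation-mode CORDIC of Volder [1], in which a target angle is decomposed into the elementary angles atan(2^-i) so that each step needs only shifts and adds in hardware. This is an illustrative baseline only, not the extended elementary-angle set (EEAS) variant [4] that the thesis targets; the function name, iteration count, and use of floating point are assumptions made for readability.

```python
import math

def cordic_rotate(x, y, theta, n_iters=16):
    # Classic rotation-mode CORDIC: decompose theta into the elementary
    # angles atan(2^-i); each micro-rotation then needs only a shift and an add.
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]
    # Constant scale factor K = prod(1 / sqrt(1 + 2^-2i)), applied once at the end.
    K = 1.0
    for i in range(n_iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0        # direction of this micro-rotation
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a                           # angle still left to rotate
    return K * x, K * y

# Rotating (1, 0) by 30 degrees should approach (cos 30, sin 30).
print(cordic_rotate(1.0, 0.0, math.radians(30)))
```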
Abstract (English): With the improvement of information technologies, the use of handheld and wearable devices has rapidly increased. However, because these thin, light devices integrate multiple functions in a limited space, there is a trade-off against battery capacity; accordingly, low-power hardware is needed for these consumer products. The Fourier transform is a popular method for analyzing the characteristics of signals, so a wearable device can then analyze signals well for tasks such as image recognition and speech recognition. However, the Fast Fourier Transform (FFT), the common implementation, has high hardware complexity.
The Coordinate Rotation Digital Computer (CORDIC) is a digital calculator that replaces the complex multiplier with a low-cost design. However, both CORDIC and the complex multiplier require large amounts of memory when based on searching algorithms. This study uses a neural network trained with deep learning to search the angles for angle rotation. By adjusting the complexity of the neural network, including the number of hidden layers and the number of neurons at each level, the network's learning performance can be improved. Based on the learning results and tests, a five-layer neural network is selected. The network has one input unit and four hidden layers, with 10, 12, 15, and 15 neurons in the 1st to 4th hidden layers, respectively, and two neurons in the output layer. The neural network is applied to the Extended Elementary-Angle Set (EEAS) CORDIC, resulting in an average signal-to-noise ratio (SNR) of 64.44 dB, a maximum of 93.79 dB, and a minimum of 43.12 dB.
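The abstract fixes only the network topology: one input unit, hidden layers of 10, 12, 15, and 15 neurons, and two output neurons. The sketch below reproduces that topology; the framework (Keras), the tanh activations, the mean-squared-error loss, the Adam optimizer, and the placeholder training data are assumptions for illustration and are not taken from the thesis.

```python
# Minimal sketch of the reported topology: 1 input -> 10 -> 12 -> 15 -> 15 -> 2.
# Framework, activations, loss, optimizer, and data are illustrative assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),                   # one input: the target rotation angle
    tf.keras.layers.Dense(10, activation="tanh"), # hidden layer 1
    tf.keras.layers.Dense(12, activation="tanh"), # hidden layer 2
    tf.keras.layers.Dense(15, activation="tanh"), # hidden layer 3
    tf.keras.layers.Dense(15, activation="tanh"), # hidden layer 4
    tf.keras.layers.Dense(2),                     # two outputs: angle-set parameters
])
model.compile(optimizer="adam", loss="mse")

# Hypothetical training pairs: target angles and stand-in parameter labels
# (in the thesis these labels would come from the angle-search procedure).
angles = np.linspace(0.0, np.pi / 4, 256).reshape(-1, 1)
labels = np.hstack([np.cos(angles), np.sin(angles)])
model.fit(angles, labels, epochs=5, verbose=0)
print(model.predict(angles[:3], verbose=0))
```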
Chapter 1  Introduction ... 1
Chapter 2  Coordinate Rotation Digital Computer (CORDIC) ... 5
Chapter 3  Artificial Neural Networks ... 15
Chapter 4  Network Construction and Results ... 25
Chapter 5  Research Results and Analysis ... 29
Chapter 6  Conclusions and Future Directions ... 35
References ... 37
1. Volder, J.E., The CORDIC Trigonometric Computing Technique. IRE Transactions on Electronic Computers, 1959. EC-8(3): p. 330-334.

2. Meher, P.K., et al., 50 Years of CORDIC: Algorithms, Architectures, and Applications. IEEE Transactions on Circuits and Systems I: Regular Papers, 2009. 56(9): p. 1893-1907.

3. Hu, Y.H., CORDIC-based VLSI architectures for digital signal processing. IEEE Signal Processing Magazine, 1992. 9(3): p. 16-35.

4. Wu, C.-S. and A.-Y. Wu, A novel rotational VLSI architecture based on extended elementary angle set CORDIC algorithm. In Proceedings of the Second IEEE Asia Pacific Conference on ASICs (AP-ASIC 2000), 2000.

5. Wu, C.-S. and A.-Y. Wu, Modified vector rotational CORDIC (MVR-CORDIC) algorithm and architecture. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 2001. 48(6): p. 548-561.

6. Mohri, M., A. Rostamizadeh, and A. Talwalkar, Foundations of Machine Learning. The MIT Press, 2012. ISBN 9780262018258.

7. Kotsiantis, S.B., Supervised Machine Learning: A Review of Classification Techniques. Informatica, 2007. 31: p. 249-268.

8. Rumelhart, D.E., G.E. Hinton, and R.J. Williams, Learning representations by back-propagating errors. Nature, 1986. 323(6088): p. 533-536.

9. LeCun, Y., Y. Bengio, and G. Hinton, Deep learning. Nature, 2015. 521(7553): p. 436-444.

10. Batista, G. and M.C. Monard, An Analysis of Four Missing Data Treatment Methods for Supervised Learning. Applied Artificial Intelligence, 2003. 17: p. 519-533.