Built-In Self Training of Hardware-Based Neural Networks

Anderson, Thomas

Abstract Details

2017, MS, University of Cincinnati, Engineering and Applied Science: Computer Engineering.
Artificial neural networks and deep learning are topics of increasing interest in computing. This has spurred investigation into dedicated hardware, such as accelerators, to speed up the training and inference processes. This work proposes a new hardware architecture called Built-In Self Training (BISTr) for both training a network and performing inference. The architecture combines principles from the Built-In Self Testing (BIST) VLSI paradigm with the backpropagation learning algorithm. The primary focus of the work is to present the BISTr architecture and verify its efficacy. The development of the architecture began with analysis of the backpropagation algorithm and the derivation of new equations. Once the derivations were complete, the hardware was designed, and all of the functional components were tested in VHDL from the bottom level to the top. An automatic synthesis tool was created to generate the code used and tested in the experimental phase. The application tested during the experiments was function approximation. The new architecture was trained successfully for a couple of the test cases. The other test cases were not successful, but this was due to the data representation used in the VHDL code rather than the hardware design itself. The area overhead of the added hardware and the speed performance were analyzed briefly. The results showed that (1) the area overhead was significant (around 3 times the area without the additional hardware) and (2) the theoretical speed performance of the design is very good. The new BISTr architecture was thus proven to work, but the architecture presented in this work cannot be implemented for large neural networks due to the large area overhead.
Further work would be required to expand upon and improve the idea presented in this thesis: (1) development of an alternative design that is more practical to implement, (2) more rigorous testing of area and speed, (3) implementation of other training methods and functionality, and (4) additions to the synthesis tool to increase its capability.
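The training task the abstract describes, backpropagation applied to function approximation, can be illustrated in software. The following is a minimal sketch only, not the thesis's VHDL hardware: a one-hidden-layer network trained by gradient descent to approximate an assumed target function f(x) = x², with all sizes, the learning rate, and the target chosen for illustration.

```python
# Illustrative software sketch of backpropagation for function approximation.
# This is NOT the BISTr hardware design; the network size (8 hidden units),
# learning rate, and target function f(x) = x^2 are assumptions.
import math
import random

random.seed(0)
H = 8       # hidden units (assumed)
LR = 0.05   # learning rate (assumed)

# Weights: input->hidden (w1, b1) and hidden->output (w2, b2).
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Forward pass: tanh hidden layer, linear output."""
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    y = sum(w2[i] * h[i] for i in range(H)) + b2
    return h, y

def train_step(x, target):
    """One backpropagation update for squared-error loss L = 0.5*(y - t)^2."""
    global b2
    h, y = forward(x)
    err = y - target  # dL/dy
    for i in range(H):
        # Backpropagate through the linear output and the tanh hidden unit.
        dh = err * w2[i] * (1.0 - h[i] * h[i])
        w2[i] -= LR * err * h[i]
        w1[i] -= LR * dh * x
        b1[i] -= LR * dh
    b2 -= LR * err

data = [random.uniform(-1, 1) for _ in range(200)]

def avg_loss():
    return sum(0.5 * (forward(x)[1] - x * x) ** 2 for x in data) / len(data)

loss_before = avg_loss()
for epoch in range(500):
    for x in data:
        train_step(x, x * x)
loss_after = avg_loss()
```

In the BISTr design this update loop is realized in dedicated hardware rather than software, which is the source of the area overhead the abstract reports; the failures attributed to "data representation" correspond here to what finite-precision arithmetic would do to the weight updates.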
Wen-Ben Jone, Ph.D. (Committee Chair)
Ali Minai, Ph.D. (Committee Member)
Ranganadha Vemuri, Ph.D. (Committee Member)
123 p.

Recommended Citations

  • Anderson, T. (2017). Built-In Self Training of Hardware-Based Neural Networks [Master's thesis, University of Cincinnati]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512039036199393

    APA Style (7th edition)

  • Anderson, Thomas. Built-In Self Training of Hardware-Based Neural Networks. 2017. University of Cincinnati, Master's thesis. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512039036199393.

    MLA Style (8th edition)

  • Anderson, Thomas. "Built-In Self Training of Hardware-Based Neural Networks." Master's thesis, University of Cincinnati, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512039036199393

    Chicago Manual of Style (17th edition)