Files
32941.pdf (2.9 MB)
Fault tolerance and re-training analysis on neural networks
Author
George, Abhinav Kurian
Permalink: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1552391639148868
Year and Degree
2019, MS, University of Cincinnati, Engineering and Applied Science: Computer Engineering.
Abstract
In the current age of big data, artificial intelligence and machine learning technologies have gained much popularity. Due to the increasing demand for such applications, neural networks are being targeted toward hardware solutions. Owing to shrinking feature sizes, the number of physical defects is on the rise. This growing number of defects prevents designers from realizing the full potential of on-chip designs. The challenge now is not only to find solutions that balance high performance and energy efficiency, but also to achieve fault tolerance in a computational model. Neural computing, due to its inherent fault-tolerant capabilities, can provide promising solutions to this issue. The primary focus of this thesis is to gain a deeper understanding of fault tolerance in neural network hardware. As part of this work, we present a comprehensive analysis of fault tolerance by exploring the effects of faults on two popular neural models: the multi-layer perceptron and the convolutional neural network. We built the models using the conventional 64-bit floating-point representation, and we also explore the more recent 8-bit integer quantized representation. A fault injector model is designed to inject stuck-at faults at random locations in the network. The networks are trained with the basic backpropagation algorithm and tested against the standard MNIST benchmark. For training purely quantized networks, we propose a novel backpropagation strategy. Depending on the performance degradation, the faulty networks are re-trained to recover their accuracy. The results suggest that: (1) neural networks cannot be considered completely fault tolerant; (2) quantized neural networks are more susceptible to faults; (3) using the novel training algorithm for quantized networks, comparable accuracy is achieved; (4) re-training is an effective strategy for improving fault tolerance. In this work, a 30% improvement is achieved in the quantized network, compared to a 6% improvement in the floating-point networks, using the basic backpropagation algorithm. We believe that more advanced re-training strategies can enhance fault tolerance to an even greater extent.
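The thesis's own fault injector is not reproduced in this record, but the stuck-at fault model it describes can be illustrated with a minimal sketch. Assuming weights stored as 64-bit IEEE 754 floats, a stuck-at fault at a random location can be modeled by forcing one bit of a weight's binary representation to 0 or 1; the function names and the bit-level encoding via Python's struct module are illustrative choices, not the author's implementation.

    import random
    import struct

    def inject_stuck_at(weight, bit, stuck_value):
        # Reinterpret the 64-bit float as an unsigned 64-bit bit pattern.
        (bits,) = struct.unpack("<Q", struct.pack("<d", weight))
        if stuck_value:
            bits |= 1 << bit     # stuck-at-1: force the chosen bit high
        else:
            bits &= ~(1 << bit)  # stuck-at-0: force the chosen bit low
        # Reinterpret the modified bit pattern as a float again.
        (faulty,) = struct.unpack("<d", struct.pack("<Q", bits))
        return faulty

    def inject_random_faults(weights, n_faults, seed=0):
        # Apply stuck-at faults at random (weight, bit) locations, in place.
        rng = random.Random(seed)
        for _ in range(n_faults):
            i = rng.randrange(len(weights))
            bit = rng.randrange(64)
            weights[i] = inject_stuck_at(weights[i], bit, rng.randint(0, 1))
        return weights

How damaging such a fault is depends heavily on which bit is hit: a stuck exponent bit of a float weight can change the value by orders of magnitude, while a stuck low-order mantissa bit is nearly harmless. This bit-position sensitivity is consistent with the thesis's finding that neural networks cannot be considered completely fault tolerant.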
Committee
Wen-Ben Jone, Ph.D. (Committee Chair)
Carla Purdy, Ph.D. (Committee Member)
Ranganadha Vemuri, Ph.D. (Committee Member)
Pages
109 p.
Subject Headings
Computer Engineering
Keywords
neural networks; artificial intelligence; fault tolerance; quantization
Recommended Citations
APA Style (7th edition):
George, A. K. (2019). Fault tolerance and re-training analysis on neural networks [Master's thesis, University of Cincinnati]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1552391639148868

MLA Style (8th edition):
George, Abhinav Kurian. Fault tolerance and re-training analysis on neural networks. 2019. University of Cincinnati, Master's thesis. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=ucin1552391639148868.

Chicago Manual of Style (17th edition):
George, Abhinav Kurian. "Fault tolerance and re-training analysis on neural networks." Master's thesis, University of Cincinnati, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1552391639148868
Document number: ucin1552391639148868
Download count: 669
Copyright Info
© 2019, all rights reserved.
This open access ETD is published by the University of Cincinnati and OhioLINK.