We analyze the fundamental impact of noise propagation in deep neural networks (DNNs) comprising nonlinear neurons with connections optimized by training. Our motivation is to understand the impact of noise in analogue neural network realizations. We consider the influence of additive and multiplicative, correlated and uncorrelated types of internal noise in DNNs. We identify general properties of the noise impact depending on the noise type, activation function, depth, and the statistics of the connection matrices, and show that noise accumulation can be efficiently avoided. Our work is based on analytical methods predicting the noise levels in all layers of the network.
Analogue neural networks are promising candidates for overcoming the severe energy challenges of digital neural network processors. However, noise is an inherent part of analogue circuitry, regardless of whether electronic, optical, or electro-optical integration is the target. I will discuss fundamental aspects of noise in analogue circuits and then introduce our analytical framework describing noise propagation in fully trained deep neural networks comprising nonlinear neurons. Most importantly, we found that noise accumulation can be very efficiently suppressed under realistic hardware conditions. As such, neural networks implemented in analogue hardware should be very robust to internal noise, which is of fundamental importance for future hardware realizations.
Maximal computing performance can only be achieved if neural networks are fully implemented in hardware. Besides the potentially large benefits, such parallel and analogue hardware platforms face new, fundamental challenges. An important concern is that such systems might ultimately succumb to the detrimental impact of noise. We study noise propagation through deep neural networks with various neuron nonlinearities, trained via back-propagation for image recognition and time-series prediction. We consider correlated and uncorrelated, multiplicative and additive noise, and use noise amplitudes extracted from a physical experiment. The developed analytical framework is of great relevance for future hardware neural networks: it allows predicting the noise level at the system's output based on the properties of its constituents. As such, it is an essential tool for future hardware neural network engineering and performance estimation.
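The kind of noise propagation studied above can be sketched numerically: inject uncorrelated Gaussian noise, either additive or multiplicative, at every neuron of a small feed-forward network and measure the resulting spread at the output. This is a minimal illustration only, not the authors' analytical framework; the network size, tanh nonlinearity, weight scaling, and noise amplitude are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, noise=None, sigma=0.0):
    """Propagate x through tanh layers, optionally injecting internal noise.

    noise: None (noiseless pass), "additive" (y + sigma*xi), or
    "multiplicative" (y * (1 + sigma*xi)), with uncorrelated Gaussian
    xi drawn independently at every neuron of every layer.
    """
    for W in weights:
        y = np.tanh(W @ x)
        if noise == "additive":
            y = y + sigma * rng.standard_normal(y.shape)
        elif noise == "multiplicative":
            y = y * (1.0 + sigma * rng.standard_normal(y.shape))
        x = y
    return x

# Small illustrative network: 4 layers of 50 neurons, Gaussian weights
# scaled by 1/sqrt(width) so signals stay O(1) through the depth.
layers, width, sigma = 4, 50, 0.05
weights = [rng.standard_normal((width, width)) / np.sqrt(width)
           for _ in range(layers)]
x0 = rng.standard_normal(width)

# Repeat the noisy forward pass and measure the output spread.
trials = np.array([forward(x0, weights, "additive", sigma)
                   for _ in range(200)])
out_noise = trials.std(axis=0).mean()
print(f"injected sigma={sigma}, empirical output noise ~ {out_noise:.3f}")
```

Comparing the empirical output spread against the per-layer injected amplitude shows whether noise accumulates with depth or is damped, here by the saturating tanh nonlinearity; the analytical framework in the abstracts predicts this level from the network's constituents instead of by sampling.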