Fundamental Limits on Energy Dissipation for Neural Network Implementations
Abstract
The vast majority of all data generated in human history has been generated in the last few years. We are very clearly in the era of big data. Handling large amounts of data and learning from them has become one of the primary goals of the computing industry. Improving hardware implementations of machine learning and combinatorial optimization algorithms will go a long way toward achieving this. Phase change material (PCM) and memristor-based neuromorphic systems are examples of energy-efficient realizations. In this poster, we explore fundamental limits on energy dissipation in Hopfield and Boltzmann neural network implementations. Lower bounds on dissipation for simulated annealing in a simple Boltzmann network are evaluated for different annealing schedules. We also introduce a new dissipation complexity measure to classify computing problems in terms of the fundamental dissipation associated with their implementation strategies. This analysis paves the way for studies of more complex networks, including backpropagation, and provides a better understanding of the greater efficiency achievable through neural computing over traditional architectures.
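To make the abstract's core quantities concrete, the Python sketch below runs simulated annealing with a geometric temperature schedule on a small Boltzmann-style network and evaluates the Landauer bound of k_B T ln 2 per erased bit as a floor on dissipation. Everything here (the network size, the random weights, the schedule parameters, and the assumption that annealing collapses all n initially random bits) is an illustrative assumption for the sketch, not the poster's actual model or results.

    import math
    import random

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def energy(state, weights):
        """Hopfield/Boltzmann energy E = -1/2 * sum_{i!=j} w_ij s_i s_j."""
        n = len(state)
        return -0.5 * sum(weights[i][j] * state[i] * state[j]
                          for i in range(n) for j in range(n) if i != j)

    def anneal(weights, t_start=5.0, t_end=0.05, alpha=0.95, sweeps=100):
        """Simulated annealing with a geometric schedule T <- alpha * T.

        Energies are dimensionless (k_B = 1 units) for simplicity.
        """
        n = len(weights)
        state = [random.choice([-1, 1]) for _ in range(n)]
        temperature = t_start
        while temperature > t_end:
            for _ in range(sweeps):
                i = random.randrange(n)
                # Energy change from flipping spin i: delta = 2 s_i h_i.
                delta = 2 * state[i] * sum(weights[i][j] * state[j]
                                           for j in range(n) if j != i)
                # Metropolis acceptance rule.
                if delta <= 0 or random.random() < math.exp(-delta / temperature):
                    state[i] = -state[i]
            temperature *= alpha
        return state

    def landauer_bound(bits_erased, temperature_kelvin=300.0):
        """Minimum dissipation (joules) to erase `bits_erased` bits at T."""
        return bits_erased * K_B * temperature_kelvin * math.log(2)

    if __name__ == "__main__":
        random.seed(0)
        # Symmetric random weights for a 6-unit network (assumed for illustration).
        n = 6
        weights = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                weights[i][j] = weights[j][i] = random.uniform(-1, 1)
        final = anneal(weights)
        print("final state:", final, "energy:", energy(final, weights))
        # If annealing maps n fully random bits to one outcome, dissipation is at least:
        print("Landauer lower bound at 300 K (J):", landauer_bound(n))

Slower schedules (alpha closer to 1) reach lower-energy states more reliably but take more update steps, which is the kind of schedule-dependent trade-off the poster's dissipation bounds quantify.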
Authors
- Natesh Ganesh (University of Massachusetts)
- Neal Anderson (University of Massachusetts)
Topic Area
Topics: Neuromorphic, or “brain inspired”, computing
Session
PS-1 » Poster Session (19:00 - Monday, 17th October, Ballroom Foyer)
Paper
ICRC_Poster.pdf