Overcoming the Static Learning Bottleneck - the Need for Adaptive Neural Learning
Abstract
Amidst the rising impact of machine learning and the popularity of deep neural networks, learning theory remains an open problem. With the emergence of neuromorphic computing as a means of addressing the von Neumann bottleneck, it is not simply a matter of running existing algorithms on new hardware; richer theory is needed to guide advances. In particular, a deeper understanding of the role of adaptivity in neural learning is needed to provide a foundation upon which architectures and devices may be built. Modern machine learning algorithms lack adaptive learning: they are dominated by a costly training phase, after which they no longer learn. The brain, by contrast, learns continuously and provides a basis from which new mathematical theories may be developed to greatly enrich the computational capabilities of learning systems. Game theory offers one such alternative mathematical perspective; as the study of strategic interactions, it is well suited to learning theory.
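As an illustration of the game-theoretic view of adaptive learning mentioned above (this sketch is not drawn from the paper itself), consider fictitious play: each player continuously adapts by best-responding to the empirical frequency of the opponent's past actions, so learning never stops after a fixed training phase. In the classic matching-pennies game, both players' empirical action frequencies converge toward the mixed Nash equilibrium (0.5, 0.5).

```python
import numpy as np

# Row player's payoff matrix for matching pennies (zero-sum game).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play(steps=5000):
    # Action counts start at 1 to avoid dividing by zero on the first step.
    counts_row = np.ones(2)  # how often the row player has played each action
    counts_col = np.ones(2)  # how often the column player has played each action
    for _ in range(steps):
        # Each player best-responds to the opponent's empirical mixed strategy.
        p_col = counts_col / counts_col.sum()
        p_row = counts_row / counts_row.sum()
        row_action = np.argmax(A @ p_col)        # row player maximizes payoff
        col_action = np.argmin(p_row @ A)        # column player minimizes it
        counts_row[row_action] += 1
        counts_col[col_action] += 1
    # Return each player's empirical action frequencies.
    return counts_row / counts_row.sum(), counts_col / counts_col.sum()

row_mix, col_mix = fictitious_play()
```

After a few thousand rounds, `row_mix` and `col_mix` both approach `[0.5, 0.5]`, a simple demonstration of learning as an ongoing strategic interaction rather than a one-time optimization.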
Authors
-
Craig Vineyard
(Sandia National Laboratories)
-
Stephen Verzi
(Sandia National Laboratories)
Topic Area
Topics: Neuromorphic, or “brain inspired”, computing
Session
OS-06B » Neuromorphic 4 (15:30 - Tuesday, 18th October, Del Mar Ballroom AB)
Paper
ID082_ICRC2016_finalpaper.pdf
Presentation Files
The presenter has not uploaded any presentation files.