Stephen Grossberg

Stephen Grossberg (born December 31, 1939) is a cognitive scientist, theoretical and computational psychologist, neuroscientist, mathematician, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a professor emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering at Boston University.

Grossberg first lived in Woodside, Queens, in New York City. His father died from Hodgkin's lymphoma when he was one year old. His mother remarried when he was five years old. He then moved with his mother, stepfather, and older brother, Mitchell, to Jackson Heights, Queens. He attended Stuyvesant High School in lower Manhattan after passing its competitive entrance exam. He graduated first in his class from Stuyvesant in 1957.

He began undergraduate studies at Dartmouth College in 1957, where he first conceived of the paradigm of using nonlinear differential equations to describe neural networks that model brain dynamics, as well as the basic equations that many scientists use for this purpose today.[citation needed] He then continued to study both psychology and neuroscience. He received a B.A. in 1961 from Dartmouth as its first joint major in mathematics and psychology.

Grossberg then went to Stanford University, where he received an MS in mathematics in 1964, before transferring to The Rockefeller Institute for Medical Research (now The Rockefeller University) in Manhattan. In his first year at Rockefeller, he wrote a 500-page monograph, The Theory of Embedding Fields with Applications to Psychology and Neurophysiology, summarizing his discoveries to that time. Grossberg received a PhD in mathematics from Rockefeller in 1967 for a thesis that proved the first global content-addressable memory theorems about the neural learning models he had discovered at Dartmouth. His PhD thesis advisor was Gian-Carlo Rota.

Grossberg was hired in 1967 as an assistant professor of applied mathematics at MIT following strong recommendations from Mark Kac and Rota. In 1969, Grossberg was promoted to associate professor after publishing a stream of conceptual and mathematical results about many aspects of neural networks, including a series of foundational articles in the Proceedings of the National Academy of Sciences between 1967 and 1971.

Grossberg was hired as a full professor at Boston University in 1975, where he is still on the faculty today. While at Boston University, he founded the Department of Cognitive and Neural Systems, several interdisciplinary research centers, and various international institutions.

Grossberg is a pioneer of the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. His work focuses on the design principles and mechanisms that enable the behavior of individuals, or machines, to adapt autonomously in real time to unexpected environmental challenges. This research has included neural models of vision and image processing; object, scene, and event learning, pattern recognition, and search; audition, speech, and language; cognitive information processing and planning; reinforcement learning and cognitive-emotional interactions; autonomous navigation; adaptive sensory-motor control and robotics; self-organizing neurodynamics; and mental disorders. Grossberg also collaborates with experimentalists to design experiments that test theoretical predictions and fill conceptually important gaps in the experimental literature, analyzes the mathematical dynamics of neural systems, and transfers biological neural models to applications in engineering and technology. He has published 18 books or journal special issues and over 560 research articles, and holds seven patents.

Grossberg has studied how brains give rise to minds since he took the introductory psychology course as a freshman at Dartmouth College in 1957. At that time, he introduced the paradigm of using nonlinear systems of differential equations to show how brain mechanisms can give rise to behavioral functions. This paradigm is helping to solve the classical mind/body problem, and it is the basic mathematical formalism used in biological neural network research today.

In particular, in 1957–1958, Grossberg discovered widely used equations for (1) short-term memory (STM), or neuronal activation (often called the Additive and Shunting models, or the Hopfield model after John Hopfield's 1984 application of the Additive model equation); (2) medium-term memory (MTM), or activity-dependent habituation (often called habituative transmitter gates, or depressing synapses after Larry Abbott's 1997 introduction of that term); and (3) long-term memory (LTM), or neuronal learning (often called gated steepest descent learning).

One variant of these learning equations, called Instar Learning, was introduced by Grossberg in 1976 into Adaptive Resonance Theory and Self-Organizing Maps for the learning of adaptive filters in these models. This learning equation was also used by Kohonen in his applications of Self-Organizing Maps starting in 1984. Another variant, called Outstar Learning, was used by Grossberg starting in 1967 for spatial pattern learning. In 1976, Grossberg combined Outstar and Instar learning in a three-layer network that learns multidimensional maps from any m-dimensional input space to any n-dimensional output space; Hecht-Nielsen called this architecture counterpropagation in 1987.
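The general forms of these three equation families can be sketched in LaTeX notation as follows. The symbols used here (activities x_i, transmitter gates z_i, adaptive weights w_ij, signal functions f and g, inputs I_i, E_i, J_i, and rate constants A, B, C, D, E) are illustrative notational choices for this sketch, not necessarily Grossberg's exact published notation:

    % STM (Additive model): activity x_i decays passively and sums
    % signaled, weighted inputs plus an external input I_i
    \frac{dx_i}{dt} = -A_i x_i + \sum_j f_j(x_j)\, w_{ji} + I_i

    % STM (Shunting model): excitatory input E_i and inhibitory input
    % J_i are multiplied by automatic gain-control terms that keep
    % x_i bounded within the interval [-C, B]
    \frac{dx_i}{dt} = -A x_i + (B - x_i)\, E_i - (C + x_i)\, J_i

    % MTM (habituative transmitter gate): the gate z_i recovers toward
    % B while being depleted in proportion to the gated signal
    \frac{dz_i}{dt} = D\,(B - z_i) - E\, f(x_i)\, z_i

    % LTM (gated steepest descent): a sampling signal gates learning
    % on or off while the weight tracks the other cell's activity.
    % Instar (gated by the postsynaptic cell x_j):
    \frac{dw_{ij}}{dt} = f(x_j)\,\big[\, g(x_i) - w_{ij} \,\big]
    % Outstar (gated by the presynaptic source cell x_i):
    \frac{dw_{ij}}{dt} = f(x_i)\,\big[\, g(x_j) - w_{ij} \,\big]

In an instar, learning is gated by the postsynaptic cell, so a cell's incoming weights sample the input pattern only while that cell is active; this is the rule that reappears in the adaptive filters of ART and in Self-Organizing Maps. In an outstar, learning is gated by the presynaptic source cell, which learns and can later read out a spatial pattern across its target cells. Cascading an instar layer into an outstar layer yields the three-layer map-learning network described above.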
