The journal Neuron has a rare open-access article on the growing influence of theoretical neuroscience.
Read the whole article.

Theoretical Neuroscience Rising
L.F. Abbott
Abstract

Theoretical neuroscience has experienced explosive growth over the past 20 years. In addition to bringing new researchers into the field with backgrounds in physics, mathematics, computer science, and engineering, theoretical approaches have helped to introduce new ideas and shape directions of neuroscience research. This review presents some of the developments that have occurred and the lessons they have taught us.

Main Text
Introduction
Twenty years ago, when Neuron got its start, theoretical neuroscience was experiencing a start of its own. Of course, there were important theoretical contributions to neuroscience long before 1988, most notably: the development of what we now call the integrate-and-fire model by Lapicque in 1907; the modeling of the action potential by Hodgkin and Huxley, a brilliant theoretical offshoot of their experimental work; the development of dendritic and axonal cable theory by Wilfrid Rall; and the broad insights of David Marr. Nevertheless, over the past 20 years, theoretical neuroscience has changed from a field practiced by a few multitalented experimentalists and dedicated theorists (Jack Cowan, Stephen Grossberg, John Rinzel, and Terry Sejnowski being early examples) sparsely scattered around the world to an integral component of virtually every scientific meeting and major department. Something has changed. How did this happen, and what impact has it had?
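(As an aside for readers new to the field: the model Lapicque anticipated is remarkable for how little machinery it needs. In its modern "leaky" textbook form, written in standard present-day notation rather than Lapicque's own,

    \tau_m \frac{dV}{dt} = -(V - V_{\text{rest}}) + R_m I_e, \qquad V \to V_{\text{reset}} \ \text{whenever} \ V \ge V_{\text{th}},

the membrane potential V charges like a leaky RC circuit in response to an injected current I_e, and a spike is simply declared each time V crosses the threshold V_th.)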
Two developments in the mid-1980s set the stage for the rapid expansion of theoretical neuroscience. One was the popularization of the backpropagation algorithm for training artificial neural networks (Rumelhart and McClelland, 1986). This greatly expanded the range of tasks that artificial neural networks could perform and led to a number of people entering neural network research. Around the same time, Amit, Gutfreund, and Sompolinsky (Amit et al., 1985) showed how a memory model proposed by Hopfield (Hopfield, 1982) could be analyzed using methods of statistical physics originally designed for spin glasses. The sheer beauty of this calculation drew a large batch of physicists into the field. These new immigrants entered with high confidence-to-knowledge ratios that, hopefully, have since been reduced through large growth in the denominators and more modest adjustments of the numerators.
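To make the Hopfield model concrete, here is a minimal sketch in Python with NumPy; the function names and parameter choices are illustrative, not taken from any of the papers cited above. Patterns of +1/-1 activity are stored in the connection weights by a Hebbian outer-product rule, and a corrupted pattern is cleaned up by repeated asynchronous sign updates:

    import numpy as np

    def store(patterns):
        # Hebbian outer-product rule; patterns has shape (num_patterns, N),
        # with entries of +1 or -1.
        n_units = patterns.shape[1]
        weights = patterns.T @ patterns / n_units
        np.fill_diagonal(weights, 0.0)  # no self-connections
        return weights

    def recall(weights, probe, steps=2000, seed=0):
        # Asynchronous dynamics: update one randomly chosen unit at a time,
        # setting it to the sign of its total input.
        rng = np.random.default_rng(seed)
        state = probe.copy()
        for _ in range(steps):
            i = rng.integers(len(state))
            state[i] = 1 if weights[i] @ state >= 0 else -1
        return state

Starting from a stored pattern with a minority of its entries flipped, these updates typically descend back to the stored pattern itself. What Amit, Gutfreund, and Sompolinsky supplied was the analytical result behind such simulations: a network of N units can hold roughly 0.14N random patterns before recall degrades, a number the spin-glass methods yield exactly where simulation can only estimate it.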
What has a theoretical component brought to the field of neuroscience? Neuroscience has always had models (how would it be possible to contemplate experimental results in such complex systems without a model in one's head?), but prior to the invasion of the theorists, these were often word models. There are several advantages of expressing a model in equations rather than words. Equations force a model to be precise, complete, and self-consistent, and they allow its full implications to be worked out. It is not difficult to find word models in the conclusions sections of older neuroscience papers that sound reasonable but, when expressed as mathematical models, turn out to be inconsistent and unworkable. Mathematical formulation of a model forces it to be self-consistent and, although self-consistency is not necessarily truth, self-inconsistency is certainly falsehood.
A skillful theoretician can formulate, explore, and often reject models at a pace that no experimental program can match. This is a major role of theory--to generate and vet ideas prior to full experimental testing. Having active theoretical contributors in the field allows us collectively to contemplate a vastly greater number of solutions to the many problems we face in neuroscience. Both theorists and experimentalists generate and test ideas, but because mathematical and computational analyses turn over far more rapidly than experimental ones, theorists can act as initial filters of ideas prior to experimental investigation. In this regard, it is the theorist's job to develop, test, frequently reject, and sometimes promote new ideas.
Theoretical neuroscience is sometimes criticized for not making enough predictions. This is part of a pre-versus-post debate about the field that has nothing to do with synapses. Although there are notable examples of predictions made by theorists and later verified by experimentalists in neuroscience, examples of postdictions are far more numerous and often more interesting. To apply prediction as the ultimate test of a theory is a distortion of history. Many of the most celebrated moments in quantitative science--the gravitational basis of the shape of planetary orbits, the quantum basis of the spectrum of the hydrogen atom, and the relativistic origin of the precession of the orbit of Mercury--involved postdictions of known and well-characterized phenomena. In neuroscience especially, experimentalists have gotten a big head start. There is nothing wrong with a model that postdicts previously known phenomena. The key test of the value of a theory is not necessarily whether it predicts something new, but whether it makes postdictions that generalize to other systems and provide valuable new ways of thinking.
The development of a theoretical component to neuroscience research has had significant educational impact across the biological sciences. The Sloan-Swartz initiative, for example, has supported almost 80 researchers who successfully transitioned from other fields to faculty positions in neuroscience. Jim Bower and Christof Koch set up the computational neuroscience course at Woods Hole, a summer course that is still educating people with backgrounds in both the biological and physical sciences and that has been copied in courses around the world. Biology used to be a refuge for students fleeing mathematics, but now many life sciences students have a solid knowledge of basic mathematics and computer programming, and those who don't at least feel guilty about it. A number of developments have led to this shift, the rise of theoretical neuroscience certainly being one of them.
The following sections provide a sparse sampling of theoretical developments that have occurred over the past 20 years and discuss some of the things they have taught us. The presentation is idiosyncratic: some developments appear in a different context from the one in which they first arose, perhaps differently from what their creators intended, and many important achievements are ignored entirely. The focus is on lessons learned from a subset of the theoretical advances of the past 20 years.