Sunday, June 09, 2013

Harvard’s George Whitesides Gives Brilliant Critique of Mammoth U.S. Brain Project

Not everyone in the neuroscience and psychology worlds is excited by President Obama's $100 million Brain Activity Map Project, also known as the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. George Whitesides, the Harvard chemist and veteran of big government ventures in support of nanotechnology, recently gave a very good critique of the initiative.

Harvard’s Whitesides Gives Brilliant Critique of Mammoth U.S. Brain Project

By Gary Stix | May 29, 2013


The Obama administration’s Big Brain project—$100 million for a map of some sort of what lies beneath the skull—has captured the attention of the entire field of neuroscience. The magnitude of the cash infusion can’t help but draw notice, eliciting huzzahs mixed with gripes that the whole effort might sap support for other, perhaps equally worthy, neuro-related endeavors.

The Brain Activity Map Project—or the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative—is intended to give researchers tools to elicit the real-time functioning of neural circuits, providing a better picture of what happens in the brain when it is immersed in thought or when brain cells are beset by a degenerative condition like Parkinson’s or Alzheimer’s. Current technologies are either too slow or lack the resolution to achieve these goals.

One strength of the organizers—perhaps a portent of good things to come—is that they don’t seem to mind opening themselves to public critiques. At a planning meeting earlier this month, George Whitesides, the eminent Harvard chemist and veteran of big government ventures in support of nanotechnology, weighed in on how the project appeared to an informed outsider. Edited excerpts of some of his comments follow. This posting is a bit long, but Whitesides is eloquent, and it’s worth reading what he has to say because his views apply to any large-scale sci-tech foray.

Whitesides began his talk after listening to a steady cavalcade of big-name neuroscientists furnish their personal wish lists for the program: ultrasound to induce focal lesions, more fruit fly studies to find computational nervous system primitives, more studies on zebra fish, studies on wholly new types of model organisms, avoiding too much emphasis on practical applications and so on.
“Listening to you this morning has been intensely interesting for me,” Whitesides began. “It has very much the flavor of a thousand flowers blooming. That is to say, a problem which we all agree is intensely important: what is the brain, how does it think, what is mind? It fits right there with issues such as what is life and where does life come from. It fits with the great problems of the next century.”
“The question of whether people outside understand what is going on and where it leads is more complicated,” he continued. “I’ll just make a point to set a starting point. When I first heard about…the brain map I checked with a bunch of people who are good scientists and neurobiologists and everybody’s opposed, almost universally…There’s very deep skepticism that this approach, physical mapping at that scale, is going to work and lead to something.”
To promote the program, Whitesides emphasized the critical need to get non-neuroscientists to understand the problem being addressed—and to think carefully about something as simple as what the project should be called. Would the name “brain map” convey anything intelligible to someone not conversant with technical papers that bear titles like “Climbing Fiber Input Shapes Reciprocity of Purkinje Cell Firing”?

Whitesides suggested reverting to first principles in trying to describe to the world at large the importance of spending $100 million to gain better insight into the minutiae of neural circuitry. He recommended a cross-disciplinary collaboration that draws its knowledge not from geneticists or bioengineers but from across the divide of C.P. Snow’s Two Cultures: in other words, bringing in the English teachers. Going as basic as it gets, Whitesides told a room packed with full professors from elite universities that they should craft the story of the Big Brain project with the structural elements of a murder mystery.
“You have to have a puzzle or problem: Who killed the lady of the house? Was it the butler or somebody else? There has to be a puzzle or conflict or problem you want to resolve. The second element is a journey or trek: how you get there. You’ve spent much time talking here about that: what technical methods to get data or to formulate experiments.
“The third component: There has to be a surprise. If you don’t have a surprise, nobody’s interested. You have to catch the attention of people. To say [you want to come up with] a theory of mind is too far off. You want something shorter term that people can get a grip on. Finally, you need a resolution. The cat killed the lady of the house, not the butler…But you need a resolution. Often in science you call it an application.
“If you don’t have those components, you don’t have much to work with when talking to people who are not neuroscientists. Everybody’s fascinated by the particular tack they take to a particular piece of research. Outside it’s a different story. People want to know what you’re all doing and it has to be simple enough for people to understand that. It’s very difficult to do. 
“It’s very, very difficult to do and one of the issues here is to start hammering out that story. In genomics, it was: ‘we’re going to understand genomics, and based on the genome we’re going to understand cancer, and based on that understanding we’re going to cure cancer; and based on that your mother is going to live for a longer period of time.’ 
“Now it’s turned out to be more complicated than that, as everything in biology is. But here you’ve got an even more complicated problem. So how do you simplify this very complicated problem with top- and bottom-level stories in such a fashion that I as an outsider can understand what the field is going to be doing, what its deliverables are going to be? …
“Now in that context there are just a couple of things to remember. One of them is the question of ‘Why now?’ This problem has been around for a long time. And this is hardly the first group that’s thought about the nature of the brain. So why is now the time where we expect something astonishing to happen? With genomics, it happened because the technology of sequencing became so good that virtually anyone could generate floods of data and then begin to think about what could be done with that. What’s the corresponding thing here? I don’t know the answer to that. 
“The second issue, which is in the same general vein as ‘why now,’ is ‘Who cares?’ Obviously you care because you care about problems. But outside of this room, with people who are not neuroscientists, what do they care about? What is the problem that you say you’re going to solve that they care about? And I don’t think there is any shortage of these problems, all the way from alleviating tremor in Parkinson’s to beginning to think about depression, which is one of the great problems in public health.”
Whitesides then went on to talk about other considerations for structuring the project so that it retains some relevance beyond the neuroscience community. “I think that it’s really important to have deliverables and outcomes,” he said. “They don’t have to be the ultimate goal, but you need milestones along the way so you can go to the outside world and say we have done this. It’s not a compelling case to say that we’re here, and in 100 years we’ll have a theory of mind, and there’s nothing to show you in between because it’s all too complicated to understand. So what are they going to be and what do they look like?
“Second there’s a question of reductionism vs. higher-level stuff. If you think that a theory of mind is going to come by understanding the function of individual synapses and then building up from there, that tends not to work too well with really complicated systems. It works well with engineered systems like transistors or integrated circuits or devices or the Internet or Facebook. Those systems are engineered systems. Picking really complicated systems apart is hard to do. So what often one does is go from the end and look at higher-level behaviors in terms of black boxes and if you have good working models, you can pick those black boxes apart. 
“Just to give you an example of how the fully reductionist approach can run into difficulties, again we can go to genomics. If you talk now to the people in the pharmaceutical industry, what they will say is they’re moving massively away from target-based medicine to phenotypic assays. That is to say, if you want to find out whether a mouse gets better, you give a mouse stuff and see what happens, you don’t ask too many detailed questions. The detailed questions haven’t worked out very well. Here I don’t have a sense where the dividing line is between things that are best done at high-level and things that are best done by going reductionist. But there’s probably a place for everything.” 
“Zebra fish are a nice transparent model, but they’re probably not going to tell us very much about depression. People outside this room are probably more interested in depression than they are in zebra fish. That’s an interesting question.
“The third point is about balance and inclusion. We are at the tail end of a pretty successful program in the United States on nanoscience or nanotechnology. And the question of why was this successful is complicated. But one of the reasons is that when this program emerged, it was phrased in such a way that virtually every area of science saw there was something in it for them; that is, the chemists, the biologists, the physicists, the device guys; everybody saw that there was some value in nanoscience for them. 
“And there was a supporting, enormously important technology, which is the technology of integrated circuits. And what’s happened over the course of time is what the engineers at Intel have done, which is almost beyond belief in terms of its sophistication. Two generations or maybe one generation from now, microprocessors will have minimum feature sizes that are on the order of maybe 8 nanometers. I still can’t believe this, and that’s using 193-nanometer light.
“So they provided an enormous practical push for this area, and then everybody had something interesting to do at the nanoscale. The question is how does one open this community in such a fashion that everyone thinks there’s something interesting and important…[For the brain project], it has to include engineering, it has to include clinical medicine, it has to include the molecular, it has to include cells and animals. The whole story has to be there somehow, but making the story inclusive will make a much stronger case for building a strong community.
“The last point I’ll make is inclusion of industry…Let me tell you another short story, which comes from a component of genomics: Illumina, the sequencer that has been as important as many other things in genomics. The inventor of the technology at the very beginning was David Walt of Tufts…I was at a seminar with David in which one of the people in the audience at the end asked the following question: ‘How do you handle the conflict-of-interest problem in an academic lab when you’re working on this and a company is working on the same thing?’ And he [Walt] said, ‘There’s never a problem, and the reason there’s never a problem is that once industry takes up an idea, and good engineers, mature engineers, begin to work on it, an academic laboratory can never compete.’
“Now the relevance to this [the brain project]: if you think about what Illumina and other sequencers made possible in genomics, you can ask the question: are there corresponding things in this area where really good, skilled industrial engineers can make a capability available to the community in a way that makes it possible to collect all the data, all the structure, function, the measurements that you want to collect? Because it’s going to be vastly, vastly easier if it’s done as a centralized function, with real people paying real dollars to get it done really, really well.
“And it may be premature to do it at this point. I don’t know the answer to that, but it’s something for you to think about. And I think the earlier you get people who are professional engineers and, on the other end, clinicians actively involved in the work that you’re doing, the more likely you are to find components that you can use and motivations for using them that will help keep the field strong.
“So it’s a fantastic area, unbelievably complicated. Outside it looks less straightforward than it looks to you inside and inside it looks pretty chaotic, so you can imagine what it looks like from outside.”

~ About the Author: Gary Stix, a senior editor, commissions, writes, and edits features, news articles and Web blogs for SCIENTIFIC AMERICAN. His area of coverage is neuroscience. He also has frequently been the issue or section editor for special issues or reports on topics ranging from nanotechnology to obesity. He has worked for more than 20 years at SCIENTIFIC AMERICAN, following three years as a science journalist at IEEE Spectrum, the flagship publication for the Institute of Electrical and Electronics Engineers. He has an undergraduate degree in journalism from New York University. With his wife, Miriam Lacob, he wrote a general primer on technology called Who Gives a Gigabyte? Follow on Twitter @gstix1.


The views expressed are those of the author and are not necessarily those of Scientific American.
