Sunday, March 20, 2011

Press for V.S. Ramachandran's "The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human"

V.S. Ramachandran's new book, The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human, has been getting a lot of press. This post includes a review by Colin McGinn from the New York Review of Books and two interviews: one from the Neurophilosophy blog and one from ScienceDirect (originally published in Current Biology in 2005). The interview by Mo at Neurophilosophy turns a bit testy at one point, over a small disagreement about mirror neurons.

The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human
by V.S. Ramachandran
Norton, 357 pp., $26.95


Dancing stone nymph, Uttar Pradesh, India, early twelfth century. In The Tell-Tale Brain V.S. Ramachandran asks about this sculpture, ‘Does it stimulate mirror neurons?’

Is studying the brain a good way to understand the mind? Does psychology stand to brain anatomy as physiology stands to body anatomy? In the case of the body, physiological functions—walking, breathing, digesting, reproducing, and so on—are closely mapped onto discrete bodily organs, and it would be misguided to study such functions independently of the bodily anatomy that implements them. If you want to understand what walking is, you should take a look at the legs, since walking is what legs do. Is it likewise true that if you want to understand thinking you should look at the parts of the brain responsible for thinking?

Is thinking what the brain does in the way that walking is what the body does? V.S. Ramachandran, director of the Center for Brain and Cognition at the University of California, San Diego, thinks the answer is definitely yes. He is a brain psychologist: he scrutinizes the underlying anatomy of the brain to understand the manifest process of the mind. He approvingly quotes Freud’s remark “Anatomy is destiny”—only he means brain anatomy, not the anatomy of the rest of the body.

But there is a prima facie hitch with this approach: the relationship between mental function and brain anatomy is nowhere near as transparent as in the case of the body—we can’t just look and see what does what. The brain has an anatomy, to be sure, though it is boneless and relatively homogeneous in its tissues; but how does its anatomy map onto psychological functions? Are there discrete areas for specific mental faculties or is the mapping more diffuse (“holistic”)?

The consensus today is that there is a good deal of specialization in the brain, even down to very fine-grained capacities, such as our ability to detect color, shape, and motion—though there is also a degree of plasticity. The way a neurologist like Ramachandran investigates the anatomy–psychology connection is mainly to consider abnormal cases: patients with brain damage due to stroke, trauma, genetic abnormality, etc. If damage to area A leads to disruption of function F, then A is (or is likely to be) the anatomical basis of F.

This is not the usual way that biologists investigate function and structure, but it is certainly one way—if damage to the lungs hinders breathing, then the lungs are very likely the organ for breathing. The method, then, is to understand the normal mind by investigating the abnormal brain. Brain pathology is the key to understanding the healthy mind. It is as if we set out to understand political systems by investigating corruption and incompetence—a skewed vision, perhaps, but not an impossible venture. We should judge the method by the results it achieves.

Ramachandran discusses an enormous range of syndromes and topics in The Tell-Tale Brain. His writing is generally lucid, charming, and informative, with much humor to lighten the load of Latinate brain disquisitions. He is a leader in his field and is certainly an ingenious and tireless researcher. This is the best book of its kind that I have come across for scientific rigor, general interest, and clarity—though some of it will be a hard slog for the uninitiated. In what follows I can only provide a glimpse of the full range of material covered, by selecting a sample of case studies.

Read the whole review.

* * * * * *

Looking into Ramachandran's broken mirror

Posted on: March 10, 2011 5:20 AM, by Mo


I visited Vilayanur S. Ramachandran's lab at the University of California, San Diego recently, and interviewed him and several members of his lab about their work. Rama and I talked, among other things, about the controversial broken mirror hypothesis, which he and others independently proposed in the early 1990s as an explanation for autism. I've written a short article about it for the Simons Foundation Autism Research Initiative (SFARI), and the transcript of that part of the interview is below. I also wrote an article summarizing the latest findings about the molecular genetics of autism, which were presented in a symposium held at the Society for Neuroscience annual meeting last November.

Read the interview.

* * * * * *

The interview from ScienceDirect was posted in 2005, so I am including the whole piece; it offers a little more background on Ramachandran and where he is coming from in his research.

Current Biology
Volume 15, Issue 17, 6 September 2005, Pages R647-R648

Q & A: V.S. Ramachandran

Center for Brain and Cognition, University of California at San Diego, La Jolla, California 92093-0109, USA.

V.S. Ramachandran is professor and director, Center for Brain and Cognition, University of California at San Diego, and adjunct professor at the Salk Institute, La Jolla. He originally trained as a physician but switched to research in 1978 and obtained a PhD in neurophysiology from Trinity College, Cambridge. His main contributions are in visual psychophysics and behavioral neurology. He has received many honors, including a fellowship at All Souls College, Oxford, the Ramon y Cajal award from the International Neuropsychiatry Society, the Ariens Kappers medal from the Royal Netherlands Academy of Sciences, and the Presidential Lecture award from the American Academy of Neurology. He gave the Decade of the Brain plenary lecture at the annual meeting of the Society for Neuroscience, and the 2003 BBC Reith Lectures. His books include Phantoms in the Brain and A Brief Tour of Human Consciousness.

What made you choose biology? Let me answer by disagreeing with one of the scientists you have already interviewed, Steve Pinker (whom I admire, by the way). He says that we should not trust scientists’ recollections about their careers because memories are highly unreliable. Using a Pinker-style evolutionary logic I would argue the very opposite: memories are highly reliable, otherwise we would not have survived! The fact that memory is occasionally fallible does not mean it should not be trusted, any more than I should not trust my senses just because I occasionally hallucinate or enjoy visual illusions!

I remember well why I got into biology. I found physics too exact and sterile for my taste, and psychology too woolly. Biology was the right combination of precision and complexity. When I later got into human vision and neurology, I found that they were areas in which one could still do ‘Victorian’-style experiments that could have been done 100 years ago, but weren’t. You see, I have this perverse streak; I enjoy doing things which make my competitors say “That is so simple; why didn’t I think of it?”, or “It is too simple — it cannot be right”.

A third, more mundane reason is that I always enjoyed collecting fossils and sea shells, and through taxonomy and comparative anatomy I became hooked on evolution. Surprisingly, evolutionary thinking is rare in neurology, but it permeates every aspect of my work, including my early work on color and motion, and the visual perception of object shape from shading information, and my recent speculations on synesthesia, the evolution of metaphor and autism. For example, in 2000, I suggested that the synesthesia gene(s) survived because it makes some outliers in the population more ‘metaphorical’ and creative; there is a hidden agenda, as with the sickle-cell anemia gene.

Were you a good student? Yes and no; I was erratic. My performance in science was perfectly respectable, but in languages, and humanities in general, it was abysmal. But I also surprised all my classmates when a paper I sent to Nature when I was just 20 was accepted and published without revision!

Which paper had the most influence on you? My early interest in vision was sparked by Richard Gregory’s 1958 paper on why the world remains stable during eye movements (Eye movements and the stability of the visual world. Nature 182, 1214–1216) and his subsequent squabbles with Donald MacKay. (I think they were both right, by the way.) And Bela Julesz’s papers taught me that one could draw important conclusions from amazingly simple experiments.

Any scientific heroes? Michael Faraday and Thomas Huxley. Faraday moved a magnet within a coil of wire and linked two entire fields of physics: electricity and magnetism. From this I learnt that there is no correlation between the sophistication of methodology and technology and the importance of the result. And Huxley for his overall approach, for his wit and pugnacity and for bringing science to ‘the common people’ — his phrase — without dumbing it down.

Also the unknown Indian genius in the first millennium BC who combined the use of place value in number representations, base 10 (far more practical than the Sumerian 60), and, most importantly, zero as an independent number and place holder. This marks the dawn of mathematics.
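To see concretely why the placeholder matters: in positional base-10 notation it is zero that distinguishes, say, 507 from 57. A minimal sketch (my own illustration, not from the interview):

```python
def place_value_digits(n, base=10):
    """Decompose a non-negative integer into its positional digits,
    most significant first: 507 -> [5, 0, 7]."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % base)  # coefficient of the current power of `base`
        n //= base
    return digits[::-1]

# Without zero as an explicit placeholder, 5*10^2 + 0*10^1 + 7*10^0 (507)
# and 5*10^1 + 7*10^0 (57) would collapse into the same string of marks.
assert place_value_digits(507) == [5, 0, 7]
assert place_value_digits(57) == [5, 7]
```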

What about modern-day heroes? Richard Gregory, Norm Geschwind and Francis Crick, all of whom have had more sheer fun doing science than anyone else I know.

Do you think the peer review system works? I can do no better than quote Semir Zeki: “referees are swine but sometimes swine can lead you to the truffle”.

What is the best advice you have ever been given? All from Francis Crick, as I pointed out in a recent memorial at the Salk Institute (The astonishing Francis Crick. Perception 33, 1151–1154).

First, the importance of sheer intellectual daring — chutzpah. It is better to tackle ten fundamental problems and solve one than to tackle ten trivial ones and solve them all! Fundamental problems are not necessarily more inherently difficult than trivial ones. Nature is not conspiring against us to make fundamental problems more difficult.

Second, don’t become trapped in a small, specialised cul-de-sac just because you feel comfortable or your immediate peers reward you for it. Don’t strive for approval from the majority of your colleagues, but only for the respect of those few exceptional people at the top of your field whom you genuinely admire. And never listen to ‘experts’ — recall how both Erwin Chargaff and William Bragg strongly discouraged Crick from pursuing DNA!

Do you have a favorite conference? I don’t like any of them. I bet I could write a computer program that randomly strings together the key words from this year’s abstracts at the big neuroscience meetings and produces perfectly acceptable abstracts for next year. Of course you don’t want to be an ostrich, but the danger for young people is that they might get drawn into fashionable trends instead of tackling fundamental questions starting from first principles.
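A minimal sketch of the kind of keyword-shuffling generator he jokes about might look like this; the corpus fragments, word-length cutoff, and phrase sizes below are all made up purely for illustration:

```python
import random

def mock_abstract(last_years_abstracts, n_sentences=3, words_per_sentence=6):
    """Randomly string together 'key words' harvested from a corpus of
    abstracts, in the spirit of Ramachandran's quip."""
    # Harvest longer tokens as crude stand-ins for the field's buzzwords.
    tokens = [w.strip(".,;:") for text in last_years_abstracts for w in text.split()]
    buzzwords = [w.lower() for w in tokens if len(w) > 6]
    sentences = []
    for _ in range(n_sentences):
        phrase = " ".join(random.sample(buzzwords, k=words_per_sentence))
        sentences.append(phrase.capitalize() + ".")
    return " ".join(sentences)

# Hypothetical abstract fragments, invented for this example:
corpus = [
    "Attentional modulation of plasticity in primate extrastriate visual cortex",
    "Dopaminergic prediction-error signaling in prefrontal microcircuits during reward learning",
]
print(mock_abstract(corpus))
```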

Why the interest in popularizing science? I do it for three reasons. First, because it is fun. Second, we owe it to the tax-paying public whose patronage we enjoy. And third, as a reaction to the phase we went through when it was not considered the ‘proper’ thing for a researcher to do. There are many outstanding scientists who now “popularize” science in their spare time: Lewis Wolpert, John Barrow, Steve Pinker, Stephen Hawking, Roger Penrose, Edward Wilson, and Mike Gazzaniga, to name just a few.

So overall it is a good thing to do, although inevitably there will be a few envious colleagues who secretly wish to do the same but lack the talent. A greater danger is that one might inadvertently oversimplify some of the concepts and offend some experts. But as Lord Reith said, “There are some people whom it is one’s duty to offend”.

What are the key problems in your field that interest you? One is the neural basis of abstract thinking — how do we use neurons to juggle ideas sequentially in our heads? As when you say: A is bigger than B, B is bigger than C, therefore A must be bigger than C. Is our ability to make such a deduction about the transitivity of relations learned through induction, from observing that every time A>B and B>C, it always turned out empirically that A>C? And if so, is this ability to induce rules acquired through learning or hardwired through natural selection? Did transitivity evolve mainly to make beneficial social inferences — that chap A just beat up B, and B beat me up, so clearly A is stronger than me and I had better watch out for him — before it was adopted for more abstract thinking? If so, are the great apes capable of transitive inferences in a social situation but not for abstract properties?

Another big question concerns consciousness. Francis Crick and Christof Koch galvanized the scientific community by daring to suggest — correctly, I believe — that the nature of consciousness is a tractable scientific question. But I disagree with their specific view that there are “consciousness neurons” (I suspect they were just being provocative in proposing the existence of such neurons). I think that consciousness arises, not from individual neurons nor from the entire brain, but rather from small specialized circuits unique to — or very highly developed in — humans which allow the brain to create an explicit ‘metarepresentation’ of sensory representations created at earlier stages in the information-processing pathway (which we do share with lower primates). This is accompanied by a sense of ‘agency’ and self and qualia, of juggling symbols off-line entirely in your brain and, especially, by that feature which we consider uniquely human — knowing that you know or that you perceive (qualia), or that you don’t know.

These abilities are all closely interdependent in a way that we don’t yet clearly understand. What I’m calling the metarepresentation bears an uncanny resemblance to the ‘homunculus’ — but unlike the homunculus it does not lead to an endless regress. It bears the same relationship to the earlier sensory representations as the latter do to external world events. Its purpose is to create abbreviated representations of representations highlighting certain aspects to create tokens that can be used for internal juggling of symbols — ‘thinking’ — or for communicating ideas to others. Both language and one’s sense of ‘agency’ are involved in this in some way that we don’t yet clearly understand.

As I said in my Reith lectures, the brain structures involved seem to be the amygdala, the angular gyri (‘abstraction’ on the left, ‘body image’ on the right), the supramarginal gyrus and anterior cingulate cortex (‘will and want’) and Wernicke’s area (‘meaning’). Find out how these circuits work and how they interact and you will have figured out what it means to be a conscious human being — just as the structural logic of DNA dictates the functional logic of heredity. Anatomy is destiny, as Freud said.

Will neuroscience have to confront ethical issues? The big ethical dilemma will emerge 300 to 500 years from now, when a neuroscientist will be able to transplant your brain into a vat in a culture medium — I’m serious — and artificially create patterns of activity that will make you feel like you are living the lives of Francis Crick, Bill Gates, Hugh Hefner and Mark Spitz, while at the same time retaining your identity. Given a choice, would you rather pick this scenario or just be the ‘real’ you? Ironically most people I know, even scientists who are not religious, pick the latter on the grounds that it is ‘real’. Yet there is absolutely no rational justification for this choice, because in a sense you already are a ‘brain in a vat’ — a vat called the cranial vault, nurtured by cerebrospinal fluid and bombarded by photons. All I am asking you is which vat you prefer — and you pick the crummy one! There is a sense in which this is the ultimate ethical dilemma. I personally would choose the ‘real’ me, by the way, although I don’t know why. Maybe it is a sentimental attachment to my current reality or maybe I secretly believe there is something else, after all.

