Showing posts with label brain scans. Show all posts

Friday, July 18, 2014

Neuroscience: "I Can Read Your Mind" (BBC Future)

From the BBC Future blog, this is an interesting article on new efforts in neuroscience to read minds (sort of), or at least to identify images received through sensory channels as well as those experienced in dreams. All of this is part of the European Human Brain Project.

This article looks at a little bit of the research currently being conducted and how it fits into bigger projects.

Neuroscience: ‘I can read your mind’

Rose Eveleth | 18 July 2014



What are you looking at? Scientist Jack Gallant can find out by decoding your thoughts, as Rose Eveleth discovers.


Jack Gallant can read your mind. Or at least, he can figure out what you’re seeing if you’re in his machine watching a movie he’s playing for you.

Gallant, a researcher at the University of California, Berkeley, has a brain decoding machine – a device that uses brain scanning to peer into people’s minds and reconstruct what they’re seeing. If mind-reading technology like this becomes more common, should we be concerned? Ask Gallant this question, and he gives a rather unexpected answer.

In Gallant’s experiment, people were shown movies while the team measured their brain patterns. An algorithm then used those signals to reconstruct a fuzzy, composite image, drawing on a massive database of YouTube videos. In other words, they took brain activity and turned it into pictures, revealing what a person was seeing.
For Gallant and his lab, this was just another demonstration of their technology. While his device has made plenty of headlines, he never actually set out to build a brain decoder. “It was one of the coolest things we ever did,” he says, “but it’s not science.” Gallant’s research focuses on figuring out how the visual system works, creating models of how the brain processes visual information. The brain reader was a side project, a coincidental offshoot of his actual scientific research. “It just so happens that if you build a really good model of the brain, then that turns out to be the best possible decoder.”
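The pipeline the article describes (an encoding model plus a large prior database of clips) can be caricatured in a few lines of Python. Everything below is invented for illustration: the "clips" are tiny feature vectors, the encoding weights are made up, and the real system works on fMRI voxel data at vastly larger scale.

```python
# Toy sketch of decoding-by-database: an encoding model predicts the brain
# response to each clip in a prior library, and the decoder reconstructs a
# stimulus by averaging the clips whose predicted responses best match the
# observed scan. All numbers here are hypothetical.

def predict_response(clip, weights):
    """Toy encoding model: a linear map from clip features to voxel responses."""
    return [sum(w * f for w, f in zip(row, clip)) for row in weights]

def similarity(a, b):
    """Negative squared distance: higher means a closer match."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def reconstruct(observed, database, weights, k=2):
    """Average the k database clips whose predicted responses best match."""
    ranked = sorted(
        database,
        key=lambda clip: similarity(predict_response(clip, weights), observed),
        reverse=True,
    )
    top = ranked[:k]
    return [sum(col) / k for col in zip(*top)]

# Two-feature "clips", three "voxels"; made-up encoding weights.
weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
database = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]

# A scan taken while viewing a clip that is NOT in the database.
observed = predict_response([0.95, 0.05], weights)
print(reconstruct(observed, database, weights))  # a blend of the two nearest clips
```

This is why Gallant's manatee came out as "whale": the decoder can only answer with blends of what its prior library already contains.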

Science or not, the machine stokes dystopian fears among those who worry that the government could one day tap into our innermost thoughts. This might seem like a silly fear, but Gallant says it’s not. “I actually agree that you should be afraid,” he says, “but you don’t have to be afraid for another 50 years.” It will take that long to solve two of the big challenges in brain-reading technology: portability and the strength of the signal.



Decoding brain signals from scans can reveal what somebody is looking at (SPL)
Right now, in order for Gallant to read your thoughts, you have to slide into a functional magnetic resonance imaging (fMRI) machine – a huge, expensive device that measures where blood is flowing in the brain. While fMRI is one of the best ways to measure brain activity, it is neither perfect nor portable: subjects in the scanner can’t move, and the machines themselves are costly and immobile. 
And while comparing the brain image and the movie image side by side makes their connection apparent, the image that Gallant’s algorithm builds from brain signals isn’t quite like peering through a window. The resolution of fMRI scans simply isn’t high enough to generate a clear picture. “Until somebody comes up with a method for measuring brain activity better than we can today, there won’t be many portable brain-decoding devices built for general use,” he says.

Dream reader

While Gallant isn’t working on trying to build any more decoding machines, others are. One team in Japan is currently trying to make a dream reader, using the same fMRI technique. But unlike in the movie experiment, where researchers know what the person is seeing and can confirm that image in the brain readouts, dreams are far trickier.

To train the system, the researchers put subjects in an MRI machine and let them slip into the liminal state between wakefulness and dreaming. They then woke the subjects and asked what they had seen. Using that information, they could correlate the reported dream images (everything from ice picks to keys to statues) with brain activity to train the algorithm.


Researchers are trying to decode dreams by studying brain activity (Thinkstock)
Using this database, the Japanese team was able to identify around 60% of the types of images dreamers saw. But there’s a key hurdle between these experiments and a universal dream decoder: each dreamer’s signals are different. Right now, the decoder has to be trained to each individual. So even if you were willing to sleep in an MRI machine, there’s no universal decoder that can reveal your nightly adventures.
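The per-subject limitation can be caricatured in a few lines. The "voxel layouts" below are entirely hypothetical, with differences between subjects modelled crudely as a permutation of voxels, but the lesson matches the article: a decoder trained on one brain misreads another until it is retrained.

```python
# Toy illustration of why there is no universal dream decoder: two subjects
# encode the same dream categories with different voxel layouts (modelled
# here as a swap of two voxels). All numbers are hypothetical.

def nearest(centroids, pattern):
    """Label of the stored pattern closest to the observed one."""
    return min(centroids,
               key=lambda lab: sum((p - c) ** 2
                                   for p, c in zip(pattern, centroids[lab])))

# Subject A's voxel patterns for two dream categories.
centroids_a = {"key": [0.9, 0.1, 0.2], "statue": [0.1, 0.9, 0.8]}

# Subject B encodes the same categories, but with voxels 0 and 1 swapped.
def as_subject_b(pattern):
    return [pattern[1], pattern[0], pattern[2]]

dream = centroids_a["key"]                        # subject A dreaming of a key
print(nearest(centroids_a, dream))                # decoder trained on A: "key"
print(nearest(centroids_a, as_subject_b(dream)))  # same dream from B: misread

# Retraining on subject B's own layout restores accuracy.
centroids_b = {lab: as_subject_b(pat) for lab, pat in centroids_a.items()}
print(nearest(centroids_b, as_subject_b(dream)))  # "key" again
```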

Even though he’s not working on one, Gallant knows what kind of brain decoder he might build, should he choose to. “My personal opinion is that if you wanted to build the best one, you would decode covert internal speech. If you could build something that takes internal speech and translates it into external speech,” he says, “then you could use it to control a car. It could be a universal translator.”

Inner speech

Some groups are edging closer to this goal. A team in the Netherlands, for instance, scanned the brains of bilingual speakers to detect the concepts each participant was forming, such as the idea of a horse or a cow, and correctly identified the meaning whether the subjects were thinking in English or Dutch. Like the dream decoder, however, the system needed to be trained on each individual, so it is a far cry from a universal translator.

If nothing else, the brain reader has sparked more widespread interest in Gallant’s work. “If I go up to someone on the street and tell them how their brains work, their eyes glaze over,” he says. When he shows them a video of their brains actually at work, they start to pay attention.


Tuesday, May 13, 2014

The Psychopath Within - All in the Mind

 
When James Fallon's The Psychopath Inside: A Neuroscientist's Personal Journey into the Dark Side of the Brain (2013) was published, he and the book received a lot of attention, mostly good (The Smithsonian, NPR's Science Friday, The Atlantic), but also some strong criticisms from people in the neuroscience community and elsewhere.

Even Publishers Weekly was not impressed: "Fallon’s memoir of realizations is emotionally flat (which is perhaps unfair criteria to judge a psychopath by), lazily assembled, and amounts to little more than a confessional booth’s enumeration of sins."

The Neurocritic wonders whether he completed the Psychopathy Checklist and scored over 30. Otherwise, are we to believe that he made this diagnosis from a simple PET scan of his brain?


Critics have also pointed to the fallacy of reverse inference, the confusion of correlation with causation, and confirmation bias.

Finally, Jordan Smoller, in the Los Angeles Review of Books, says, "If most psychopaths have a Y chromosome (that is, they are men), and I have a Y chromosome, then I’m likely to be a psychopath inside. If I told you this, you would easily see the error of my logic. But, surprisingly, the neuroscientist James Fallon bases his new book on just this kind of premise."

Be that as it may (criticism is often ignored when it attempts to make popular science conform to the rigors of "real" science), the book has legs. In last week's All in the Mind from Australia's Radio National, Fallon was the guest, along with Mark Dadds.

The Psychopath Within

Sunday 4 May 2014 | Lynne Malcolm
 

When neuroscientist James Fallon was studying the brain scans of serial killers, he noticed that his own scan looked remarkably like those of his psychopathic subjects. When you hear about some of his character traits and his seamy family background, it begins to make sense. Plus, can we prevent so-called 'callous and unemotional' kids from becoming psychopathic adults?

Guests 
  • Professor James Fallon, Professor of Psychiatry, Neuroscience, Human Behaviour & Neurobiology at the University of California, Irvine
  • Professor Mark Dadds, Professor of Clinical Child Psychology at the University of NSW and Director of the Child Behaviour Research Clinic at the University of NSW
Publications

The Psychopath Inside: A Neuroscientist's Personal Journey into the Dark Side of the Brain, by James Fallon


Tuesday, October 29, 2013

Am I a Psychopath? by James Fallon

James Fallon is the author of The Psychopath Inside: A Neuroscientist’s Personal Journey into the Dark Side of the Brain, a rather unusual first-person account from a researcher who studies the abnormal brain patterns associated with psychopathy. In his subjects, there are clear patterns:
The brains belonging to these killers shared a rare and alarming pattern of low brain function in certain parts of the frontal and temporal lobes—areas commonly associated with self-control and empathy. This makes sense for those with a history of inhuman violence, since the reduction of activity in these regions suggests a lack of a normal sense of moral reasoning and of the ability to inhibit their impulses.
However, having had his own brain scanned for a separate project on Alzheimer's disease, he discovers that one of the scans from his own family (who were serving as the control group) has a pattern nearly identical to the psychopaths' . . . and that it is his own scan.

Am I a Psychopath?

by James Fallon
Oct. 25, 2013


One October day in 2005, as the last vestiges of an Indian summer moved across Southern California, I was inputting some last-minute changes into a paper I was planning to submit to the Ohio State Journal of Criminal Law. I had titled it “Neuroanatomical Background to Understanding the Brain of a Young Psychopath” and based it on a long series of analyses I had performed, on and off for a decade, of individual brain scans of psychopathic murderers. These are some of the baddest dudes you can imagine—they’d done some heinous things over the years, things that would make you cringe if I didn’t have to adhere to confidentiality agreements and could tell you about them.

But their pasts weren’t the only things that separated them from the rest of us. As a neuroscientist well into the fourth decade of my career, I’d looked at a lot of brain scans over the years, and these had been different. The brains belonging to these killers shared a rare and alarming pattern of low brain function in certain parts of the frontal and temporal lobes—areas commonly associated with self-control and empathy. This makes sense for those with a history of inhuman violence, since the reduction of activity in these regions suggests a lack of a normal sense of moral reasoning and of the ability to inhibit their impulses. I explained this pattern in my paper, submitted it for publication, and turned my attention to the next project.

At the same time I’d been studying the murderers’ scans, my lab had been conducting a separate study exploring which genes, if any, are linked to Alzheimer’s disease. As part of our research, my colleagues and I had run genetic tests and taken brain scans of several Alzheimer’s patients as well as several members of my family, who were serving as the normal, control group.

On this same October day, I sat down to analyze my family’s scans and noticed that the last scan in the pile was strikingly odd. In fact it looked exactly like the most abnormal of the scans I had just been writing about, suggesting that the poor individual it belonged to was a psychopath—or at least shared an uncomfortable amount of traits with one. Not suspicious of any of my family members, I naturally assumed that their scans had somehow been mixed with the other pile on the table. I generally have a lot of research going on at one time, and even though I try to keep my work organized it was entirely possible for things to get mixed up. Unfortunately, since we were trying to keep the scans anonymous, we’d coded them to hide the names of the individuals they belonged to. To be sure I hadn’t mixed anything up, I asked our lab technician to break the blind code.

When I found out who the scan belonged to, I had to believe there was a mistake. In a fit of pique, I asked the technician to check the scanner and all the notes from the other imaging and database technicians.

But there had been no mistake.

The scan was mine.
___________________

Reprinted from The Psychopath Inside: A Neuroscientist’s Personal Journey into the Dark Side of the Brain, by James Fallon with permission of Current, a member of Penguin Group (USA) LLC, A Penguin Random House Company. Copyright (c) James Fallon, 2013.


About the Author

James H. Fallon, Ph.D., is professor of psychiatry and human behavior and emeritus professor of anatomy and neurobiology in the School of Medicine at the University of California, Irvine. His research interests include adult stem cells, chemical neuroanatomy and circuitry, higher brain functions, and brain imaging. He has served as Chairman of the University faculty and Chair and President of the School of Medicine faculty. He is a Sloan Scholar, Senior Fulbright Fellow, National Institutes of Health Career Awardee, and recipient of a range of honorary degrees and awards, and he sits on several corporate boards and national think tanks for science, biotechnology, the arts, and the U.S. military. He is a Subject Matter Expert in the field of "cognition and war" to the Pentagon's Joint Command. In addition to his neuroscience research, James Fallon has lectured and written on topics ranging from art and the brain, architecture and the brain, law and the brain, consciousness, creativity, the brain of the psychopathic murderer, and the Vietnam War. He has appeared on numerous documentaries, radio, and TV shows.

~ Author photo by Daniel Anderson/UC Irvine

RELATED SCIENCE FRIDAY LINK

Uncovering the Brain of a Psychopath




Sunday, October 27, 2013

Scientists Seek to Decode People's Thoughts, Dreams, and Even Their Intentions (Nature)


From Nature News, Kerri Smith offers an overview of the current research by neuroscientists who are seeking to decode human thoughts, dreams, and intentions. The trick is to convert neuronal electrical patterns of synapses, networks, and modules into something that appears coherent to us.

Full Citation:
Smith, K. (2013, Oct 24). Brain decoding: Reading minds. Nature, 502(7472): 428–430. doi:10.1038/502428a

Brain decoding: Reading minds

By scanning blobs of brain activity, scientists may be able to decode people's thoughts, their dreams and even their intentions. 

Kerri Smith
23 October 2013


Cracking the code - See how scientists decode vision, dreamscapes and hidden mental states from brain activity.


Jack Gallant perches on the edge of a swivel chair in his lab at the University of California, Berkeley, fixated on the screen of a computer that is trying to decode someone's thoughts.

On the left-hand side of the screen is a reel of film clips that Gallant showed to a study participant during a brain scan. And on the right side of the screen, the computer program uses only the details of that scan to guess what the participant was watching at the time.

Anne Hathaway's face appears in a clip from the film Bride Wars, engaged in heated conversation with Kate Hudson. The algorithm confidently labels them with the words 'woman' and 'talk', in large type. Another clip appears — an underwater scene from a wildlife documentary. The program struggles, and eventually offers 'whale' and 'swim' in a small, tentative font.

“This is a manatee, but it doesn't know what that is,” says Gallant, talking about the program as one might a recalcitrant student. They had trained the program, he explains, by showing it patterns of brain activity elicited by a range of images and film clips. His program had encountered large aquatic mammals before, but never a manatee.

Groups around the world are using techniques like these to try to decode brain scans and decipher what people are seeing, hearing and feeling, as well as what they remember or even dream about.
Media reports have suggested that such techniques bring mind-reading “from the realms of fantasy to fact”, and “could influence the way we do just about everything”. The Economist in London even cautioned its readers to “be afraid”, and speculated on how long it will be until scientists promise telepathy through brain scans. Although companies are starting to pursue brain decoding for a few applications, such as market research and lie detection, scientists are far more interested in using this process to learn about the brain itself. Gallant's group and others are trying to find out what underlies those different brain patterns and want to work out the codes and algorithms the brain uses to make sense of the world around it. They hope that these techniques can tell them about the basic principles governing brain organization and how it encodes memories, behaviour and emotion (see 'Decoding for dummies').

Applying their techniques beyond the encoding of pictures and movies will require a vast leap in complexity. “I don't do vision because it's the most interesting part of the brain,” says Gallant. “I do it because it's the easiest part of the brain. It's the part of the brain I have a hope of solving before I'm dead.” But in theory, he says, “you can do basically anything with this”.

Beyond blobology

Brain decoding took off about a decade ago [1], when neuroscientists realized that there was a lot of untapped information in the brain scans they were producing using functional magnetic resonance imaging (fMRI). That technique measures brain activity by identifying areas that are being fed oxygenated blood, which light up as coloured blobs in the scans. To analyse activity patterns, the brain is segmented into little boxes called voxels — the three-dimensional equivalent of pixels — and researchers typically look to see which voxels respond most strongly to a stimulus, such as seeing a face. By discarding data from the voxels that respond weakly, they conclude which areas are processing faces. 
Decoding techniques interrogate more of the information in the brain scan. Rather than asking which brain regions respond most strongly to faces, they use both strong and weak responses to identify more subtle patterns of activity. Early studies of this sort proved, for example, that objects are encoded not just by one small very active area, but by a much more distributed array. 
These recordings are fed into a 'pattern classifier', a computer algorithm that learns the patterns associated with each picture or concept. Once the program has seen enough samples, it can start to deduce what the person is looking at or thinking about. This goes beyond mapping blobs in the brain. Further attention to these patterns can take researchers from asking simple 'where in the brain' questions to testing hypotheses about the nature of psychological processes — asking questions about the strength and distribution of memories, for example, that have been wrangled over for years. Russell Poldrack, an fMRI specialist at the University of Texas at Austin, says that decoding allows researchers to test existing theories from psychology that predict how people's brains perform tasks. “There are lots of ways that go beyond blobology,” he says. 
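A nearest-centroid classifier, one of the simplest things that could be called a "pattern classifier", illustrates the idea sketched above. The voxel patterns and stimulus labels below are invented for the sketch; real decoders work on thousands of voxels and typically use more sophisticated learners, but the principle is the same: every voxel's response, weak ones included, contributes to the decision.

```python
# Minimal pattern classifier: learn the average voxel pattern (centroid) for
# each stimulus category, then label a new scan by its nearest centroid.
# All voxel values here are hypothetical.

from collections import defaultdict

def train(samples):
    """samples: list of (voxel_pattern, label). Returns per-label centroids."""
    grouped = defaultdict(list)
    for pattern, label in samples:
        grouped[label].append(pattern)
    return {label: [sum(vals) / len(vals) for vals in zip(*patterns)]
            for label, patterns in grouped.items()}

def classify(centroids, pattern):
    """Label of the centroid closest (squared distance) to the pattern."""
    return min(centroids,
               key=lambda lab: sum((p - c) ** 2
                                   for p, c in zip(pattern, centroids[lab])))

# Four-voxel toy patterns for two stimulus categories.
training = [
    ([0.9, 0.2, 0.1, 0.4], "face"),
    ([1.0, 0.3, 0.0, 0.5], "face"),
    ([0.1, 0.8, 0.9, 0.2], "house"),
    ([0.2, 0.9, 1.0, 0.3], "house"),
]
centroids = train(training)
print(classify(centroids, [0.95, 0.25, 0.05, 0.45]))  # → face
```

Note how the second and fourth voxels, which respond only weakly to faces, still shape the centroids: that is the step beyond "blobology" that the article describes.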
In early studies [1, 2], scientists were able to show that they could get enough information from these patterns to tell what category of object someone was looking at — scissors, bottles and shoes, for example. “We were quite surprised it worked as well as it did,” says Jim Haxby at Dartmouth College in New Hampshire, who led the first decoding study in 2001. 
Soon after, two other teams independently used it to confirm fundamental principles of human brain organization. It was known from studies using electrodes implanted into monkey and cat brains that many visual areas react strongly to the orientation of edges, combining them to build pictures of the world. In the human brain, these edge-loving regions are too small to be seen with conventional fMRI techniques. But by applying decoding methods to fMRI data, John-Dylan Haynes and Geraint Rees, both at the time at University College London, and Yukiyasu Kamitani at ATR Computational Neuroscience Laboratories in Kyoto, Japan, with Frank Tong, now at Vanderbilt University in Nashville, Tennessee, demonstrated in 2005 that pictures of edges also triggered very specific patterns of activity in humans [3, 4]. The researchers showed volunteers lines in various orientations — and the different voxel mosaics told the team which orientation the person was looking at. 
 
Edges became complex pictures in 2008, when Gallant's team developed a decoder that could identify which of 120 pictures a subject was viewing — a much bigger challenge than inferring what general category an image belongs to, or deciphering edges. They then went a step further, developing a decoder that could produce primitive-looking movies of what the participant was viewing based on brain activity [5]. 
From around 2006, researchers have been developing decoders for various tasks: for visual imagery, in which participants imagine a scene; for working memory, in which they hold a fact or figure in mind; and for intention, often tested as the decision whether to add or subtract two numbers. The last is a harder problem than decoding the visual system, says Haynes, now at the Bernstein Centre for Computational Neuroscience in Berlin. “There are so many different intentions — how do we categorize them?” Pictures can be grouped by colour or content, but the rules that govern intentions are not as easy to establish. 
Gallant's lab has preliminary indications of just how difficult it will be. Using a first-person, combat-themed video game called Counterstrike, the researchers tried to see if they could decode an intention to go left or right, chase an enemy or fire a gun. They could just about decode an intention to move around; but everything else in the fMRI data was swamped by the signal from participants' emotions when they were being fired at or killed in the game. These signals — especially death, says Gallant — overrode any fine-grained information about intention. 
The same is true for dreams. Kamitani and his team published their attempts at dream decoding in Science earlier this year [6]. They let participants fall asleep in the scanner and then woke them periodically, asking them to recall what they had seen. The team tried first to reconstruct the actual visual information in dreams, but eventually resorted to word categories. Their program was able to predict with 60% accuracy what categories of objects, such as cars, text, men or women, featured in people's dreams. 
The subjective nature of dreaming makes it a challenge to extract further information, says Kamitani. “When I think of my dream contents, I have the feeling I'm seeing something,” he says. But dreams may engage more than just the brain's visual realm, and involve areas for which it's harder to build reliable models. 

Reverse engineering 

Decoding relies on the fact that correlations can be established between brain activity and the outside world. And simply identifying these correlations is sufficient if all you want to do, for example, is use a signal from the brain to command a robotic hand (see Nature 497, 176–178; 2013). But Gallant and others want to do more; they want to work back to find out how the brain organizes and stores information in the first place — to crack the complex codes the brain uses. 
That won't be easy, says Gallant. Each brain area takes information from a network of others and combines it, possibly changing the way it is represented. Neuroscientists must work out post hoc what kind of transformations take place at which points. Unlike other engineering projects, the brain was not put together using principles that necessarily make sense to human minds and mathematical models. “We're not designing the brain — the brain is given to us and we have to figure out how it works,” says Gallant. “We don't really have any math for modelling these kinds of systems.” Even if there were enough data available about the contents of each brain area, there probably would not be a ready set of equations to describe them, their relationships, and the ways they change over time. 
Computational neuroscientist Nikolaus Kriegeskorte at the MRC Cognition and Brain Sciences Unit in Cambridge, UK, says that even understanding how visual information is encoded is tricky — despite the visual system being the best-understood part of the brain (see Nature 502, 156–158; 2013). “Vision is one of the hard problems of artificial intelligence. We thought it would be easier than playing chess or proving theorems,” he says. But there's a lot to get to grips with: how bunches of neurons represent something like a face; how that information moves between areas in the visual system; and how the neural code representing a face changes as it does so. Building a model from the bottom up, neuron by neuron, is too complicated — “there's not enough resources or time to do it this way”, says Kriegeskorte. So his team is comparing existing models of vision to brain data, to see what fits best. 

Real world 

Devising a decoding model that can generalize across brains, and even for the same brain across time, is a complex problem. Decoders are generally built on individual brains, unless they're computing something relatively simple such as a binary choice — whether someone was looking at picture A or B. But several groups are now working on building one-size-fits-all models. “Everyone's brain is a little bit different,” says Haxby, who is leading one such effort. At the moment, he says, “you just can't line up these patterns of activity well enough”. 
Standardization is likely to be necessary for many of the talked-about applications of brain decoding — those that would involve reading someone's hidden or unconscious thoughts. And although such applications are not yet possible, companies are taking notice. Haynes says that he was recently approached by a representative from the car company Daimler asking whether one could decode hidden consumer preferences of test subjects for market research. In principle it could work, he says, but the current methods cannot work out which of, say, 30 different products someone likes best. Marketers, he says, should stick to what they know for now. “I'm pretty sure that with traditional market research techniques you're going to be much better off.” 
Companies looking to serve law enforcement have also taken notice. No Lie MRI in San Diego, California, for example, is using techniques related to decoding to claim that it can use a brain scan to distinguish a lie from a truth. Law scholar Hank Greely at Stanford University in California has written in the Oxford Handbook of Neuroethics (Oxford University Press, 2011) that the legal system could benefit from better ways of detecting lies, checking the reliability of memories, or even revealing the biases of jurors and judges. Some ethicists have argued that privacy laws should protect a person's inner thoughts and desires as private, but Julian Savulescu, a neuroethicist at the University of Oxford, UK, sees no problem in principle with deploying decoding technologies. “People have a fear of it, but if it's used in the right way it's enormously liberating.” Brain data, he says, are no different from other types of evidence. “I don't see why we should privilege people's thoughts over their words,” he says. 
Haynes has been working on a study in which participants tour several virtual-reality houses, and then have their brains scanned while they tour another selection. Preliminary results suggest that the team can identify which houses their subjects had been to before. The implication is that such a technique might reveal whether a suspect had visited the scene of a crime before. The results are not yet published, and Haynes is quick to point out the limitations to using such a technique in law enforcement. What if a person has been in the building, but doesn't remember? Or what if they visited a week before the crime took place? Suspects may even be able to fool the scanner. “You don't know how people react with countermeasures,” he says. 
Other scientists also dismiss the implication that buried memories could be reliably uncovered through decoding. Apart from anything else, you need a 15-tonne, US$3-million fMRI machine and a person willing to lie very still inside it and actively think secret thoughts. Even then, says Gallant, “just because the information is in someone's head doesn't mean it's accurate”. Right now, psychologists have more reliable, cheaper ways of getting at people's thoughts. “At the moment, the best way to find out what someone is going to do,” says Haynes, “is to ask them.”  

References 

1. Haxby, J. V. et al. Science 293, 2425–2430 (2001). Article 
2. Cox, D. D. & Savoy, R. L. NeuroImage 19, 261–270 (2003). Article 
3. Haynes, J.-D. & Rees, G. Nature Neurosci. 8, 686–691 (2005). Article 
4. Kamitani, Y. & Tong, F. Nature Neurosci. 8, 679–685 (2005). Article 
5. Nishimoto, S. et al. Curr. Biol. 21, 1641–1646 (2011). Article 
6. Horikawa, T., Tamaki, M., Miyawaki, Y. & Kamitani, Y. Science 340, 639–642 (2013). Article 

Related stories and links from Nature.com

Friday, June 28, 2013

Brainwashed: Seductive Appeal of Mindless Neuroscience (FORA.tv)


Brainwashed: The Seductive Appeal of Mindless Neuroscience (2013) has received excellent reviews from a lot of major publications, including the Wall Street Journal and New York Times (from David Brooks, who moderates the discussion below).

Here are a couple of the blurbs:
The New Scientist: “The intrepid outsider needs expert guidance through this rocky terrain – and there's no better place to start than Brainwashed by Sally Satel and Scott O. Lilienfeld. Satel, a practising psychiatrist, and Lilienfeld, a clinical psychologist, are terrific sherpas. They are clear-sighted, considered and forgiving of the novice's ignorance” 
Nature: “Satel and Lilienfeld provide an engaging overview of the technical and conceptual factors that complicate the interpretation of brain scans obtained by functional magnetic resonance imaging and other techniques…. Brainwashed offers much to bolster popular understanding of what brain imaging can and cannot achieve.”
And here is the publisher's summary of the book:
What can’t neuroscience tell us about ourselves? Since fMRI—functional magnetic resonance imaging—was introduced in the early 1990s, brain scans have been used to help politicians understand and manipulate voters, determine guilt in court cases, and make sense of everything from musical aptitude to romantic love. But although brain scans and other neurotechnologies have provided groundbreaking insights into the workings of the human brain, the increasingly fashionable idea that they are the most important means of answering the enduring mysteries of psychology is misguided—and potentially dangerous. 
In Brainwashed, psychiatrist and AEI scholar Sally Satel and psychologist Scott O. Lilienfeld reveal how many of the real-world applications of human neuroscience gloss over its limitations and intricacies, at times obscuring—rather than clarifying—the myriad factors that shape our behavior and identities. Brain scans, Satel and Lilienfeld show, are useful but often ambiguous representations of a highly complex system. Each region of the brain participates in a host of experiences and interacts with other regions, so seeing one area light up on an fMRI in response to a stimulus doesn’t automatically indicate a particular sensation or capture the higher cognitive functions that come from those interactions. The narrow focus on the brain’s physical processes also assumes that our subjective experiences can be explained away by biology alone. As Satel and Lilienfeld explain, this “neurocentric” view of the mind risks undermining our most deeply held ideas about selfhood, free will, and personal responsibility, putting us at risk of making harmful mistakes, whether in the courtroom, interrogation room, or addiction treatment clinic.

A provocative account of our obsession with neuroscience, Brainwashed brilliantly illuminates what contemporary neuroscience and brain imaging can and cannot tell us about ourselves, providing a much-needed reminder about the many factors that make us who we are.
The fact that one of the authors of this book has been writing books for the American Enterprise Institute (a conservative policy organization) and has co-written a book with the conservative Christina Hoff Sommers makes me a little skeptical about ulterior motives for this book.

This is why I am skeptical, from the above text about the book:
As Satel and Lilienfeld explain, this “neurocentric” view of the mind risks undermining our most deeply held ideas about selfhood, free will, and personal responsibility, putting us at risk of making harmful mistakes, whether in the courtroom, interrogation room, or addiction treatment clinic.
Personal responsibility and free will are essential to the conservative agenda, especially in the legal realm. We can't have people being acquitted of crimes due to brain defects resulting from abuse, neglect, or other traumas. We can't stop putting addicts in jail simply because they had little control over their tendency toward addiction and the environmental factors that made drugs seem like a useful coping strategy.

Hell, if we took those things into account, our prisons would be empty and the legal system . . . yadda, yadda, yadda.

Brainwashed: Seductive Appeal of Mindless Neuroscience
from American Enterprise Institute on FORA.tv


Brainwashed: Seductive Appeal of Mindless Neuroscience

Partner: American Enterprise Institute
Location: American Enterprise Institute
Washington, D.C.
Event Date: 06.17.13

Summary


"Brainwashed: The Seductive Appeal of Mindless Neuroscience" (Basic Books, June 2013), by psychiatrist and AEI scholar Sally Satel and Emory University psychologist Scott Lilienfeld, follows the migration of brain science - and brain imaging in particular - out of the lab and into the public sphere.

Join New York Times columnist David Brooks as he engages the authors in a discussion of popular neuroscience (both the mindless and the mindful), of biological explanations of human behavior and their implications, and of the centrality of the concept of the mind in an age of neuroscience. Books will be available for purchase at the event.

Speakers


David Brooks has been an op-ed columnist for The New York Times since 2003. Previously, he was an editor at The Wall Street Journal, a senior editor at The Weekly Standard, and a contributing editor at Newsweek and The Atlantic. Currently a commentator on PBS’s “The NewsHour with Jim Lehrer,” Brooks is also the author, most recently, of The Social Animal: The Hidden Sources of Love, Character, and Achievement. His earlier books are Bobos in Paradise: The New Upper Class and How They Got There and On Paradise Drive: How We Live Now (And Always Have) in the Future Tense. He has contributed essays and articles to many publications, including The New Yorker, Forbes, The Public Interest, The New Republic, and Commentary. He is a frequent commentator on NPR, CNN’s “Late Edition,” and “The Diane Rehm Show.”

Scott Lilienfeld is a clinical psychologist and Professor of Psychology at Emory University in Atlanta. Scott earned his bachelor's degree in psychology from Cornell University and his Ph.D. from the University of Minnesota. His principal areas of research are personality disorders, psychiatric classification and diagnosis, evidence-based practices in psychology, and the challenges posed by pseudo-science to clinical psychology. Scott received the 1998 David Shakow Award for Early Career Contributions to Clinical Psychology, is a Fellow of the Association for Psychological Science, and is a past president of the Society for a Science of Clinical Psychology. He is the co-author of Science and Pseudoscience in Clinical Psychology and Psychology: From Inquiry to Understanding.

Sally Satel, M.D., a practicing psychiatrist and lecturer at the Yale University School of Medicine, examines mental health policy as well as political trends in medicine. Her publications include PC, M.D.: How Political Correctness Is Corrupting Medicine (Basic Books, 2001); The Health Disparities Myth (AEI Press, 2006); When Altruism Isn't Enough: The Case for Compensating Organ Donors (AEI Press, 2009); and One Nation under Therapy (St. Martin's Press, 2005), co-authored with Christina Hoff Sommers.

Friday, June 14, 2013

Lauren Kirchner - Brain-Scan Lie Detectors Just Don’t Work

Well, imagine that. Traditional lie detectors are not admissible in court, and brain-scan versions turn out to be inaccurate as well; both are likely beatable by any sociopath.

This article comes from Pacific Standard.

Brain-Scan Lie Detectors Just Don’t Work

Perpetrators can suppress “crime memories,” study finds.

June 10, 2013 • By Lauren Kirchner
Dr. Zara Bergstrom and Dr. Jon Simons examine the electrical brain activity of another of the paper's authors, Marie Buda. (PHOTO: UNIVERSITY OF CAMBRIDGE'S DEPARTMENT OF PSYCHOLOGY)
It sounds just like something out of a sci-fi police procedural show—and not necessarily a good one.

In a darkened room, a scientist in a white lab coat attaches a web of suction cups, wires, and electrodes to a crime suspect’s head. The suspect doesn’t blink as he tells the detectives interrogating him, “I didn’t do it.”

The grizzled head detective bangs his fist on the table. “We know you did!” he yells.

The scientist checks his machine. “Either he’s telling the truth … or he’s actively suppressing his memories of the crime,” says the scientist.

“Dammit,” says the detective, shaking his head, “this one’s good.” 
But it isn’t fiction. Some law enforcement agencies really are using brain-scan lie detectors, and it really is possible to beat them, new research shows. 
The polygraph, the more familiar lie detection method, works by “simultaneously recording changes in several physiological variables such as blood pressure, pulse rate, respiration, electrodermal activity,” according to a very intriguing group called the International League of Polygraph Examiners. Despite what the League (and television) might have you believe, polygraph results are generally believed to be unreliable, and are only admitted as evidence in U.S. courts in very specific circumstances. 
The brain-scan “guilt detection test” is a newer technology that supposedly measures electrical activity in the brain, which would be triggered by specific memories during an interrogation. “When presented with reminders of their crime, it was previously assumed that their brain would automatically and uncontrollably recognize these details,” explains a new study published last week by psychologists at the University of Cambridge. “Using scans of the brain’s electrical activity, this recognition would be observable, recording a ‘guilty’ response.” 
Law enforcement agencies in Japan and India have started to use this tool to solve crimes, and even to try suspects in court. These types of tests have not caught on with law enforcement in the U.S., though they are commercially available here. That’s probably a good thing; the researchers of this study found that “some people can intentionally and voluntarily suppress unwanted memories.” 
The experiment was pretty straightforward, and the participants were no criminal masterminds. Ordinary people were asked to stage mock crimes, and then were asked to “suppress” their “crime memories,” all while having their brains scanned for electric activity. Most people could do it, the researchers found: “a significant proportion of people managed to reduce their brain’s recognition response and appear innocent.” 
Not everyone could, though. “Interestingly, not everyone was able to suppress their memories of the crime well enough to beat the system,” said Dr. Michael Anderson, of the Medical Research Council Cognition and Brain Sciences Unit in Cambridge. “Clearly, more research is needed to identify why some people were much more effective than others.” 
Separate studies on guilt-detection scans, conducted by cognitive neuroscientists at Stanford University, had similar findings. Anthony Wagner at Stanford’s Memory Lab had study participants take thousands of digital photos of their daily activities for several weeks. Wagner and his colleagues then showed sequences of photos to the participants, and measured their brain activity while the participants saw both familiar and unfamiliar photos. 
The researchers could identify which photos were familiar to the participants and which ones were not, with 91 percent accuracy, Wagner said. However, when the researchers told the participants to try to actively suppress their recognition of the photos that were theirs—to “try to beat the system”—the researchers had much less success. 
Scientists still don’t know how this “suppression” actually works; like so many questions about the inner workings of the human brain, it remains a mystery. But the fact that so many test subjects could, somehow, do it on command, led the authors of both the Cambridge and Stanford studies to come to the same conclusions. 
In short, brain-scan guilt-detection type tests are beatable, their results are unreliable, and they shouldn’t be used as evidence in court. Except on television.