Thursday, April 24, 2008

Psychology and Neuroscience in the News

Another batch of articles on psychology and neuroscience research that has made the news, or that simply caught my interest. Click the title link to see the whole article.

In no particular order. Up first, a series of articles from PhysOrg.

Human brain appears 'hard-wired' for hierarchy
“The processing of hierarchical information seems to be hard-wired, occurring even outside of an explicitly competitive environment, underscoring how important it is for us,” said Zink.

Key study findings included:

-- The area that signals an event’s importance, called the ventral striatum, responded to the prospect of a rise or fall in rank as much as it did to the monetary reward, confirming the high value accorded social status.

-- Just viewing a superior human “player,” as opposed to a perceived inferior one or a computer, activated an area near the front of the brain that appears to size people up – making interpersonal judgments and assessing social status. A circuit involving the mid-front part of the brain that processes the intentions and motives of others and emotion processing areas deep in the brain activated when the hierarchy became unstable, allowing for upward and downward mobility.

-- Performing better than the superior “player” activated areas higher and toward the front of the brain controlling action planning, while performing worse than an inferior “player” activated areas lower in the brain associated with emotional pain and frustration.

-- The more positive the mood experienced by participants while at the top of an unstable hierarchy, the stronger was activity in this emotional pain circuitry when they viewed an outcome that threatened to move them down in status. In other words, people who felt more joy when they won also felt more pain when they lost.

“Such activation of emotional pain circuitry may underlie a heightened risk for stress-related health problems among competitive individuals,” suggested Meyer-Lindenberg.
Does this mean that the anti-hierarchy "cultural creatives" are fighting against their own brains?

Praise = money?
Why are we nice to others? One answer provided by social psychologists is that it pays off. A social psychological theory holds that we do nice things for others to gain a good reputation or social approval, just as we work for a salary.

Consistent with this idea, a research team led by Norihiro Sadato, a professor at the Japanese National Institute for Physiological Sciences, NIPS (SEIRIKEN), and Keise Izuma, a graduate student at the Graduate University for Advanced Studies in Okazaki, Japan, now has neural evidence that perceiving one's good reputation in the eyes of others activates the striatum, the brain's reward system, in a similar manner to monetary reward.

The team reports their findings on April 24 in Neuron (Cell Press).

The research group conducted functional magnetic resonance imaging (fMRI) experiments on 19 people, using both monetary and social rewards. The acquisition of a good reputation robustly activated reward-related brain areas, notably the striatum, and these overlapped with the areas activated by monetary rewards. These results strongly suggest that social reward is processed in the striatum in the same way as monetary reward.

Considering the pivotal role a good reputation plays in social interactions, this study provides an important first step toward a neural explanation of our everyday social behaviors.
This is pretty basic, but the upshot -- which goes unstated -- is that we experience social praise as feeling good. We generally prefer to do things that feel good.


Study breaks ground in revealing how neurons generate movement

The findings, the researchers say, could inform efforts to develop neural prosthetics to treat paralysis and motor dysfunctions, such as those resulting from stroke. “The brain’s messages don’t reach the muscles in these conditions,” says Schoppik, “so it’s critical that the drive to these prosthetics reflect what the brain is trying to do to move muscles. Understanding how multiple neurons work together could influence the type of software created to drive these devices.”

The investigation of how neurons give rise to motor behaviors has been stymied until now, says Schoppik, by the difficulties inherent in studying more than one neuron in action at a time during the course of a behavior. In the current study, the scientists overcame this obstacle by working with macaque monkeys that had been trained to track a moving object with their eyes.

Basing their approach on two key pieces of information -- first, that when a neuron responds to a stimulus there is always a slight variation in its performance, a phenomenon that neuroscientists traditionally refer to as “noise,” and, second, that each attempt of the eye to pursue a moving target is also unique -- they proposed that some aspects of neural variation may reflect behavioral variation.

They used this inherent variability as a probe. Using a formula from financial securities market analysis that looks at how individual stocks behave at a given time within the context of fluctuations in the larger financial market, they explored how individual neurons would behave relative to their neighbors.

They compared deviations from the average spiking activity of single neurons with simultaneous deviations from the mean eye velocity. They also measured the degree to which variation was shared across pairs of concurrently active neurons.

The data demonstrated that individual neurons encode different aspects of behavior, controlling eye velocity fluctuations at particular moments during the course of eye movement, while the population of neurons collectively tiles the entire duration of the movement.

The analysis also revealed the strength of correlations in the eye movement predictions derived from pairs of simultaneously recorded neurons, and suggests, the researchers say, either that a small number of neurons are sufficient to drive the behavior at any given time or that many neurons operate collectively at each moment.

The finding, says Lisberger, underscores the importance of recording from more than one neuron at a time. “There is a lot that we can learn from how multiple neurons interact.”
The more we learn about how the brain processes sensation and information, the clearer it becomes that specific functions are seldom located in any one part of the brain. Memories are perhaps the best example: they are not stored in one place but distributed across a matrix of brain regions, each associated with a different aspect of the memory (scent, vision, sound, feeling tone, and so on).
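
For the quantitatively inclined, here is a rough sketch in Python of the kind of trial-by-trial variability analysis described above: compare each neuron's deviation from its average firing with the simultaneous deviation of eye velocity from its average, then measure how much variability pairs of neurons share. The data and numbers are invented for illustration; only the logic follows the write-up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: spike counts for three neurons and eye velocity on 200 pursuit trials,
# all measured in the same time window.
n_trials = 200
shared_drive = rng.normal(size=n_trials)  # common trial-to-trial fluctuation
spikes = np.vstack([w * shared_drive + rng.normal(scale=0.5, size=n_trials)
                    for w in (1.0, 0.6, 0.2)])  # three neurons, different couplings
eye_velocity = shared_drive + rng.normal(scale=0.5, size=n_trials)

# Deviations from the across-trial mean ("noise"), for neurons and behavior alike.
spike_dev = spikes - spikes.mean(axis=1, keepdims=True)
eye_dev = eye_velocity - eye_velocity.mean()

# Neuron-behavior correlation: how well each neuron's fluctuations
# track fluctuations in eye velocity on the same trials.
for i, dev in enumerate(spike_dev):
    r = np.corrcoef(dev, eye_dev)[0, 1]
    print(f"neuron {i}: correlation with eye-velocity deviations = {r:.2f}")

# Neuron-neuron correlations: how much variability pairs of neurons share.
for i in range(len(spike_dev)):
    for j in range(i + 1, len(spike_dev)):
        r = np.corrcoef(spike_dev[i], spike_dev[j])[0, 1]
        print(f"neurons {i} and {j}: shared variability = {r:.2f}")
```

The study's actual analysis borrows a fancier formula from finance, but the core move is the same: treat trial-to-trial "noise" as a signal about what each neuron contributes to behavior.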


Study Captures Brain's Activity Processing Speech
Rad, Lad. You might be able to hear the difference, but to many children and adults, these words sound exactly the same. The problem isn’t that they can’t hear the sounds. The problem is that they can’t tell them apart.
One in 20 kindergarten children has difficulty understanding speech, a problem unrelated to hearing loss or the ears themselves. That is because speech discrimination is a problem solved in the brain, not in the ear. How does the brain process speech sounds? Until now, very little was known.

Enter Dr. Michael Kilgard and Crystal Engineer. Kilgard is a neuroscientist in the School of Behavioral and Brain Sciences at the University of Texas at Dallas. His lab is one of the few in the world that studies how individual neurons process speech stimuli. Engineer is one of Professor Kilgard’s doctoral students. Together they conducted a study to provide the first-ever description of how speech sounds are processed by neurons in the brain. This insight may offer a new approach to treating children with speech processing disorders.

“Now that we’ve cracked the door on this important problem, we should be able to understand the neural basis of many common speech processing disorders and use this information to develop new treatments,” said Dr. Kilgard.

The study is part of Engineer’s dissertation, “Cortical activity patterns predict speech discrimination ability,” and will be published in the May issue of Nature Neuroscience, the top research journal in the field of neuroscience. The advance online publication of the study is now available on the Nature Neuroscience Web site.
Since we are on the topic of language, sort of, here is an interesting article that was in the New York Times a couple of days ago.

When Language Can Hold the Answer

The finding may not seem surprising, but it is fodder for one side in a traditional debate about language and perception, including the thinking that creates and names groups.

In stark form, the debate was: Does language shape what we perceive, a position associated with the late Benjamin Lee Whorf, or are our perceptions pure sensory impressions, immune to the arbitrary ways that language carves up the world?

The latest research changes the framework, perhaps the language of the debate, suggesting that language clearly affects some thinking as a special device added to an ancient mental skill set. Just as adding features to a cellphone or camera can backfire, language is not always helpful. For the most part, it enhances thinking. But it can trip us up, too.

The traditional subject of the tug of war over language and perception is color. Because languages divide the spectrum differently, researchers have asked whether language affected how people see color. English, for example, distinguishes blue from green. Most other languages do not make that distinction. Is it possible that only English speakers really see those colors as different?

Past investigations have had mixed results. Some experiments suggested that color terms influenced people in the moment of perception. Others suggested that the language effect kicked in only after some basic perception occurred.

The consensus was that different ways to label color probably did not affect the perception of color in any systematic way.

Last year, Lera Boroditsky and colleagues published a study in The Proceedings of the National Academy of Sciences showing that language could significantly affect how quickly perceptions of color are categorized. Russian and English speakers were asked to look at three blocks of color and say which two were the same.

Russian speakers must distinguish between lighter blues, or goluboy, and darker blues, siniy, while English speakers do not have to, using only “blue” for any shade. If the Russians were shown three blue squares with two goluboy and one siniy, or the other way around, they picked the two matching colors faster than if all three squares were shades from one blue group. English makes no fundamental distinction between shades of blue, and English speakers fared the same no matter the mix of shades.

In two further tests, subjects were asked to perform a second task (nonverbal in one case, verbal in the other) at the same time as the color-matching task. When the Russians simultaneously carried out a nonverbal task, they kept their color-matching advantage. But when they had to perform a verbal task at the same time as color-matching, their advantage began to disappear. The slowdown suggested that the speed of their reactions did not result just from a learned difference but that language was actively involved in identifying colors as the test was happening.

Two other recent studies also demonstrated an effect of language on color perception and provided a clue as to why previous experimental results have been inconclusive. In The Proceedings of the National Academy of Sciences, Dr. Paul Kay of the International Computer Science Institute at Berkeley and colleagues hypothesized that if language is dominant on the left side of the brain, it should affect color perception in the right visual field. (The right visual field is connected to the left side of the brain, and vice versa.)
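
To make the comparison concrete, here is a toy sketch in Python of the analysis logic behind the Boroditsky result: compute a "category advantage" (within-category reaction time minus cross-boundary reaction time) for each group and interference condition. All the numbers below are invented to mimic the reported pattern, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_rts(mean_ms, n=100):
    """Fake reaction times in milliseconds for one condition."""
    return rng.normal(loc=mean_ms, scale=60, size=n)

# Invented condition means, chosen to mimic the reported pattern: Russian speakers
# are faster on cross-boundary (goluboy vs. siniy) trials unless a verbal task
# interferes; English speakers show no such advantage.
conditions = {
    ("russian", "no interference"):     {"cross": 820, "within": 900},
    ("russian", "verbal interference"): {"cross": 940, "within": 950},
    ("english", "no interference"):     {"cross": 880, "within": 885},
}

for (group, task), means in conditions.items():
    cross = simulated_rts(means["cross"])
    within = simulated_rts(means["within"])
    advantage = within.mean() - cross.mean()  # positive = cross-boundary is faster
    print(f"{group:8s} | {task:20s} | category advantage = {advantage:6.1f} ms")
```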

Be sure to read the rest of this fascinating article.

Here is one more quote, featuring Steven Pinker's argument in opposition to some of this research.
This separation of language and thought is emphasized in a recent book by Steven Pinker of Harvard University, a skeptic of “neo-Whorfianism.” In “The Stuff of Thought: Language as a Window Into Human Nature,” Pinker explores the complicated ways that language and thought relate to each other. He cautions against confusing the “many ways in which language connects to thought.” “Language surely affects thought,” he writes, but he argues that there is little evidence for the claims that language can force people to have particular thoughts or make it impossible for them to think in certain ways. With numbers, the evidence for the importance of language is much clearer. It appears that the ability to count is necessary to deal with large, specific numbers, and the only way to count past a certain point is with language.
This reminds me of a discussion we once had in a lower-division philosophy course. The question: if a culture has no word that means "freedom," can its members understand the concept of freedom? Well, can they?

Moving on, sort of. This next article looks at neuroaesthetics. Yep, that's what they're calling it.

Neuroaesthetics, the latest trend in literary theory, provides a window on the academy's weaknesses.

Vladimir Nabokov said that a work of art shouldn't make you think, it should make you shiver. If the budding field of neuroaesthetics takes off in the way its adherents hope, we may soon be able to chart this shiver on a series of graphs, break its effects down into specific components, isolate the active ingredient of literary greatness, and – who knows? – synthetically produce it. I find the idea attractive. It would save us all the fuss and bother of writing and reading and thinking about writing and reading.

Until then, we will continue to have debates like the one raging over the pages of the most recent Times Literary Supplement. Back in September of 2006, novelist A.S. Byatt published an article arguing that neurobiology, and its analysis of synaptic chemical "grammar," might contribute valuable insight to literary criticism. That piece sparked much debate, and earlier this month, Raymond Tallis, a professor of Geriatric Medicine at the University of Manchester, responded with a wonderfully lively attack on her argument: Science has no place in the literary conversation, he says. Neurobiology, get lost. He uses the occasion to dismiss a new anthology called Evolutionary and Neurocognitive Approaches to Aesthetics, Creativity and the Arts as well. The movement seems to be spreading, and Tallis is having none of it.

Most literary scholars agree that in certain limited circumstances, science can be valuable to literary work, particularly when attempting to date plays or poems. Because of recent advances in stylometry, the statistical analysis of an author's style, we now realize that the lesser-known playwright Thomas Middleton wrote several key scenes in Shakespeare's Timon of Athens. The mathematical model has saved us from a bad error, and an entire Shakespearean play has to be reconsidered because of it.

But academics before Byatt who tried to inject a scientific flavour into the larger frame of literary criticism have always failed: Scientific interpretations of literature tend to be grossly reductionist. Even a theory as well established as Darwin's theory of evolution cannot shed much light on specific moments of the literary experience; it can only give a general sense of the vague and unconscious motivations behind a work of art. And there's another reason why Tallis probably shouldn't worry much about the new neurobiological movement. The market forces of academia will do away with it long before its intellectual silliness has a chance to become apparent.

Let's hope he's correct. I can't think of a better way to ruin literature than to let it be colonized by neuroscience. Aargh!
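
That said, the stylometry mentioned above is one place where counting has genuinely earned its keep in literary studies. The basic move is simple enough to sketch: profile each text by how often it uses common function words, then compare the profiles. This toy Python example uses invented snippets and a deliberately crude distance measure, not the actual method applied to Timon of Athens.

```python
from collections import Counter

# A handful of high-frequency function words; real stylometric studies use many more.
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "it", "with", "but"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Simple Manhattan distance between two frequency profiles."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Invented snippets standing in for known-author samples and a disputed scene.
author_a = "the king and the court and the matter of the crown in question"
author_b = "it is a thing to wonder at but a thing it is and no more"
disputed = "the court and the king and the matter of it in the end"

p_disputed = profile(disputed)
print("distance to author A:", round(distance(profile(author_a), p_disputed), 3))
print("distance to author B:", round(distance(profile(author_b), p_disputed), 3))
```

The smaller the distance, the more the disputed passage resembles that author's habitual style; real studies use far larger samples and more careful statistics.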

Finally, a bit more suggesting that dominant personalities are biased toward the vertical (or rather, that they see hierarchy more easily than they see translation).

"It's beneath me": How dominant personalities are biased towards the vertical

People who are more dominant are quicker at processing information that appears in the vertical dimension of space, psychologists have found. The result comes from an expanding field of psychology looking at the ways that personality and culture can affect how we interact with the world.

Sara Moeller and colleagues asked dozens of students to identify as quickly and as accurately as possible whether a letter on a computer screen was a "p" or a "q". On each trial, the letter appeared at random in one of four locations: to the left of, to the right of, above, or below the screen mid-point. The test was repeated hundreds of times, with the students' attention always brought back to the centre of the screen after each letter presentation.

Students with more dominant personalities (judged by their agreement with statements like "I impose my will on others") were far quicker at identifying letters that appeared above or below the midpoint than would be expected based on their speed at identifying letters appearing to the left or right. By contrast, speed of response to the horizontal letters was not associated with personality dominance.

A second experiment replicated the finding with more students, a different measure of personality dominance and with the letters' positioning following a predictable rather than a random pattern.

Our language is littered with dominance metaphors that refer to the vertical dimension: we speak of "upper" classes and of people "high" in authority. Past research has shown that priming participants to think of verticality speeds their response to stimuli that are related to dominance in some way. According to Moeller and her co-workers, the new finding goes a step further by showing that having a dominant personality can actually bias people towards the vertical dimension.
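
For anyone curious about the shape of the analysis, here is a minimal sketch in Python with simulated data: compute each student's vertical advantage (mean horizontal reaction time minus mean vertical reaction time) and correlate it with a dominance score. The data and effect size are made up; only the logic follows the write-up.

```python
import numpy as np

rng = np.random.default_rng(2)

n_students = 60
dominance = rng.normal(size=n_students)  # standardized dominance scores (invented)

# Simulated mean reaction times in milliseconds: more dominant students get a
# bigger speed-up for letters above or below fixation, per the reported finding.
horizontal_rt = 500 + rng.normal(scale=20, size=n_students)
vertical_rt = 500 - 10 * dominance + rng.normal(scale=20, size=n_students)

# Positive values mean the student is faster on vertical than horizontal trials.
vertical_advantage = horizontal_rt - vertical_rt

r = np.corrcoef(dominance, vertical_advantage)[0, 1]
print(f"correlation between dominance and vertical advantage: r = {r:.2f}")
```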
OK, then, that's a wrap.

