Tuesday, February 24, 2009

Seed - That Voodoo That Scientists Do

Seed Magazine takes a look at science publishing on the web. Is it a good thing when findings are released in advance of an article's publication? I'd say yes, if it's done as open access. Others disagree.

The case here is a paper arguing that the statistical analyses behind many social neuroscience imaging studies are deeply flawed. The results were released by the author online long before the paper was formally published. The controversy swirls around the pre-release of the results, the language used in the paper, and other issues. Interesting stuff.

When findings are debated online, as with a yet-to-be-published paper that calls out the field of social neuroscience, who wins?


A debate conducted over the internet, based on findings released in advance of their formal publication, raises important questions for science. "The internet is full of wonderful information — but it is also full of disinformation and errors," says Ed Diener, editor of Perspectives on Psychological Science.

Few endeavors have been affected more by the tools and evolution of the internet than science publishing. Thousands of journals are available online, and an increasing number of science bloggers are acting as translators, often using lay language to convey complex findings previously read only by fellow experts within a discipline. Now, in the wake of a new paper challenging the methodology of a young field, there is a case study for how the internet is changing the way science itself is conducted.

That area of research is the burgeoning subfield of social neuroscience, which seeks to understand the neurobiological basis of social behavior. Using neuroimaging techniques such as fMRI, researchers correlate neural activity with social and behavioral measures in order to pinpoint areas of the brain associated with social decision making or emotional reactivity.

Late last year, Ed Vul, a graduate student at MIT working with neuroscientist Nancy Kanwisher and UCSD psychologist Hal Pashler, pre-released "Voodoo Correlations in Social Neuroscience" on his website. The journal Perspectives on Psychological Science accepted the paper but will not formally publish it until May.

The paper argues that the way many social neuroimaging researchers analyze their data is so deeply flawed that it calls much of the field's methodology into question. Specifically, Vul and his coauthors claim that many, if not most, social neuroscientists commit a nonindependence error in their research, in which the final measure (say, a correlation between behavior and brain activity in a certain region) is not independent of the selection criteria (how the researchers chose which brain regions to study), thus allowing noise to inflate their correlation estimates. Further, the authors found that the methods sections clearing peer review were woefully inadequate, often lacking the basic information about data analysis that others would need to evaluate the work. (Read Vul et al.'s entire in-press paper here.)
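
To make the statistical point concrete, here is a minimal simulation of the nonindependence error (my own sketch, not an analysis from Vul et al.'s paper), assuming Python with NumPy and hypothetical values for the sample size, voxel count, and selection threshold. Every simulated voxel is pure noise, yet selecting voxels by their observed correlation with behavior and then reporting the correlation in those same voxels produces an impressively large number.

import numpy as np

# Hypothetical illustration of the nonindependence (circular) error:
# every voxel here is pure noise, so the true brain-behavior correlation is zero.
rng = np.random.default_rng(0)
n_subjects = 20      # hypothetical sample size
n_voxels = 10_000    # hypothetical number of voxels examined
threshold = 0.5      # voxels are "selected" if their observed correlation exceeds this

behavior = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))

# Pearson correlation of each voxel's activity with the behavioral measure.
behavior_z = (behavior - behavior.mean()) / behavior.std()
voxels_z = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
corrs = voxels_z @ behavior_z / n_subjects

# Circular step: pick voxels *because* they correlate with behavior,
# then report the correlation measured in those same voxels.
selected = corrs > threshold
print("voxels selected:", int(selected.sum()))
print("mean correlation in selected voxels: %.2f" % corrs[selected].mean())
# The reported correlation comes out around 0.55 even though the true value is zero,
# because the selection step and the reported measure share the same noise.

The standard remedy, consistent with what Vul and his coauthors recommend, is to make the selection criteria independent of the reported measure, for example by choosing voxels on one half of the data and measuring the correlation on the other half.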

A number of online science writers and bloggers, including the widely read Sharon Begley of Newsweek, immediately wrote about the paper. The vast majority of the online responses to the paper were extremely positive, with Begley suggesting that "like so many researchers in the social sciences, psychologists have physics envy, and think that the illusory precision and solidity of neuroimaging can give their field some rigor." Vaughan Bell of Mind Hacks predicted that the paper "has the potential to really shake up the world of social cognitive neuroscience."

In the paper, Vul and his coauthors cite specific studies, many of which were published in leading journals such as Nature and Science, going so far as to call some of the studies "entirely spurious." Suddenly, a number of researchers found themselves under attack. The paper began filling neuroscientists' inboxes. Two groups of neuroimaging scientists, shocked by the speed with which the paper was being publicly disseminated, wrote rebuttals and posted them in the comment sections of several blogs, including Begley's. Vul followed up in kind, linking to a rebuttal of the rebuttals in the comment sections of several blogs. This kind of scientific discourse — which typically takes place in the front matter of scholarly journals or over the course of several conferences — developed at a breakneck pace, months before the findings were officially published, and amid the usual chaos of blog comments: inane banter, tangents, and valid opinions from the greater public.

Tor Wager, a Columbia University cognitive neuroscientist, whose work was not mentioned in Vul's paper but who helped prepare one of the rebuttals, says that it was important to respond both publicly and swiftly. "The public and the news media operate on sound bites, and the real scientific issues are quite complex." His complaints focus not only on the content of Vul's paper, but also on the authors' diction — specifically, the title, and its use of "voodoo."

"When the conversation gets complex — and with statistics it always is — many blog readers will form opinions based on very simple things," says Wager. "Like words such as 'voodoo correlations.' There's no reason to use such loaded words when making a statistical argument. The argument should be able to stand on its own."

Vul admits that his choice of language was intentional. "Some of the wording was probably a bit more provocative than was needed to draw people's attention," he says. Though reflective about the implications of the language, Vul is open about his goals. He and Kanwisher had previously written a similar paper discussing the statistical point on its own, and it went largely unnoticed. This time, he said, "we wanted to make the paper entertaining and to increase its readership. We wanted our paper to have some impact. If people don't know about these statistical problems, nothing will be done to fix them." As Vul points out, his approach seems to have worked: The paper is being widely discussed across the neuroscience community.


How much can we really learn from a single set of functional images of the human brain?

Since the debate erupted on the pages and in the comment sections of blogs and online newspapers, the editor of Perspectives, Ed Diener, in conversation with Vul and his rebutters, has decided to strike the word "voodoo" from the paper's title. Yet for Wager and the social neuroscientists, it feels like a hollow victory that has come too late, and they find themselves wondering why Diener and the reviewers approved the title in the first place. According to Wager, the paper has made grant administration officers more wary, and it has affected the peer review process: "Everyone knows about the voodoo thing now, even though it's getting taken out of the journal article," he says. "The idea is out there, and it's hard to correct." Vul has a different take. Once their paper had passed peer review, Vul and his colleagues argue, it was the public's right to read about it, and respond to it, however they chose — especially given that it sought to reveal flaws in publicly funded research that gets widespread media coverage.

More generally, Vul believes that the response to this article represents the power of 21st-century online media. The reaction to his findings signals changes in discourse, peer review, and potentially even funding. Most important, it has forced scientists to re-evaluate their methodologies, as even Wager's rebuttal concedes that the nonindependence error Vul discusses can in fact inflate correlations (much of the argument is over how much). Furthermore, online writers, who had been writing regularly about just the type of social neuroimaging papers Vul criticizes, are the same ones who have most thoroughly praised his paper. It shows, Vul says, that the online media "is doing their job of informing the public of both the exciting developments in the field, and also the reasons that one might want to doubt certain findings."

For Wager, the rapid pace and public forum of this debate has shown him what can go wrong when science is weighed in the open. He sees a direct connection between public opinion and science funding and support. "At the end of the day," Wager says, "funding has to be justified to Congress. Public opinion definitely does shape which trends in science we think are worth following. A paper like this can affect funding decisions and the course of science if it makes a public impact, even before its scientific validity has thoroughly been assessed over time. Using language like 'voodoo' to create an emotional effect is a huge problem because of how it affects public perception, even if the scientific arguments are not well founded."

The founding editor of Perspectives, Diener, admits that the early release of Vul's paper and the coverage it received have made for a difficult process. "As an editor for 16 years, I've never witnessed a paper so widely distributed before its publication," he says. "There are some very important questions that this raises for science. Most important, how can we guarantee quality in what is sent around? The internet is full of wonderful information — but it is also full of disinformation and errors. How can readers know whether what they are reading is solid information?"

Though Diener sees these as important problems, he has faith in the scientific community's ability to address them. Peer review, he suggests, may no longer be enough. He envisages a wider formal online-review process, in which scientists could respond to papers, with their comments weighted based on their own publication record. As for the Vul paper, Diener is keenly aware of both the conversation he has started and the criticism he has received, and he hopes that his journal's approach to the paper — several critical commentaries will be published alongside it — will be more revealing than damaging. "The Vul paper has stirred considerable debate, even heated debate. My goal is that, ultimately, more light is generated than heat by the set of articles. This will happen if we are able to pinpoint the best statistical practices, as well as optimal descriptions of their methods, and motivate scientists to use them. If this happens, my left prefrontal lobes will light up."
