
Saturday, June 28, 2014

The Story Behind Facebook's Secret Mood Manipulation Experiment - Even the Study Editor Thought It Was Creepy

Do you remember signing a consent to be a part of a research experiment conducted by making specific alterations to your news feed? Yeah, me neither. That's because we were never given the option, nor were we informed that we were taking part in an experiment designed to see how changes in the news feed we see on Facebook might change our moods.

NO research should be conducted on human subjects without informed consent. To do so is not only unethical, but it would also violate the standards of every major organization's or university's Institutional Review Board, the independent ethics committee that rules on the ethics of any proposed experiment involving human subjects.

If I were conducting the research exposed in these two articles from The Atlantic, I would need informed consent from EVERY person involved as a subject, AND I would need to offer them some form of intervention in case the mood alterations generated by the experimental conditions became overwhelming or otherwise disturbing.

"Creepy" is an understatement.

Even the Editor of Facebook's Mood Study Thought It Was Creepy

"It's ethically okay from the regulations perspective, but ethics are kind of social decisions."

Adrienne LaFrance | Jun 28 2014



Catching a glimpse of the puppet masters who play with the data trails we leave online is always disorienting. And yet there's something new-level creepy about a recent study that shows Facebook manipulated what users saw when they logged into the site as a way to study how it would affect their moods.

But why? Psychologists do all kinds of mood research and behavior studies. What made this study, which quickly stirred outrage, feel so wrong?

Even Susan Fiske, the professor of psychology at Princeton University who edited the study for the Proceedings of the National Academy of Sciences, had doubts when the research first crossed her desk.

"I was concerned," she told me in a phone interview, "until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time... I understand why people have concerns. I think their beef is with Facebook, really, not the research."

Institutional review boards, or IRBs, are the entities that review researchers' conduct in experiments that involve humans. Universities and other institutions that get federal funding are required to have IRBs, which often rely on standards like the Common Rule—one of the main ethical guideposts that says research subjects must give their consent before they're included in an experiment. "People are supposed to be, under most circumstances, told that they're going to be participants in research and then agree to it and have the option not to agree to it without penalty," Fiske said. (I emailed the study's authors on Saturday afternoon to request interviews. Author Jamie Guillory responded but declined to talk, citing Facebook's request to handle reporters' questions directly.)

But Facebook, as a private company, doesn't have to adhere to the same ethical standards as federal agencies and universities, Fiske said.

"A lot of the regulation of research ethics hinges on government supported research, and of course Facebook's research is not government supported, so they're not obligated by any laws or regulations to abide by the standards," she said. "But I have to say that many universities and research institutions and even for-profit companies use the Common Rule as a guideline anyway. It's voluntary. You could imagine if you were a drug company, you'd want to be able to say you'd done the research ethically because the backlash would be just huge otherwise."

The backlash, in this case, seems tied directly to the sense that Facebook manipulated people—used them as guinea pigs—without their knowledge, and in a setting where that kind of manipulation feels intimate. There's also a contextual question. People may understand by now that their News Feed appears differently based on what they click—this is how targeted advertising works—but the idea that Facebook is altering what you see to find out if it can make you feel happy or sad seems in some ways cruel.

Mood researchers have been toying with human emotion since long before the Internet age, but it's hard to think of a comparable experiment offline. It might be different, Fiske suggests, if a person were to find a dime in a public phone booth, then later learn that a researcher had left the money there to see what might happen to it.

"But if you find money on the street and it makes you feel cheerful, the idea that someone placed it there, it's not as personal," she said. "I think part of what's disturbing for some people about this particular research is you think of your News Feed as something personal. I had not seen before, personally, something in which the researchers had the cooperation of Facebook to manipulate people... Who knows what other research they're doing."

Fiske still isn't sure whether the research, which she calls "inventive and useful," crossed a line. "I don't think the originality of the research should be lost," she said. "So, I think it's an open ethical question. It's ethically okay from the regulations perspective, but ethics are kind of social decisions. There's not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn't have been done...I'm still thinking about it and I'm a little creeped out, too."

* * * * *

Everything We Know About Facebook's Secret Mood Manipulation Experiment

It was probably legal. But was it ethical?

Robinson Meyer | Jun 28 2014



Facebook’s News Feed—the main list of status updates, messages, and photos you see when you open Facebook on your computer or phone—is not a perfect mirror of the world.

But few users expect that Facebook would change their News Feed in order to manipulate their emotional state.

We now know that’s exactly what happened two years ago. For one week in January 2012, data scientists skewed what almost 700,000 Facebook users saw when they logged into its service. Some people were shown content with a preponderance of happy and positive words; some were shown content analyzed as sadder than average. And when the week was over, these manipulated users were more likely to post either especially positive or negative words themselves.

This tinkering was just revealed as part of a new study, published in the prestigious Proceedings of the National Academy of Sciences. Many previous studies have used Facebook data to examine “emotional contagion,” as this one did. This study is different because, while other studies have observed Facebook user data, this one set out to manipulate it.

The experiment is almost certainly legal. In the company’s current terms of service, Facebook users relinquish the use of their data for “data analysis, testing, [and] research.” Is it ethical, though? Since news of the study first emerged, I’ve seen and heard both privacy advocates and casual users express surprise at the audacity of the experiment.

We’re tracking the ethical, legal, and philosophical response to this Facebook experiment here. We’ve also asked the authors of the study for comment. Author Jamie Guillory replied and referred us to a Facebook spokesman. We'll update this space when we hear back.

What did the paper itself find?

The study found that by manipulating the News Feeds displayed to 689,003 Facebook users, it could affect the content those users posted to Facebook. More negative News Feeds led to more negative status messages, and more positive News Feeds led to more positive statuses.

As far as the study was concerned, this meant that it had shown “that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” It touts that this emotional contagion can be achieved without “direct interaction between people” (because the unwitting subjects were only seeing each others’ News Feeds).

The researchers add that never during the experiment could they read individual users’ posts.

Two interesting things stuck out to me in the study.

The first? The effect the study documents is very small, as little as one-tenth of a percent of an observed change. That doesn’t mean it’s unimportant, though, as the authors add:
Given the massive scale of social networks such as Facebook, even small effects can have large aggregated consequences. […] After all, an effect size of d = 0.001 at Facebook’s scale is not negligible: In early 2013, this would have corresponded to hundreds of thousands of emotion expressions in status updates per day.
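To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The daily volume of status updates used below is an assumed, illustrative figure, not one reported in the paper; the point is simply that a per-post effect of d = 0.001 multiplied across hundreds of millions of posts lands in the “hundreds of thousands” range the authors describe.

# Back-of-the-envelope sketch: how a tiny per-post effect adds up at scale.
# The daily-volume figure is an illustrative assumption, not a number from the paper.
EFFECT_SIZE_D = 0.001                 # standardized effect size reported by the study
DAILY_STATUS_UPDATES = 500_000_000    # assumed order of magnitude for Facebook, early 2013

# Reading the effect loosely as "about one extra emotional expression per 1,000 posts"
# gives a rough sense of the aggregate numbers the authors have in mind.
extra_expressions_per_day = EFFECT_SIZE_D * DAILY_STATUS_UPDATES
print(f"~{extra_expressions_per_day:,.0f} additional emotion expressions per day")
# -> ~500,000 per day, i.e. "hundreds of thousands"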
The second was this line:
Omitting emotional content reduced the amount of words the person subsequently produced, both when positivity was reduced (z = −4.78, P < 0.001) and when negativity was reduced (z = −7.219, P < 0.001).
In other words, when researchers reduced the appearance of either positive or negative sentiments in people’s News Feeds—when the feeds just got generally less emotional—those people stopped writing so many words on Facebook.

Make people’s feeds blander and they stop typing things into Facebook.

Was the study well designed?

Perhaps not, says John Grohol, the founder of psychology website Psych Central. Grohol believes the study’s methods are hampered by the misuse of tools: software better matched to analyzing novels and essays, he says, is being applied to the much shorter texts found on social networks.
Let’s look at two hypothetical examples of why this is important. Here are two sample tweets (or status updates) that are not uncommon:
  • “I am not happy.”
  • “I am not having a great day.”
An independent rater or judge would rate these two tweets as negative — they’re clearly expressing a negative emotion. That would be +2 on the negative scale, and 0 on the positive scale.

But the LIWC 2007 tool doesn’t see it that way. Instead, it would rate these two tweets as scoring +2 for positive (because of the words “great” and “happy”) and +2 for negative (because of the word “not” in both texts).
“What the Facebook researchers clearly show,” writes Grohol, “is that they put too much faith in the tools they’re using without understanding — and discussing — the tools’ significant limitations.”
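Grohol’s objection is easy to reproduce. Below is a minimal Python sketch of a dictionary-based word counter in the spirit of his example; the tiny word lists are placeholders, not the actual LIWC 2007 dictionaries, and “not” appears on the negative list only to mirror the behavior he describes. Counting dictionary hits without handling negation scores both sample updates as simultaneously positive and negative.

# Minimal sketch of a naive dictionary-based word counter, in the spirit of
# Grohol's critique. The word lists are tiny placeholders, not the real
# LIWC 2007 dictionaries, and "not" is listed as negative only to mirror
# the behavior described above.
POSITIVE_WORDS = {"happy", "great", "good", "love"}
NEGATIVE_WORDS = {"not", "sad", "bad", "hate"}

def naive_sentiment(text: str) -> dict:
    """Count positive and negative dictionary hits, ignoring negation and context."""
    words = [w.strip('."') for w in text.lower().split()]
    return {
        "positive": sum(w in POSITIVE_WORDS for w in words),
        "negative": sum(w in NEGATIVE_WORDS for w in words),
    }

for status in ("I am not happy.", "I am not having a great day."):
    print(status, naive_sentiment(status))
# Each update registers one positive and one negative hit, even though a human
# rater would call both of them clearly negative.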

Did an institutional review board—an independent ethics committee that vets research that involves humans—approve the experiment?

Yes, according to Susan Fiske, the Princeton University psychology professor who edited the study for publication.

“I was concerned,” Fiske told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time.”

Fiske added that she didn’t want the “the originality of the research” to be lost, but called the experiment “an open ethical question.”

“It's ethically okay from the regulations perspective, but ethics are kind of social decisions. There's not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn't have been done...I'm still thinking about it and I'm a little creeped out, too.”


From what we know now, were the experiment’s subjects able to provide informed consent?

In its ethical principles and code of conduct, the American Psychological Association (APA) defines informed consent like this:
When psychologists conduct research or provide assessment, therapy, counseling, or consulting services in person or via electronic transmission or other forms of communication, they obtain the informed consent of the individual or individuals using language that is reasonably understandable to that person or persons except when conducting such activities without consent is mandated by law or governmental regulation or as otherwise provided in this Ethics Code.
As mentioned above, the research seems to have been carried out under Facebook’s extensive terms of service. The company’s current data use policy, which governs exactly how it may use users’ data, runs to more than 9,000 words. Does that constitute “language that is reasonably understandable”?

The APA has further guidelines for so-called “deceptive research” like this, where the real purpose of the research can’t be disclosed to participants while it is underway. The last of these guidelines is:
Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data.
At the end of the experiment, did Facebook tell the user-subjects that their News Feeds had been altered for the sake of research? If so, the study never mentions it.

James Grimmelmann, a law professor at the University of Maryland, believes the study did not secure informed consent. And he adds that Facebook fails even its own standards, which are lower than those of the academy:
A stronger reason is that even when Facebook manipulates our News Feeds to sell us things, it is supposed—legally and ethically—to meet certain minimal standards. Anything on Facebook that is actually an ad is labelled as such (even if not always clearly.) This study failed even that test, and for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded.
Do these kinds of News Feed tweaks happen at other times?

At any one time, Facebook said last year, there were on average 1,500 pieces of content that could show up in your News Feed. The company uses an algorithm to determine what to display and what to hide.

It talks about this algorithm very rarely, but we know it’s very powerful. Last year, the company changed News Feed to surface more news stories. Websites like BuzzFeed and Upworthy proceeded to see record-busting numbers of visitors.
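As a purely hypothetical illustration of what that kind of filtering step looks like, the sketch below ranks a pool of candidate stories by a made-up relevance score and keeps only a handful. The Story fields and the weights are invented for the example; Facebook has never published its actual News Feed ranking.

# Purely illustrative sketch of the kind of filtering described above: rank a
# large pool of eligible stories and surface only a small slice. The Story
# fields and scoring weights are invented; Facebook's real ranking is not public.
from dataclasses import dataclass

@dataclass
class Story:
    author_affinity: float   # how often the viewer interacts with this author
    engagement: float        # likes, comments, and shares the post has drawn
    age_hours: float         # how old the post is

def score(story: Story) -> float:
    """Toy relevance score: favor close contacts, popular posts, and recency."""
    return 2.0 * story.author_affinity + 1.0 * story.engagement - 0.1 * story.age_hours

def build_feed(candidates: list[Story], slots: int = 20) -> list[Story]:
    """From ~1,500 eligible stories, keep only the highest-scoring few."""
    return sorted(candidates, key=score, reverse=True)[:slots]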

So we know it happens. Consider Fiske’s explanation of the research ethics here—the study was approved “on the grounds that Facebook apparently manipulates people's News Feeds all the time.” And consider also that from this study alone Facebook knows at least one knob to tweak to get users to post more words on Facebook.
