
The legal and ethical issues behind Facebook's massive psychological experiment


Unbeknownst to the rest of the world, Facebook's data science team, in collaboration with Cornell University and the University of California, San Francisco's Center for Tobacco Control Research and Education, ran an experiment in 2012 to test how moods like happiness or depression can be transmitted through social media.
 
They did this by tweaking the News Feed algorithm of over 600,000 Facebook users so that it showed fewer positive or fewer negative posts, and then observed how this influenced what the users in the study posted. The researchers released their findings in the Proceedings of the National Academy of Sciences this March.
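
As a rough illustration of what such a tweak could look like in code (a hypothetical sketch only; the function name, data format, and omission rate below are illustrative and not Facebook's actual News Feed algorithm), posts tagged with the targeted emotion could simply be left out of a given feed load with some probability:

    import random

    # Hypothetical sketch of the kind of feed filtering described in the study;
    # this is not Facebook's actual code.
    def filter_feed(posts, reduce_emotion, omission_rate=0.5, seed=None):
        """Omit a share of posts tagged with the targeted emotion from one feed load."""
        rng = random.Random(seed)
        return [
            post for post in posts
            if post["emotion"] != reduce_emotion or rng.random() > omission_rate
        ]

    sample = [
        {"text": "Best day ever!", "emotion": "positive"},
        {"text": "Feeling down today.", "emotion": "negative"},
        {"text": "Off to the store.", "emotion": "neutral"},
    ]
    # For a user in the "reduced positivity" condition, positive posts are
    # sometimes missing from this particular load of the feed.
    print(filter_feed(sample, reduce_emotion="positive", seed=1))

A filter of this kind is also consistent with Kramer's later point that posts were not hidden outright, only absent from some loads of the feed.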
 
The study was conducted to observe and test a phenomenon called "emotional contagion," the transfer of longer-term moods or emotions between people within a network. Studies of emotional contagion usually involve real-life interactions. This study, led by Adam Kramer of Facebook's Core Data Science Team, was designed to see whether emotional contagion can occur even without in-person interaction or nonverbal cues.
 
They observed that when Facebook users saw fewer positive posts (or more negative posts) in their news feeds, they were more likely to share negative posts and post fewer positive ones, while those who saw more positive posts were more likely to share positive posts.
 
Issues abound
 
Following the release of the study, which has since gone viral online, the researchers have been criticized for their apparently unscrupulous use of unsuspecting Facebook users. Critics have questioned the ethics of the study and the legality of Facebook’s actions in including over half a million participants without their consent.
 
James Grimmelmann, a law professor at the University of Maryland, argued on his blog that the Facebook study was illegal and unethical because:
  • the participants did not give informed consent, which is required by federal law;
  • Facebook's Data Use Policy is too general and does not give users the option to opt out; and
  • it is entirely possible that some users were harmed by the experiment.

Grimmelmann explains, “even when Facebook manipulates our News Feeds to sell us things, it is supposed—legally and ethically—to meet certain minimal standards. Anything on Facebook that is actually an ad is labelled as such (even if not always clearly.) This study failed even that test, and for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded.”

Even Susan Fiske, the editor of the study, expressed some reservations about it herself. "I was concerned," says Fiske, "until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time... I understand why people have concerns. I think their beef is with Facebook, really, not the research."
 
 
But last Sunday, Adam Kramer defended the study on his Facebook page, stating that "We didn't clearly state our motivations in the paper… The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product." Kramer also insisted that none of the posts were hidden; they simply didn't show up in some feeds.
 
This is not the first time that Facebook has participated in a research study. Facebook has been participating in studies and sharing results with social scientists since 2007, prompting the New York Times to call Facebook a “petri dish for the social sciences.”
 
As for the practice of "tweaking" a user's news feed, Facebook's Data Use Policy states that the company may use the data it receives for research and to select the types of ads users would be most interested in.
 
Controversy aside, the study does present interesting results, though critics say it is plagued by methodological flaws. Dr. John Grohol, founder of PsychCentral, criticized the study on precisely those grounds. One of the tools the researchers used, the Linguistic Inquiry and Word Count application (LIWC 2007), was designed to analyze linguistic patterns in huge volumes of text.

These word patterns are what the researchers looked at to determine each user's mood.
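
To make the criticism that follows concrete, here is a minimal, hypothetical sketch of word-count sentiment scoring in the spirit of LIWC; the word lists and function are illustrative placeholders, not LIWC's actual dictionaries or code:

    # Simplified word-count sentiment scoring, loosely in the style of LIWC.
    # The word sets are toy examples, not LIWC's real dictionaries.
    POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "good"}
    NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "bad"}

    def score_post(text):
        """Count positive and negative words in a status update."""
        words = [w.strip(".,!?") for w in text.lower().split()]
        return {
            "positive": sum(w in POSITIVE_WORDS for w in words),
            "negative": sum(w in NEGATIVE_WORDS for w in words),
        }

    # Pure word counting ignores negation: "not happy" still registers one positive word.
    print(score_post("I am not happy about my day"))  # {'positive': 1, 'negative': 0}
    print(score_post("I am having a great day"))      # {'positive': 1, 'negative': 0}

That blind spot around negation is exactly what Grohol highlights.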

According to Grohol, "Why would researchers use a tool not designed for short snippets of text to, well… analyze short snippets of text? Sadly, it's because this is one of the few tools available that can process large amounts of text fairly quickly… For a tweet or status update, however, this is a horrible analysis tool to use. That's because it wasn't designed to differentiate — and in fact, can't differentiate — a negation word in a sentence." He also criticized the study for "showing ridiculously small correlations that have little to no meaning to ordinary users" and said the researchers "put too much faith in the tools they're using without understanding — and discussing — the tools' significant limitations." — TJD, GMA News