Related:

June 2014, Proceedings of the National Academy of Sciences: Kramer, A. D. I. (Facebook Core Data Science Team), Guillory, J. E., and Hancock, J. T.: Experimental evidence of massive-scale emotional contagion through social networks (PDF)
http://www.nytimes.com/2014/07/01/opinion/jaron-lanier-on-lack-of-transparency-in-facebook-study.html

JUNE 30, 2014

Should Facebook Manipulate Users?

Jaron Lanier on Lack of Transparency in Facebook Study

By JARON LANIER

SHOULD we worry that technology companies can secretly influence our emotions? Apparently so.

A study recently published by researchers at Facebook, Cornell University and the University of California, San Francisco, suggests that social networks can manipulate the emotions [1] of their users by tweaking what is allowed into a user's news feed. The study, published in the Proceedings of the National Academy of Sciences, changed the news feeds delivered to almost 700,000 people for a week without obtaining their consent to be studied. Some got feeds with more sad news; others received more happy news.

The researchers were studying claims that Facebook could make us feel unhappy by creating unrealistic expectations of how good life should be. But it turned out that some subjects were depressed when the good news in their feed was suppressed. Individuals were not asked to report on how they felt; instead, their writing was analyzed for vocabulary choices that were thought to indicate mood.
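To make that measurement concrete, here is a minimal sketch of a dictionary-based mood count of the kind described above; the word lists and the post_mood function are illustrative stand-ins, not the actual lexicon or code the researchers used.

    # Illustrative only: a toy, dictionary-based mood count in Python.
    # The word lists below are placeholders, not the study's lexicon.
    POSITIVE = {"happy", "great", "love", "wonderful", "excited"}
    NEGATIVE = {"sad", "awful", "hate", "terrible", "lonely"}

    def post_mood(text):
        """Label a post by counting the emotion-bearing words it contains."""
        words = [w.strip(".,!?").lower() for w in text.split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos > neg:
            return "positive"
        if neg > pos:
            return "negative"
        return "neutral"

    print(post_mood("Feeling lonely and sad tonight."))  # prints "negative"
    print(post_mood("What a wonderful, happy day!"))      # prints "positive"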

The researchers claim that they have proved that "emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness." The effect was slight, but imposed on a very large population, so it's possible the effects were consequential to some people. The paper itself states its claims rather boldly, but one of the authors, Adam D. I. Kramer of Facebook, responding to intense criticism that it was wrong to study users without their permission, has since emphasized how tiny the effects were. But however the results might be interpreted now, they couldn't have been known in advance.

The manipulation of emotion is no small thing. An estimated 60 percent of suicides are preceded by a mood disorder. Even mild depression has been shown to increase the risk of heart failure by 5 percent; moderate to severe depression increases it by 40 percent.

Research with human subjects is generally governed by strict ethical standards, including the informed consent of the people who are studied. Facebook's generic click-through agreement, which almost no one reads and which doesn't mention this kind of experimentation, was the only form of consent cited in the paper. The subjects in the study still, to this day, have not been informed that they were in the study. If there had been federal funding, such a complacent notion of informed consent would probably have been considered a crime. Subjects would most likely have been screened so that those at special risk would be excluded or handled with extra care.

This is only one early publication about a whole new frontier in the manipulation of people, and Facebook shouldn't be singled out as a villain. All researchers, whether at universities or technology companies, need to focus more on the ethics of how we learn to improve our work.

To underscore the relevance of their study, the researchers noted that emotion matters to human health, and yet the study didn't measure any potential health effects of the controlled manipulation of emotions.

It is unimaginable that a pharmaceutical firm would be allowed to randomly, secretly sneak an experimental drug, no matter how mild, into the drinks of hundreds of thousands of people, just to see what happens, without ever telling those people. Imagine a pharmaceutical researcher saying, "I was only looking at a narrow research question, so I don't know if my drug harmed anyone, and I haven't bothered to find out." Unfortunately, this seems to be an acceptable attitude when it comes to experimenting with people over social networks. It needs to change.

Our laws require that cars be recalled and fixed even if a defect would be likely to injure only a very small number of people. In this case, we're talking about a study that was actually intended to cause a negative effect in many people, and one open question is how destructive it was in the worst instances that might have occurred.

All of us engaged in research over networks must commit to finding a way to modernize the process of informed consent. Instead of lowering our standards to the level of unread click-through agreements, let's raise the standards for everyone.

Now that we know that a social network proprietor can engineer emotions for the multitudes to a slight degree, we need to consider that further research on amplifying that capacity might take place. Stealth emotional manipulation could be channeled to sell things (you suddenly find that you feel better after buying from a particular store, for instance), but it might also be used to exert influence in a multitude of other ways. Research has also shown that voting behavior can be influenced by undetectable social network maneuvering, for example.

The principle of informed consent in the age of social networking can't be limited to individuals who are studied; the public has every right to be informed of otherwise undetectable commercial or political practices that are made possible by the results of research into high-tech manipulation, and to choose whether to give consent.

My guess is that the public would choose to outlaw using our communication tools as conduits for secret, algorithmic manipulations of our emotions.

Let us choose to live in a society of true hearts, not calculated ones.

Jaron Lanier is the author of "Who Owns the Future?" and an interdisciplinary scientist at Microsoft Research.

[1] http://www.pnas.org/content/111/24/8788.full