Adam Kramer is well-acquainted with irony. A week ago, New Scientist reported on a study he published as head of Facebook’s data science team: the researchers looked at the emotional responses of 600,000 Facebook users after their news feeds were slightly modified to display either more positive or more negative posts.
The irony came when Kramer saw the controversy that followed the report’s publication: users felt their emotions had been toyed with because Facebook had modulated its news-feed algorithms to find out which variant worked best. Social media lit up, at once the source of the controversy and the platform people used to voice their anger about it.
The second irony was, in Kramer’s own words, that “the research benefits of the paper may not have justified all of this anxiety.” The study looked at the ‘social contagion’ effect – if one person feels something, do we all feel it? – which, after all, has been well established in many other studies.
Facebook’s researchers wanted to see, in particular, whether the effect worked on people reading positive and negative posts in their home-page news feed. In Kramer’s words, they did this “to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out,” and to refine the service. Instead, they were attacked by users in their thousands.
This might encourage businesses to tread very carefully with user data, or to abandon any plans to use it at all for fear of a backlash should users find out (despite the fact that most, if not all, digital companies use this data to shape their services). This apprehension over user privacy is the wrong tack.
Henry Ford is believed to have said, “If I had asked people what they wanted, they would have said faster horses.” And had he listened to them, that’s what we would have. Equally, Facebook has time and again been criticised for updates to its website, with users threatening to quit the service if updates are not made optional (they never are).
And yet Facebook is the second most-visited site in the world, after Google. These episodes suggest that controversy has done no damage to its popularity. If Zuckerberg, one of Facebook’s founders and its current CEO, had asked users what they wanted, they would have said a more restricted, feature-sparse version – which fewer people would have used.
The hypocrisy in the media’s coverage of the study is also worth remarking on, as it has arguably fuelled consumers’ responses. The study has been called ‘manipulative’ and ‘creepy’ in headlines alone, with articles describing an alienating, 1984-inspired research project that took control of people’s emotions and may even have driven people to suicide.
Whether it did any of this is highly debatable – which makes the integrity of these angles debatable in turn. The headlines call the study ‘manipulative’ – but more manipulative still is sensationalism designed to create controversy and attract readers. The Atlantic quotes the PNAS editor of the study as saying “it’s ethically ok […] but ethics are social decisions.” Would she say the same about the media-led hysteria over a relatively ineffectual Facebook study?
The lesson for companies shouldn’t be that user data has become useless because consumers are sensitive about online privacy. Facebook has shown that consumers sometimes misrepresent themselves: they say they hate the website and want to abandon it, yet it remains almost the most popular site in the world, its contested updates doing nothing to dent a popularity that has grown year on year.
Facebook says it wants to use the study to refine its service. Despite what they say, customers hope for that too.