After popularizing sensational headlines and taking your news feed by storm, Upworthy seemingly fell off a cliff. Its story reveals just as much about Facebook as it does about why we click.
Researchers from the London School of Economics report on their ongoing research into political targeting on Facebook in the UK (conducted in collaboration with WhoTargetsMe, a browser extension that measures political targeting on Facebook):
As we made clear in our first two posts, our analysis here is exploratory. It is, for example, unclear to what extent our dataset is representative of Conservatives’ online advertising throughout this campaign, and the WhoTargetsMe sample of potential voters, from whose Facebook feeds our data is scraped, may be skewed.
Bearing in mind these problems, we can say that, of the 820 exposures to ads paid for by Conservatives that we analysed, 28% (or 232 items) attacked Corbyn using facts that appear to be false or are clearly manipulated to confound the reader – and sometimes both.
Overall, the Conservatives used 73% (598) of their 820 ads in our sample to attack Corbyn. They are not, of course, the only ones targeting opponents. As we have shown, both Labour and the Lib Dems have done the same. However, while ads by these other parties conveyed simplifying messages, portraying adversaries as weak, immoral or pro-elite, we could not find, at least in our samples, pieces by them using baseless or misleading facts.
Judith, Damian, Bram, and Natali presented their methodology for measuring diversity in recommendation sets:
What do social media users think about social media data mining? This article reports on focus group research in three European countries (the United Kingdom, Norway and Spain). The method created a space in which to make sense of the diverse findings of quantitative studies, which relate to individual differences (such as extent of social media use or awareness of social media data mining) and differences in social media data mining practices themselves (such as the type of data gathered, the purpose for which data are mined and whether transparent information about data mining is available). Moving beyond privacy and surveillance made it possible to identify a concern for fairness as a common trope among users, which informed their varying viewpoints on distinct data mining practices. The authors argue that this concern for fairness can be understood as contextual integrity in practice (Nissenbaum, 2009) and as part of broader concerns about well-being and social justice.
The Guardian reveals Facebook’s manuals for moderators
Sue Halpern June 8, 2017
Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy by Daniel Kreiss Oxford University Press, 291 pp., $99.00; $27.95 (paper)
Hacking the Electorate: How Campaigns Perceive Voters by Eitan D. Hersh Cambridge University Press, 261 pp., $80.00; $30.99 (paper)
Donald Trump; drawing by James Ferguson
Not long after Donald Trump’s surprising presidential victory, an article published in the Swiss weekly Das Magazin, and reprinted online in English by Vice, began churning through the Internet. While pundits were dissecting the collapse of Hillary Clinton’s campaign, the journalists for Das Magazin, Hannes Grassegger and Mikael Krogerus, pointed to an entirely different explanation—the work of Cambridge Analytica, a data science firm created by a British company with deep ties to the British and American defense industries.

According to Grassegger and Krogerus, Cambridge Analytica had used psychological data culled from Facebook, paired with vast amounts of consumer information purchased from data-mining companies, to develop algorithms that were supposedly able to identify the psychological makeup of every voter in the American electorate. The company then developed political messages tailored to appeal to the emotions of each one. As the New York Times reporters Nicholas Confessore and Danny Hakim described it:

A voter deemed neurotic might be shown a gun-rights commercial featuring burglars breaking into a home, rather than a defense of the Second Amendment; political ads warning of the dangers posed by the Islamic State could be targeted directly at voters prone to anxiety….

Even more troubling was the underhanded way in which Cambridge Analytica appeared to have obtained its information. Using an Amazon site called Mechanical Turk, the company paid one hundred thousand people in the United States a dollar or two to fill out an online survey. But in order to receive payment, those people were also required to download an app that gave Cambridge Analytica access to the profiles of their unwitting Facebook friends.
By Elizabeth Denham, Information Commissioner.
In March we announced we were conducting an assessment of the data protection risks arising from the use of data analytics, including for political purposes.
Engagement with the electorate is vital to the democratic process. Given the big data revolution it is understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes. The public have the right to expect that this takes place in accordance with the law as it relates to data protection and electronic marketing.
What is the role of search in shaping opinion? Survey results indicate, among other things, that:

1. The filter bubble argument is overstated, as Internet users expose themselves to a variety of opinions and viewpoints online and through a diversity of media. Search needs to be viewed in a context of multiple media.
2. Concerns over echo chambers are also overstated, as Internet users are exposed to diverse viewpoints both online and offline. Most users are not silenced by contrasting views, nor do they silence those who disagree with them.
3. Fake news has attracted disproportionate levels of concern, in light of people’s actual practices. Internet users are generally skeptical of information across all media and know how to check the accuracy and validity of information found through search, on social media, or on the Internet in general.
Source: Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States by William H. Dutton, Bianca Christin Reisdorf, Elizabeth Dubois, Grant Blank :: SSRN
When Instagram announced the implementation of algorithmic personalization on their platform, a heated debate arose. Several users instantly expressed their strong discontent under the hashtag #RIPINSTAGRAM. In this paper, we examine how users commented on the announcement of Instagram implementing algorithmic personalization. Drawing on the conceptual starting point of framing user comments as “counter-narratives” (Andrews, 2004), which oppose Instagram’s organizational narrative of improving the user experience, the study explores in greater detail the main concerns users bring forth. The two-step analysis draws on a total of 8,645 comments collected from Twitter and Instagram. The collected Twitter data were used to develop preliminary inductive categories describing users’ counter-narratives. Thereafter, we systematically coded all data extracted from Instagram in order to enhance, adjust and revise the preliminary categories. This inductive coding approach (Mayring, 2000), combined with an in-depth qualitative analysis, resulted in the identification of the following four counter-narratives brought forth by users: 1) algorithmic hegemony; 2) violation of user autonomy; 3) prevalence of commercial interests; and 4) deification of the mainstream. All of these counter-narratives are related to ongoing public debates regarding the social implications of algorithmic personalization. In conclusion, the paper suggests that the identified counter-narratives tell a story of resistance. While technological advancement is generally welcomed and celebrated, the findings of this study point towards a growing user resistance to algorithmic personalization.
I believe the social media giant could target ads at depressed teens and countless other demographics. But so what?