How significant is algorithmic personalization in searches for political parties and candidates?
A new social-media policy at the Washington Post prohibits conduct on social media that “adversely affects The Post’s customers, advertisers, subscribers, vendors, suppliers or partners.” In such cases, Post management reserves the right to take disciplinary action “up to and including termination of employment.” The Post’s Guild sent out a bulletin Sunday night protesting the policy. “If you’re like most of us, you probably acknowledged its receipt without reading it,” says the note, which was written by G
After popularizing sensational headlines and taking your news feed by storm, Upworthy seemingly fell off a cliff. Its story reveals just as much about Facebook as it does about why we click.
Researchers from the London School of Economics report on their ongoing research into political targeting on Facebook in the UK (the research is done in collaboration with WhoTargetsMe, a browser extension that measures political targeting on Facebook):
As we made clear in our first two posts, our analysis here is exploratory. It is, for example, unclear to what extent our dataset is representative of Conservatives’ online advertising throughout this campaign, and the WhoTargetsMe sample of potential voters, from whose Facebook feeds our data is scraped, may be skewed.
Bearing in mind these problems, we can say that, of the 820 exposures to ads paid for by Conservatives that we analysed, 28% (or 232 items) attacked Corbyn using facts that appear to be false or are clearly manipulated to confound the reader – and sometimes both.
Generally, the Conservatives used 73% (598) of their 820 ads exposed in our sample to attack Corbyn. They are not, of course, the only ones targeting opponents. As we have shown, both Labour and the Lib Dems have done the same. However, while ads by these other parties conveyed simplifying messages, portraying adversaries as weak, immoral or pro-elite, we couldn’t find, at least in our samples, pieces by them using baseless or misleading facts.
Judith, Damian, Bram, and Natali presented their methodology for measuring diversity in recommendation sets:
What do social media users think about social media data mining? This article reports on focus group research in three European countries (the United Kingdom, Norway and Spain). The method created a space in which to make sense of the diverse findings of quantitative studies, which relate to individual differences (such as extent of social media use or awareness of social media data mining) and differences in social media data mining practices themselves (such as the type of data gathered, the purpose for which data are mined and whether transparent information about data mining is available). Moving beyond privacy and surveillance made it possible to identify a concern for fairness as a common trope among users, which informed their varying viewpoints on distinct data mining practices. The authors argue that this concern for fairness can be understood as contextual integrity in practice (Nissenbaum, 2009) and as part of broader concerns about well-being and social justice.
The Guardian reveals Facebook’s manuals for moderators
Sue Halpern June 8, 2017
Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy by Daniel Kreiss Oxford University Press, 291 pp., $99.00; $27.95 (paper)
Hacking the Electorate: How Campaigns Perceive Voters by Eitan D. Hersch Cambridge University Press, 261 pp., $80.00; $30.99 (paper)
Donald Trump; drawing by James Ferguson
Not long after Donald Trump’s surprising presidential victory, an article published in the Swiss weekly Das Magazin, and reprinted online in English by Vice, began churning through the Internet. While pundits were dissecting the collapse of Hillary Clinton’s campaign, the journalists for Das Magazin, Hannes Grassegger and Mikael Krogerus, pointed to an entirely different explanation—the work of Cambridge Analytica, a data science firm created by a British company with deep ties to the British and American defense industries.

According to Grassegger and Krogerus, Cambridge Analytica had used psychological data culled from Facebook, paired with vast amounts of consumer information purchased from data-mining companies, to develop algorithms that were supposedly able to identify the psychological makeup of every voter in the American electorate. The company then developed political messages tailored to appeal to the emotions of each one. As the New York Times reporters Nicholas Confessore and Danny Hakim described it:

A voter deemed neurotic might be shown a gun-rights commercial featuring burglars breaking into a home, rather than a defense of the Second Amendment; political ads warning of the dangers posed by the Islamic State could be targeted directly at voters prone to anxiety….

Even more troubling was the underhanded way in which Cambridge Analytica appeared to have obtained its information. Using an Amazon site called Mechanical Turk, the company paid one hundred thousand people in the United States a dollar or two to fill out an online survey. But in order to receive payment, those people were also required to download an app that gave Cambridge Analytica access to the profiles of their unwitting Facebook friends.
By Elizabeth Denham, Information Commissioner.
In March we announced we were conducting an assessment of the data protection risks arising from the use of data analytics, including for political purposes.
Engagement with the electorate is vital to the democratic process. Given the big data revolution it is understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes. The public have the right to expect that this takes place in accordance with the law as it relates to data protection and electronic marketing.