Category Archives: news personalisation

Is the Conservative Party deliberately distributing fake news in attack ads on Facebook?

Researchers from the London School of Economics report on their ongoing research into political targeting on Facebook in the UK (the research is conducted in collaboration with WhoTargetsMe, a browser extension that measures political targeting on Facebook):

As we made clear in our first two posts, our analysis here is exploratory. It is, for example, unclear to what extent our dataset is representative of Conservatives’ online advertising throughout this campaign, and the WhoTargetsMe sample of potential voters, from whose Facebook feeds our data is scraped, may be skewed.

Bearing in mind these problems, we can say that, of the 820 exposures to ads paid for by Conservatives that we analysed, 28% (or 232 items) attacked Corbyn using facts that appear to be false or are clearly manipulated to confound the reader – and sometimes both.

Generally, the Conservatives used 73% (598) of their 820 ads exposed in our sample to attack Corbyn. They are not, of course, the only ones targeting opponents. As we have shown, both Labour and the Lib Dems have done the same. However, while ads by these other parties conveyed simplifying messages, portraying adversaries as weak, immoral or pro-elite, we couldn’t find, at least in our samples, pieces by them using baseless or misleading facts.
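A quick arithmetic note on those figures (my own check, not part of the LSE post): the reported percentages match the raw counts, and since the 232 misleading items are among the ads attacking Corbyn, they amount to roughly two in five of the attack ads:

```python
total_exposures = 820   # ad exposures paid for by the Conservatives
attack_ads = 598        # exposures attacking Corbyn (reported as 73%)
misleading = 232        # attack exposures using false or manipulated facts (reported as 28%)

print(f"attack share of all exposures:     {attack_ads / total_exposures:.1%}")  # 72.9%
print(f"misleading share of all exposures: {misleading / total_exposures:.1%}")  # 28.3%
print(f"misleading share of attack ads:    {misleading / attack_ads:.1%}")       # 38.8%
```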

Source: Is the Conservative Party deliberately distributing fake news in attack ads on Facebook? | LSE Media Policy Project

User perspectives on social media data mining

What do social media users think about social media data mining? This article reports on focus group research in three European countries (the United Kingdom, Norway and Spain). The method created a space in which to make sense of the diverse findings of quantitative studies, which relate to individual differences (such as extent of social media use or awareness of social media data mining) and differences in social media data mining practices themselves (such as the type of data gathered, the purpose for which data are mined and whether transparent information about data mining is available). Moving beyond privacy and surveillance made it possible to identify a concern for fairness as a common trope among users, which informed their varying viewpoints on distinct data mining practices. The authors argue that this concern for fairness can be understood as contextual integrity in practice (Nissenbaum, 2009) and as part of broader concerns about well-being and social justice.

Source: Convergence – Helen Kennedy, Dag Elgesem, Cristina Miguel, 2017

Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States

What is the role of search in shaping opinion? Survey results indicate, among other things, that:

1. The filter bubble argument is overstated, as Internet users expose themselves to a variety of opinions and viewpoints online and through a diversity of media. Search needs to be viewed in a context of multiple media.

2. Concerns over echo chambers are also overstated, as Internet users are exposed to diverse viewpoints both online and offline. Most users are not silenced by contrasting views, nor do they silence those who disagree with them.

3. Fake news has attracted disproportionate levels of concern in light of people’s actual practices. Internet users are generally skeptical of information across all media and know how to check the accuracy and validity of information found through search, on social media, or on the Internet in general.

Source: Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States by William H. Dutton, Bianca Christin Reisdorf, Elizabeth Dubois, Grant Blank :: SSRN

Mahnke Skrubbeltrang, Grunnet, & Traasdahl Tarp: #RIPINSTAGRAM: Examining user’s counter-narratives opposing the introduction of algorithmic personalization on Instagram

When Instagram announced the implementation of algorithmic personalization on their platform, a heated debate arose. Several users instantly expressed their strong discontent under the hashtag #RIPINSTAGRAM. In this paper, we examine how users commented on the announcement of Instagram implementing algorithmic personalization. Drawing on the conceptual starting point of framing user comments as “counter-narratives” (Andrews, 2004), which oppose Instagram’s organizational narrative of improving the user experience, the study explores in greater detail the main concerns users bring forth. The two-step analysis draws on a total of 8,645 comments collected from Twitter and Instagram. The collected Twitter data were used to develop preliminary inductive categories describing users’ counter-narratives. Thereafter, we systematically coded all the data extracted from Instagram in order to enhance, adjust and revise the preliminary categories. This inductive coding approach (Mayring, 2000), combined with an in-depth qualitative analysis, resulted in the identification of the following four counter-narratives brought forth by users: 1) algorithmic hegemony; 2) violation of user autonomy; 3) prevalence of commercial interests; and 4) deification of mainstream. All of these counter-narratives are related to ongoing public debates regarding the social implications of algorithmic personalization. In conclusion, the paper suggests that the identified counter-narratives tell a story of resistance: while technological advancement is generally welcomed and celebrated, the findings of this study point towards a growing user resistance to algorithmic personalization.

[link to full text]

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance

By Maciej Ceglowski

The tech industry is in the middle of a massive, uncontrolled social experiment. Having made commercial mass surveillance the economic foundation of our industry, we are now learning how indiscriminate collections of personal data, and the machine learning algorithms they fuel, can be put to effective political use. Unfortunately, these experiments are being run in production. Our centralized technologies could help authoritarians more than they help democracy, and the very power of the tools we’ve built for persuasion makes it difficult for us to undo the damage done. What can concerned people in the tech industry do to seize a dwindling window of opportunity, and create a less monstrous online world?


Source: Build a Better Monster: Morality, Machine Learning, and Mass Surveillance

Helping Computers Explain Their Reasoning

Edgar Meij, a senior data scientist on Bloomberg’s news search experience team, is working to explain the logic behind related entities in search results.

One of the first steps toward that goal is to design an algorithm that can explain the relationship between two terms – called entities – in plain English. In a paper co-authored with two researchers from the University of Amsterdam, Prof. Dr. Maarten de Rijke and Nikos Voskarides, he presents a methodology for doing just that.
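To make the task concrete: given a pair of entities, the system has to produce a human-readable sentence describing how they are related. A minimal sketch of one naive baseline – collect candidate sentences that mention both entities and return the best-scoring one – is below. This is purely illustrative (toy scoring heuristic, invented example text), not the method from the paper:

```python
import re

def candidate_sentences(text, entity_a, entity_b):
    """Split text into sentences; keep those mentioning both entities."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if entity_a.lower() in s.lower() and entity_b.lower() in s.lower()]

def score(sentence, entity_a, entity_b):
    """Toy heuristic (not from the paper): prefer short sentences
    in which the two entity mentions sit close together."""
    low = sentence.lower()
    distance = abs(low.index(entity_a.lower()) - low.index(entity_b.lower()))
    return 1.0 / (1.0 + distance + 0.1 * len(sentence))

def explain(text, entity_a, entity_b):
    """Return the best-scoring candidate sentence, or None if there is none."""
    candidates = candidate_sentences(text, entity_a, entity_b)
    return max(candidates, key=lambda s: score(s, entity_a, entity_b), default=None)

corpus = ("Ada Lovelace worked with Charles Babbage on the Analytical Engine. "
          "Babbage designed the machine. Ada Lovelace wrote notes on it, and "
          "Charles Babbage never saw the Analytical Engine completed.")
print(explain(corpus, "Ada Lovelace", "Charles Babbage"))
# -> "Ada Lovelace worked with Charles Babbage on the Analytical Engine."
```

The paper’s own method is considerably richer than this; the sketch only shows the input/output shape of the task.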

Source: Helping Computers Explain Their Reasoning: New Research by Edgar Meij | Tech at Bloomberg

Privacy of smart TV viewers falls well short of the mark

Natali Helberger and Kristina Irion in De Telegraaf:

Manufacturers of smart TVs hardly warn users that their privacy is violated during normal viewing behaviour. That warning comes from Natali Helberger, professor of Information Law at the University of Amsterdam (UvA), in a scientific article published in ‘Telecommunications Policy’.

Source: Privacy kijkers smart-tv fors onder de maat | Digitaal | Telegraaf.nl