Researchers from the London School of Economics report on their ongoing research into political targeting on Facebook in the UK (the research is conducted in collaboration with WhoTargetsMe, a browser extension that measures political targeting on Facebook):
As we made clear in our first two posts, our analysis here is exploratory. It is, for example, unclear to what extent our dataset is representative of Conservatives’ online advertising throughout this campaign, and the WhoTargetsMe sample of potential voters, from whose Facebook feeds our data is scraped, may be skewed.
Bearing in mind these problems, we can say that, of the 820 exposures to ads paid for by Conservatives that we analysed, 28% (or 232 items) attacked Corbyn using facts that appear to be false or are clearly manipulated to confound the reader – and sometimes both.
Overall, the Conservatives used 73% (598) of the 820 ads in our sample to attack Corbyn. They are not, of course, the only ones targeting opponents. As we have shown, both Labour and the Lib Dems have done the same. However, while ads by these other parties conveyed simplifying messages, portraying adversaries as weak, immoral or pro-elite, we could not find, at least in our samples, any ads by them that used baseless or misleading facts.
Source: Is the Conservative Party deliberately distributing fake news in attack ads on Facebook? | LSE Media Policy Project
Researchers from the Personalised Communication project presented various papers at TILTing Perspectives 2017: Regulating a connected world, a conference organized by Tilburg University from 17-19 May. The conference brought together researchers, practitioners, policy makers, and civil society at the intersection of law and regulation, technology, and society, and provided a great opportunity to exchange ideas and make new connections.
Marijn presented a paper on mobile health apps, privacy and autonomy. Frederik chaired several panels, including a panel on price discrimination. A paper co-written by Balázs, Frederik, Kristina, Judith, Natali, and Claes was presented, discussing the technological, legal, ethical, and organizational infrastructures of research into algorithmic agents. Another paper presented was written by Balázs, Judith, and Natali, and concerned the conditions under which people accept news personalization. Sarah presented her paper on how news personalization affects the right to receive information.
What do social media users think about social media data mining? This article reports on focus group research in three European countries (the United Kingdom, Norway and Spain). The method created a space in which to make sense of the diverse findings of quantitative studies, which relate to individual differences (such as extent of social media use or awareness of social media data mining) and differences in social media data mining practices themselves (such as the type of data gathered, the purpose for which data are mined and whether transparent information about data mining is available). Moving beyond privacy and surveillance made it possible to identify a concern for fairness as a common trope among users, which informed their varying viewpoints on distinct data mining practices. The authors argue that this concern for fairness can be understood as contextual integrity in practice (Nissenbaum, 2009) and as part of broader concerns about well-being and social justice.
Source: Convergence – Helen Kennedy, Dag Elgesem, Cristina Miguel, 2017
What is the role of search in shaping opinion? Survey results indicate, among other things, that:

1. The filter bubble argument is overstated, as Internet users expose themselves to a variety of opinions and viewpoints online and through a diversity of media. Search needs to be viewed in a context of multiple media.

2. Concerns over echo chambers are also overstated, as Internet users are exposed to diverse viewpoints both online and offline. Most users are not silenced by contrasting views, nor do they silence those who disagree with them.

3. Fake news has attracted disproportionate levels of concern, in light of people’s actual practices. Internet users are generally skeptical of information across all media and know how to check the accuracy and validity of information found through search, on social media, or on the Internet in general.
Source: Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States by William H. Dutton, Bianca Christin Reisdorf, Elizabeth Dubois, Grant Blank :: SSRN
Edgar Meij, a senior data scientist on Bloomberg’s news search experience team, is working to explain the logic behind related entities in search results.
One of the first steps toward that goal is to design an algorithm that can explain the relationship between two terms – called entities – in plain English. In a paper co-written with two researchers from the University of Amsterdam, Prof. Dr. Maarten de Rijke and Nikos Voskarides, he presents a methodology to do just that.
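To give a feel for the task, the simplest conceivable baseline for verbalizing an entity relationship is a template per relation type. The sketch below is purely illustrative: the relation labels and templates are invented here, and the paper's actual method is considerably richer, selecting and ranking human-written sentences that describe the relationship.

```python
# Hypothetical sketch of a template-based relationship verbalizer.
# Relation labels and templates are invented for illustration; the
# actual research learns descriptions rather than hard-coding them.

TEMPLATES = {
    "founder_of": "{subj} founded {obj}.",
    "ceo_of": "{subj} is the chief executive officer of {obj}.",
    "subsidiary_of": "{subj} is a subsidiary of {obj}.",
}

def describe_relation(subject: str, relation: str, obj: str) -> str:
    """Render a (subject, relation, object) triple as plain English."""
    template = TEMPLATES.get(relation, "{subj} is related to {obj}.")
    return template.format(subj=subject, obj=obj)

print(describe_relation("Larry Page", "founder_of", "Google"))
# -> "Larry Page founded Google."
```

A fallback template covers unknown relations, which hints at why a learned approach is needed: real knowledge graphs contain far more relation types than anyone could hand-template.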
Source: Helping Computers Explain Their Reasoning: New Research by Edgar Meij | Tech at Bloomberg
The Personalised Communication team organised an engaging afternoon panel discussion at the academic-cultural center Spui25 in Amsterdam: “Who controls the algorithms?”
The topic of the discussion was algorithms in the online information environment, and if and how we should control them. Sanne Kruikemeier presented ongoing research on how citizens view the algorithmic society, how concerned citizens are about algorithms, and to what extent citizens are actually able to protect themselves against algorithms. Frederik Zuiderveen Borgesius discussed algorithmic pricing in e-commerce, asking whether it is fair, whether it might only be fair if rich people pay more, or whether only pricing based on illegal discrimination is unfair. Sarah Eskens reviewed some of the arguments made against regulating algorithms, and challenged the audience to think about regulatory options by explaining different regulatory strategies and regulatory examples from other (information) law areas. Marleen Elshof from the Ministry of Education, Culture, and Science talked about the effect of algorithms on public objectives in media policy, and the questions that policy makers struggle with in this area. Balázs Bodó moderated the event and the discussion that followed with the audience.
Natali Helberger and Kristina Irion in De Telegraaf:
Smart-TV manufacturers barely warn users that their privacy is violated by normal viewing behaviour. That warning comes from Natali Helberger, professor of Information Law at the University of Amsterdam (UvA), in a scholarly article published in ‘Telecommunications Policy’.
Source: Privacy kijkers smart-tv fors onder de maat|Digitaal| Telegraaf.nl
By midyear, The Times will begin an ambitious new effort to customize the delivery of news online by adjusting a reader’s experience to accommodate individual interests. What readers see when they come to The Times will depend on factors like the specific subjects they are most interested in, where they live or how frequently they come to the site.
from the comments:
“I do not want a “bespoke” NYTimes *experience*. I want the news. I want the newspaper with the editorial decisions of what is above the fold important news and what should be on page 12.
I pay for a subscription for a reason: the judgement and experience of the editors and writers that make this paper great. Don’t try to be Facebook or Twitter. Be the New York Times and do it right.”
“Seriously? You picked right now to limit our access and control what we can and can’t see when we most need availability of all information?”
“This sounds like an awful idea.”
Source: A ‘Community’ of One: The Times Gets Tailored – The New York Times
BuzzFeed News, attempting to address a problem media companies have grappled with since the presidential election, introduced a feature to help readers see what people outside their social-media networks are saying about the news.
The idea is an attempt to get readers to understand — or even acknowledge the existence of — the viewpoints of people who don’t think like them. BuzzFeed’s “Outside Your Bubble” feature will appear at the bottom of its widely shared articles. A BuzzFeed staffer will curate different opinions from Twitter, Facebook, Reddit, blogs and elsewhere with help from data tools, Editor-in-Chief Ben Smith said in an interview.
Source: BuzzFeed Tries to Break Readers Out of Their Social-Media Bubbles – Bloomberg
Filter bubbles, of course, create interesting business opportunities: a brand can market itself through its anti-filter-bubble efforts, just as brands distinguish themselves with a privacy-protective stance.
And, not unimportantly, there are big opportunities for brands! Many brands run programmes in the area of sustainability or social responsibility. As far as I know, no brand has yet taken on tackling the filter bubble. Could Ziggo develop an online teaching programme that prepares children for the world of digital opinions? Couldn’t Campina use its packaging for a fun tips-and-tricks campaign? And couldn’t NS use the free Wi-Fi on its trains to make travellers aware of the bubble in a playful way?
Source: Welk merk gaat het gevecht aan met de filter bubble? | Emile van den Berg | Pulse | LinkedIn