Category Archives: news personalisation

The Decentralized Web

The Web is a key space for civic debate and the current battleground for protecting freedom of expression. However, since its development, the Web has steadily evolved into an ecosystem of large, corporate-controlled mega-platforms which intermediate speech online. In many ways this has been a positive development; these platforms improved usability and enabled billions of people to publish and discover content without having to become experts on the Web’s intricate protocols.

But in other ways this development is alarming. Just a few large platforms drive most traffic to online news sources in the U.S., and thus have enormous influence over what sources of information the public consumes on a daily basis. The existence of these consolidated points of control is troubling for many reasons. A small number of stakeholders end up having outsized influence over the content the public can create and consume. This leads to problems ranging from censorship at the behest of national governments to more subtle, perhaps even unintentional, bias in the curation of content users see based on opaque, unaudited curation algorithms. The platforms that host our networked public sphere and inform us about the world are unelected, unaccountable, and often impossible to audit or oversee.

At the same time, there is growing excitement around the area of decentralized systems, which have grown in prominence over the past decade thanks to the popularity of the cryptocurrency Bitcoin. Bitcoin is a payment system that has no central points of control, and uses a novel peer-to-peer network protocol to agree on a distributed ledger of transactions, the blockchain. Bitcoin paints a picture of a world where untrusted networks of computers can coordinate to provide important infrastructure, like verifiable identity and distributed storage. Advocates of these decentralized systems propose related technology as the way forward to “re-decentralize” the Web, by shifting publishing and discovery out of the hands of a few corporations and back into the hands of users. These types of code-based, structural interventions are appealing because, in theory, they are less corruptible and more resistant to corporate or political regulation. Surprisingly, though, low-level decentralized systems don’t necessarily translate into decreased market consolidation around user-facing mega-platforms.

In this report, we explore two important ways structurally decentralized systems could help address the risks of mega-platform consolidation. First, these systems can help users publish and discover content directly, without intermediaries, and thus without censorship. All of the systems we evaluate advertise censorship-resistance as a major benefit. Second, these systems could indirectly enable greater competition and user choice by lowering the barrier to entry for new platforms. As it stands, it is difficult for users to switch between platforms (they must recreate all their data when moving to a new service) and most mega-platforms do not interoperate, so switching means leaving behind your social network. Some systems we evaluate directly address the issues of data portability and interoperability in an effort to support greater competition.

We offer case studies of the following decentralized publishing projects:

- Freedom Box, a system for personal publishing
- Diaspora, a federated social network
- Mastodon, a federated Twitter-like service
- Blockstack, a distributed system for online identity services
- IPFS (Interplanetary File System), a distributed storage service with a proposed mechanism to incentivize resource sharing
- Solid (Social Linked Data), a linked-data protocol that could act as a back-end for data sharing between social media networks
- Appcoins, a digital currency framework that enables users to financially participate in ownership of platforms and protocols
- Steemit, an online community that uses an appcoin to incentivize development and community participation in a social network

Considering these projects as a whole, we found a robust and fertile community of experimenters developing promising software. Many of the projects in this report are working on deeply exciting new ideas. Easy-to-use, peer-to-peer distributed storage systems change the landscape for content censorship and archiving. Appcoins may transform how new projects are launched online, making it possible to fund open-source development teams focused on developing shared protocols instead of independent companies. There is also a renewed interest in creating interoperable standards and protocols that can cross platforms.

However, we have reason to doubt that these decentralized systems alone will address the problems of exclusion and bias caused by today’s mega-platforms.
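The censorship-resistance these projects advertise rests largely on content addressing: an item’s address is derived from its bytes, so any peer holding a copy can serve it and any reader can verify what they received. The following is a minimal, hypothetical Python sketch of that idea, not the actual API of IPFS or any other project listed above:

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified stand-in for a content identifier: the address is a hash of the bytes.
    return hashlib.sha256(data).hexdigest()

def publish(store: dict, data: bytes) -> str:
    # Any peer that stores the bytes becomes an equally valid source for them.
    addr = content_address(data)
    store[addr] = data
    return addr

def fetch(stores: list, addr: str):
    # Ask every reachable peer; verify the bytes against the address before trusting them.
    for store in stores:
        if addr in store:
            data = store[addr]
            assert content_address(data) == addr
            return data
    return None

node_a, node_b = {}, {}
addr = publish(node_a, b"an article someone would like suppressed")
node_b[addr] = node_a[addr]   # a second peer replicates the content
del node_a[addr]              # removing it from one host does not remove it from the network
print(fetch([node_a, node_b], addr) is not None)  # True
```

Because the name of the content is a hash of the content itself, there is no single URL or host whose removal makes the item unreachable, which is the property the report’s case studies lean on.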
For example, distributed, censorship-resistant storage does not help address problems related to bias in curation algorithms – content that doesn’t appear at the top of your feed might as well be invisible, even if it’s technically accessible.
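To make that curation point concrete: even if every item sits on censorship-resistant storage, whatever scoring function assembles the feed still determines what is effectively visible. A toy, hypothetical ranking sketch (the items and weights are invented; real platforms do not publish theirs):

```python
# All three items are stored and retrievable; only the scoring step decides what is seen.
items = [
    {"id": "council budget analysis", "engagement": 35},
    {"id": "celebrity gossip",        "engagement": 480},
    {"id": "viral meme",              "engagement": 210},
]

def score(item):
    # Hypothetical, opaque-to-users weighting that favours raw engagement.
    return item["engagement"]

feed = sorted(items, key=score, reverse=True)[:2]  # readers only ever see the top of the feed
print([i["id"] for i in feed])  # the budget analysis never surfaces, despite being available
```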

Source: The Decentralized Web

Decentralized Social Networks Sound Great. Too Bad They’ll Never Work | WIRED

Designing robust reward mechanisms that curate content so as to keep people informed rather than merely entertained remains an open problem. If distributed platforms could solve it, they could in principle tackle media challenges like echo chambers and filter bubbles; for now, these dilemmas remain a serious challenge for new systems.
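One hedged way to see why that design problem is hard: in a simplified, Steemit-style payout rule, each post’s share of a fixed reward pool is proportional to the voting stake it attracts, and nothing in the arithmetic distinguishes informative content from merely entertaining content. The rule and numbers below are assumptions for illustration, not Steemit’s actual formula:

```python
# Hypothetical stake-weighted reward split over one payout period.
reward_pool = 100.0  # tokens to distribute (assumed)

votes = {
    "in-depth budget report": [("alice", 50), ("bob", 20)],      # (voter, stake)
    "cute pet video":         [("carol", 300), ("dave", 150)],
}

weights = {post: sum(stake for _, stake in post_votes) for post, post_votes in votes.items()}
total_weight = sum(weights.values())
payouts = {post: reward_pool * w / total_weight for post, w in weights.items()}

print(payouts)  # the entertaining post captures most of the pool
```

Rewarding what people already engage with is easy to implement; rewarding what keeps them informed requires a signal the mechanism itself does not have.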

Source: Decentralized Social Networks Sound Great. Too Bad They’ll Never Work | WIRED

Is the Conservative Party deliberately distributing fake news in attack ads on Facebook?

Researchers from London School of Economics report about their ongoing research into political targeting on Facebook in the UK (the research is done in collaboration with WhoTargetsMe, a browser extension to measure political targeting on Facebook):

As we made clear in our first two posts, our analysis here is exploratory. It is, for example, unclear to what extent our dataset is representative of Conservatives’ online advertising throughout this campaign, and the WhoTargetsMe sample of potential voters, from whose Facebook feeds our data is scraped, may be skewed.

Bearing in mind these problems, we can say that, of the 820 exposures to ads paid for by Conservatives that we analysed, 28% (or 232 items) attacked Corbyn using facts that appear to be false or are clearly manipulated to confound the reader – and sometimes both.

Generally, the Conservatives used 73% (598) of their 820 ads exposed in our sample to attack Corbyn. They are not, of course, the only ones targeting opponents. As we have shown, both Labour and the Lib Dems have done the same. However, while ads by these other parties conveyed simplifying messages, portraying adversaries as weak, immoral or pro-elite, we couldn’t find, at least in our samples, pieces by them using baseless or misleading facts.

Source: Is the Conservative Party deliberately distributing fake news in attack ads on Facebook? | LSE Media Policy Project

User perspectives on social media data mining

What do social media users think about social media data mining? This article reports on focus group research in three European countries (the United Kingdom, Norway and Spain). The method created a space in which to make sense of the diverse findings of quantitative studies, which relate to individual differences (such as extent of social media use or awareness of social media data mining) and differences in social media data mining practices themselves (such as the type of data gathered, the purpose for which data are mined and whether transparent information about data mining is available). Moving beyond privacy and surveillance made it possible to identify a concern for fairness as a common trope among users, which informed their varying viewpoints on distinct data mining practices. The authors argue that this concern for fairness can be understood as contextual integrity in practice (Nissenbaum, 2009) and as part of broader concerns about well-being and social justice.

Source: Convergence – Helen Kennedy, Dag Elgesem, Cristina Miguel, 2017

Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States

What is the role of search in shaping opinion? Survey results indicate, among other findings, that:

1. The filter bubble argument is overstated, as Internet users expose themselves to a variety of opinions and viewpoints online and through a diversity of media. Search needs to be viewed in a context of multiple media.
2. Concerns over echo chambers are also overstated, as Internet users are exposed to diverse viewpoints both online and offline. Most users are not silenced by contrasting views, nor do they silence those who disagree with them.
3. Fake news has attracted disproportionate levels of concern, in light of people’s actual practices. Internet users are generally skeptical of information across all media and know how to check the accuracy and validity of information found through search, on social media, or on the Internet in general.

Source: Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States by William H. Dutton, Bianca Christin Reisdorf, Elizabeth Dubois, Grant Blank :: SSRN

Mahnke Skrubbeltrang, Grunnet, & Traasdahl Tarp: #RIPINSTAGRAM: Examining user’s counter-narratives opposing the introduction of algorithmic personalization on Instagram

When Instagram announced the implementation of algorithmic personalization on their platform, a heated debate arose. Several users instantly expressed their strong discontent under the hashtag #RIPINSTAGRAM. In this paper, we examine how users commented on the announcement of Instagram implementing algorithmic personalization. Drawing on the conceptual starting point of framing user comments as “counter-narratives” (Andrews, 2004), which oppose Instagram’s organizational narrative of improving the user experience, the study explores the main concerns users bring forth in greater detail. The two-step analysis draws on a total of 8,645 comments collected from Twitter and Instagram. The collected Twitter data were used to develop preliminary inductive categories describing users’ counter-narratives. Thereafter, we systematically coded all the Instagram data in order to enhance, adjust and revise the preliminary categories. This inductive coding approach (Mayring, 2000), combined with an in-depth qualitative analysis, resulted in the identification of the following four counter-narratives brought forth by users: 1) algorithmic hegemony; 2) violation of user autonomy; 3) prevalence of commercial interests; and 4) deification of mainstream. All of these counter-narratives are related to ongoing public debates regarding the social implications of algorithmic personalization. In conclusion, the paper suggests that the identified counter-narratives tell a story of resistance. While technological advancement is generally welcomed and celebrated, the findings of this study point towards a growing user resistance to algorithmic personalization.

[link to full text]

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance

By Maciej Ceglowski

The tech industry is in the middle of a massive, uncontrolled social experiment. Having made commercial mass surveillance the economic foundation of our industry, we are now learning how indiscriminate collections of personal data, and the machine learning algorithms they fuel, can be put to effective political use. Unfortunately, these experiments are being run in production. Our centralized technologies could help authoritarians more than they help democracy, and the very power of the tools we’ve built for persuasion makes it difficult for us to undo the damage done. What can concerned people in the tech industry do to seize a dwindling window of opportunity, and create a less monstrous online world?

 

Source: Build a Better Monster: Morality, Machine Learning, and Mass Surveillance