What do social media users think about social media data mining? This article reports on focus group research in three European countries (the United Kingdom, Norway and Spain). The method created a space in which to make sense of the diverse findings of quantitative studies, which relate to individual differences (such as extent of social media use or awareness of social media data mining) and differences in social media data mining practices themselves (such as the type of data gathered, the purpose for which data are mined and whether transparent information about data mining is available). Moving beyond privacy and surveillance made it possible to identify a concern for fairness as a common trope among users, which informed their varying viewpoints on distinct data mining practices. The authors argue that this concern for fairness can be understood as contextual integrity in practice (Nissenbaum, 2009) and as part of broader concerns about well-being and social justice.
What is the role of search in shaping opinion? Survey results indicate, among other things:

1. The filter bubble argument is overstated, as Internet users expose themselves to a variety of opinions and viewpoints online and through a diversity of media. Search needs to be viewed in a context of multiple media.

2. Concerns over echo chambers are also overstated, as Internet users are exposed to diverse viewpoints both online and offline. Most users are not silenced by contrasting views, nor do they silence those who disagree with them.

3. Fake news has attracted disproportionate levels of concern, in light of people's actual practices. Internet users are generally skeptical of information across all media and know how to check the accuracy and validity of information found through search, on social media, or on the Internet in general.
Source: William H. Dutton, Bianca Christin Reisdorf, Elizabeth Dubois, and Grant Blank, Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States (SSRN)
When Instagram announced the implementation of algorithmic personalization on its platform, a heated debate arose. Several users instantly expressed their strong discontent under the hashtag #RIPINSTAGRAM. In this paper, we examine how users commented on Instagram's announcement that it would implement algorithmic personalization. Drawing on the conceptual starting point of framing user comments as “counter-narratives” (Andrews, 2004), which oppose Instagram's organizational narrative of improving the user experience, the study explores in greater detail the main concerns users bring forth. The two-step analysis draws on a total of 8,645 comments collected from Twitter and Instagram. The collected Twitter data were used to develop preliminary inductive categories describing users' counter-narratives. Thereafter, we systematically coded all data extracted from Instagram in order to enhance, adjust and revise the preliminary categories. This inductive coding approach (Mayring, 2000), combined with an in-depth qualitative analysis, resulted in the identification of the following four counter-narratives brought forth by users: 1) algorithmic hegemony; 2) violation of user autonomy; 3) prevalence of commercial interests; and 4) deification of the mainstream. All of these counter-narratives relate to ongoing public debates regarding the social implications of algorithmic personalization. In conclusion, the paper suggests that the identified counter-narratives tell a story of resistance. While technological advancement is generally welcomed and celebrated, the findings of this study point toward growing user resistance to algorithmic personalization.
“The tech industry is in the middle of a massive, uncontrolled social experiment. Having made commercial mass surveillance the economic foundation of our industry, we are now learning how indiscriminate collections of personal data, and the machine learning algorithms they fuel, can be put to effective political use. Unfortunately, these experiments are being run in production. Our centralized technologies could help authoritarians more than they help democracy, and the very power of the tools we’ve built for persuasion makes it difficult for us to undo the damage done. What can concerned people in the tech industry do to seize a dwindling window of opportunity, and create a less monstrous online world?”
Edgar Meij, a senior data scientist on Bloomberg’s news search experience team, is working to explain the logic behind related entities in search results.
One of the first steps toward that goal is to design an algorithm that can explain the relationship between two terms, called entities, in plain English. In a paper co-authored with two researchers from the University of Amsterdam, Prof. Dr. Maarten de Rijke and Nikos Voskarides, he presents a methodology to do just that.
Natali Helberger and Kristina Irion in De Telegraaf:

Manufacturers of smart TVs barely warn users that their privacy is violated during normal viewing behavior. That warning comes from Natali Helberger, professor of Information Law at the University of Amsterdam (UvA), in a scientific article published in 'Telecommunications Policy'.
By midyear, The Times will begin an ambitious new effort to customize the delivery of news online by adjusting a reader’s experience to accommodate individual interests. What readers see when they come to The Times will depend on factors like the specific subjects they are most interested in, where they live or how frequently they come to the site.
from the comments:
“I do not want a “bespoke” NYTimes *experience*. I want the news. I want the newspaper with the editorial decisions of what is above the fold important news and what should be on page 12.
I pay for a subscription for a reason: the judgement and experience of the editors and writers that make this paper great. Don’t try to be Facebook or Twitter. Be the New York Times and do it right.”
“Seriously? You picked right now to limit our access and control what we can and can’t see when we most need availability of all information?”
“This sounds like an awful idea.”
About four years ago, Svenska Dagbladet, one of Sweden's top newspapers, was in free fall. The paper was experiencing depressed circulation, no digital revenue, and dwindling readership. When Fredric Karén took the job as editor-in-chief in 2013, the message from the board and the parent company, Scandinavian media conglomerate Schibsted Group, was clear: “Something radical had to be done to secure the newspaper's future,” he recalls. Since then, SvD has gotten back on track by bringing a digital-first mentality into the newsroom. Karén talks a lot about how investing heavily in technology allowed the paper to move forward and regain profitability. Mostly, though, he says he owes the paper's recovery to one thing: an algorithm that runs the news.
The recent election, which took place beneath a cloud of fake news, revealed that Americans cloister in like-minded online communities. Since then, it’s become increasingly fashionable to complain about the polarizing power of the Internet. Everyone from Katy Perry to Barack Obama to the Pope has lamented the social-media echo chamber and its corrosive effects on society.
If the Internet is truly tearing the nation apart, though, it’s hard to see that in the data. Plugged-in millennials aren’t the ones who seem to be getting more polarized, according to a new Stanford study. In fact, it’s the opposite: Over the past 20 years, political acrimony spiked among older Americans — the same people who are least likely to use the Internet.