Damian and Judith gave a talk at Bessensap, an annual event organized by the Dutch Science Foundation, where scholars present their work to science journalists. In their talk, they gave an overview of their finished and ongoing research, highlighting the discrepancy between popular beliefs about filter bubbles and algorithmic news recommendation and the empirical evidence.
Together with colleagues from IViR, Frederik published a new study, “An assessment of the Commission’s Proposal on Privacy and Electronic Communications” (by Frederik Zuiderveen Borgesius, Joris van Hoboken, Ronan Fahy, Kristina Irion, and Max Rozendaal).
This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the LIBE Committee, appraises the European Commission’s proposal for an ePrivacy Regulation. The study assesses whether the proposal would ensure that the right to the protection of personal data, the right to respect for private life and communications, and related rights enjoy a high standard of protection. The study also highlights the proposal’s potential benefits and drawbacks more generally. The proposed ePrivacy Regulation aims to protect privacy on the internet, and includes rules on, for instance, online tracking.
Read the full study here.
Frederik presented the preliminary results at the European Parliament. A video is available here (from 26:00 minutes on).
Earlier this year, Frederik also spoke at the European Parliament about the ePrivacy proposal, video here.
Damian wrote a blogpost in which he comments on the current debate on the role of algorithms. He argues that – while there are valid reasons to be concerned – critics who see algorithms as evil or scary per se miss the point and hinder a constructive debate.
On Tuesday, Damian gave a lecture, “Big Data: Why social scientists should care”, at the Amsterdam Research Initiative, discussing the role of Big Data in society as well as in research. He argued that social scientists, on the one hand, have to study so-called Big Data as a societal phenomenon, but on the other hand can also use these techniques to answer social-scientific research questions. Directly before, he had given a two-day workshop at Radboud University Nijmegen on using Python to answer social-scientific research questions.
When Instagram announced the implementation of algorithmic personalization on their platform, a heated debate arose. Several users instantly expressed their strong discontent under the hashtag #RIPINSTAGRAM. In this paper, we examine how users commented on the announcement of Instagram implementing algorithmic personalization. Drawing on the conceptual starting point of framing user comments as “counter-narratives” (Andrews, 2004), which oppose Instagram’s organizational narrative of improving the user experience, the study explores in greater detail the main concerns users bring forth. The two-step analysis draws on a total of 8,645 comments collected from Twitter and Instagram. The collected Twitter data were used to develop preliminary inductive categories describing users’ counter-narratives. Thereafter, we systematically coded all data extracted from Instagram in order to enhance, adjust, and revise the preliminary categories. This inductive coding approach (Mayring, 2000), combined with an in-depth qualitative analysis, resulted in the identification of the following four counter-narratives brought forth by users: 1) algorithmic hegemony; 2) violation of user autonomy; 3) prevalence of commercial interests; and 4) deification of mainstream. All of these counter-narratives are related to ongoing public debates regarding the social implications of algorithmic personalization. In conclusion, the paper suggests that the identified counter-narratives tell a story of resistance. While technological advancement is generally welcomed and celebrated, the findings of this study point towards a growing user resistance to algorithmic personalization.
Journalist Maurits Martijn wrote a piece on Cambridge Analytica, the online political microtargeting company. The company essentially applies behavioural targeting marketing techniques to political campaigns. Martijn questions whether Cambridge Analytica is really that powerful, and whether it really caused Trump to win the US elections. Both Claes de Vreese and Frederik Zuiderveen Borgesius are quoted in the piece.
The Court of Justice of the European Union decided an important case on 21 December 2016. In short, the Court prohibits mass metadata surveillance. The Court says that EU member states are not allowed to impose an obligation on telecommunications companies to store metadata of all telecom-users.
The Court says that mass metadata surveillance, even if it may help to catch criminals or terrorists, violates people’s privacy and data protection rights. The Court adds that metadata are just as sensitive as the content of communications. Metadata show, for instance, who you call and when. The Court says that such metadata are “no less sensitive, having regard to the right to privacy, than the actual content of communications.”
Frederik commented on the case for Dutch media:
- NOS: http://nos.nl/artikel/2149484-eu-hof-locatie-en-telecomdata-mogen-niet-zomaar-worden-verzameld.html
- Tweakers: https://tweakers.net/nieuws/119273/eu-hof-massaal-verzamelen-telecomgegevens-is-niet-toegestaan.html
- BNR Nieuwsradio (around 37 minutes): https://www.bnr.nl/player/audio/10053540/10315699
The full judgment by the Court of Justice of the European Union is here: http://curia.europa.eu/juris/document/document.jsf?text=&docid=186492&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=566657
For an analysis of metadata surveillance and human rights, see:
Frederik J. Zuiderveen Borgesius and Axel Arnbak, New Data Security Requirements and the Proceduralization of Mass Surveillance Law after the European Data Retention Case, Amsterdam Law School Research Paper No. 2015-41. https://ssrn.com/abstract=2678860
This week, there were a lot of media appearances related to the discussion of the role of filter bubbles and fake news in election campaigns. Damian is quoted in de Volkskrant and was interviewed for the television broadcast EenVandaag (around minute 27.00). Sanne was interviewed by the NOS. And we published a piece on Stuk rood vlees.
On 9 November, Frederik Zuiderveen Borgesius participates in the Privacy Platform in Brussels (‘Automated profiling after the GDPR, more regulation needed?’), organized by Sophie in ‘t Veld. The event will address the potential risks of automated profiling, whether the provisions on profiling in the General Data Protection Regulation will suffice, and what can be expected of the revision of the ePrivacy Directive.
Chair: Ms Sophie in ‘t Veld MEP
- Frederik Borgesius: Researcher at the Institute for Information Law (IViR) at the University of Amsterdam
- Jeremy Rollison: Director EU Government Affairs at Microsoft
- Sachiko Scheuing: Co-chair of FEDMA, Federation European Direct Advertisement Associations
- Tal Zarsky: Professor at the University of Haifa
More info: http://www.sophieintveld.eu/agenda#
This weekend, there was a lot of media attention for filter bubbles. On Friday, quality newspaper Trouw published a two-page story based on interviews with Judith, Damian, and Frederik. The piece reflected some outcomes of our research project, mainly that filter bubbles – at least in the Netherlands, at this point in time – are less of a problem than often assumed.
On Saturday, Damian was interviewed on Radio 1 (Argos) about the same topic, as well as about political microtargeting. You can listen to the fragment here.
Coincidentally, and unrelated to our project, quality newspaper de Volkskrant also published a large story on filter bubbles in music. It discussed the relationship between the usage of Spotify and music taste, and also hinted at the need for diversity in a music recommendation algorithm, to prevent it from becoming ‘boring’.