On 4 August 2005, the police department of Memphis, Tennessee, made so many arrests over a three-hour period that it ran out of vehicles to transport the detainees to jail. Three days later, 1,200 people had been arrested across the city – a new police department record. Operation Blue Crush was hailed as a huge success. Larry Godwin, the city’s new police director, quickly rolled out the scheme, and by 2011 crime across the city had fallen by 24%. When it was revealed Blue Crush faced budget cuts earlier th…
We need regulation to make sure all citizens gain equal access to the networks of opportunity and services they need. We also need to know that all public speech and expression will be treated transparently, even if they cannot be treated equally. This is a basic requirement for a functioning democracy. For this to happen, there has to be at least some agreement that the responsibilities in this area are shifting. The people who built these platform companies did not set out to take over the responsibilities of a free press. In fact, they are rather alarmed that this is the outcome of their engineering success.
This contribution introduces the mathematical theory of information that ‘informs’ computer systems, the internet and all that has been built upon them. The aim of the author is to invite lawyers to reconsider the grammar and alphabet of modern positive law and of the Rule of Law, in the face of the alternative grammar and alphabet of a data-driven society. Instead of either embracing or rejecting the technological transitions that reconfigure the operations of the law, this article argues that lawyers should collaborate with the computer scientists who engineer and design the affordances of our new onlife world. This is crucial if we want to sustain democratic participation in law-making, contestability of legal effect, and transparency of how citizens may be manipulated by the invisible computational backbone of our rapidly and radically changing world.
China is considering a new “social credit” system, designed to rate everyone’s trustworthiness. Many fear that it will become a tool of social control — but in reality it has a lot in common with the algorithms and systems that score and classify us all every day. Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data — and more algorithms — we need to ensure that we reap the benefits while avoiding the harms.
Once again, the vzbv has issued a formal warning to Google over two clauses in its privacy policy. At issue is the collection and use of personal data. Two of the terms of use contained wording that, in the vzbv’s view, impermissibly restricted consumers’ rights.
Friday 4th December 2015, 19:00
ALGORITHMS ARE NOT ANGELS
with Matthew Fuller and Graham Harwood
Academy of Fine Arts Vienna, Atelierhaus, Lehargasse 8, A-1060

The continuation of the series of events on the regulatory politics of code and machines brings to Vienna two researchers and artists who have been investigating these issues since the mid-90s. Matthew Fuller and Graham Harwood, from the Centre for Cultural Studies, Goldsmiths University London, call for a better understanding of digital systems in culture, politics and everyday life, and demonstrate that the idea of neutral objectivity in connection with algorithms is misleading.

Entry: Free
Konrad Becker, Felix Stalder, World-Information Institute, in cooperation with Kunst und Digitale Medien at the Academy of Fine Arts Vienna. Supported by SHIFT Vienna and BKA Kunst.

Saturday 5th December 2015
Invisible Algorithm College
Ephemeral Explorations on Rule-based Terrain with Matthew Fuller and Graham Harwood
Enrolment by In Situ Admission Formula only

International Conference, Friday 25th September 2015, at TU Karlsplatz, 1040 Vienna
A conference that investigates the growing influence of digital control systems and their cascading chains of agency on cultural and social reality. #algoregimes

Because of their technical nature, algorithms are often presented as a guarantee of objectivity, particularly on controversial issues. But the way in which data for an algorithm is processed affects the results, and what an index does or does not take account of amounts to a decision about inclusion or exclusion. Algorithms raise numerous questions regarding their methodology and the claims to knowledge associated with them.
Even if automated processes are ideally adapted to the computer logic of syllogisms, the complexity of the world outside cannot be reduced to unambiguous statements that can be combined easily.

Addressing issues of the politics of algorithms, normative classifications and algorithmic governmentality, “Algorithmic Regimes and Generative Strategies” wants to mobilize the critical perspectives of researchers, artists and activists, and to open the field for a wider and more diverse debate.
US insurers have been told that the use of external data in price optimization models will be subject to detailed regulatory scrutiny http://ow.ly/Vn0c4 (p. 74)
Potential Questions for Regulators to Ask Regarding the Use of Models in P&C Rate Filings
Insurers might use a model in the development of proposed rates and rating factors. The Task Force offers some potential questions a regulator could ask regarding the use of models in rate proposals.
Questions may include, but are not limited to, the following:
1. Please provide a high-level description of the workings of the model that was used to select rates and rating factors that differ from the indicated rates.
2. What is the purpose of the model? What does the model seek to maximize or minimize (e.g. underwriting profit, retention, or another objective)? Please explain.
3. How were the input variables for your model selected?
a. What is the support for the model variables, including the predictive values and error statistics for the model variables?
b. Are the parameters loss related, expense related, or related to the risk in some other way?
4. Which of the input variables are internal (customer-provided or deduced from customer-provided information) or external?
a. Identify whether each input variable is used in your rating plan.
b. For each external variable, please identify:
i. The owner or vendor of the data (e.g. Department of Motor Vehicles).
ii. Which variables are subject to the requirements of the federal Fair Credit Reporting Act.
iii. How you ensure that the data are complete and accurate.
iv. The framework, if any, which provides consumers a means of correcting errors in the data pertaining to them.
Model Constraints & Output
5. At what level of granularity is the model output produced (e.g. the class-plan level, individual rating factors, or some other level, such as household or demographic segment, that differs from the rating plan)?
6. What are the limits (or constraints) for the selected rating plan factors, if any?
7. How do the modeled values compare to the company experience?
Note: Regulators should evaluate the particular filing and associated costs to insurers to determine the extent of questioning needed. Regulators should also consider the potential proprietary nature of modeling information and grant confidentiality as appropriate and if allowed under state law.
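Question 7 above amounts to an actual-versus-expected comparison: aggregating the model’s predicted losses and the company’s observed losses by rating variable and inspecting the ratio. As a minimal sketch only (all policy records, territory names and figures below are hypothetical, not from any actual filing), such a comparison might look like:

```python
# Hypothetical sketch of an actual-vs-expected check by rating variable.
# All data here is illustrative; a real filing would use the insurer's
# policy-level experience and the model's fitted pure premiums.

from collections import defaultdict

# Illustrative policy records: (territory, exposure, actual_losses)
policies = [
    ("urban", 120.0, 66_000.0),
    ("urban", 80.0, 41_000.0),
    ("rural", 150.0, 48_000.0),
    ("rural", 100.0, 35_000.0),
]

# Modeled pure premium per unit of exposure, by territory
# (stand-in for the output of the insurer's predictive model).
modeled_pure_premium = {"urban": 520.0, "rural": 330.0}

def actual_vs_expected(policies, modeled_pure_premium):
    """Aggregate actual and modeled losses by rating variable and
    return the actual-to-expected ratio for each level."""
    actual = defaultdict(float)
    expected = defaultdict(float)
    for territory, exposure, losses in policies:
        actual[territory] += losses
        expected[territory] += exposure * modeled_pure_premium[territory]
    return {t: actual[t] / expected[t] for t in actual}

ratios = actual_vs_expected(policies, modeled_pure_premium)
for territory, ratio in sorted(ratios.items()):
    # A ratio far from 1.0 signals that the model diverges from
    # the company's experience for that segment.
    print(f"{territory}: actual/expected = {ratio:.3f}")
```

A regulator reviewing a filing would expect ratios near 1.0 across segments; systematic deviations in particular segments suggest the model over- or under-predicts there.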
At the weekend I spoke at the Zündfunk Netzkongress about filter bubbles, and why I consider them absolutely…
Empiricists may be frustrated by the ‘black box’ nature of algorithmic decision-making; they can work with legal scholars and activists to open up certain aspects of it (via freedom of information and fair data practices). Journalists, too, have been teaming up with computer programmers and social scientists to expose new privacy-violating technologies of data collection, analysis, and use – and to push regulators to crack down on the worst offenders.

Researchers are going beyond the analysis of extant data, and joining coalitions of watchdogs, archivists, open data activists, and public interest attorneys, to assure a more balanced set of ‘raw materials’ for analysis, synthesis, and critique. Social scientists and others must commit to the vital, long-term project of assuring that algorithms are producing fair and relevant documentation; otherwise states, banks, insurance companies and other big, powerful actors will make and own more and more inaccessible data about society and people. Algorithmic accountability is a big-tent project, requiring the skills of theorists and practitioners, lawyers, social scientists, journalists and others. It’s an urgent, global cause with committed and mobilised experts looking for support.

The world is full of algorithmically driven decisions. One errant or discriminatory piece of information can wreck someone’s employment or credit prospects. It is vital that citizens be empowered to see and regulate the digital dossiers of business giants and government agencies. Even if one believes that no information should be ‘deleted’ – that every slip and mistake anyone makes should be on a permanent record for ever – that still leaves important decisions to be made about the processing of the data. Algorithms can be made more accountable, respecting rights of fairness and dignity for which generations have fought.
The challenge is not technical, but political, and the first step is law that empowers people to see and challenge what the algorithms are saying about us.