Category Archives: price discrimination

Attack discrimination with smarter machine learning

If the goal is for the two groups to receive the same number of loans, then a natural criterion is demographic parity, where the bank uses loan thresholds that yield the same fraction of loans to each group. Or, as a computer scientist might put it, the “positive rate” is the same across both groups. In some contexts, this might be the right goal. In the situation in the diagram, though, there’s still something problematic: a demographic parity constraint only looks at loans given, not rates at which loans…

Source: Attack discrimination with smarter machine learning
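The demographic parity criterion in the excerpt can be made concrete with a toy sketch: compute the “positive rate” for each group under per-group thresholds and check that the rates match. The score distributions and thresholds below are invented for illustration, not taken from the article.

```python
# Toy check of demographic parity (all data invented for this sketch):
# each applicant has a credit score; the bank grants a loan when the score
# clears that group's threshold. Demographic parity asks that the fraction
# granted (the "positive rate") be equal across groups.

def positive_rate(scores, threshold):
    """Fraction of applicants whose score meets or exceeds the threshold."""
    granted = [s for s in scores if s >= threshold]
    return len(granted) / len(scores)

# Hypothetical score distributions for two groups.
group_a = [45, 55, 60, 62, 70, 75, 80, 85, 90, 95]
group_b = [40, 48, 52, 58, 61, 66, 72, 78, 83, 88]

# With a single shared threshold, the positive rates can differ...
rate_a = positive_rate(group_a, 60)   # 0.8
rate_b = positive_rate(group_b, 60)   # 0.6

# ...so demographic parity instead picks per-group thresholds that equalize them.
rate_a_parity = positive_rate(group_a, 62)   # 0.7
rate_b_parity = positive_rate(group_b, 58)   # 0.7
print(rate_a, rate_b, rate_a_parity, rate_b_parity)
```

As the excerpt notes, equalizing only the fraction of loans granted says nothing about how each group fares afterward, which is exactly the criticism the article goes on to make.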

How to Hold Algorithms Accountable

Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and…

Source: How to Hold Algorithms Accountable

Amazon Says It Puts Customers First. But Its Pricing Algorithm Doesn’t. – ProPublica

One that substantially favors Amazon and sellers it charges for services, an examination by ProPublica found. Amazon often says it seeks to be “Earth’s most customer-centric company.” Jeffrey P. Bezos, its founder and CEO, has been known to put an empty chair in meetings to remind employees of the need to focus on the customer. But in fact, the company appears to be using its market power and proprietary algorithm to advantage itself at the expense of sellers and many customers. Unseen and almost wholly…

Source: Amazon Says It Puts Customers First. But Its Pricing Algorithm Doesn’t. – ProPublica

Amazon’s $23,698,655.93 book about flies

At first I thought it was a joke – a graduate student with too much time on their hands. But there were TWO new copies for sale, each offered for well over a million dollars. And the two sellers seemed not only legit, but fairly big time (over 8,000 and 125,000 ratings in the last year respectively). The prices looked random – suggesting they were set by a computer. But how did they get so out of whack?

Amazingly, when I reloaded the page the next day, both prices had gone UP! Each was now nearly $2.8 million. And whereas previously the prices were $400,000 apart, they were now within $5,000 of each other. Now I was intrigued, and I started to follow the page incessantly. By the end of the day the higher-priced copy had gone up again, this time to $3,536,675.57. And now a pattern was emerging.

On the day we discovered the million-dollar prices, the copy offered by bordeebook was 1.270589 times the price of the copy offered by profnath. And now the bordeebook copy was 1.270589 times profnath’s again. So clearly at least one of the sellers was setting their price algorithmically in response to changes in the other’s price. I continued to watch carefully and the full pattern emerged.

Once a day profnath set their price to be 0.9983 times bordeebook’s price. The prices would remain close for several hours, until bordeebook “noticed” profnath’s change and elevated their price to 1.270589 times profnath’s higher price. The pattern continued perfectly for the next week.

Source: Amazon’s $23,698,655.93 book about flies
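The feedback loop described in the excerpt is easy to simulate. The two multipliers (0.9983 and 1.270589) come from the article; the starting price is an arbitrary assumption for the sketch.

```python
# Minimal simulation of the two-seller repricing loop described above.
# profnath undercuts slightly; bordeebook then reprices well above profnath.

def one_day(bordeebook_price):
    """One daily cycle of the two repricing rules from the article."""
    profnath_price = 0.9983 * bordeebook_price
    bordeebook_price = 1.270589 * profnath_price
    return profnath_price, bordeebook_price

price = 1_000_000.0  # hypothetical starting price for bordeebook's copy
for day in range(7):
    profnath, price = one_day(price)

# Each full cycle multiplies bordeebook's price by 0.9983 * 1.270589 ≈ 1.2684,
# so with no sanity ceiling the prices grow roughly 27% per day: runaway feedback.
print(f"after a week: ${price:,.2f}")
```

Since the combined daily multiplier exceeds 1, the loop diverges no matter where it starts, which is how a $100-class biology text drifted past $23 million.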

The risks of letting algorithms judge us (Opinion) –

China is considering a new “social credit” system, designed to rate everyone’s trustworthiness. Many fear that it will become a tool of social control — but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data — and more algorithms — we need to ensure that we reap the benefits while avoiding the harms.

Source: The risks of letting algorithms judge us (Opinion) –

US insurers told use of external data in price opt. models will be subject to detailed regulatory scrutiny



Potential Questions for Regulators to Ask Regarding the Use of Models in P&C Rate Filings
Insurers might use a model in the development of proposed rates and rating factors. The Task Force offers some potential questions a regulator could ask regarding the use of models in rate proposals.
Questions may include, but are not limited to, the following:
Model Description
1. Please provide a high-level description of the workings of the model that was used to select rates and rating factors that differ from the indicated.
2. What is the purpose of the model? What does the model seek to maximize or minimize (e.g., underwriting profit, retention, or something else)? Please explain.

Model Variables
3. How were the input variables for your model selected?
a. What is the support for the model variables, including the predictive values and error statistics for the model variables?
b. Are the parameters loss related, expense related, or related to the risk in some other way?

4. Which of the input variables are internal (customer-provided or deduced from customer-provided information) or external?
a. Identify whether each input variable is used in your rating plan.
b. For each external variable, please identify:
i. The owner or vendor of the data (e.g. Department of Motor Vehicles).
ii. Which variables are subject to the requirements of the federal Fair Credit Reporting Act.
iii. How you ensure that the data are complete and accurate.
iv. The framework, if any, which provides consumers a means of correcting errors in the data pertaining to them.

Model Constraints & Output
5. At what level of granularity is your model output produced (e.g., the class plan level, individual rating factors, or some other level, such as household or demographic segment, that differs from the rating plan)?
6. What are the limits (or constraints) for the selected rating plan factors, if any?
7. How do the modeled values compare to the company experience?

Note: Regulators should evaluate the particular filing and associated costs to insurers to determine the extent of questioning needed. Regulators should also consider the potential proprietary nature of modeling information and grant confidentiality as appropriate and if allowed under state law.

Hidden truths about car insurance – The Boston Globe

You’ll also be charged more if big data says you won’t notice. In yet another bid to maximize profits, some insurance companies have begun in the past few years to use a new technique to determine your sensitivity to prices. That way, they can base your premiums not just on your risk profile or credit score, but also on how high a price you’re willing to tolerate.

Source: Hidden truths about car insurance – The Boston Globe
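The “price optimization” practice the excerpt describes can be illustrated with a toy model: the premium reflects not just expected claim cost but an estimated probability that the customer will shop around. The function, field names, and all numbers below are invented for the sketch and are not from the article or any real insurer.

```python
# Toy illustration of premium "price optimization" (all numbers invented):
# customers judged less likely to comparison-shop get a larger markup on
# top of the same underlying risk cost.

def optimized_premium(expected_cost, shop_around_prob, base_markup=0.10):
    """Charge a higher markup to customers estimated to be less price-sensitive."""
    # Less likely to shop around -> will tolerate a higher price -> bigger markup.
    markup = base_markup + 0.15 * (1.0 - shop_around_prob)
    return expected_cost * (1.0 + markup)

loyal = optimized_premium(1000.0, shop_around_prob=0.1)    # rarely shops around
comparer = optimized_premium(1000.0, shop_around_prob=0.9) # shops around often
print(loyal, comparer)  # identical risk profile, different premiums
```

The point of the sketch is the asymmetry: two customers with identical expected costs pay different premiums purely because of predicted price tolerance, which is what regulators flag in the question list above.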

Research | Price Discrimination

This is Northeastern University’s personalization research page for our price discrimination project.

Today, many e-commerce websites personalize their content, including Netflix (movie recommendations), Amazon (product suggestions), and Yelp (business reviews). In many cases, personalization provides advantages for users: for example, when a user searches for an ambiguous query such as “router,” Amazon may be able to suggest the woodworking tool instead of the networking device. However, personalization on e-commerce sites may also be used to the user’s disadvantage by manipulating the products shown (price steering) or by customizing the prices of products (price discrimination). Unfortunately, today, we lack the tools and techniques necessary to be able to detect such behavior.

In this paper, we make three contributions towards addressing this problem. First, we develop a methodology for accurately measuring when price steering and discrimination occur and implement it for a variety of e-commerce websites. While it may seem conceptually simple to detect differences between users’ results, accurately attributing these differences to price discrimination and steering requires correctly addressing a number of sources of noise. Second, we use the accounts and cookies of over 300 real-world users to detect price steering and discrimination on 16 popular e-commerce sites. We find evidence for some form of personalization on nine of these e-commerce sites. Third, we investigate the effect of user behaviors on personalization. We create fake accounts to simulate different user features including web browser/OS choice, owning an account, and history of purchased or viewed products. Overall, we find numerous instances of price steering and discrimination on a variety of top e-commerce sites.

Source: Research | Price Discrimination
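The noise-control idea in the abstract can be sketched in miniature: two identical “control” profiles establish how much prices vary for unrelated reasons (A/B tests, stock changes), and a test profile is flagged only when its deviation clearly exceeds that baseline. All prices and the flagging threshold below are fabricated for illustration and are not the paper’s actual methodology.

```python
# Toy version of noise-controlled personalization measurement (data invented):
# differences between two identical clean profiles give a noise floor;
# a profile with distinguishing features (e.g., purchase history) is flagged
# only if its price differences stand well clear of that floor.

def mean_abs_diff(prices_x, prices_y):
    """Average absolute price difference over the same product list."""
    return sum(abs(x - y) for x, y in zip(prices_x, prices_y)) / len(prices_x)

control_1 = [19.99, 54.00, 120.50, 7.25]   # clean profile
control_2 = [19.99, 54.50, 120.50, 7.25]   # identical twin: differences = noise
test_user = [21.99, 59.00, 132.00, 7.99]   # profile with purchase history

baseline = mean_abs_diff(control_1, control_2)   # noise floor
deviation = mean_abs_diff(control_1, test_user)

# Attribute the gap to personalization only when it clearly exceeds the
# noise floor (the factor of 2 is an arbitrary choice for this sketch).
personalized = deviation > 2 * baseline
print(baseline, deviation, personalized)
```

Running controls in parallel with every test measurement is what lets the paper attribute observed differences to steering or discrimination rather than to ordinary site churn.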