Leaked 2017 document reveals FB Australia’s intent to exploit teens’ words, images.
About four years ago, Svenska Dagbladet, one of Sweden’s top newspapers, was in free fall. Circulation was slumping, digital revenue was nonexistent, and readership was dwindling. When Fredric Káren took the job as editor-in-chief in 2013, the message from the board and the parent company, Scandinavian media conglomerate Schibsted Group, was clear: “Something radical had to be done to secure the newspaper’s future,” he recalls. Since then, SvD has gotten back on track by bringing a digital-first mentality into the newsroom. Káren talks a lot about how investing heavily in technology allowed the paper to move forward and regain profitability. Mostly, though, he says he owes the paper’s recovery to one thing: an algorithm that runs the news.
“This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go,” said professor Jonathan Albright. Albright, an assistant professor and data scientist at Elon University, started digging into fake news sites after Donald Trump was elected president. Through extensive research and interviews with Albright and other k
If we now demand that algorithms be made better in this applied sense, we are only demanding that the program built into them should work better. But this program is not just harmless efficiency: the definition of the problems, and of the possible solutions, almost always corresponds to a neoliberal world view. By this I mean three things. First, society is individualized. Everything is attributed to individual, specifiable persons, separated from one another by their differences, and action means first and foremost individual action. Second, the individuals so identified are placed in a competing relationship with one another, through all sorts of rankings on which one’s own position relative to that of others can rise or fall. And third, the group or the collective – which expresses the consciousness of its members as related to one another – is replaced by the aggregate, which is essentially formed without the knowledge of the actors: either because it is supposed to emerge spontaneously, as Friedrich von Hayek thought, or because it is constructed behind people’s backs in the closed depths of the data centers, visible to only a few actors.
If the goal is for the two groups to receive the same number of loans, then a natural criterion is demographic parity, where the bank uses loan thresholds that yield the same fraction of loans to each group. Or, as a computer scientist might put it, the “positive rate” is the same across both groups. In some contexts, this might be the right goal. In the situation in the diagram, though, there’s still something problematic: a demographic parity constraint only looks at loans given, not rates at which loa
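The demographic parity criterion described above can be sketched concretely: pick a separate score threshold for each group such that both groups end up with the same approval fraction. The scores, group names, and target rate below are illustrative assumptions, not data from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical credit-score distributions for two groups (purely illustrative).
scores_a = rng.normal(600, 50, 1000)
scores_b = rng.normal(550, 50, 1000)

def threshold_for_rate(scores, positive_rate):
    """Return the score cutoff that approves roughly `positive_rate` of applicants."""
    return np.quantile(scores, 1.0 - positive_rate)

target_rate = 0.30  # approve 30% of each group
t_a = threshold_for_rate(scores_a, target_rate)
t_b = threshold_for_rate(scores_b, target_rate)

rate_a = np.mean(scores_a >= t_a)
rate_b = np.mean(scores_b >= t_b)
# Demographic parity: the positive rates match even though the thresholds differ,
# because the cutoff adapts to each group's score distribution.
```

Note that the two thresholds generally differ; the constraint equalizes only the positive rate, which is exactly why the passage flags it as incomplete — it says nothing about whether the loans go to applicants who would actually repay.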
But we will not see computers acquire minds anytime soon, and in the meantime we will end up accommodating the formalist methodologies of computer algorithms. The problem is one of ambiguity as much as nonneutrality. A reductive ontology of the world emerges, containing aspects both obvious and dubious.
Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work, which had just been published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make – a modern version of what philosophers call “the trolley problem.” The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that society would consider ethical. We might also build a system that allows people to interact with the artificial intelligence (AI) and test its ethics by asking questions or watching it behave.
Microsoft’s newest online AI, CaptionBot, tries to identify what’s in an uploaded photo, using two recognition APIs recently released by Microsoft Cognitive Services for app developers: “Computer Vision” and “Emotion.” But while Microsoft brags that its AI “can understand thousands of objects, as well as the relationships between them,” bloggers are also sharing funny examples of CaptionBot’s many mistakes. While it correctly identified Bea Arthur, Ozzy Osbourne and Joan Jett, and a movie poster with Arn