
GDPR is coming: What’s the Deal with EU’s “Right to Explanation”?

The movies Netflix recommends, the advertisements we are shown after a Google search, the random song Spotify plays next: most of these things are automatically “decided” by algorithms, and we know that. But have you ever stopped to think about the rationale behind those choices, about why Netflix’s algorithm went for Moonrise Kingdom instead of Kill Bill? New EU regulation will allow anybody to ask that question, and get an answer.

The new rules are included in the General Data Protection Regulation (GDPR), an act that will become enforceable across the EU on 25 May 2018. Recital 71 of the regulation establishes the new concept of a “right to explanation.” That is: when a decision about a person is taken in an automated fashion, that person has the right “to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment, and to challenge the decision.”


Recital 71 of the EU’s General Data Protection Regulation (GDPR), to become enforceable across Europe in 2018

Recital 71 was conceived with more than just Netflix or YouTube in mind. Automated decisions are becoming the norm in critical realms such as job recruiting, credit scoring and medical diagnostics. Although they are touted as impartial executors, the algorithms powering those decisions have sometimes been found to be spoiled by bias, for instance showing ads for high-paying jobs only to men, or classifying black inmates as more likely to reoffend than their white counterparts.


Job recruiting is one of the domains affected by the increasing use of automated screening, which has sometimes resulted in discriminatory or unfair decisions (image via bit.ly/2bC31R2)

So far it has been hard to challenge machine-powered discriminatory choices, given that most algorithms are proprietary technologies shielded from scrutiny, a model dubbed the “black box”. Now, requiring that companies explain why an automated outcome happened could help expose and solve that kind of problem. And the possibility of bringing a flesh-and-blood human into the process might sort out another issue: the lack of real accountability for automated mess-ups.

“I as a person can be responsible for what I do, but we cannot punish an algorithm that makes an illicit or discriminatory decision,” explains Lydia Nicholas, a senior researcher in Nesta’s Collective Intelligence team. “If someone makes a racist judgement we can punish them; with an algorithm it’s more complicated to understand who’s responsible, because the algorithm can’t take the fault. That’s why we needed these additional rules.”

There are some open questions, though. The main one is whether automated decisions can always be logically expounded. Take companies that use an increasingly popular AI technique called machine learning. Machine learning algorithms are trained to deliver decisions by feeding them lots of data and letting them, with various degrees of independence, find significant correlations by themselves.
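To make that concrete, here is a minimal, hypothetical sketch in Python with scikit-learn (not taken from any company mentioned in this article): synthetic data stands in for, say, past loan applications, and a gradient-boosted model is simply left to find its own correlations.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
import numpy as np

# Hypothetical, synthetic data standing in for historical cases
# (e.g. anonymised applicant features and past approve/reject outcomes).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000)) > 0

# The model is fed the data and left to find correlations on its own.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# It now "decides" new cases, but the learned rule is an ensemble of trees,
# not a human-readable justification for any single decision.
print(model.predict(X_test[:1]))              # the decision for one case
print("accuracy:", model.score(X_test, y_test))
```

The resulting “decision rule” is hundreds of trees: accurate, perhaps, but with no built-in explanation for any individual outcome.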


The trouble is that such correlations, and the features that a machine learning system singles out as relevant to its choices, are not always clear, let alone explainable. The issue is so pressing that Toyota recently sponsored an MIT project to develop self-driving cars capable of explaining why they took a certain decision, an endeavour made all the more complicated by the fact that AIs’ conversational capabilities are still desperately subpar. A similar initiative was launched by DARPA in August.
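For a sense of what generic “explanation” tooling offers today, here is another small, hypothetical sketch (not the MIT or DARPA work mentioned above) using permutation importance: it only reports which inputs mattered to the model overall, which is still a long way from telling an individual why their particular decision came out the way it did.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Hypothetical synthetic data standing in for, say, loan applications.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = GradientBoostingClassifier().fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops. This yields a global ranking of inputs, not a
# reason for any one applicant's outcome.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {score:.3f}")
```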

“There are very few algorithms that are linear in how they decide and, for instance, make decisions directly linking education to income and creditworthiness,” says Bryce Goodman, a researcher at the Oxford Internet Institute who devoted a paper to the new regulation. “Machine learning algorithms just discover things by trial and error. Very often there’s nothing literally ‘intelligent’ in the way they take decisions. This makes it basically impossible to explain their choices in human terms.”

There are several possible outcomes to this situation. Companies and developers, for instance, may take this as a chance to become more aware of the unintentional biases entrenched in their algorithms, and put in place stronger human oversight and auditing procedures. (Some universities are already developing tools to check a posteriori whether an algorithm is biased.)
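One very simple form such an after-the-fact check could take is sketched below, under assumed column names and using the common “four-fifths” rule of thumb: compare a system’s rate of positive decisions across groups and flag large gaps for human review.

```python
import pandas as pd

# Hypothetical audit log of automated decisions; the column names are assumptions.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],  # protected attribute
    "approved": [1,    1,   0,   1,   0,   0,   1,   0],   # the algorithm's output
})

# Positive-decision rate per group, and the ratio between the worst- and
# best-treated groups (the "disparate impact" ratio).
rates = decisions.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:  # the so-called four-fifths rule of thumb
    print("warning: outcomes may be skewed against one group; review needed")
```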

“In a way, this could end up creating a new job: auditors with computer science training who specialise in detecting discrimination in algorithms,” Goodman says.

 


Companies will also have to abandon their current “black box” approach and make the inner workings of their algorithms available to the public. Nicholas says that, over time, algorithms could even be subjected to “kitemarking systems” like those used in food safety. Such arrangements would create a series of standards that all automated decision-making systems would have to abide by in order to be considered legitimate.

“That would not necessarily be the end of it: in the same way a lot of food companies did with their products, software companies could design around existing problems,” Nicholas explains. “They could draft white papers, they could write descriptions to make something sound better than it is.”

"HAL, could you please explain me why you didn't open the pod bay doors?" (image via bit.ly/2bBOt8P)

“HAL, could you please explain to me why you didn’t open the pod bay doors?” (image via bit.ly/2bBOt8P)

Still, other experts worry that the regulation, and a possible subsequent spike in red tape, will stunt the progress of machine learning research and adoption.

“This could end up hurting any automated process, anything made without human intervention, including self-driving cars,” says Daniel Castro, a director at the Center for Data Innovation, a research think-tank in Washington DC. “Mass amounts of efficiency could be lost.”

Castro believes that the GDPR would impose much stricter standards on algorithms than those required of human decision-makers. He thinks hidden, automated biases should be defeated by the law of the market rather than by legal rigidity.

“The role of public policy should be to create an environment that minimises errors: a competitive environment where, if you hire a bad candidate because of a skewed algorithmic decision, you lose money, and vice versa, if you reject a good candidate, you lose money,” he says. “The EU’s approach seems too punitive.”

BY SHACK15

Co-working space and blog dedicated to all things data science.
