Google’s research chief questions value of explainable AI

27 June 2017

As machine learning and AI become more ubiquitous, there are growing calls for the technologies to explain themselves in human terms.

Despite being used to make life-altering decisions from medical diagnoses to loan limits, the inner workings of various machine learning architectures – including deep learning, neural networks and probabilistic graphical models – are incredibly complex and increasingly opaque.

As these techniques improve, often by themselves, revealing their inner workings becomes more and more difficult. They have become a ‘black box’, according to growing numbers of scientists, governments and concerned citizens.

According to some, there is a need for these systems to expose their decision-making process and be ‘explainable’ to non-experts, an approach known as ‘explainable artificial intelligence’, or XAI.

But efforts to crack open the black box have hit a snag, as the research director of arguably the world’s biggest AI powerhouse, Google, cast doubt on the value of explainable AI.

After all, Peter Norvig suggested, humans aren’t very good at explaining their decision-making either.

Frontier psychology
Speaking at an event at UNSW in Sydney on Thursday, Norvig – who at NASA developed software that flew on Deep Space 1 – said: “You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation.”

Just as humans work to make sense of and explain their actions after the fact, a similar approach could be adopted in AI, Norvig explained.

“So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it’s your job to generate an explanation.”
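
One rough way to picture that two-system setup is a surrogate model: an opaque model produces the answers, and a second, interpretable model is trained to mimic it and stand in as its explanation. The sketch below uses scikit-learn purely for illustration; the surrogate approach and the model choices are assumptions for this example, not a description of how Google or anyone else builds such systems.

```python
# Hedged sketch of the two-system idea: one model answers, a second,
# interpretable model is trained on the first model's outputs and its
# rules are offered as an "explanation".
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# System one: an opaque model trained on the real labels.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# System two: a shallow, human-readable surrogate trained to imitate system one.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's rules serve as the generated explanation.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

The surrogate’s printed rules read like an explanation, though they describe an approximation of the first model rather than its actual reasoning.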

Although this is a relatively new field of study, progress is already being made. Researchers at the University of California and the Max Planck Institute for Informatics published a paper in December on a system that translates machine learning-based image classification decisions into human-readable explanations.

Although explanations were justified “by having access to the hidden state of the model”, they “do not necessarily have to align with the system’s reasoning process”, researchers said.

Besides, Norvig added: “Explanations alone aren’t enough, we need other ways of monitoring the decision making process.”

Output checks
A more accurate way of checking AI for fairness and bias, Norvig said, was to look not at a system’s inner workings but at its outputs.

“If I apply for a loan and I get turned down, whether it’s by a human or by a machine, and I say what’s the explanation, and it says well you didn’t have enough collateral. That might be the right explanation or it might be it didn’t like my skin colour. And I can’t tell from that explanation,” he said.

“…But if I look at all the decisions that it’s made over a wide variety of cases then I can say you’ve got some bias there – over a collection of decisions that you can’t tell from a single decision. So it’s good to have the explanation but it’s good to have a level of checks.”
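
That kind of aggregate check can be sketched in a few lines of Python. The decision function and toy loan applications below are hypothetical stand-ins for any black-box system; the point is simply to compare outcomes across groups over many cases rather than inspect the model’s internals.

```python
# Aggregate output check: compare decision rates across groups
# without looking inside the model at all.
from collections import defaultdict

def approval_rates(decide, applications):
    """Approval rate per group for a black-box decision function."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for app in applications:
        group = app["group"]          # e.g. a protected attribute
        total[group] += 1
        if decide(app):               # black-box yes/no decision
            approved[group] += 1
    return {g: approved[g] / total[g] for g in total}

# Toy data and decision rule, invented for illustration only.
applications = [
    {"group": "A", "collateral": 12_000},
    {"group": "A", "collateral": 4_000},
    {"group": "B", "collateral": 11_000},
    {"group": "B", "collateral": 3_000},
]
decide = lambda app: app["collateral"] >= 10_000

# Large gaps between groups over many cases would flag possible bias.
print(approval_rates(decide, applications))
```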

Google’s own algorithmic outputs have been accused of bias. A Google image search for hands or babies, for example, displays exclusively white-skinned results. In 2015, the company’s Photos app mistakenly labelled photos of a black couple as gorillas.

Accusations of racism and sexism have also been levelled at Google’s autocomplete function, which, for example, completed ‘Are jews’ with ‘a race’, ‘white’, ‘Christians’ and ‘evil’.

“These results don’t reflect Google’s own opinions or beliefs,” the company said in response to an Observer story in December, adding the results were merely a “reflection of the content across the Web”.

There were better ways to avoid bias than investigating under the hood of machine learning algorithms, Norvig explained.

“We certainly have other ways to probe because we have the system available to us,” he said. “We could say well what if the input was a little bit different, would the output be different or would it be the same? So in that sense there’s lots of things that we can probe.”
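
A minimal sketch of that probe, assuming nothing more than a callable black-box decision function: change one attribute of a case, re-run the decision, and see whether it flips. The field names and threshold rule here are invented for illustration.

```python
# Counterfactual probe: perturb one input attribute and compare outputs.
def counterfactual_probe(decide, case, attribute, new_value):
    """Return the decision before and after changing one attribute."""
    altered = dict(case)
    altered[attribute] = new_value
    return decide(case), decide(altered)

# Toy black-box decision rule, standing in for a trained model.
decide = lambda app: app["collateral"] >= 10_000

case = {"group": "A", "collateral": 12_000}
before, after = counterfactual_probe(decide, case, "group", "B")
if before != after:
    print("Decision changed when only the group attribute changed - worth investigating.")
else:
    print("Decision unchanged for this case; aggregate checks are still needed.")
```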

Where we’re going
Although checks on outputs might be a satisfactory approach from Google’s perspective, individuals and governments are beginning to demand that Google, and all entities that employ machine learning, go much further.

Earlier this year, the UK government’s chief scientific adviser wrote in a Wired op-ed: “We will need to work out mechanisms to understand the operations of algorithms, in particular those that have evolved within a computer system’s software through machine learning.”

European legislators are making significant efforts in this area to protect individuals. The EU’s General Data Protection Regulation, which will come into force in May 2018, restricts automated decision-making systems that significantly affect users. It also creates a ‘right to explanation’, whereby a user can ask for the reason behind an algorithmic decision made about them.

In May, Google said it would “continue to evolve our capabilities in accordance with the changing regulatory landscape” while helping customers do the same.

Despite the significant implications, Norvig welcomed the regulators’ focus.

“I think it’s good that we’re starting to look into what the effects are going to be. I think it’s too early to have the answers,” he said. “I think it’s good that right now, as we start seeing the promise of AI, we’re not waiting, we’re asking the questions today, trying to figure out where we’re going.”

IDG News Service
