
Google tackles the black box problem with Explainable AI

Leo Kelion
Technology desk editor
Prof Moore introduced Google Cloud's Explainable AI in London

There is a problem with artificial intelligence.

It can be amazing at churning through gigantic amounts of data to solve challenges that humans struggle with. But understanding how it makes its decisions is often very difficult, if not impossible.

That means that when an AI model works, it is not as easy as it should be to refine it further, and when it exhibits odd behaviour, it can be hard to fix.

But at an event in London this week, Google's cloud computing division pitched a new facility that it hopes will give it the edge on Microsoft and Amazon, which dominate the sector. Its name: Explainable AI.

To start with, it will give information about the performance and potential shortcomings of face- and object-detection models. But in time the firm intends to offer a wider set of insights to help make the "thinking" of AI algorithms less mysterious and therefore more trustworthy.

"Google is definitely the underdog behind Amazon Web Services and Microsoft Azure in of the cloud platform space, but for AI workloads I wouldn't say that's the case - particularly for retail clients," commented Philip Carter from the consultants IDC.

"There's a bit of an arms race around AI... and in some ways Google could be seen to be ahead of the other players."

The Explainable AI cards will outline the performance and limitations of the algorithms involved

Prof Andrew Moore leads Google Cloud's AI division.

He told the BBC the secret behind the breakthrough was "really cool fancy maths".

The transcript below has been edited for clarity and length:

Can you explain what led to Explainable AI?

One of the things which drives us crazy at Google is we often build really accurate machine learning models, but we have to understand why they're doing what they're doing. And in many of the large systems we built for our smartphones or for our search-ranking systems, or question-answering systems, we've internally worked hard to understand what's going on. Now we're releasing many of those tools for the external world to be able to explain the results of machine learning as well. The era of black box machine learning is behind us.
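(One common family of such explanation tools scores how much each input feature contributed to a prediction. The sketch below illustrates the idea with integrated gradients on a toy logistic-regression model; the weights, functions and numbers are invented for illustration and are not Google's actual API.)

```python
import numpy as np

# Toy differentiable "model": logistic regression with made-up weights,
# standing in for whatever model is being explained.
W = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

def integrated_gradients(x, baseline, steps=50):
    """Approximate integrated gradients: average the model's gradient
    along a straight path from a baseline input to the real input,
    then scale by the input difference to get per-feature attributions."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = []
    for a in alphas:
        point = baseline + a * (x - baseline)
        p = predict(point)
        grads.append(p * (1 - p) * W)  # analytic gradient of the sigmoid model
    avg_grad = np.mean(grads, axis=0)
    return (x - baseline) * avg_grad

x = np.array([0.8, 0.3, 1.2])
baseline = np.zeros_like(x)
print(integrated_gradients(x, baseline))  # one attribution score per feature
```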

How do you go about doing that - it's not as though you can peer into a neural net and see why an input became an output?

The main question is to do these things called counterfactuals, where the neural network asks itself, for example: 'Suppose I hadn't been able to look at the shirt colour of the person walking into the store, would that have changed my estimate of how quickly they were walking?'
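(In code, the counterfactual test Prof Moore describes boils down to re-running a model with one feature replaced by a neutral baseline and comparing the two predictions. A minimal sketch with an invented toy walking-speed model, not Google's implementation:)

```python
def predict(features):
    """Toy walking-speed model; the weights are invented for illustration."""
    weights = {"shirt_colour_red": 0.05, "stride_length_m": 1.2, "age_years": -0.01}
    return sum(weights[k] * v for k, v in features.items())

def counterfactual_effect(features, feature_name, baseline=0.0):
    """How much does the prediction change if the model had not seen this
    feature? Replace it with a neutral baseline and re-run the model."""
    altered = dict(features)
    altered[feature_name] = baseline
    return predict(features) - predict(altered)

person = {"shirt_colour_red": 1.0, "stride_length_m": 0.9, "age_years": 34.0}
effect = counterfactual_effect(person, "shirt_colour_red")
print(f"Shirt colour shifted the speed estimate by {effect:+.3f}")
# A near-zero effect would suggest shirt colour did not drive the decision.
```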