Artificial Intelligence Has Some Explaining to Do

(Bloomberg Businessweek) -- Artificial intelligence software can recognize faces, translate between Mandarin and Swahili, and beat the world’s best human players at such games as Go, chess, and poker. What it can’t always do is explain itself.

AI is software that can learn from data or experience to make predictions. A computer programmer specifies the data the software should learn from and writes a set of instructions, known as an algorithm, for how it should learn, but doesn’t dictate exactly what it should learn. This is what gives AI much of its power: It can discover connections in the data more complicated or nuanced than any a human would find. That same complexity, though, means the reason the software reaches any particular conclusion is often opaque, even to its own creators.
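To make that division of labor concrete, here is a minimal sketch of the workflow, assuming Python with the pandas and scikit-learn libraries; the loan-application file and its column names are hypothetical. The programmer chooses the data and the learning algorithm, but the rules the model ends up with are learned rather than written down.

```python
# A minimal sketch of the pattern described above: the programmer supplies
# data and a learning algorithm, but not the rules the model ends up using.
# Assumes scikit-learn; the CSV file and column names are hypothetical,
# and the feature columns are assumed to be numeric.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("loan_applications.csv")   # hypothetical dataset
X = data.drop(columns=["approved"])           # input features
y = data["approved"]                          # outcome to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The algorithm (gradient-boosted trees) is specified; the learned rules are not.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# The fitted model is an ensemble of hundreds of trees; there is no single
# human-readable rule explaining why any one application was approved.
```

Nothing in this sketch says which combinations of fields matter; the model works that out on its own, which is exactly why its conclusions can be hard to explain afterward.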

For software makers hoping to sell AI systems, this lack of clarity can be bad for business. It’s hard for humans to trust a system they can’t understand—and without trust, organizations won’t pony up big bucks for AI software. This is especially true in fields such as health care, finance, and law enforcement, where the consequences of a bad recommendation are more substantial than, say, that time Netflix thought you might enjoy watching The Hangover Part III.

Regulation is also driving companies to ask for more explainable AI. In the U.S., insurance laws require that companies be able to explain why they denied someone coverage or charged them a higher premium than their neighbor. In Europe, the General Data Protection Regulation that took effect in May gives EU citizens a “right to a human review” of any algorithmic decision affecting them. If a bank rejects your loan application, it can’t just tell you the computer said no: a bank employee has to be able to review the process the machine used or conduct a separate analysis.

David Kenny, who until earlier this month was International Business Machines Corp.’s senior vice president for cognitive services, says that when IBM surveyed 5,000 businesses about using AI, 82 percent said they wanted to do so, but two-thirds of those companies said they were reluctant to proceed, with a lack of explainability ranking as the largest roadblock to acceptance. Fully 60 percent of executives now express concern that AI’s inner workings are too opaque, up from 29 percent in 2016. “They are saying, ‘If I am going to make an important decision around underwriting risk or food safety, I need much more explainability,’ ” says Kenny, who is now chief executive officer of Nielsen Holdings Plc.

In response, software vendors and IT systems integrators have started touting their ability to give customers insights into how AI programs think. At the Conference on Neural Information Processing Systems in Montreal in early December, IBM’s booth trumpeted its cloud-based artificial intelligence software as offering “explainability.” IBM’s software can tell a customer the three to five factors that an algorithm weighted most heavily in making a decision. It can track the lineage of data, telling users where bits of information being used by the algorithm came from. That can be important for detecting bias, Kenny says. IBM also offers tools that will help businesses eliminate data fields that can be discriminatory—such as race—and other data points that may be closely correlated with those factors, such as postal codes.
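The sketch below is not IBM’s API; it is a generic illustration, assuming scikit-learn and reusing the hypothetical model and data from the earlier sketch, of the two ideas in the paragraph above: reporting the handful of factors a model leaned on most, and flagging a field such as postal code that may act as a proxy for a sensitive attribute.

```python
# Illustrative only, not IBM's tooling: report the top factors behind a
# model's predictions using permutation importance, then run a crude proxy
# check. Reuses the hypothetical model, data, X_test, y_test from earlier.
import numpy as np
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
for i in top:
    print(f"{X_test.columns[i]}: importance {result.importances_mean[i]:.3f}")

# A field such as postal code can stand in for a protected attribute like
# race. One crude check is the correlation between the candidate proxy and
# the sensitive field before either reaches the model. Column names are
# hypothetical.
proxy = data["postal_code"].astype("category").cat.codes
sensitive = data["race"].astype("category").cat.codes
print("postal_code vs. race correlation:", round(proxy.corr(sensitive), 2))
```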

QuantumBlack, a consulting firm that helps companies design systems to analyze data, promoted its work on creating explainable AI at the conference, and there were numerous academic presentations on how to make algorithms more explainable. Accenture Plc has started marketing “fairness tools,” which can help companies detect and correct bias in their AI algorithms, as have rivals Deloitte LLP and KPMG LLP. Google, part of Alphabet Inc., has begun offering ways for those using its machine learning algorithms to better understand their decision-making processes. In June, Microsoft Corp. acquired Bonsai, a California startup that was promising to build explainable AI. Kyndi, an AI startup from San Mateo, Calif., has even trademarked the term “Explainable AI” to help sell its machine learning software.

There can be a trade-off between the transparency of an AI algorithm’s decision-making and its effectiveness. “If you really do explanation, it is going to cost you in the quality of the model,” says Mikhail Parakhin, chief technology officer for Russian internet giant Yandex NV, which uses machine learning in many of its applications. “The set of models that is fully explainable is a restricted set of models, and they are generally less accurate. There is no way to cheat around that.”
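One rough way to see the trade-off Parakhin describes, again assuming scikit-learn and the hypothetical data from the earlier sketches: a depth-limited decision tree can be printed as plain if/then rules, while the more flexible boosted model usually scores better but has no comparably readable form.

```python
# Illustrative comparison of explainability vs. accuracy: a shallow decision
# tree is fully inspectable; the boosted ensemble trained earlier typically
# scores higher but cannot be printed as a short set of rules.
# Reuses the hypothetical X_train/X_test split from the earlier sketch.
from sklearn.tree import DecisionTreeClassifier, export_text

interpretable = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(export_text(interpretable, feature_names=list(X_train.columns)))
print("shallow tree accuracy:", interpretable.score(X_test, y_test))
print("boosted model accuracy:", model.score(X_test, y_test))
# On most real datasets the fully explainable tree gives up some accuracy;
# how much depends entirely on the problem.
```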

Parakhin is among those who worry that the explanations offered by some of these AI software vendors may actually be worse than no explanation at all because of the nuances lost by trying to reduce a very complex decision to just a handful of factors. “A lot of these tools just give you fake psychological peace of mind,” he says.

Alphabet-owned AI company DeepMind, in conjunction with Moorfields Eye Hospital in the U.K., built machine learning software to diagnose 50 different eye diseases as well as human experts can. Because the company was concerned that doctors wouldn’t trust the system unless they could understand the process behind its diagnostic recommendations, it chose to use two algorithms: One identified what areas of the image seemed to indicate eye disease, and another used those outputs to arrive at a diagnosis. Separating the work in this fashion allowed doctors to see exactly what in the eye scan had led to the diagnosis, giving them greater confidence in the system as a whole.
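The sketch below is not DeepMind’s code; it is a schematic of the two-model structure described above, assuming PyTorch, with placeholder layer sizes and class counts. The point is the shape of the pipeline: the diagnosis network sees only the intermediate tissue map, which is something a clinician can inspect.

```python
# Schematic of a two-stage pipeline: stage 1 produces a human-checkable map
# of diseased tissue; stage 2 turns that map, not the raw scan, into a
# diagnosis. Assumes PyTorch; sizes and class counts are placeholders.
import torch
import torch.nn as nn

class SegmentationNet(nn.Module):
    """Stage 1: marks which regions of the scan look diseased."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # placeholder depth
        self.head = nn.Conv2d(8, 4, kernel_size=1)             # 4 tissue classes
    def forward(self, scan):
        return self.head(torch.relu(self.conv(scan)))          # per-pixel tissue map

class DiagnosisNet(nn.Module):
    """Stage 2: reads only the tissue map and outputs a diagnosis."""
    def __init__(self, n_diseases=50):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(4, n_diseases)
    def forward(self, tissue_map):
        return self.fc(self.pool(tissue_map).flatten(1))

scan = torch.randn(1, 1, 128, 128)             # dummy eye scan
tissue_map = SegmentationNet()(scan)           # clinicians can inspect this map
diagnosis_logits = DiagnosisNet()(tissue_map)  # diagnosis rests only on the map
```

Splitting the work this way gives up nothing essential in accuracy while leaving an intermediate artifact that humans can check, which is the source of the doctors’ added confidence described above.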

“This kind of multimodel approach is very good for explainability in situations where we know enough about the kind of reasoning that goes into the final decision and can train on that reasoning,” says Neil Rabinowitz, a researcher at DeepMind who has done work on explainability. But often that’s not the case.

There’s another problem with explanations. “The suitability of an explanation or interpretation depends on what task we are supporting,” Thomas Dietterich, an emeritus professor of computer science at Oregon State University, noted on Twitter in October. The needs of an engineer trying to debug AI software, he wrote, were very different from what a company executive using that software to make a decision would need to know. “There is no such thing as a universally interpretable model.”

To contact the editor responsible for this story: Giles Turner at gturner35@bloomberg.net, Jillian Ward

©2018 Bloomberg L.P.