
In Pursuit of AI That Tells Us Why: Ethics and Explainability in Machine Learning

Imagine that, after years of saving, you’ve finally reached the magical number: enough money to buy a home. You search the real estate listings in your ideal neighborhood, find your dream house, and apply for a loan. Then you get turned down. You call the bank and ask, “Why not?” The response on the other end of the line: “I don’t know.”

A scenario like this is possible today because organizations everywhere have begun to delegate more and more decisions to machines, entrusting important life events, like securing a loan, to artificial intelligence systems. But AI practitioners are finding that these systems suffer from a lack of explainability and the possibility of bias. If an algorithm rejects you for a loan, and no human can tell you why, how can you confirm that the machine reached its decision fairly?

Overcoming bias in models

Anu Tewary, Chief Data Officer for Mint at Intuit, has said that bias in machine learning threatens the credibility of machine decisions. Tewary told TechRepublic how biases against women could arise in self-driving cars. “Imagine if there were no women on the team that either built the cars or tested the cars,” she said. “Then if the technology was faced with a woman either operating or interacting with the car, it might have problems trying to understand the voice, or understand the person, and so on.”

Discussing the AI systems at Intuit that help make decisions on granting small business loans, Tewary concluded, “We have to make sure the bias doesn’t creep into these models.”
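To make that idea concrete, here is a minimal sketch of one common bias audit, a demographic parity check: compare a model’s approval rates across groups. Everything below is synthetic and invented for illustration; it is not Intuit’s tooling, and real audits use many metrics beyond this one.

```python
# A minimal sketch of one fairness check: demographic parity.
# All data here is synthetic; the threshold and group labels are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model outputs: approval probabilities for 1,000 applicants,
# plus a sensitive attribute recorded for auditing purposes only.
scores = rng.uniform(0, 1, size=1000)
group = rng.choice(["A", "B"], size=1000)

approved = scores > 0.5  # the model's decision threshold

for g in ("A", "B"):
    rate = approved[group == g].mean()
    print(f"Group {g}: approval rate = {rate:.2%}")

# Demographic parity says these rates should be approximately equal.
# A large gap is a signal to investigate the training data and features.
gap = abs(approved[group == "A"].mean() - approved[group == "B"].mean())
print(f"Parity gap: {gap:.2%}")
```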

Understanding AI decisions in commerce

AI systems are becoming embedded in most major industries. At banks and large merchants, machine learning can have a profound impact when it comes to navigating the increasing complexities and dangers of today’s digital economy.

These organizations face a proliferation of new payment types and channels, and they’re competing for customers who demand immediacy and innovation. At the same time, fraud is skyrocketing. Global card fraud losses reached $21 billion in 2016 and will exceed $30 billion in 2020, according to The Nilson Report. To balance customer experience with risk management in this high-stakes, high-speed environment, businesses in the commerce space are leveraging machine learning.

But how can we know that these AI systems are making decisions without bias? How can we come to understand the machine’s logic so that we can audit its decisions for compliance? How can we be sure that the machine is maintaining the privacy and integrity of its data?

Mapping the evolution of machine learning

These are the questions explored in a new e-book from Feedzai, an AI company that fights fraud for banks and large merchants. The e-book, titled “What’s Next After Machine Learning: Ethics and Explainability in AI for Fraud,” maps the evolution of AI systems as they progress in flexibility on the one hand and explainability on the other.

For example, neural networks, like the deep learning systems behind self-driving cars, boast high flexibility, but they perform their machine thinking inside opaque black boxes. A human cannot understand why these systems have reached their decisions.
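As a toy illustration of that opacity (a minimal sketch assuming scikit-learn and synthetic data, not any production system), a small neural network can score an input, but its internals are just weight matrices with no human-readable reasons attached:

```python
# A toy illustration of black-box behavior, assuming scikit-learn.
# The features and labels are synthetic stand-ins, not real transaction data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))           # 10 anonymous input features
y = (X[:, 0] + X[:, 3] > 1).astype(int)  # a synthetic "fraud" rule

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X, y)

# The model answers "how likely is fraud?"...
print(model.predict_proba(X[:1]))

# ...but its internals are just weight matrices. Nothing here maps to a
# human-readable reason like "flagged because of feature 3".
print([w.shape for w in model.coefs_])
```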

Other systems have been developed to perform white-box processing, such as Feedzai’s AI platform. These white-box systems offer a degree of explainability: they can surface the factors behind the machine’s decision and communicate them to the human analyst.
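Feedzai has not published its method, but the general idea of a white-box explanation can be sketched with a transparent model: rank the factors behind a single decision. In this hypothetical example, a logistic regression’s per-feature contributions stand in for the factor list an analyst might see; the feature names are invented.

```python
# A sketch of factor-level explanation with a transparent model (not
# Feedzai's actual method): rank the drivers behind a single decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["amount", "merchant_risk", "hour_of_day", "distance_from_home"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 2 * X[:, 1] > 1).astype(int)  # synthetic fraud labels

model = LogisticRegression().fit(X, y)

# Explain one transaction: each feature's contribution to the log-odds.
x = X[0]
contributions = model.coef_[0] * x
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:20s} {c:+.2f}")
# An analyst can read this as "flagged mainly because of merchant_risk",
# the kind of factor-level explanation white-box systems surface.
```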

But as Feedzai co-founder and Chief Science Officer Pedro Bizarro recently said at Money20/20, these white-box explanations are only the first step toward true, human-conversant explainability, where AI systems begin to teach humans proactively about all the nuances of the relevant patterns behind their decisions.

Bridging the gap with better AI ethics

It’s critical to develop AI systems that are explainable so we can reduce and eliminate bias. With perfect explainability, the age of machine learning may give way to a new age of machine teaching.

Currently, we find ourselves in a transition period: post-attainability, but pre-explainability. We’ve partnered with AI systems to make important decisions, but we still haven’t developed a scalable way to converse with these systems to understand their logic in human terms.

Can ethics bridge the gap? Bizarro believes it can. He’s developed an AI Code of Ethics, internally referred to as an AI-ppocratic Oath, that Feedzai data scientists take in order to be more mindful about reducing machine learning bias and improving data integrity. Read the e-book to see the oath and to learn more about the nascent conversation around ethics in AI.
