Why business leaders need to understand their algorithms


One of the biggest sources of anxiety about AI is not that it will turn against us, but that we simply cannot understand how it works. The obvious remedy for rogue systems that discriminate against women in credit applications, make racist recommendations in criminal sentencing, or reduce the number of black patients identified as needing extra medical care might seem to be “explainable AI.” But sometimes what is just as important as knowing “why” an algorithm made a decision is being able to ask “what” it was optimized for in the first place.

Machine-learning algorithms are often called black boxes because they resemble closed systems that take an input and produce an output, without any explanation as to why. Knowing “why” is important in many industries: those with fiduciary obligations, like consumer finance; healthcare and education, where vulnerable lives are involved; and military or government applications, where you need to be able to justify your decisions to the electorate.

Unfortunately, when it comes to deep-learning platforms, explainability is problematic. In many cases, the appeal of machine learning lies in its ability to find patterns that defy logic or intuition.

If the relationship between inputs and outputs were simple enough to map and explain, you probably wouldn’t need machine learning in that context at all. Unlike a hand-coded system, you can’t just look inside a neural network and see how it works.

A neural network is composed of thousands of simulated neurons arranged in interconnected layers. Each layer receives input signals and produces output signals that are fed into the next layer, and so on until a final output is reached. Even if you can interpret how a model is technically working in terms that an AI scientist could comprehend, explaining that to a “civilian decision-maker” is another problem altogether.
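To see why the inner workings resist inspection, consider a minimal sketch of such a layered network. The layer sizes, random weights, and inputs below are invented purely for illustration and have nothing to do with any real system mentioned in this article:

```python
import numpy as np

# A toy feed-forward network: each layer multiplies its input by a weight
# matrix and applies a nonlinearity, then hands the result to the next layer.
rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 1]  # hypothetical: 4 inputs, two hidden layers, 1 output
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)  # hidden layers with ReLU activations
    return x @ weights[-1]        # final output, e.g. a risk score

score = forward(np.array([0.2, 1.5, -0.3, 0.7]))
print(score)
# The only "explanation" for this score is the pile of numbers in `weights`;
# there is no human-readable rule to point to.
```

Even in this tiny example, the model’s “reasoning” is nothing more than the learned weight values; real deep-learning systems have millions of them.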

Deep Patient, for example, is a deep-learning platform at Mount Sinai Hospital in New York. It was trained on electronic health records from 700,000 individuals and, without human guidance, became adept at predicting disease, discovering patterns hidden in the hospital data that provided early warnings for patients at risk of developing a wide variety of ailments, including liver cancer.

Then, much to everyone’s surprise, Deep Patient also demonstrated an ability to predict the onset of certain psychiatric disorders like schizophrenia, which are notoriously difficult even for doctors to predict.

The challenge for medical professionals in such a scenario is to balance acknowledging the efficacy and value of the system with knowing how much to trust it, given that they don’t fully understand how it works.

Some organizations and industries are investing in the capability to audit and explain machine learning systems. The Defense Advanced Research Projects Agency (DARPA) is currently funding a program called Explainable AI, whose goal is to interpret the deep learning that powers drones and intelligence-mining operations.

Capital One, which has had its own serious issues with data breaches, created a research team dedicated to finding ways to make deep learning more explainable, as U.S. regulations require lenders to explain decisions such as why a prospective customer was denied a credit card.

Algorithmic regulation is likely to become more sophisticated over the next few years, as the public becomes more openly concerned about the impact of AI on their lives. For example, under the General Data Protection Regulation (GDPR), which came into effect in 2018, the European Union requires companies to be able to explain a decision made by one of their algorithms.

Arguably, in the near future you won’t be able to design any kind of AI without both a team of top scientists and an equally capable team of privacy engineers and lawyers.

The rationale behind algorithmic regulation is accountability. Making AI more explainable is not just about reassuring leaders that they can trust algorithmic decisions; it is also about providing recourse for people to challenge AI-based decisions.

In fact, the issue of algorithmic transparency applies not only to machine learning, but also to any algorithm whose inner workings are kept hidden.

Algorithms that either appear to be biased or are obscure in the way they work have already been challenged in the courts.

For example, in 2014, the Houston Federation of Teachers brought a lawsuit against the Houston school district, arguing that the district’s use of a secret algorithm to determine how teachers were evaluated, fired, and given bonuses was unfair. The system was developed by a private company, which classified its algorithm as a trade secret and refused to share it with teachers.

Without knowing how they were being scored, teachers said, they were denied the right to challenge their terminations or evaluations. A federal court found that the unexplainable software violated the teachers’ 14th Amendment right to due process, and the case was ultimately settled, with use of the algorithm being discontinued. In the next few years, the number of such challenges is likely to rise.

However, for leaders, the most important question to ask the teams designing and building automated solutions may not be why a system reached a particular decision, but rather what it is being optimized for. Optimums are important.

There’s a classic thought experiment proposed by Swedish philosopher Nick Bostrom called the Paperclip Maximizer. It describes how an AI could end up destroying the world after being given the goal to manufacture paperclips as efficiently as possible, “with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.”

The AI in Bostrom’s paper is not intrinsically evil; in his view, it was simply given the wrong goal and no constraints. Wrong goals or optimums can cause a lot of unintended harm. For example, an AI program that set school schedules and bus schedules in Boston was scrapped after an outcry from working parents and others who objected that it did not take their schedules into account, and that it seemed to be focused on efficiency at the expense of education.

But was it the program’s fault? It was, after all, coded to look for ways to save money. However, unlike the complexities of building and interpreting an AI model, debating and deciding what a system is optimized for is absolutely within the capability set of business leaders and boards, and so it should be.
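To make the point concrete, here is a deliberately simplified sketch of how the objective, rather than the optimizer, decides the outcome. It is inspired by, but not based on, the Boston scheduling system; the candidate start times and cost functions are invented for illustration:

```python
# A toy illustration (not the actual Boston system) of how the objective,
# rather than the optimizer, decides the outcome: we pick a school start
# time from a few candidates under two hypothetical objectives.
candidate_start_times = [7.0, 7.5, 8.0, 8.5]  # hours; 7.0 means 7:00 a.m.

def busing_cost(start):
    # Hypothetical: earlier starts let buses run more tiers, so they cost less.
    return 100 * (start - 7.0)

def family_disruption(start):
    # Hypothetical: very early starts disrupt working parents and student sleep.
    return 200 * max(0.0, 8.0 - start)

cheapest = min(candidate_start_times, key=busing_cost)
balanced = min(candidate_start_times,
               key=lambda s: busing_cost(s) + family_disruption(s))

print(f"Optimizing cost alone:      start at {cheapest}")
print(f"Optimizing cost + families: start at {balanced}")
```

With the cost-only objective, the earliest start time wins; add a term for family disruption and the answer changes. Which terms belong in the objective is a leadership decision, not an engineering one.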

AI is a tool that reflects our priorities as organizations and governments. It might seem cold to discuss human fatalities in automotive or workplace accidents in terms of statistics, but if we decide that an algorithmic system should be designed to minimize accidents as a whole, we also have to judge any resulting harm in the context of the system it replaces. In doing so, however, we must be ready to be judged ourselves, and ready to justify our decisions and design principles.

Leaders will be challenged by shareholders, customers, and regulators on what they optimize for. There will be lawsuits that require you to reveal the human decisions behind the design of your AI systems, what ethical and social concerns you took into account, the origins and methods by which you procured your training data, and how well you monitored the results of those systems for traces of bias or discrimination.

Document your decisions carefully and make sure you understand, or at the very least trust, the algorithmic processes at the heart of your business.

Simply arguing that your AI platform was a black box that no one understood is unlikely to be a successful legal defense in the 21st century. It will be about as convincing as “the algorithm made me do it.”


Mike Walsh is the author of The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, and the CEO of Tomorrow, a global consultancy on designing companies for the 21st century.