
What Is Explainable AI?


[Image: Will explainable AI help us to not miss the next major step in AI regulation, evolution and ethics?]

Like it or not, there is no longer any need to demonstrate that AI is at the root of many of the decisions taken in our daily lives.

This ubiquity is part of an ambition for automation that comes quite naturally to mankind. Among the goals of this quest are higher productivity, freeing up time, being able to concentrate on tasks requiring more cognitive capacity, and advancing research.

Despite the technological prowess achieved through constant innovation, replacing humans in certain tasks has never been done without raising problems, problems that were later resolved thanks to various government regulations.

AI is no exception:

Many problems have emerged, such as racial biases in facial recognition algorithms, some of which have led to discrimination. Beyond discrimination, real issues of accountability and legal responsibility are also on the table.

Some sectors, such as civil aviation, are beginning to see legislation on liability in the event of accidents involving automatic systems. AI, however, is still in its infancy.

One difference with the past, though, is that AI is less and less understandable and transparent. Given that the machine can surpass our cognitive capacities, we are sometimes forced to accept the opacity of certain models.

On the other hand, more and more regulations will appear to address these problems. But how will they do so in an optimal way, i.e. without reducing the performance of these technologies?

Could explainable AI be the solution?

First of all, the origin of the issue is transparency. We have always valued it, especially when it comes to high-stakes decisions. When a sector is critical (criminal justice, loans, health, insurance policies, …), a huge amount of trust is placed in the decision-making.

If something goes wrong, we should be able to know who is accountable for what happened. Another benefit is learning from our mistakes. Take aviation, for example: almost every piece of data and every decision, whether made by the pilot or the autopilot, is carefully recorded and stored. Beyond assigning responsibility, this helps us avoid repeating the same errors and generally improves safety.

Transparency is the basis of our societies and our thinking. If we believe in science, it is because it is verifiable: you can access all the research and the demonstrations behind a result, which makes it very hard to contest.

Do We Need Complete Transparency?


A counterexample of this could be political leaders. We trust them to make critical decisions for us even though they could make mistakes or act in their own interest.

So we don’t look for complete transparency; we just need an acceptable threshold that makes us feel safe and comfortable while still giving us some marginal control. We accept the delegation of power because we compensate for it with the ability to remove people from power if they make major mistakes.

Furthermore, time-limited tenure for our leaders guarantees the population a certain amount of control.

The thing is that trust in technology isn’t really widespread, and it is hard to combine capability and control if AI is to be leveraged to its full potential. To get the value promised by technology and global automation, we need to let it make decisions with as much autonomy as possible. But this would mean less and less control for us as humans.

Explainable AI Automation

Automating things has also always been the source of a very controversial debate around accountability. By removing as many humans as possible from very complex systems, tracking who is responsible when something goes wrong becomes very challenging.

With AI being at the very core of our lives, data and AI regulation isn’t focused on the right place, and it’s only a matter of time before it shifts to AI biases and transparency instead of data storage and exploitation.

But if the models become too deeply anchored in critical decision processes (loans, criminal justice, etc.), it might be too late and very expensive to reform them. However, initiatives like the GDPR have helped redirect the global debate toward AI regulation.

On top of that, sophisticated AI models are real black boxes that can’t be interpreted by a human. And the problem is that, as time goes on, the tasks we want to automate become more complex and involve a multi-layered intention process.

To solve this, research took an approach that consists of prioritizing the target over the means of accomplishing it. In almost every ML model, the main metric used is the error between what the model predicted and what it should have predicted.
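For instance, a mean squared error like the one sketched below (a purely illustrative snippet; the article does not tie itself to any particular loss) is one common way to measure that gap between predictions and targets:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Average squared gap between what the model should have predicted and what it predicted."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Toy example: ground truth vs. model predictions
print(mean_squared_error([1.0, 0.0, 1.0], [0.9, 0.2, 0.7]))  # ≈ 0.047
```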

Because we don’t fully understand how the human mind works when making decisions, we assumed that it wasn’t a problem if our models had a blind part that would “imitate” the non-understandable part in humans.

And it actually worked!


Deep neural networks work very well for tasks like image recognition, but they cede transparency over the decision process. We know how they work theoretically, because we built them; however, we have no idea how they behave in practice because of the complexity of the models.


Let’s take a step back

If we want something to make critical and vital decisions for us, we want it to be very transparent and explainable. This way we feel comfortable with its purpose and we can more easily trust it.

And this is why government or institutional decisions take that much time: transparency is fundamental to democracy. But we also want our models to perform complex tasks, and with that comes less transparency. We get caught in a tradeoff between performance and transparency.

Actually, it’s not always a problem, especially when the decisions do not imply legal accountability or a risk of discrimination.

Explainable AI can be seen as a potential solution to this issue. The term is a neologism that has been used since 2004 in research and debates on machine learning.

Currently, there is no universal definition of explainable AI. The XAI program of the Defense Advanced Research Projects Agency (DARPA) defines the objectives of explainable artificial intelligence with the following requirements.

It must produce explainable models without giving up their great capacity for learning. It should also ensure that future users can understand the emerging generation of artificial intelligences, trust them within reason, and use and work with them effectively.

Today, the most commonly used explainable AI methods are:

Layer-wise Relevance Propagation (LRP – 2015 (1)). The goal of this method is to detect which input features contribute the most to the output. It could help isolate biases, for example; a minimal sketch follows below.
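As a purely illustrative sketch (the toy network, its weights, and the LRP-epsilon variant are my own assumptions, not tied to any official implementation), relevance can be propagated backwards through a small ReLU network like this:

```python
import numpy as np

# Toy two-layer ReLU network with made-up weights, purely for illustration;
# in practice the weights and activations would come from a trained model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input dim 4 -> hidden dim 3
W2 = rng.normal(size=(3, 1))   # hidden dim 3 -> output dim 1
x = np.array([1.0, 0.5, -0.2, 0.8])

# Forward pass, keeping each layer's activations.
a1 = np.maximum(0.0, x @ W1)
out = a1 @ W2

def lrp_epsilon(a_prev, W, relevance, eps=1e-6):
    """LRP-epsilon rule: redistribute a layer's output relevance onto its inputs."""
    z = a_prev @ W
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabiliser to avoid division by zero
    s = relevance / z                           # relevance per unit of contribution
    return a_prev * (W @ s)                     # relevance assigned to each input unit

# Propagate the output relevance back to the input features.
r_hidden = lrp_epsilon(a1, W2, out)
r_input = lrp_epsilon(x, W1, r_hidden)
print(r_input)  # one relevance score per input feature; the largest drive the output most
```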

The Counterfactual Method (2) describes how the input data would have to be transformed for the model’s prediction to change. For example, when looking at images, the method can break down the main features or shapes of the picture it is trying to classify.

For facial recognition, this would be helpful: if skin color were used as a differentiating factor, we would factually know that our model has a problem, rather than having to infer it from statistics.
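As a minimal, hypothetical sketch of this idea on tabular data (the greedy search, toy dataset, and scikit-learn model below are illustrative choices, not a published implementation), a counterfactual can be found by looking for a small change to an input that flips the model’s prediction:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy, made-up classification task and model, purely for illustration.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def find_counterfactual(model, x, step=0.05, max_iter=500):
    """Greedily nudge one feature at a time until the predicted class flips."""
    original = model.predict(x.reshape(1, -1))[0]
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] != original:
            return x_cf  # the prediction flipped: x_cf is a counterfactual for x
        # Try nudging each feature up or down and keep the most promising move.
        candidates = []
        for i in range(len(x_cf)):
            for d in (-step, step):
                c = x_cf.copy()
                c[i] += d
                candidates.append(c)
        scores = [1 - model.predict_proba(c.reshape(1, -1))[0][original] for c in candidates]
        x_cf = candidates[int(np.argmax(scores))]
    return None  # no counterfactual found within the search budget

cf = find_counterfactual(model, X[0])
print("original:      ", X[0])
print("counterfactual:", cf)  # which features had to change, and by how much
```

Comparing the original input with its counterfactual shows which features the model actually relies on; if a sensitive attribute has to change for the prediction to flip, that is a strong warning sign.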

Local Interpretable Model-Agnostic Explanations (LIME – 2016 (3)) takes a model-agnostic approach. It explains why certain data points have been classified a certain way by approximating any model with a local, interpretable model around each individual prediction.
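As a hedged example, the open-source `lime` package can be used roughly as follows (the iris dataset and random-forest model are illustrative assumptions, not from the article):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Toy model to be explained, purely for illustration.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME builds a local, interpretable surrogate around one prediction at a time.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # per-feature contributions for this single prediction
```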

So What Are Explainable AI’s Drawbacks?


All of those methods have one major drawback: they address the problem after the model has been implemented. 

We should be able to create and develop models that take explainability into account from the start, along with all of the intermediary intentions throughout the decision process.

Intermediary decisions and processes should have metrics dedicated to them, especially when it comes to evaluating models.

Understanding AI isn’t the end of it. We need to be able to change it easily if we are to implement explainable AI in the future. 

Written by Adam Rida

Edited by Jack Argiro & Ryan Cunningham