
The Significance of Explainable AI in Finance & Asset Management

September 27, 2020

The high availability of data and advances in computing capabilities have led companies in the financial industry to apply artificial intelligence to a variety of use cases, from improved financial services to asset management.

Tasks and decisions that once required large teams of professionals with domain expertise have gradually been automated by machine learning models trained on large amounts of data. Firms that adopt AI in their processes often gain a competitive advantage over those that do not.

In asset management, AI has fundamentally changed investment strategy through quantitative models that analyze and predict trends across various types of investment vehicles.

As financial markets grow increasingly complex, asset managers have turned to analyzing the large amounts of alternative data collected by services like Capital IQ and FactSet, which provide information about a company's performance. AI models are built and trained to value public companies and forecast prices of assets such as company shares or cryptocurrencies.

These models are often powered by deep learning and draw on techniques including deep neural networks, time-series analysis, and natural language processing. Feeding these different sources of data through mathematical algorithms and using the outputs in investment decision-making has, for many firms, yielded greater and more consistent returns.

The trends hidden within big data are often too subtle for humans to detect, especially as the dimensionality of the data increases.

However, analyzing all available and relevant data, including fundamental, technical, and economic factors, as well as textual information from 10-K filings, can provide additional insight. This breadth of data cuts both ways.

Companies can use technology to account for large amounts of data about assets and find patterns that are associated with increases in prices.

This task is difficult for domain experts to perform consistently because emotion and human error are always factors.

Furthermore, humans cannot process large amounts of information (historic data, indicators, textual sentiment), nor make decisions as fast as AI; speed and efficiency are essential in today’s financial markets because of the rapid changes in economic and market conditions.

The speed and accuracy of AI have drawn ever more asset managers to incorporate it into their investment strategies.

Nevertheless, this algorithmic approach has been mostly regarded as a black-box process in which models predict increases in share price without explanation. Deep learning algorithms are output-focused at the expense of the explainability of the prediction process.

In a Wired article, Chris Anderson discusses the "End of Theory" [1], in which AI trained on big data eliminates the need for explanations and human understanding of processes, given sufficient predictive power. This is not sufficient in many practical cases like investing, however, and some firms, such as Wells Fargo, are working hard to build explainability into their AI processes.

Important tasks such as investment decision-making require not only high accuracy but also explainability of the decisions made. Just as it is important to know why AI rejected a borrower's loan request (Equal Credit Opportunity Act) [2], it is also important to interpret why AI determines one company to be of greater value than another. One advantage human investors hold over AI is the ability to explain their allocation decisions.

Investors want to understand what they're putting money into, and black-box AI often undermines confidence in the decision.

Algorithms can also inherit bias or flaws from their input data, and models are prone to overfitting it. As Aaron Harris, CTO of Sage Group, says, clients expect to understand the context behind investment allocations, so explainability is as important as returns. [3]

Explainable AI (XAI), which can show what factors led to a prediction or decision, has become a necessary component of institutional investing strategies. The primary purpose of XAI is to provide not only an output but also an explanation in a form interpretable by humans.
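
The idea of pairing an output with an interpretable explanation can be sketched in a few lines. This is a toy illustration, not any particular firm's model: the linear "model," its weights, and the feature names below are all hypothetical, chosen so that each feature's contribution to the prediction can be reported directly.

```python
# Toy sketch of the XAI idea: return a prediction *and* a
# human-readable breakdown of what drove it. The weights and
# feature names are hypothetical, for illustration only.

def explain_prediction(features, weights, baseline=0.0):
    """Return a predicted value plus each feature's contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = baseline + sum(contributions.values())
    return prediction, contributions

# Hypothetical standardized factor exposures for one stock.
weights = {"earnings_growth": 0.50, "news_sentiment": 0.30, "momentum": 0.20}
features = {"earnings_growth": 1.2, "news_sentiment": -0.5, "momentum": 0.8}

pred, contribs = explain_prediction(features, weights)
print(f"predicted excess return: {pred:+.3f}")
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:16s} {c:+.3f}")
```

For a linear model this decomposition is exact; for the deep models discussed above, XAI tools approximate the same kind of per-feature attribution.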

Traditional AI vs. Explainable AI. (Image source: DARPA [4])

Asset managers commonly perform NLP on text from news and financial statements to predict share prices.

OpenAI has researched explainable sentiment analysis through an unsupervised "sentiment neuron" that learns which parts of sentences correspond to certain sentiments; the approach achieved state-of-the-art accuracy.
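
The attribution idea can be illustrated with a much simpler stand-in. OpenAI's sentiment neuron is learned, not rule-based; the lexicon-based scorer below is only a hypothetical toy showing the same principle of attributing an overall sentiment score to specific words in the text.

```python
# Deliberately simple, lexicon-based stand-in for explainable
# sentiment analysis. The word scores are made up; real systems
# learn these associations from data.

LEXICON = {
    "growth": 1.0, "beat": 1.5, "strong": 1.0,
    "decline": -1.0, "miss": -1.5, "risk": -0.5,
}

def score_sentence(sentence):
    """Return overall sentiment and the words that drove it."""
    words = sentence.lower().replace(",", " ").split()
    hits = [(w, LEXICON[w]) for w in words if w in LEXICON]
    total = sum(s for _, s in hits)
    return total, hits

total, hits = score_sentence("Strong revenue growth despite supply risk")
print(f"sentiment: {total:+.1f}")
for word, s in hits:
    print(f"  {word}: {s:+.1f}")
```

The output is not just "positive" but a list of the words responsible, which is the property that makes such analysis useful to an asset manager reading a filing.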

Sentiment readings of a trained neuron. (Image source: OpenAI [5])

This explainability allows asset managers and investors to understand, for example, why the model reads an MD&A section as a signal that share prices will rise.

Furthermore, the results from this textual analysis are coupled with other important technical and economic indicators. Present macroeconomic conditions are essential to account for in analyzing share prices.

The economy moves in cycles, as historical trends have shown, and is reflected in many indicators like inflation, unemployment, and the CPI. Being able to forecast macroeconomic conditions, including nowcasting present values, is an advantage in asset management.

XAI is important in this task because understanding which factors contribute to recessions or growth can help investors prepare for swings in the economy.

Combining these results, XAI can communicate which attributes of the data contribute most to predicted asset prices.
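
One common way XAI tools rank contributing attributes is permutation importance: shuffle one feature's values and measure how much the model's error grows. The sketch below uses a made-up two-factor formula as the "model" so the expected ranking is unambiguous; real pipelines apply the same procedure to trained models.

```python
import random

# Sketch of permutation feature importance on a hypothetical
# two-factor model. Shuffling an important feature degrades
# predictions much more than shuffling an unimportant one.

random.seed(0)

def model(x1, x2):
    return 3.0 * x1 + 0.5 * x2  # x1 matters far more than x2

# Hypothetical dataset of factor values and the model's true targets.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
targets = [model(x1, x2) for x1, x2 in data]

def mse(preds):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(feature_index):
    """MSE increase when one feature's values are shuffled."""
    shuffled = [row[feature_index] for row in data]
    random.shuffle(shuffled)
    preds = []
    for row, s in zip(data, shuffled):
        x1, x2 = (s, row[1]) if feature_index == 0 else (row[0], s)
        preds.append(model(x1, x2))
    return mse(preds)  # baseline MSE is 0 for the true model

imp = [permutation_importance(i) for i in range(2)]
print(f"importance of x1: {imp[0]:.3f}, x2: {imp[1]:.3f}")
```

An asset manager can read such a ranking directly: it says which inputs the model actually leans on, which is exactly the kind of explanation the feature-importance chart below conveys.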

Analysis of the importance of features in making predictions. (Image source: Alpha Quantum [6])

Advanced AI algorithms have long been treated as black boxes, but the investment decisions they drive should be accompanied by explanations.

Transparency throughout the process is incredibly useful to investors who want to thoroughly understand their assets. XAI allows model predictions to be integrated with domain-expertise to increase confidence in investment decisions.

Leveraging the capabilities of AI on big data, combined with explainable results, not only removes human emotion from the process and can improve investment returns, but also establishes the trust between asset managers and clients that is fundamental to a sustainable, bias-free, and consistent investment strategy.

Written by Samson Qian

Edited by Calvin Ma, Gihyen Eom, Helen Wu, Xujia Ma, Pranshu Gupta, Jack Argiro & Alexander Fleiss

Sources:

[1] https://www.wired.com/2008/06/pb-theory

[2] https://www.ftc.gov/enforcement/statutes/equal-credit-opportunity-act

[3] https://www.forbes.com/sites/forbestechcouncil/2019/08/13/explainable-ai-is-the-next-big-thing-in-accounting-and-finance

[4] https://www.darpa.mil/program/explainable-artificial-intelligence

[5] https://openai.com/blog/unsupervised-sentiment-neuron

[6] https://www.alpha-quantum.com/