World Economic Forum

Moving First on AI Has Competitive Advantages and Risks, New Report Helps Navigate

  • Financial services firms that are first movers in implementing artificial intelligence (AI) have the most to gain, but also face heightened risks when deploying emerging technologies without regulatory clarity.
  • To overcome these challenges, the World Economic Forum has proposed frameworks to help financial institutions and regulators explain AI decisions, understand emerging risks from the use of AI, and identify how those risks might be addressed.
  • In one of the largest studies of AI in financial services, including 250+ contributions from experts, the Forum explores AI explainability, bias, fiduciary duty, and more.
  • Read the report and more about the project here.

New York, USA, 23 October 2019 – Financial institutions that implement artificial intelligence (AI) early have the most to gain from its use, but also face the largest risks. The often-opaque nature of AI decisions and related concerns about algorithmic bias, fiduciary duty, uncertainty, and more have left implementation of the most cutting-edge AI uses at a standstill. However, a newly released report from the World Economic Forum, Navigating Uncharted Waters, shows how financial services firms and regulators can overcome these risks.

Using AI responsibly is about more than mitigating risks; its use in financial services presents an opportunity to raise the ethical bar for the financial system as a whole. It also offers financial services firms a competitive edge over their peers and new market entrants.

“AI offers financial services providers the opportunity to build on the trust their customers place in them to enhance access, improve customer outcomes and bolster market efficiency,” says Matthew Blake, Head of Financial Services, World Economic Forum. “This can offer competitive advantages to individual financial firms while also improving the broader financial system if implemented appropriately.”

Across several dimensions, AI introduces new complexities to age-old challenges in the financial services industry, and the governance frameworks of the past will not adequately address these new concerns.

Explaining AI decisions

Some forms of AI are not interpretable even by their creators, posing concerns for financial institutions and regulators who are unsure how to trust solutions they cannot understand or explain. This uncertainty has left the implementation of cutting-edge AI tools at a standstill. The Forum offers a solution: evolve past “one-size-fits-all” governance ideas to specific transparency requirements that consider the AI use case in question.

For example, it is important to explain clearly and simply why a customer was rejected for a loan, a decision that can significantly affect their life. It is less important to explain a back-office function whose only objective is to convert scans of various documents to text. For the latter, accuracy matters more than transparency, as the potential for this AI application to cause harm is limited.
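
To make the contrast concrete, the sketch below, which assumes the open-source scikit-learn library and entirely hypothetical feature names and data, shows how a simple, interpretable loan-approval model can surface the per-feature drivers of a rejection in plain language, the kind of use case where transparency matters most:

    # Minimal sketch with hypothetical data: an interpretable loan-approval model
    # whose per-feature contributions can be read out as reasons for a rejection.
    # Assumes numpy and scikit-learn are installed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    feature_names = ["income", "debt_to_income", "missed_payments"]  # hypothetical

    # Toy training data: rows are past applicants, label 1 = loan approved
    X = np.array([
        [85_000, 0.20, 0],
        [40_000, 0.55, 3],
        [62_000, 0.35, 1],
        [30_000, 0.60, 4],
        [95_000, 0.15, 0],
        [28_000, 0.70, 5],
    ], dtype=float)
    y = np.array([1, 0, 1, 0, 1, 0])

    scaler = StandardScaler().fit(X)
    model = LogisticRegression().fit(scaler.transform(X), y)

    # Explain one rejected applicant: each feature's contribution to the
    # log-odds of approval (negative values pushed the decision towards rejection)
    applicant = np.array([[32_000, 0.65, 2]], dtype=float)
    applicant_scaled = scaler.transform(applicant)
    if model.predict(applicant_scaled)[0] == 0:
        contributions = model.coef_[0] * applicant_scaled[0]
        for name, value in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
            print(f"{name}: {value:+.2f} contribution to approval log-odds")

By contrast, a document-digitization pipeline would be judged almost entirely on accuracy, with little need for this kind of per-decision explanation.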

Beyond “explainability”, the report explores new challenges surrounding bias and fairness, systemic risk, fiduciary duty, and collusion as they relate to the use of AI.

Bias and fairness

Algorithmic bias is another top concern for financial institutions, regulators and customers surrounding the use of AI in financial services. AI’s unique ability to rapidly process new and different types of data raises the concern that AI systems may develop unintended biases over time; combined with their opaque nature, such biases could remain undetected. Despite these risks, AI also presents an opportunity to decrease unfair discrimination or exclusion, for example by analyzing alternative data to assess ‘thin-file’ customers whom traditional systems cannot evaluate due to a lack of information.

Systemic risk

The widespread adoption of AI also has the potential to alter the dynamics of the interactions between human actors and machines in the financial system, creating new sources of systemic risk. As the volume and velocity of interactions grow through automated agents, emerging risks may spread across financial institutions, fintechs, large technology companies, and other market participants, becoming increasingly difficult to detect. These new dynamics will require supervisory authorities to reinvent themselves as hubs of system-wide intelligence, using AI themselves to supervise AI systems.

Fiduciary duty

As AI systems take on an expanded set of tasks, they will increasingly interact with customers. As a result, fiduciary requirements to always act in the best interests of the customer may soon arise, raising the question of whether AI systems can be held “responsible” for their actions – and, if not, who should be held accountable.

Algorithmic collusion

Given that AI systems can act autonomously, they may plausibly learn to engage in collusion without any instruction from their human creators, and perhaps even without any explicit, trackable communication. This challenges traditional regulatory constructs for detecting and prosecuting collusion and may require revisiting existing legal frameworks.

“Using AI in financial services will require an openness to new ways of safeguarding the ecosystem, different from the tools of the past,” says Rob Galaski, Partner, Deloitte Canada; Global Leader, Banking & Capital Markets, Deloitte Consulting. “To accelerate the pace of AI adoption in the industry, institutions need to take the lead in developing and proposing new frameworks that address new challenges, working with regulators along the way.”

For each of the concerns described above, the report outlines the underlying root causes, highlights the most pressing challenges, identifies how those challenges might be addressed through new tools and governance frameworks, and describes the opportunities that might be unlocked by doing so.

The report was prepared in collaboration with Deloitte and follows five previous reports on financial innovation. The World Economic Forum will continue its work in Financial Services, with a particular focus on AI’s connections to other emerging technologies in its next phase of research through mid-2020.