FCA and Alan Turing Institute to consider explainability of AI

Out-Law News | 18 Jul 2019 | 9:22 am | 3 min. read

A new partnership between the Financial Conduct Authority (FCA) and the Alan Turing Institute should provide financial services firms with greater clarity over the extent to which they need to explain to consumers how artificial intelligence (AI) tools they deploy work.

In a speech in London on Tuesday, Christopher Woolard, the FCA's executive director of strategy and competition, confirmed that the regulator is to work with the Alan Turing Institute in an effort to "explore the transparency and explainability of AI in the financial sector".

"Through this project we want to move the debate on – from the high-level discussion of principles (which most now agree on) towards a better understanding of the practical challenges on the ground that machine learning presents," Woolard said.

Recent research published by Pinsent Masons, the law firm behind Out-Law, in partnership with Innovate Finance, found that while many UK consumers are ready to embrace the use of AI in financial services, some are concerned that the data used by an AI service may be inaccurate or biased. A majority of consumers said they always want to know when they're engaging with an AI system, and 64% said they should always have a choice over whether decisions are made by an AI system or a human advisor.

Financial services and technology law expert Luke Scanlon of Pinsent Masons said: "As the FCA has highlighted, and our research with Innovate Finance has shown, the discussion now needs to move on from focussing on high-level principles to a discussion about what is expected of regulated entities as they explore their use of AI technologies. There is a series of suggestions coming forward from various bodies – the Office of AI for example has brought out its guidance in relation to public sector AI projects, the European Commission has its high level group and the Centre for Data Ethics is also engaging in detailed work."

"It is important that all of these pieces of work be brought together and made consistent so that regulated entities can develop a roadmap of what should be done to ensure that they have the right processes, controls and contractual arrangements in place in order to use AI effectively and in a manner that is consistent with the high level principles of explainability, transparency, avoidance of bias and discrimination and involvement of humans," Scanlon said.

According to Woolard, while there is "growing consensus around the idea that algorithmic decision-making needs to be ‘explainable’", it remains less clear "what level" of explainability is necessary.

"When does a simple explanation reach a point of abstraction that becomes almost meaningless – along the lines of ‘we use your data for marketing purposes’ or ‘click here to accept 40 pages of small print’?" Woolard said in his speech. "The challenge is that explanations are not a natural by-product of complex machine learning algorithms. It’s possible to ‘build in’ an explanation by using a more interpretable algorithm in the first place, but this may dull the predictive edge of the technology. So what takes precedence – the accuracy of the prediction or the ability to explain it? These are the trade-offs we’re going to have to weigh up over the months and years ahead."
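The trade-off Woolard describes can be illustrated with a minimal sketch. The example below is purely hypothetical – it is not drawn from the FCA, the Alan Turing Institute or any firm's actual system – and contrasts a transparent rule-based decision, where the explanation is simply the rule itself, with an opaque scoring function that produces a number with no built-in rationale:

```python
def interpretable_decision(income, debt):
    """A transparent, rule-based model: the explanation comes 'for free',
    because the decision logic is the rule that can be shown to the consumer."""
    ratio = debt / income
    if ratio > 0.5:
        return "declined", f"debt is {ratio:.0%} of income, above the 50% limit"
    return "approved", f"debt is {ratio:.0%} of income, within the 50% limit"


def opaque_score(income, debt):
    """Stand-in for a complex machine learning model: it may predict well,
    but it returns only a score, with no natural explanation attached."""
    import math
    return 1.0 / (1.0 + math.exp(-(0.0001 * income - 0.8 * (debt / income))))


decision, reason = interpretable_decision(income=40_000, debt=30_000)
print(decision, "-", reason)   # the rule and the explanation are the same thing
print(round(opaque_score(income=40_000, debt=30_000), 3))  # a bare number
```

The design point is the one Woolard raises: the interpretable version can answer "why was I declined?" directly, while the opaque scorer cannot, and making the second behave like the first may cost predictive accuracy.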

Woolard said the FCA does not have a single approach for addressing harm in financial services, and that businesses can expect the regulator to take a view on "specific safeguards needed" for AI solutions based on specific use cases and specific potential harms identified.

"If firms are deploying AI and machine learning they need to ensure they have a solid understanding of the technology and the governance around it," Woolard said. "This is true of any new product or service, but will be especially pertinent when considering ethical questions around data. We want to see boards asking themselves: 'what is the worst thing that can go wrong?' and providing mitigations against those risks."

Woolard said that financial services firms must operate with "customer-centricity" in mind to ensure their use of data and AI is "used in the interests of consumers" and not just for their own benefit. This approach, he said, will help businesses to avoid falling foul of competition rules.

"At a basic level, firms using this technology must keep one key question in mind, not just ‘is this legal?’ but ‘is this morally right?'" Woolard said.

Woolard also announced that the FCA is to take steps aimed at better coordinating its regulatory action with that of other authorities and that it has begun the process of conducting a major review of the way it regulates financial services.

"We are taking a fundamental look at how we carry out the task of conduct regulation, and how we shape the regulatory framework going forward, in what we are calling our ‘Future of Regulation’ project," Woolard said. "This involves taking a fresh look at some of the core components that determine our approach, including reviewing our Principles for Businesses and considering how we can become a more outcomes-based regulator."