When future-proofing contracts, businesses should consider the various types of loss and claim that may arise through AI adoption. AI can give rise to losses in a number of areas, for example personal injury and product liability claims where it is embedded in drones or machinery. However, these scenarios are less likely to be significant for the many financial services businesses that use AI technology as part of digital solutions.
In a digital financial services context, the following may be of particular concern:
Economic loss
From chatbots and insurance quotes to assessing investment portfolios and analysing market trends, businesses increasingly rely on AI technology in back office operations, to provide services to their customers and to assist with financial decisions. Algorithmic errors, insufficient or inaccurate data, and inadequate training of both systems and AI users could result in poor decisions, leading to financial loss for the financial services businesses themselves and for their customers.
Data protection claims
An AI solution may rely on data, whether personal, non-personal or a combination of both. Businesses need to ensure that they lawfully collect and use data, and in particular personal data, in a way which complies with data protection laws – including the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 – to minimise the risk of customers raising data protection claims and of regulatory fines.
Where data collection and use is found to be unlawful, financial services businesses using AI to collect data from customers may face enforcement action from the Information Commissioner's Office (ICO). Infringement could result in fines under the GDPR of up to €20 million or 4% of a firm's total worldwide annual turnover, whichever is higher.
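The fine cap described above is simply the greater of the two figures. A minimal sketch of the calculation (the turnover figures used are purely illustrative):

```python
def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Upper limit of a GDPR fine for the most serious infringements:
    the higher of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a firm with EUR 1 billion turnover, 4% (EUR 40m) exceeds the
# EUR 20m figure, so the cap is EUR 40 million.
print(gdpr_fine_cap(1_000_000_000))  # 40000000.0

# For a smaller firm with EUR 100 million turnover, 4% is only EUR 4m,
# so the EUR 20 million figure applies instead.
print(gdpr_fine_cap(100_000_000))  # 20000000
```

Note that this is only the statutory ceiling; the actual fine imposed in any enforcement action would depend on the ICO's assessment of the infringement.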
Security breaches and data loss
A failure to implement adequate security measures to protect data can lead to the corruption, leakage and loss of significant volumes of customer data, which in turn may lead to customer complaints and rights to compensation. The ICO, in its draft AI auditing framework, highlighted two security concerns which may be heightened in the AI context.
The first is the extent to which AI depends on third party frameworks and code libraries, and the supply chain security issues this creates. Machine learning technologies often require access to large third party code repositories, with the ICO's study finding that one popular machine learning development framework included "887,000 lines of code" and relied on "137 external dependencies".
The second is the use of open source code. While open source software is often a necessary and valid option, consideration must be given to the liability implications where security vulnerabilities are found in the open source components used.
IPR ownership and infringement
Disputes may arise over ownership of IP generated by AI technologies, along with claims that AI has infringed third party IPR.
Rights of ownership may not be clear in respect of the standard legal categories of protection for confidential information, know-how and copyright where one party provides the data and the other the algorithm, and in relation to patent rights where machine learning is used to achieve a novel or inventive step.
In respect of IP infringement, a starting position may be an expectation that liability would rest with the legal entity that controls or directs the AI system. However, the position may not be so clear cut where multiple businesses are involved in the process, or where AI systems develop to make decisions independently and without human supervision. Where AI technology evolves or improves its processes beyond the original purpose for which a business created it, it may risk infringing another business's copyright by using that business's data as one of its inputs. Attributing liability may be less challenging where a business can trace how the AI reached its decisions, but this is complicated by 'black box' AI, where the decision making cannot be explained.
Bias and discrimination
Decisions based on insufficient or low quality data may produce biased outcomes. These outcomes can lead to complaints from consumers and requests for decisions to be retaken, particularly where the decisions put individuals at a disadvantage – for instance, where customers face paying higher insurance premiums due to their location or gender. Where this is the case, financial services businesses should consider how to minimise the risk of discriminatory outputs. Considerations such as whether the biased outputs could have been prevented through testing of algorithms and quality checks on data may be relevant when apportioning liability.
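The kind of algorithm testing referred to above can be as simple as comparing outcome rates across customer groups before a model goes live. The sketch below is purely illustrative (the sample data, group labels and the idea of flagging a low ratio of outcome rates are assumptions for illustration, not a legal or regulatory test):

```python
def outcome_rates(decisions):
    """decisions: list of (group, favourable) pairs, e.g. whether a
    customer was offered the standard premium. Returns the rate of
    favourable outcomes per group."""
    totals, favourable = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + (1 if ok else 0)
    return {g: favourable[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group outcome rate; values
    well below 1.0 flag potentially discriminatory outputs for review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical sample: group B receives favourable outcomes half as
# often as group A, which a pre-deployment quality check should flag.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparity_ratio(outcome_rates(sample)))  # 0.5
```

A check like this does not establish why a disparity exists, but documenting that such testing was (or was not) carried out is exactly the sort of evidence that may bear on how liability is apportioned.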