
Out-Law Analysis

How buyers of AI can impose controls on use


Businesses are increasingly looking to use AI solutions to improve existing processes, create new business propositions and remain competitive, but in buying the technology they need to consider what internal and external controls they can exert over their use of those systems.

Controls are necessary to address risks that may arise in the life cycle of an AI solution, and these are increasingly being picked up in corporate AI policies. In the context of supplier relationships, those AI policies need to be implemented externally to address controls on errors in models or their implementation which may give rise to biased outcomes, on a supplier’s uncontrolled use of customer data, or on the use of customer data to develop bespoke solutions which are then shared with the supplier’s wider customer base – something that can be problematic where an AI system is being used to deliver a competitive advantage.

The risk of regulatory non-compliance is another important consideration, particularly from a data privacy perspective, and increasingly from an AI regulation perspective.

However, customers can only engage their AI policies and apply contractual controls to their suppliers’ use of AI once they have identified that AI is being used. With the huge growth of interest in generative AI and what it can do for businesses, many suppliers are happy to highlight how AI is used within their service offerings, but in anything other than a pure AI engagement this means defining what “AI” is and building that definition into a notification process. From a customer’s perspective, a sensible starting point is to consider the risks it intends to mitigate by way of contractual protections and, in turn, the controls it is looking to apply to its supplier’s use of AI to mitigate those risks.

Controls can include requirements to obtain consent to the use of a new or updated AI solution, rights to object to the use of AI or the manner in which it will be used, or appropriate use controls. Broadly, there are three approaches that a customer can take when looking to impose controls on the use of AI:

  • imposing controls on AI generally as a concept;
  • imposing controls specific to AI within the supplier’s current service offering; or
  • imposing controls which mirror current legislative compliance.

Customers must consider the wider context of the commercial deal when deciding which approach to take.

If the customer knows that it is contracting for an AI solution, the first approach will be of limited use. Suppliers will inevitably resist obligations aimed at limiting the use of AI altogether if their solution clearly uses – as is increasingly common – machine learning technology as part of its standard functionality. Suppliers tend to favour the second approach, which can be more nuanced and use case specific.

With regard to the third approach, we are already seeing instances where the EU AI Act – currently in the final stages of being adopted by lawmakers – forms the core of contractual positions.

The EU AI Act takes a risk-based approach to the regulation of AI, with four defined classifications of risk. The most significant obligations in the EU AI Act start to bite where AI systems present a high level of risk and beyond, and we’re seeing examples of drafting aligning with these requirements.

For example, if a customer is contracting for an AI solution that falls under the ‘limited risk’ classification under the EU AI Act, imposing the limited set of associated controls on the supplier’s current service offering is seen as appropriate. In that case, the customer should also seek to include a definition of “prohibited AI” in the contract tied to the ‘high risk’ and ‘unacceptable risk’ classifications in the EU AI Act, which will not be addressed by those controls.
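
By way of illustration only, the interaction between the Act’s risk tiers and a contractual “prohibited AI” definition can be thought of as a simple mapping. In the hypothetical Python sketch below, the tier labels and the `is_prohibited_ai` helper are assumptions made for the example; they are not drafting language, nor a statement of how the Act would classify any particular system.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk classifications used by the EU AI Act (labels are illustrative)."""
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Hypothetical contract position: the agreed controls only cover 'limited risk' use,
# so anything in the 'high risk' or 'unacceptable risk' tiers falls within the
# contractual definition of "prohibited AI".
PROHIBITED_TIERS = {AIActRiskTier.UNACCEPTABLE, AIActRiskTier.HIGH}

def is_prohibited_ai(tier: AIActRiskTier) -> bool:
    """Return True where a proposed use falls outside the contractually permitted tiers."""
    return tier in PROHIBITED_TIERS

print(is_prohibited_ai(AIActRiskTier.LIMITED))  # False - covered by the agreed controls
print(is_prohibited_ai(AIActRiskTier.HIGH))     # True - caught by the "prohibited AI" definition
```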

Although the EU AI Act is the only law that has informed drafting approaches to date, other jurisdictions are considering their regulatory stance, and we expect contracting approaches to evolve in a manner shaped by the EU AI Act as the global regulatory landscape further develops.

Beyond the headline issue of ‘is the use of AI permitted’, there are other important issues that AI customers should look to address in their contracts with suppliers. If these aren’t addressed in contracts, then compensating controls – such as heightened supplier management – can be considered to cover some risks, although not all.

Testing and monitoring

If a customer is happy for AI to be used by its supplier, the next important question is how it can make sure that the AI system is working in the way intended. With traditional software procurements, customers would expect to complete full acceptance testing before rolling software out across their organisation. However, testing an AI system will be more challenging, particularly if the customer wants to test for error, bias and non-compliance across as many scenarios as possible.

For a complex AI system, completing “full” testing prior to implementation could be near-impossible. Instead, customers can look to mitigate this risk by trialling new AI systems in a pilot phase – for example, using the solution in a discrete business unit or working with a discrete data set – and assessing performance prior to making a go/no-go decision on a full-scale roll-out.
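
As a minimal sketch of how such a go/no-go gate might operate, the example below assumes the customer and supplier have agreed acceptance thresholds for the pilot – for instance, a maximum error rate and a maximum bias disparity between groups. The metric names, threshold values and the `go_no_go` helper are hypothetical and used purely for illustration.

```python
# Minimal, illustrative go/no-go gate for an AI pilot. The metric names and thresholds
# are assumptions made for this example; in practice they would be agreed between the
# customer and supplier and measured on the pilot business unit or data set.

PILOT_THRESHOLDS = {
    "error_rate": 0.05,      # maximum acceptable proportion of incorrect outputs
    "bias_disparity": 0.10,  # maximum acceptable outcome gap between groups
}

def go_no_go(pilot_metrics: dict) -> bool:
    """Return True (go) only if every agreed metric is within its threshold."""
    return all(
        pilot_metrics.get(name, float("inf")) <= limit
        for name, limit in PILOT_THRESHOLDS.items()
    )

# Example: results measured during the pilot phase.
pilot_metrics = {"error_rate": 0.03, "bias_disparity": 0.12}
print("Go - proceed to full roll-out" if go_no_go(pilot_metrics) else "No go - remediate and re-test")
```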

Whilst the contract can act as a control measure, it will not replace effective and ongoing testing and monitoring throughout the AI system’s lifecycle. Industry standards are developing quickly in this space, and customers and suppliers share responsibility for ensuring that AI models are working as intended.

Data and assets

Effective use of AI is often reliant on a strong data strategy to protect the customer’s key data assets. It is important for a customer to understand the types of business data and personal data it owns or licenses from third parties so that it can determine which data the supplier should have access to and on what terms. From a contracting perspective, any restrictions – third party or otherwise – need to be baked into the provisions dealing with supplier use of data.

Ownership and control of data is also an area of concern for both customers and suppliers, with suppliers increasingly troubled by restrictions on how they can use outputs. Suppliers will often look for a broad right to use customer data, signals, derivative data and feedback to improve their systems or create new data assets – not just for the benefit of the customer, but as an improvement to the AI system being sold to their other clients. There is often a common good in enabling suppliers to use improved learnings and create insights, as long as the underlying data is suitably anonymised or aggregated.
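
What “suitably anonymised or aggregated” means will be fact-specific, but as a loose illustration of the kind of control a customer might expect before its data is reused, the sketch below aggregates records and suppresses small groups before anything is shared. The field name, threshold and `aggregate_for_supplier` helper are assumptions for the example and are not a substitute for a proper anonymisation assessment.

```python
# Illustrative sketch only: aggregating customer records before they are shared with a
# supplier for model improvement, withholding any group too small to be meaningfully
# anonymous. The field name and the threshold are assumptions made for the example.
from collections import Counter

SMALL_GROUP_THRESHOLD = 10  # groups below this size are withheld entirely

def aggregate_for_supplier(records: list, group_field: str) -> dict:
    """Return counts per group, dropping groups small enough to risk re-identification."""
    counts = Counter(record[group_field] for record in records)
    return {group: n for group, n in counts.items() if n >= SMALL_GROUP_THRESHOLD}

records = [{"region": "north"}] * 25 + [{"region": "south"}] * 3
print(aggregate_for_supplier(records, "region"))  # {'north': 25} - the small 'south' group is suppressed
```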

From a customer perspective, giving suppliers this right may have implications for IP ownership and, where personal data forms part of its dataset, for data protection provisions that need careful consideration. Typically, customer data will have been collected from data subjects for purposes relating to the business of that company; it may not have been envisaged that the data would then be used for ancillary purposes such as training AI systems, particularly by a third party. This would need to be baked into privacy policies. From a supplier perspective, the provenance of training datasets is a concern. Suppliers will want assurances that such use has been envisaged and that they can legitimately use these datasets in a way that will not result in liability being incurred.

Liability

When contracting for AI systems, liability for generated output is typically a concern for both customer and supplier, but liability clauses do not themselves proactively manage operational risk. The issue that can have the biggest impact on liability is the scale at which things can go wrong with an AI solution.

Whilst clearly allocating liability, agreeing large limits on liability and including warranties and indemnities in contracts all provide important protection, customers and suppliers should ultimately ensure there are other contractual controls which drive the management of operational risk. Circuit breakers capable of halting the use of an AI system that is showing signs of error or bias, and the ability to revert to an earlier version of the AI solution which showed no signs of corruption, can be helpful tools here.
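
As a rough illustration of what a circuit breaker of this kind might look like operationally, the sketch below trips when an observed error rate breaches an agreed threshold, halting use of the current model version and falling back to an earlier one. The class name, threshold and version labels are hypothetical assumptions for the example; the contractual control sits alongside, rather than replaces, this kind of mechanism.

```python
# Minimal sketch of a 'circuit breaker' for an AI system: if the observed error or bias
# rate breaches an agreed threshold, halt use of the current model version and revert
# to an earlier version that showed no signs of corruption. The class, threshold and
# version labels are illustrative assumptions only.

class AICircuitBreaker:
    def __init__(self, error_threshold: float, fallback_version: str):
        self.error_threshold = error_threshold
        self.fallback_version = fallback_version
        self.active_version = fallback_version
        self.tripped = False

    def deploy(self, version: str) -> None:
        """Put a new model version into live use."""
        self.active_version = version
        self.tripped = False

    def record_error_rate(self, observed_error_rate: float) -> None:
        """Trip the breaker and revert to the fallback version if the threshold is breached."""
        if observed_error_rate > self.error_threshold:
            self.tripped = True
            self.active_version = self.fallback_version

breaker = AICircuitBreaker(error_threshold=0.05, fallback_version="v1.2")
breaker.deploy("v1.3")
breaker.record_error_rate(0.08)  # breach: use of v1.3 is halted and v1.2 is reinstated
print(breaker.tripped, breaker.active_version)  # True v1.2
```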

Written by Anita Basi of Pinsent Masons. Pinsent Masons is hosting a webinar on the topic of how to manage risk and operate an effective technology transformation programme on Wednesday 15 May. The event is free to attend – registration is open.
