Out-Law Analysis

AI – the new workstream in M&A deals


Transactional lawyers have a new challenge on the horizon – how to deal with AI-specific issues in deals.

We expect to see more M&A activity across sectors for AI-focused acquisitions, ranging from international corporations looking to make strategic purchases of tech-based companies, to private equity houses looking to bolt companies on to their portfolios. For lawyers supporting those deals, there is a need to ensure the M&A transactional process keeps pace with the technology it is meant to cover. Below are just some of the workstreams we expect to change in AI-focused acquisitions.

Due diligence

IP transactional lawyers often lead a number of due diligence workstreams during a corporate transaction beyond the traditional IP workstream. For example, they may also lead on IT, data protection and cybersecurity matters, depending on their organisation’s structure. IP lawyers may therefore find themselves leading on AI due diligence as well, particularly given the myriad IP and data issues that arise out of, or relate to, AI.

Although AI will not be so different from the other workstreams IP lawyers are already working on, certain issues will likely be amplified. There will still be an IP workstream, but we expect there will be fewer ‘checkbox’ exercises, such as verifying IP registrations, and more conversations with the management team about whether and how the target protects any trade secrets in the AI system, and how practicable it is to police infringement.

Open source software

Although the use of open source software (OSS) is already investigated thoroughly in technology-heavy sales, the importance of examining the key components, and considering mitigation strategies, will likely be heightened in the AI sphere. When an acquisition is focused squarely on the tech, there are always concerns about whether valuable source code may include OSS code obtained under ‘copyleft’ licences, which could trigger obligations to make that code available to downstream users. OSS review may extend the due diligence timeline or add to the list of pre-closing conditions, particularly if third party vendors need to audit the source code of key products.
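By way of illustration, the sketch below shows the kind of automated licence inventory a technical reviewer might run over a Python environment during OSS diligence. The copyleft keyword list and the reliance on package metadata are simplifying assumptions; real reviews use dedicated scanning tools and manual legal analysis.

```python
# A minimal sketch of an automated licence inventory for an installed
# Python environment. The copyleft keyword list is illustrative, not
# exhaustive, and package metadata is only a best-effort signal.
from importlib.metadata import distributions

COPYLEFT_KEYWORDS = ("GPL", "AGPL", "LGPL", "MPL", "EPL")  # illustrative

def licence_of(dist) -> str:
    """Best-effort licence string from a package's metadata."""
    meta = dist.metadata
    return meta.get("License") or meta.get("Classifier") or "UNKNOWN"

flagged = []
for dist in distributions():
    lic = licence_of(dist)
    if any(keyword in lic.upper() for keyword in COPYLEFT_KEYWORDS):
        flagged.append((dist.metadata["Name"], lic))

for name, lic in sorted(flagged):
    print(f"review: {name} -> {lic}")
```

A scan like this only surfaces candidates for review; whether a given copyleft licence actually triggers disclosure obligations depends on how the component is linked and distributed, which remains a legal question.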

Copyright issues

In addition to standard data protection due diligence, there will be heightened review of the data used to train the target’s AI systems. These issues very often have an IP angle. In the UK, it is possible to use data scraped from the internet but only in certain circumstances.

First, businesses need to understand whether they have lawful access to a website’s content in the first place, because the site’s terms and conditions may restrict data scraping. If the requirement of lawful access is met, and the business has not otherwise obtained permission from the website’s owner to use the data, then it needs to assess whether its proposed use of the data is covered by the text and data mining exception (TDM exception) in section 29A of the Copyright, Designs and Patents Act 1988, a narrow exception allowing such activity for non-commercial purposes only.
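As one narrow, technical illustration: a site’s robots.txt file signals what automated crawlers may fetch, and the Python standard library can check it, as sketched below. This is a compliance signal only and assumes nothing about the legal analysis; permission under robots.txt does not establish lawful access where a site’s terms and conditions restrict scraping.

```python
# Illustrative only: robots.txt is a technical signal for crawlers,
# not a legal determination of "lawful access" - a site's terms and
# conditions may restrict scraping even where robots.txt permits it.
from urllib import robotparser
from urllib.parse import urlparse

def robots_permits(url: str, user_agent: str = "example-crawler") -> bool:
    """Return True if the site's robots.txt permits fetching `url`."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # network call; may raise if the host is unreachable
    return rp.can_fetch(user_agent, url)

print(robots_permits("https://example.com/articles/page1"))
```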

If a target has trained its AI system using datasets scraped from websites and it did not have lawful access to the websites, or its use does not fall within the TDM exception, then this would be a potential red flag in due diligence.

The UK government proposed broadening the TDM exception in 2022 but recently rowed back on this after a backlash from the creative industries. The government is now proposing a code of practice as a means of striking the right balance between supporting AI development and protecting the interests of rights holders. Proposals for the code of practice are anticipated from the UK’s Intellectual Property Office (IPO) this summer, and the government has specified that “an AI firm which commits to the code of practice can expect to be able to have a reasonable licence offered by a rights holder in return”.

Whether this morphs into a full FRAND-style regime remains to be seen, but we do know that during due diligence lawyers will almost certainly need to try to confirm whether the data used to train the AI system was lawfully obtained and, if so, whether it can be used for the intended purposes. This may be possible if the target has kept good records, but it will likely be a real challenge for many companies – particularly those that have used large datasets for training.
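What ‘good records’ might look like in practice is a per-source provenance log. The sketch below is a hypothetical record structure, not a prescribed format; the field names are our assumptions about the minimum a diligence team would want to see.

```python
# Hypothetical per-source provenance record for training data; field
# names are illustrative, not a prescribed or standard format.
from dataclasses import dataclass, asdict
import json

@dataclass
class DataSourceRecord:
    source_url: str      # where the data was obtained
    date_collected: str  # ISO 8601 date of collection
    access_basis: str    # e.g. "licence", "TDM exception", "public domain"
    licence_terms: str   # licence reference or summary of permitted use
    intended_use: str    # what the data was actually used for

record = DataSourceRecord(
    source_url="https://example.com/dataset",
    date_collected="2022-11-03",
    access_basis="licence",
    licence_terms="CC BY 4.0",
    intended_use="training an internal model",
)
print(json.dumps(asdict(record), indent=2))
```

A target that can produce records like this for each dataset makes the ‘lawfully obtained, and usable for the intended purpose’ questions answerable; a target that cannot leaves the buyer pricing in the uncertainty.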

Data privacy and cybersecurity

Another issue to consider is the target’s data privacy compliance and the robustness of its cybersecurity systems and processes. While these are standard due diligence workstreams, the sheer amount of data that the AI system may have used as training data, or that it may process as part of its functionality, means that this may be more labour-intensive than in typical deals, particularly if a multi-jurisdictional review of applicable legislation is required.

Data quality/ethics

The quality of data will also need to be considered. An AI system is only as good as the data used to train it, so if it is fed low-quality data, this will impact the quality of the results and solutions the AI system generates. If the target’s AI system is generating low-quality solutions, or the target does not have sufficient safeguards in place to identify low-quality data, this could raise risks around AI ethics and ultimately impact the valuation of the target. It seems likely that lawyers will need to review a target’s “AI policy” just as they currently review its data privacy policy to understand how the target handles quality and ethical issues. If there are gaps in a target’s procedures, these will likely require additional warranty protection in the sale and purchase agreement.
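To make the point concrete, the toy example below sketches the sort of automated quality gate a target’s AI policy might mandate before data enters a training pipeline. The checks and thresholds are purely illustrative assumptions.

```python
# Toy data-quality gate of the sort an AI policy might mandate before
# data enters a training pipeline. Checks and thresholds are illustrative.
import pandas as pd

def quality_issues(df: pd.DataFrame, max_missing=0.05, max_duplicates=0.01):
    issues = []
    missing_rate = df.isna().mean().max()    # worst column's missing-value rate
    duplicate_rate = df.duplicated().mean()  # share of duplicate rows
    if missing_rate > max_missing:
        issues.append(f"missing values: {missing_rate:.1%} in worst column")
    if duplicate_rate > max_duplicates:
        issues.append(f"duplicate rows: {duplicate_rate:.1%}")
    return issues

df = pd.DataFrame({"text": ["a", "a", None, "b"], "label": [1, 1, 0, 1]})
for issue in quality_issues(df):
    print("flag:", issue)
```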

Employment

The AI due diligence workstream will likely overlap with traditional employment workstreams, as retention of key staff will be more of an issue in AI-focused acquisitions. AI algorithms can be incredibly complex – so complex that their inner workings may be unknown even within the target business, depending on how the algorithms have evolved over time.

It will be important for businesses to focus on retaining the employees and contractors who created and developed the AI. Retaining “gatekeeper” staff, such as the computer programmers, software engineers and other technical experts, may be the only way to understand the technology and to protect and develop it. Such retention may also be critical to mitigating potential harms to users.

Warranties and indemnities

AI will raise new issues in relation to warranty and indemnity protection. Warranties will need to be drafted to mitigate any risks already identified and “flush out” any additional relevant information not yet disclosed by the target. Indemnities may also need to be included to protect the buyer from any potential future liability arising from such risks. Such risks can be wide-ranging and would likely pop up as “red flags” in diligence reports. These might include risks around copyright infringement, data protection breaches, cybersecurity concerns, data ethics issues – such as discrimination arising from biased data – and OSS issues, as well as new regulatory compliance risks.

Regulatory compliance

We anticipate there will be a patchwork of AI regulation across jurisdictions, which will create a degree of uncertainty in the short term about how compliance should be measured.

In March 2023, the UK government released its AI white paper, which set out a “pro-innovation” approach to AI regulation in the UK based on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Under the UK proposals, the principles would be issued on a non-statutory basis and implemented on a cross-sector basis by regulators within their individual areas. The government has indicated it may introduce a statutory duty for regulators to have due regard to these principles. In contrast, the EU, with its plans for a new Artificial Intelligence Act (AI Act), intends to introduce a stricter statutory framework based on the level of risk the AI systems may present to the public, with significant potential fines for non-compliance – even higher in some cases than GDPR fines. It aims to engender public trust in AI systems by emphasising transparency – for example, disclosure of copyrighted data used to train the AI models.

In addition, if enacted in the current form proposed by the European Parliament, the AI Act will require assurances to be given that not only ‘AI systems’ as defined in the Act but also ‘foundation models’ have met stringent conformity assessment requirements. Assurances will also need to be obtained that the AI system meets a range of regulatory requirements, which will involve assessments of data sourcing practices, model risk management, approaches to governance and the robustness of the algorithms used.
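While the conformity assessment procedures themselves remain to be finalised, a diligence team might organise its enquiries around the assessment areas named above. The checklist structure below is hypothetical, not an official AI Act artefact.

```python
# Hypothetical diligence checklist mirroring the assessment areas named
# above; not an official AI Act artefact or conformity procedure.
from dataclasses import dataclass, field

@dataclass
class DiligenceItem:
    area: str
    question: str
    evidence: list = field(default_factory=list)  # documents reviewed
    assessment: str = "not yet assessed"

checklist = [
    DiligenceItem("data sourcing", "Is training-data provenance documented?"),
    DiligenceItem("model risk management", "Is there a documented risk process?"),
    DiligenceItem("governance", "Who is accountable for model approvals?"),
    DiligenceItem("robustness", "Are accuracy and robustness tests recorded?"),
]
for item in checklist:
    print(f"[{item.area}] {item.question} -> {item.assessment}")
```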

Given the differing approaches proposed, it seems likely that lawyers seeking to conduct quick-paced due diligence would face practical challenges in tracing a target’s data back to various countries and then applying those countries’ varying regulatory regimes to each piece of sourced data – particularly in a jurisdiction like the UK, where individual regulators are expected to be given some leeway to set their own rules. Unless and until harmonisation of the different regulatory regimes is achieved, lawyers will need extra time and care to identify areas of non-compliance, which may have to be done on a country-by-country basis, and recommend mitigation strategies.

Navigating AI complexities

One of the most difficult aspects of taking on the AI workstream in transactions may be simply getting to grips with the target’s technology. Lawyers cannot fully analyse risk and provide recommendations for mitigation without understanding the target’s products and services. IP transactional lawyers are expected to be able to get up to speed quickly on tech and act as a type of translator during the transaction, explaining the technology, and associated risks, in plain English to their colleagues and clients.

AI technology, however, is particularly complex. Without being able to understand what is in the “black box” of the AI system, it will be difficult to fully assess risk and provide comprehensive recommendations. Although much may be unknown, there are plenty of issues – such as those signposted above – which we think can act as a helpful framework for many deals. Ultimately, we anticipate that there will need to be more conversations with clients, management teams, R&D experts and sector specialists throughout the M&A process to bring the threads together.

Co-written by Concetta Scrimshaw of Pinsent Masons.
