Out-Law Analysis | Reading time: 7 min.
04 Aug 2025, 1:23 pm
With one year until new rules on ‘high-risk’ AI systems take effect in the EU, pharmaceutical companies using AI in the process of drug development need clarity on whether the rules will apply to them.
The EU AI Act’s rules on ‘high-risk’ AI systems take effect from 2 August 2026, but the extent to which those rules will govern use of AI by pharmaceutical companies is currently unclear.
Pharmaceutical manufacturers will be among the many businesses eagerly anticipating publication of new guidelines on the matter, which are expected to stem from a recent targeted consultation the European Commission held with stakeholders on the classification of AI systems as high-risk under the Act.
Below, we look at how AI use across the medicines lifecycle could engage the AI Act’s rules and explore more generally how EU regulators are responding to the wave of AI-related innovation in the pharmaceuticals sector.
Pharmaceutical companies are already using AI in many areas of their business. As in other industries, large language models (LLMs) play an important role. In addition, a growing number of bespoke AI-based solutions are being built to enhance sector-specific tasks. In particular, AI is being deployed to speed up, refine, and rethink how medicines are discovered and brought to market.
For example, AI is helping pharmaceutical companies to process huge datasets more quickly, to better identify new biological targets for diseases and to design new treatments. An important innovation is the use of digital twins – virtual patient models that simulate individual responses to therapies. These models generate vast clinical datasets and are increasingly used to support clinical trials, enabling more precise trial design and patient matching. The creation of digital twins and the adoption of a ‘biology-first’ approach to drug development can help companies to understand not just that a drug works but why it works.
As Roche explained earlier this year, some companies are making use of AI like an additional co-researcher in the lab, via what has been dubbed a ‘lab in the loop’ strategy. This, Roche said, “involves training AI models with massive quantities of data generated from lab experiments and clinical studies” and then using those models to “generate predictions about disease targets and designs of potential medicines that are experimentally tested” by human scientists.
Many organisations in the pharmaceutical market are also using AI to support their clinical trials – not only by matching eligible patients to trials more effectively but also by informing how those trials are designed and adapted in response to patient feedback.
AI is a vital enabler of personalised medicine too, helping companies to tailor therapies based on genetic and clinical data, while it can further help identify new medical uses for existing products and enhance efficiency and quality control in the manufacturing process.
The transformational potential of AI to support the development of new medicines has also been recognised – not only by industry, but by regulators too.
In a paper published last year (18-page / 379KB PDF), the European Medicines Agency (EMA) said AI “can, if developed and used correctly, effectively support the acquisition, transformation, analysis, and interpretation of data within the medicinal product lifecycle”.
The EMA, however, said use of AI in the lifecycle of medicinal products introduces “new risks”. It said it expects medicines researchers “to perform a regulatory impact and risk analysis” of all their uses of AI and to “seek regulatory interactions when no clearly applicable written guidance is available”.
New regulatory guidance on how to manage AI-related risks is expected from the EMA.
A workplan (37-page / 810KB PDF) published recently by the EMA and the Heads of Medicines Agencies, a body that brings together senior leaders from national medicines regulators in Europe, confirmed that guidance on AI in the medicines lifecycle is currently under development.
According to the workplan, the bodies further plan to begin drafting guidance on AI in clinical development and AI in pharmacovigilance during this quarter of the year. Publication of the regulators’ AI research priorities is also expected soon.
The AI Act, dubbed the world’s first AI law, came into force in the EU last year, introducing a new risk-based system of regulation for AI in the EU. Its provisions take effect on a staggered basis – and some of the rules already apply.
A ban on some types and certain uses of AI took effect on 2 February 2025. A separate regime applicable to providers of so-called ‘general purpose AI’ models entered into force on Saturday, 2 August 2025.
The strictest regulatory requirements under the AI Act are reserved for AI systems classed as ‘high-risk’. Those rules will take effect on 2 August 2026. High-risk AI systems will need to comply with certain requirements – including around risk management, data quality, transparency, human oversight and accuracy – while providers and deployers of such systems will face new legal obligations, such as in relation to registration, quality management, monitoring, record-keeping, and incident reporting. Importers and distributors of high-risk AI systems will also face duties under the Act.
With AI use becoming ubiquitous, there is a task for businesses across sectors to understand whether their use of AI will be subject to the ‘high-risk AI’ regime.
There are three ways in which the AI Act provides for AI systems to be considered ‘high-risk’. These ways are explored in detail in our guide to high-risk AI systems under the EU AI Act, but in essence they are:
- where the AI system is intended to be used as a safety component of a product covered by the EU harmonisation legislation listed in Annex I of the Act and that product is required to undergo third-party conformity assessment;
- where the AI system is itself such a product and is required to undergo third-party conformity assessment under that legislation;
- where the AI system falls within one of the use cases listed in Annex III of the Act.
In a paper published last autumn, the European Federation of Pharmaceutical Industries and Associations (EFPIA) argued (9-page / 611KB PDF) that AI systems “used solely for the purpose of medicines R&D” are exempt from the requirements of the EU AI Act. It said Articles 2(6) and 2(8) of the Act, as well as the wording in recital 25 in the legislation, support its position.
Article 2(6) provides that the AI Act “does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development”.
Article 2(8) provides that the AI Act “does not apply to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service”, though “testing in real world conditions” is not covered by that exemption.
Recital 25, which is non-binding, elaborates on the wording in articles 2(6) and 2(8).
According to the EFPIA, even if the exemptions under articles 2(6) and 2(8) do not apply, it believes the EU AI Act’s rules on high-risk AI would not govern “most uses of AI in medicines R&D”. Its position is based on its interpretation of the scope of the rules on high-risk AI.
The EFPIA said: “If the [articles 2(6) or 2(8)] exemption were not to apply, EFPIA assesses that most uses of AI in medicines R&D typically involve AI-enabled software that is not regulated under any of the product-specific legal frameworks outlined in Annex I (including those for medical devices) nor are they featured under Annex III high-risk uses. Therefore, they cannot legally qualify as high-risk under the AI Act.”
The EFPIA further argues that classifying AI systems used in medicines R&D as ‘high risk’ could discourage innovation by imposing burdensome compliance requirements. Crucially, the EFPIA believes that pharmaceutical companies can demonstrate sufficient understanding of AI models without disclosing extensive proprietary information in regulatory submissions – a concern that underscores the need for tailored guidance in the pharmaceutical sector and beyond.
In this respect, the AI Office recently emphasised that EU regulators are aware of the need to strike the right balance between adequate transparency and disclosure on the one hand and respect for trade secrets and businesses’ legitimate interest in confidentiality on the other.
While the EFPIA’s view on the scope of the ‘high-risk’ AI regime is clear, the official position is not. Guidance on the issue is anticipated in the months ahead, however.
Under the AI Act, the European Commission has a legal duty to provide guidelines specifying the practical implementation of the AI Act’s classification rules for high-risk AI systems, together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk, by 2 February 2026.
In June this year, the Commission opened a targeted stakeholder consultation in which it sought feedback to inform the development of those guidelines. The consultation closed on 18 July.
Pharmaceutical companies will be hoping that the Commission is sympathetic to the views of the EFPIA and other industry lobbyists when it comes to clarifying the scope of the AI Act’s high-risk AI regime in its new guidelines.
Notwithstanding its arguments relating to the scope of the AI Act’s rules, the EFPIA has questioned whether there is a need for pharmaceutical companies to be subject to the AI Act at all. In its paper, it said it believes “upcoming AI guidance from the EMA in conjunction with the established, well-functioning legislative and regulatory frameworks for medicines will ensure an appropriate regulatory framework for AI used in the lifecycle of medicines”.
The EFPIA’s position is that “the existing, well-established EU regulatory and policy environment which governs medicine development and authorisation … can also be leveraged for the use of AI as a tool in the medicines lifecycle” and that those existing frameworks, “coupled with future EMA guidance on AI and EMA meeting interactions, will be sufficient to address any gaps in information and guidance to accommodate the inclusion of AI tools in the lifecycle of medicinal products”.
The EMA is also planning to publish an observatory publication on AI, together with principles for responsible AI and AI terminology, this year.
In the short term, pharmaceutical companies should give thought to whether and how their use of AI engages legal and regulatory compliance obligations under EU rules already in effect – from sector-specific rules on pharmaceuticals and regulations on medical devices, to rules governing their use of data, like the General Data Protection Regulation (GDPR) – and to ethical duties arising under professional standards.