With their proposals, the two European Parliament committees are seeking to add specific further obligations for foundation models. In general, foundation models will not be classed as ‘high-risk’ AI systems – unless they are “directly integrated in [a] high-risk AI system”.
Beyond proposing amendments relating to foundation models, the MEPs suggested extending the list of AI uses that would be prohibited under the AI Act. They also proposed amendments to the criteria for ‘high-risk’ AI systems – under their proposals, systems would have to pose a significant risk of harm to people’s health, safety, or fundamental rights to be categorised in this way.
Providers would be obliged to notify regulators if they did not think their systems posed a ‘significant risk’, with the potential for penalties to be issued if systems are put into use but are subsequently found to have been misclassified.
The MEPs have also proposed making the obligations for high-risk AI providers much more prescriptive, notably in relation to risk management, data governance, technical documentation and record keeping. In addition, a completely new requirement has been proposed for users of high-risk AI solutions to conduct a fundamental rights impact assessment, considering aspects such as the potential negative impact on marginalised groups and the environment.
The committees’ draft also provides for overarching principles to apply to all AI systems. Those principles are human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; and social and environmental wellbeing.
Technology law expert Sarah Cameron of Pinsent Masons said the inclusion of the overarching principles suggests EU law makers are taking notice of the approach other policymakers are taking globally on the issue of AI regulation.
“Many in the tech industry and beyond prefer the UK’s principles-based approach, with interpretation and implementation left to vertical regulators focusing on the context and use, rather than the EU’s horizontal, cross-sector rules focused on levels of risk around specific systems – in what is a more tech-focused approach,” Cameron said.
“Some of the changes proposed in the European Parliament’s latest draft just might point to a softening in approach or a nod to the different, lighter touch, principles-based approaches emerging in other countries such as UK, US, Singapore and Japan, and the need for collaboration and interoperability,” she said.
“For example, a new preamble that the OECD-based definition should be closely aligned with the work of international bodies working on AI to ensure legal certainty, harmonisation and wide acceptance is notable, as is the adoption of overarching principles applicable to all AI systems, and the raising of the bar for what qualifies as a high-risk system. It is vital that we do see effective and productive co-operation at the international level if we do want to see a pro-innovation, confidence-building approach that is navigable by AI developers and users, particularly as the EU may well find it is not setting the global standard for AI regulation as it did with GDPR,” Cameron said.
The proposals of the two parliamentary committees are expected to be adopted by the full European Parliament in a vote scheduled to take place between 12 and 15 June. Once the Parliament adopts its position, it will be ready to open so-called trilogue negotiations on finalising the text with the Commission and the Council of Ministers, which adopted its own draft text in late 2022.