
Out-Law News

EU AI Act: talks open on finalising new legislation


EU lawmakers will need to make significant compromises if the proposed new EU AI Act is to become law, given the major differences of opinion between them over how artificial intelligence (AI) systems should be regulated, according to technology law experts.

Sarah Cameron and Luke Scanlon of Pinsent Masons were commenting after so-called trilogue negotiations over the AI Act were opened on Wednesday. The talks will involve representatives from three EU institutions – the European Parliament, Council of Ministers and European Commission – which have each developed different views on the draft legislation’s scope and content.

Proposals for a new EU AI Act were set out by the European Commission in April 2021. The Commission proposed to regulate AI systems in accordance with the level of risk they are deemed to present to people. Under its plans, AI that poses an “unacceptable risk” to people would be prohibited, while the bulk of the regulatory requirements would apply to ‘high-risk’ AI systems, including obligations around the quality of data sets used, record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. ‘Low-risk’ AI systems would be subject to limited transparency obligations.

Both the Parliament and Council have subsequently scrutinised the plans over the past two years and adopted their own negotiating positions ahead of trilogue talks, the purpose of which is to land on a final text that can be adopted into EU law.

The Council adopted its position late last year and on Wednesday the Parliament endorsed the proposals put forward last month by its Internal Market Committee and Civil Liberties Committee, paving the way for the trilogue talks to begin.

Cameron and Scanlon said the respective positions of the Council and Parliament are very different and to a large degree reflect late amendments proposed by MEPs this year in response to fast-moving technological developments.

In its proposals, the Parliament has provided for the regulation of not just AI systems but ‘foundation models’ too – a term defined by the MEPs as an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks. Providers of ‘generative foundation models’, which are used in AI systems specifically for the purposes of generating content, would be subject to particular obligations – including a need to provide transparency over when content has been created by an AI system and not a human, and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.

The concept of foundation models is not addressed in the Council’s negotiating position.

Cameron said: “Highlighting the challenges with the Act’s prescriptive risk-based approach, foundation models have not been allocated to the high-risk category. Instead, new drafting imposes specific obligations including transparency requirements around training data and designing the model to avoid generating illegal content.”

“Risks from generative AI have caused significant concern even amongst AI proponents. Given the variety of approaches to AI regulation globally – not least between EU and UK – the real concern around foundation models and generative AI may be the enabler for rapid alignment to address the risks and avoid a loss of public trust in AI overall. It is clear from Rishi Sunak’s plans to host an AI summit later this year that the UK wishes to be at the centre of this discussion,” she said.

Scanlon said: “The EU position, which relies on formal approvals, conformity assessments, and now, transparency around the data sources used for foundation models, is gaining traction in international discussions. However, it is not clear that other jurisdictions will take the same approach. As the ‘raw materials’ for large language models and generative AI are readily available in almost all jurisdictions, misalignment internationally as to the process for permitting, prohibiting, and/or licensing the development or use of AI in the EU could have significant unintended consequences on efforts to promote the responsible and safe use of AI and economic development.”

“In respect of the EU legislative process, it is difficult to see how the approach put forward by the Parliament can be quickly reconciled with the approach of the Council, in particular in relation to foundation models,” he said.
