The use of artificial intelligence (AI) systems to support the recruitment of new staff or internal decision-making affecting existing employees is set to be more tightly regulated when the EU AI Act takes effect.

The proposed new legislation is in the final stages of being approved – it has already been formally adopted by the European Parliament and recent reports suggest that it will come into force in June.

The AI Act will introduce new obligations for producers, deployers, importers and distributors of AI systems under a new risk-based system of regulation. The most stringent regulatory requirements will apply to AI systems that can be classed as ‘high-risk’ AI systems.

Employers deploying AI systems in the recruitment process could be subject to the rules for ‘high-risk’ AI. This includes AI systems intended to be used in the recruitment or selection process, such as to place targeted job advertisements, analyse and filter job applications, and evaluate candidates.

AI systems intended to be used to make decisions affecting the terms of work-related relationships, promotion, or termination of work-related contractual relationships; to allocate tasks based on individual behaviour or personal traits or characteristics; or to monitor and evaluate the performance and behaviour of persons in such relationships will also generally be considered high-risk AI systems for the purposes of the regulation.

High-risk AI systems will need to conform to certain requirements – including around risk management, data quality, transparency, human oversight and accuracy – while the businesses deploying that technology will face obligations around registration, quality management, monitoring, record-keeping, and incident reporting.

In the employment context, the rules are aimed at addressing risks such as bias and discrimination, as well as risks to data protection and privacy rights.

The new rules will not take effect immediately, however. After its final approval, the AI Act will enter into force 20 days after its publication in the Official Journal of the EU, and its provisions will apply in stages. Most of the Act will begin to apply 24 months after its entry into force, while the obligations pertaining to high-risk AI systems will apply 36 months after the legislation enters into force.

While it is likely to be summer 2027 before the new rules on high-risk AI begin to apply, it will be good practice for employers – and HR professionals specifically – to prepare properly for the AI Act and begin meeting its obligations in advance. They should:

  • use high-risk AI systems in accordance with the instructions of use issued by the providers;
  • comply with relevant sectoral legislation – banks, for example, may face further regulatory obligations when deploying AI;
  • ensure that the input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system, where they are able to exercise control over the input data;
  • monitor the high-risk AI system’s compliance with its own terms of use – and suspend use and make a report when any serious incident is identified;
  • keep records of logs generated by the AI systems in an automatic and documented manner, where those logs are under the organisation’s control;
  • conduct a data protection impact assessment (DPIA) and a fundamental rights impact assessment (FRIA) before using the relevant system.

Also, where high-risk AI systems are not currently in use but their use is planned, HR professionals should begin to factor the above obligations into any implementation process – and consider any practical barriers that might arise in respect of meeting the obligations.

Organisations developing their own AI systems for use in the HR context should ensure that they are aware of their obligations as providers of high-risk AI systems under the EU AI Act. Where an organisation puts a high-risk AI system into service under its own name or trademark, or makes a ‘substantial modification’ to an existing high-risk AI system, it will be deemed a ‘provider’ and additional obligations under the AI Act will apply.

HR professionals should put robust AI governance in place on a timely basis, implementing controls, policies and frameworks to address the challenges brought by HR systems that use AI and to govern how employees use AI. It is also strongly recommended to monitor implementation of the EU AI Act across relevant EU member states and to review any associated EU-wide or national guidance that is published.
