
Out-Law Analysis

What meaningful human oversight of AI should look like


Meaningful human oversight of the way artificial intelligence (AI) systems operate is considered essential by experts in the technology and is increasingly being demanded by policymakers and regulators across Europe.

A recent report by MEPs offers businesses a fresh perspective on the growing expectations around human oversight of AI. It adds to the resources businesses – in particular financial services firms – can already draw on to determine the practical steps they need to take to deliver meaningful oversight that safeguards against consumer harm and corresponds to emerging law and regulation.

Human oversight requirements in law?

Many businesses are increasingly relying on AI systems to carry out functions traditionally performed by humans. As use increases and the technology continues to develop, businesses should consider the extent to which they rely solely on this technology and allow AI systems to run autonomously.

Many such businesses already operate against a heavy regulatory backdrop, such as those in financial services, where there are particularly stringent requirements in relation to customer-facing operations. New EU laws on AI are set to introduce further requirements in respect of ‘high risk’ AI systems and specific AI use cases, such as credit checking. In the UK, the development of “additional cross-sector principles or rules, specific to AI” is also under consideration. The Office for AI is developing a “pro-innovation national position” on governing and regulating AI, which is expected to be articulated in a white paper “in early 2022”, according to the UK’s national AI strategy published last autumn.

In its guidelines on trustworthy AI, the EU High-Level Expert Group on AI (EU HLEG) said that “any allocation of functions between humans and AI systems should follow human-centric design principles and leave meaningful opportunity for human choice”, which in turn requires implementing human oversight and controls over AI systems and processes. The concepts of human centricity and oversight were carried over into the European Commission’s draft AI regulation (EU AI Act).

In respect of high risk AI systems, the draft EU AI Act provides that such systems should “be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use”.

How much oversight should there be?

According to the EU HLEG, oversight can take various forms and operate at differing levels. These include:

  • human in the loop, which involves the capability for human intervention in every decision cycle of the system;
  • human on the loop, which involves human intervention during the design cycle of the system and monitoring the system’s operation; and
  • human in command oversight, which involves the capability to oversee the overall activity of the AI system and the ability to decide when and how to use the system in any particular situation. This can include deciding not to use an AI system in a particular situation, establishing levels of human discretion during use of the system, or overriding a decision made by the system.

The level of oversight required will also depend on factors such as what the system is being used for and the safety, control, and security measures in place. The less oversight a human can exercise over an AI system, the more testing and governance will be required to ensure that the system is producing accurate and reliable outputs.
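To make these distinctions more concrete, the sketch below shows one simple way a ‘human in the loop’ arrangement might be expressed in software: the AI system’s recommendation is held until a named reviewer either confirms or overrides it, and the reviewer’s decision is always the one applied. It is a minimal illustration only; the names and structure are assumptions and are not drawn from the EU HLEG guidance.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewOutcome(Enum):
    ACCEPTED = "accepted"      # reviewer agreed with the AI recommendation
    OVERRIDDEN = "overridden"  # reviewer substituted their own decision


@dataclass
class ReviewedDecision:
    ai_recommendation: str  # what the AI system proposed
    final_decision: str     # what is actually applied
    outcome: ReviewOutcome
    reviewer_id: str
    rationale: str          # the reviewer's reasons, retained for audit


def human_in_the_loop_gate(ai_recommendation: str, reviewer_id: str,
                           reviewer_decision: str, rationale: str) -> ReviewedDecision:
    """Hold an AI recommendation until a named reviewer confirms or overrides it.

    The AI output is never applied automatically: the reviewer's decision takes
    effect, and the rationale is recorded so the review is more than a token gesture.
    """
    outcome = (ReviewOutcome.ACCEPTED if reviewer_decision == ai_recommendation
               else ReviewOutcome.OVERRIDDEN)
    return ReviewedDecision(ai_recommendation, reviewer_decision, outcome,
                            reviewer_id, rationale)
```

Treating the reviewer’s decision, rather than the system’s output, as the one that takes effect reflects the point made by regulators, discussed below, that reviewers must have the authority to go against the recommendation.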

Too much oversight?

High levels of human involvement may not be possible, desirable or cost-effective in practice. This appears to be recognised by policymakers and regulators in both the EU and UK – the European Commission, the UK Information Commissioner’s Office (ICO), and the AI Public Private Forum (AIPPF) set up by the Financial Conduct Authority and the Bank of England all agree that the level of human oversight used must be “appropriate”.

Having the right people involved at the right stage of the AI lifecycle can help ensure that any human oversight or intervention is an effective safeguard.

The AIPPF in a recent report said that there is “a need to increase data skills across different business areas and teams” and that “board members and senior managers are not always aware of, or do not fully appreciate, the importance of issues like data quality”. It added that “there is a need to increase understanding and awareness at all levels of how critical data and issues like data quality are to the overall governance of AI in financial services firms”.

Similar views are shared by the ICO. It has said that organisations should decide upfront who will be responsible for reviewing AI systems, and should ensure that AI developers understand the skills, experience and ability of human reviewers when designing those systems. The ICO explains that organisations should “ensure human reviewers are adequately trained to interpret and challenge outputs” from the AI system, and “human reviewers should have meaningful influence on the decision, including the authority and competence to go against the recommendation”.

The ICO further explains in its guidance on AI and data protection that “the degree and quality of human review and intervention before a final decision is made about an individual are key factors” in relation to solely automated decision making. Human reviewers must be involved in checking an AI system’s decision/output and should not automatically apply the decision of the system; the review must be meaningful, active and should not simply be a “token gesture” – it should include having the ability to override a system’s decision; and reviewers “must ‘weigh up’ and ‘interpret’ the recommendation, consider all input data, and also take into account other additional factors”.

Responsibility for meaningful human input around solely automated decision making lies throughout an organisation and not only with the individual using the AI system, according to the ICO. Senior leaders, data scientists, business owners, and those with oversight functions are cited as being “expected to play an active role in ensuring that AI applications are designed, built and used as intended”.

Meaningful human oversight in practice

Both the ICO and the EU HLEG have articulated steps that businesses can take to apply meaningful human oversight of AI systems in practice. A recent report by two European Parliament committees, which proposes amendments to the draft EU AI Act, suggests that some specific requirements in this regard will soon be stipulated in EU law.

Training

The ICO notes that training of staff is important in controlling the level of automation of a system. It recommends that organisations train or retrain human reviewers to:

  • understand how an AI system works and its limitations;
  • anticipate when the system may be misleading or wrong and why;
  • maintain a healthy level of scepticism about the AI system’s output and be given a sense of how often the system could be wrong;
  • understand how their own expertise is meant to complement the system, and provide them with a list of factors to take into account; and
  • provide meaningful explanations for either rejecting or accepting the AI system’s output – a decision they should be responsible for. Organisations should also have a clear escalation policy in place.

Training is also endorsed in the MEPs’ report, which suggests stipulating in EU law that businesses using ‘high risk’ AI ensure that people responsible for human oversight of those systems “are competent, properly qualified and trained and have the necessary resources in order to ensure the effective supervision of the system”. The MEPs also suggest that the law require providers of ‘high risk’ AI systems to “ensure that natural persons to whom human oversight of high-risk AI systems is assigned are specifically made aware and remain aware of the risk of automation bias”.

These requirements would complement Article 14 of the European Commission’s draft EU AI Act, which already lists proposed requirements on those tasked with providing human oversight. “As appropriate to the circumstances”, the Commission has said those individuals should:

  • fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
  • remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
  • be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;
  • be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
  • be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.

Training will be a prerequisite to ensuring those individuals can fulfil these expectations, as well as any others added as the EU AI Act continues to be scrutinised.

Monitoring

Keeping records of human input into, and review of, decisions made by AI systems can help businesses assess and manage the risk arising from AI use. Noting how often human reviewers agree or disagree with AI decision making can also help in determining a system’s accuracy, quality and efficiency. This is particularly helpful where AI systems are used in customer-facing environments.
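As a rough illustration of that kind of record-keeping, the sketch below logs each human review of an AI output and calculates how often reviewers depart from the system’s recommendation. The record fields and function name are assumptions made for illustration; they are not prescribed by any of the guidance discussed here.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ReviewRecord:
    case_id: str
    ai_output: str       # what the AI system recommended
    human_decision: str  # what the human reviewer actually decided
    reviewer_id: str


def disagreement_rate(records: List[ReviewRecord]) -> float:
    """Share of reviewed decisions in which the reviewer departed from the AI output.

    A rising rate over time may signal that the system's accuracy is degrading
    and that its outputs, or the oversight arrangements around it, need review.
    """
    if not records:
        return 0.0
    overrides = sum(1 for r in records if r.human_decision != r.ai_output)
    return overrides / len(records)
```

Tracked over time, a metric of this kind can feed into the governance and audit processes that the EU HLEG checklist below asks about.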

The EU HLEG guidelines set out a number of considerations to help organisations manage their human review and oversight processes, providing a form of checklist that businesses can assess themselves against. The guidelines ask:

  • Did you consider the appropriate level of human control for the particular AI system and use case?
  • Can you describe the level of human control or involvement?
  • Who is the “human in control” and what are the moments or tools for human intervention?
  • Did you put in place mechanisms and measures to ensure human control or oversight?
  • Did you take any measures to enable audit and to remedy issues related to governing AI autonomy?
  • Is there a self-learning or autonomous AI system or use case? If so, did you put in place more specific mechanisms of control and oversight?
  • Which detection and response mechanisms did you establish to assess whether something could go wrong?
  • Did you ensure a stop button or procedure to safely abort an operation where needed? Does this procedure abort the process entirely, in part, or delegate control to a human?
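The final question – whether a stop procedure halts processing entirely, in part, or hands control to a human – is easier to answer if that distinction is built into the system itself. The sketch below is one hypothetical way of modelling those abort modes; the class and method names are assumptions and are not drawn from the guidelines.

```python
from enum import Enum
from typing import Optional


class AbortMode(Enum):
    FULL_STOP = "full_stop"        # halt the AI-driven process entirely
    PARTIAL_STOP = "partial_stop"  # halt only the affected part of the process
    DELEGATE = "delegate"          # route remaining work to human handling


class AISystemController:
    """Minimal wrapper giving operators a 'stop button' over an AI-driven process."""

    def __init__(self) -> None:
        self.abort_mode: Optional[AbortMode] = None
        self.abort_reason: Optional[str] = None

    def stop(self, mode: AbortMode, reason: str) -> None:
        """Record the operator's decision to abort; the run loop checks it each cycle."""
        self.abort_mode = mode
        self.abort_reason = reason

    def should_process_automatically(self) -> bool:
        """Called by the automated pipeline before each AI decision is applied."""
        return self.abort_mode is None
```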

Steps for businesses

Businesses should ensure that their governance processes for AI include adequate and appropriate human review measures. Data protection rules on solely automated decision making where personal data is processed must also be considered, and measures implemented to ensure the level of human input meets the requirements of those rules.

Any human oversight must be meaningful, and businesses should ensure that those reviewing AI decision making are suitably trained and skilled to do so, as well as empowered to override AI decisions where necessary.

Co-written by Priya Jhakra of Pinsent Masons.
