Similar views are shared by the ICO. It has said that organisations should ensure they decide upfront who will be responsible for reviewing AI systems and that AI developers understand the skills, experience and ability of human reviewers when designing AI systems. The ICO explains that organisations should “ensure human reviewers are adequately trained to interpret and challenge outputs” from the AI system, and “human reviewers should have meaningful influence on the decision, including the authority and competence to go against the recommendation”.
The ICO further explains in its guidance on AI and data protection that “the degree and quality of human review and intervention before a final decision is made about an individual are key factors” in relation to solely automated decision making. Human reviewers must be involved in checking an AI system’s decision or output and should not automatically apply the decision of the system; the review must be meaningful and active, not simply a “token gesture” – it should include the ability to override the system’s decision; and reviewers “must ‘weigh up’ and ‘interpret’ the recommendation, consider all input data, and also take into account other additional factors”.
Responsibility for meaningful human input around solely automated decision making lies throughout an organisation and not only with the individual using the AI system, according to the ICO. Senior leaders, data scientists, business owners, and those with oversight functions are cited as being “expected to play an active role in ensuring that AI applications are designed, built and used as intended”.
Meaningful human oversight in practice
Both the ICO and EU HLEG have articulated steps that businesses can take to ensure they apply meaningful human oversight of AI systems in practice. A recent report by two European Parliament committees, which proposes amendments to the draft EU AI Act, suggests that specific requirements in this regard will soon be stipulated in EU law.
Training
The ICO notes that training of staff is important in controlling the level of automation of a system. It recommends that organisations train or retrain human reviewers to:
- understand how an AI system works and its limitations;
- anticipate when the system may be misleading or wrong and why;
- maintain a healthy level of scepticism about the AI system’s output, and be given a sense of how often the system could be wrong;
- understand how their own expertise is meant to complement the system, and be provided with a list of factors to take into account; and
- provide meaningful explanations for either rejecting or accepting the AI system’s output – a decision they should be responsible for. Organisations should also have a clear escalation policy in place.
Training is also endorsed in the MEPs’ report, which suggests stipulating in EU law that businesses using ‘high risk’ AI ensure that people responsible for human oversight of those systems “are competent, properly qualified and trained and have the necessary resources in order to ensure the effective supervision of the system”. The MEPs also suggest that the law require providers of ‘high risk’ AI systems to “ensure that natural persons to whom human oversight of high-risk AI systems is assigned are specifically made aware and remain aware of the risk of automation bias”.
These requirements would complement Article 14 of the European Commission’s draft EU AI Act, which already lists proposed requirements on those tasked with providing human oversight. “As appropriate to the circumstances”, the Commission has said those individuals should:
- fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
- remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
- be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;
- be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
- be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.
Training individuals will be a prerequisite to ensuring they can fulfil those expectations, and any others that are added as the EU AI Act continues to be scrutinised.
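To illustrate how the Article 14 capabilities might translate into an internal review workflow, the following minimal Python sketch shows a decision path in which a reviewer can accept an AI output, override it, or interrupt the operation via a “stop” mechanism, with an explanation recorded in each case. The names and structure are hypothetical and illustrative only; they are not drawn from the ICO guidance or the draft Act.

```python
# A minimal, hypothetical sketch of a human-oversight checkpoint.
# Names and structure are illustrative, not prescribed by any guidance.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ReviewAction(Enum):
    ACCEPT = "accept"      # apply the AI system's output as-is
    OVERRIDE = "override"  # substitute the reviewer's own decision
    STOP = "stop"          # interrupt the operation ("stop button")


@dataclass
class ReviewedDecision:
    ai_output: str
    action: ReviewAction
    final_decision: Optional[str]  # None when the operation was aborted
    reviewer_id: str
    explanation: str               # reviewers should explain accept/reject


def apply_human_oversight(ai_output: str, reviewer_id: str,
                          action: ReviewAction, explanation: str,
                          override_value: Optional[str] = None) -> ReviewedDecision:
    """Route an AI output through a human reviewer before it takes effect."""
    if action is ReviewAction.STOP:
        final = None  # operation interrupted; no decision is applied
    elif action is ReviewAction.OVERRIDE:
        final = override_value  # the reviewer's decision prevails
    else:
        final = ai_output  # reviewer accepted the AI output
    return ReviewedDecision(ai_output, action, final, reviewer_id, explanation)
```

The key design point, reflecting the guidance above, is that no AI output takes effect until a named reviewer has actively chosen an action and recorded a reason, so acceptance is never the silent default.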
Monitoring
Keeping records of human input into, and review of, decisions made by AI systems can help businesses assess and manage the risks arising from AI use. Noting how often human reviewers agree or disagree with AI decision making can also help in determining a system’s accuracy, quality and efficiency. This is particularly helpful where AI systems are used in customer-facing environments.
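As a simple illustration of how such records might be used, the sketch below computes the rate at which reviewers agreed with AI outputs from a review log. The log format is hypothetical, assumed for illustration only, and not one prescribed by the ICO or EU HLEG.

```python
# A hypothetical review-log format, assumed for illustration only.
from typing import Dict, List


def agreement_rate(review_log: List[Dict[str, str]]) -> float:
    """Share of reviewed decisions where the human accepted the AI output."""
    if not review_log:
        return 0.0
    agreed = sum(1 for entry in review_log if entry["action"] == "accept")
    return agreed / len(review_log)


log = [
    {"case": "A-101", "action": "accept"},
    {"case": "A-102", "action": "override"},
    {"case": "A-103", "action": "accept"},
]
print(f"Reviewer agreement rate: {agreement_rate(log):.0%}")  # prints 67%
```

Note that a persistently low rate may point to accuracy problems with the system, while a rate near 100% may instead signal the kind of “token gesture” review the ICO warns against, rather than high accuracy.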
The EU HLEG guidelines set out a number of considerations to help organisations manage their human review and oversight processes, providing a checklist against which businesses can assess themselves. The guidelines ask:
- Did you consider the appropriate level of human control for the particular AI system and use case?
- Can you describe the level of human control or involvement?
- Who is the “human in control” and what are the moments or tools for human intervention?
- Did you put in place mechanisms and measures to ensure human control or oversight?
- Did you take any measures to enable audit and to remedy issues related to governing AI autonomy?
- Is there a self-learning or autonomous AI system or use case? If so, did you put in place more specific mechanisms of control and oversight?
- Which detection and response mechanisms did you establish to assess whether something could go wrong?
- Did you ensure a stop button or procedure to safely abort an operation where needed? Does this procedure abort the process entirely, in part, or delegate control to a human?
Steps for businesses
Businesses should ensure that their governance processes for AI include adequate and appropriate human review measures. Where personal data is processed, data protection rules on solely automated decision making must also be considered, and measures implemented to control the level of human input so as to meet the requirements of those laws.
Any human oversight must be meaningful and businesses should ensure that those reviewing AI decision making are suitably trained and skilled to do so, as well as being empowered to override AI decision making where necessary.
Co-written by Priya Jhakra of Pinsent Masons.