This is one reason why the UK government reversed its controversial decision to use an algorithm to downgrade a large number of A-level results in 2020. Because exams were not possible during the Covid-19 pandemic, teacher-predicted grades were used instead. The initial government decision had applied the algorithm to adjust predicted grades for a number of factors, including the past performance of a student’s school. The adjustments had a disproportionate and discriminatory impact on students from disadvantaged backgrounds attending state schools, and after widespread complaints about unfair outcomes and threatened judicial review proceedings, the government swiftly agreed to abandon the adjustments.
Personal data issues
Public bodies often deal with large volumes of personal data, where AI can offer considerable efficiency savings in processing the data. However, they will need to comply with the General Data Protection Regulation (GDPR) when doing so.
The GDPR prohibits decisions based on solely automated processing, subject to very limited exceptions that require a clear legal basis and the provision of “meaningful information” about the workings of the AI to data subjects. This is supplemented by the Data Protection Act 2018, which requires a controller to notify the data subject about a significant decision based solely on automated processing, giving the person the right to request a new decision involving a human in the decision-making process. In practice, however, those provisions bite only where decisions are made without any human intervention. For this reason, many AI systems are configured to produce recommendations rather than decisions, with the final decision to be made by a human.
The Bridges case also illustrated the broader pitfalls that a public body must avoid when using AI to process large volumes of personal data. In that case, the police had prepared a data protection impact assessment (DPIA) which largely met its legal obligations. However, the Court of Appeal found that the DPIA did not fully comply with the Data Protection Act, because the police had not fully assessed how the privacy rights of the public under Article 8 of the ECHR were restricted by the use of the facial recognition technology, and whether the restriction was justified and proportionate.
Similarly, in 2024 the Information Commissioner’s Office (ICO) sanctioned a school that had installed facial recognition technology in its canteen to identify pupils, without proper data protection safeguards.
Common law principles
The common law principle of rationality can take various forms that need to be considered when a public body uses AI in its decision-making.
One aspect of the principle is that, where a body is exercising a discretion, it must not fetter that discretion. For example, if a decision-maker with a discretionary power to decide on a variety of outcomes institutes a simplified ‘yes/no’ process following recommendations made by an algorithm, this would fetter the decision-maker’s discretion to decide on alternatives to a simple yes or no.
Similarly, rationality requires that irrelevant factors are not considered in the decision-making process. The example of the 2020 A-level results illustrates this principle, in that when determining the grade that is merited by any individual student’s performance, the past performance of other students cannot be a relevant consideration.
Common law principles of fair consultation also regulate how public bodies may use AI to support their analysis of large-scale public consultations. The principles require that, when decisions are taken following a consultation, the responses to the consultation must be conscientiously taken into account. Whilst this does not mean that decision-makers are obliged to read every one of thousands of public responses, if they instead rely on an AI-generated summary of responses, care must be taken to ensure that the AI is capable of producing a fair summary which gives sufficient prominence to the most important points made.
Transparency and disclosure issues
Transparency is an important safeguard in the use of AI, and in public law and judicial review it is a principle that the courts enforce rigorously.
A public body may sometimes be under a legal duty to give reasons for its decisions, so that an individual affected by a decision can understand the basis on which it has been made. Moreover, once judicial review proceedings start, or even in pre-litigation correspondence, the public body must comply with its so-called ‘duty of candour’. This is a duty to provide the claimant and court with all information and materials relevant to the issues in the case, to ensure they have a true and comprehensive picture of the decision-making process in issue.
So, if a claimant has reasonable grounds for believing that AI software has led to a decision that discriminated against them, or which took irrelevant factors into account, the public body will be expected to disclose sufficient details about the AI that was used for the court to ascertain whether or not that was the case.
This is likely to pose a real challenge in some cases for public bodies, particularly if they have taken a ‘black-box’ approach – purchasing and deploying an AI software solution from a commercial provider without a full understanding of how the software was developed and how it operates. Commercial sensitivity may also be an obstacle to disclosure, given the significant commercial value in keeping AI software development confidential. In other contexts, the government has stated that it is not prepared to disclose full details of AI software that it uses to detect fraud, because doing so would enable individuals to circumvent its fraud detection measures more easily.
Reconciling the tension between AI confidentiality and the courts’ high expectations of public bodies under the duty of candour is likely to be a central battleground in judicial review proceedings over the next few years.
Practical steps for public bodies
There are practical steps that public bodies can take to follow best practice in navigating these legal issues. Important ones to consider include:
- Establishing dedicated internal governance processes, including an AI oversight board, and developing strategic policies and ethical gateways to anticipate and mitigate AI risks;
- Following government guidance on using AI solutions and on purchasing AI solutions;
- Recording decisions to use AI technology, such as in the government’s algorithmic transparency records;
- Checking whether personal data will be processed, and ensuring compliance with the requirements for a legal basis for processing and for conducting a DPIA;
- Ensuring human involvement in decision-making where the outcome is likely to have a significant effect on individuals.
Co-written by Malcolm Dowden of Pinsent Masons. Pinsent Masons is hosting a webinar on the topic of judicial review and AI, on Tuesday 17 September. The event is free to attend – registration is open.