Out-Law Analysis

How contractual measures can help with AI risks in surveillance software


Artificial intelligence (AI) is increasingly being used in surveillance software, so businesses should be aware of the associated risks and apply appropriate contractual measures when purchasing AI tools.

AI is being widely used in surveillance software to enhance the capabilities of monitoring and analysing large amounts of video data, and to allow for more efficient and accurate surveillance. France’s plan to trial AI-powered video surveillance technology during the 2024 Olympic Games is the most high-profile example of AI’s role in detecting security threats, tracking people, and providing real-time alerts at large events.

Risks associated with using AI tools for surveillance include claims of discrimination and unfair treatment of certain groups, and breaches of privacy and data protection laws. But there are several contractual solutions available to users to address these issues.

These should be assessed alongside recent regulatory developments, such as those in the UK, France and Spain.

The use of AI-powered monitoring software

Algorithmic video surveillance, more commonly known as ‘smart cameras’, uses computer software to analyse images captured by video surveillance cameras in real time. Algorithms are being trained to detect predefined suspicious events, such as specific objects, behaviours and patterns in video footage, and to carry out movement analysis. This technology can be used for tracking or identifying abnormal events, crowd movements, and the demographics of filmed people, such as age range and gender, among other things.
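
To make this more concrete, the following is a minimal illustrative sketch, in Python, of how such a real-time event-detection loop might be structured. The detect_objects model, the event labels and the alert threshold are hypothetical placeholders chosen for illustration, not details of any particular vendor's system.

    # Illustrative sketch only: a simplified real-time event-detection loop
    # for a "smart camera". All names and thresholds are hypothetical.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Detection:
        label: str          # e.g. "abandoned_bag", "crowd_surge"
        confidence: float   # model score between 0 and 1

    PREDEFINED_EVENTS = {"abandoned_bag", "crowd_surge", "person_in_restricted_area"}
    ALERT_THRESHOLD = 0.8  # assumed confidence required before operators are alerted

    def analyse_frame(frame, detect_objects):
        """Run a detection model over one video frame and keep predefined events."""
        return [d for d in detect_objects(frame)
                if d.label in PREDEFINED_EVENTS and d.confidence >= ALERT_THRESHOLD]

    def monitor(stream, detect_objects, alert):
        """Analyse frames as they arrive and alert operators in real time."""
        for frame in stream:
            for detection in analyse_frame(frame, detect_objects):
                alert(f"{datetime.now(timezone.utc).isoformat()}: {detection.label} "
                      f"(confidence {detection.confidence:.2f})")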

The UK government has invested in AI technologies for crime prevention. It plans to double the Safer Streets fund to £45 million, which facilitates not only the use of CCTV cameras in public places, such as parks, but also the installation of new AI-driven upgrades to process the information gathered. The AI software automatically analyses unfolding situations, identifies known suspects and suspicious objects, and recognises unusual behaviour, providing useful insights to police.

Network Rail’s Crowd Monitoring Solution at Waterloo Station, developed by UK-based company Createc, is another example of how AI is deployed in surveillance. The system recognises early signs of suspicious behaviour, and security operators receive real-time updates on crowd density and movement patterns to identify bottlenecks. The focus is now on developing the technology to recognise incidents such as people falling and malfunctioning escalators. Trials at Euston Station and Luton Airport showed that the technology helped prevent overcrowding during delays and has the potential to be used in bigger venues, including stadiums.

In France, smart cameras can be used in both the public and private sectors. In the public sector, AI-enhanced video surveillance has been used for tasks such as detection of abandoned baggage or for the exercise of administrative and judicial police powers by public authorities. In the private sector, the technology can be used to secure people and property in shops, concert halls or other establishments open to the public by detecting certain situations or behaviours. But this deployment is strictly supervised and limited by the French data protection authority (CNIL).

In preparation for the 2024 Olympic Games, France has recently given the green light for the trial of algorithmic video surveillance. This decision is aimed at ensuring the security of “sporting, recreational, and cultural events” until 31 March 2025. The experiments have already begun; in April 2024, algorithmic video surveillance was used during a football game and a concert.

Similarly, smart cameras are being used in Spain. In the public sector, some law enforcement bodies in Spanish cities use AI-driven surveillance systems to prevent crime and improve public safety. The implementation of AI-enhanced cameras by the General Directorate of Traffic (DGT), in particular, has marked significant progress in road surveillance and control. These cameras are seen by the government as a key tool to increase road safety and reduce traffic offences.

In Spain’s private sector, a number of companies use smart cameras for near-instant detection of health and safety risks from images taken by cameras installed on production sites. This type of software uses AI to identify a dangerous situation and alert supervisors so that they can put an end to it. This technology can also be used to identify unsafe acts, unsafe conditions, and non-compliance with mandatory equipment requirements.

Potential issues of using AI in surveillance software

Although AI-powered surveillance tools are becoming widely used, business users need to pay particular attention to several issues.

AI-driven surveillance technology can result in discrimination and unfair treatment of certain groups of people. It needs to be developed and trained in a way that minimises, and preferably eliminates, the risk of unfair or unintended bias with regard to protected characteristics, such as age, sex, disability and ethnicity, to avoid customers of such technology being exposed to discrimination claims. This requires forward-facing obligations on the technology supplier to ensure that mechanisms are maintained to monitor, detect and minimise discrimination.

AI-powered surveillance technology may also breach privacy and data protection laws. The use of this technology involves the collection and analysis of personal data without the consent of the individuals concerned. The lack of control over personal data means companies using these solutions have to put in place robust storage and appropriate safeguards, and ensure that an appropriate legal basis for collection of the data is relied upon. There is also a heightened risk of cyber-attacks that may compromise the integrity of the data collected.

In France and Spain, the data protection regulators have voiced strong concerns around discrimination, privacy and data protection. The CNIL has stated that these new video tools can lead to massive processing of personal data, sometimes including sensitive data, and that the people concerned are not just filmed but automatically analysed to deduce, on a probabilistic basis, certain information that may enable decisions or measures to be taken concerning them. The French regulator sees the proliferation of augmented cameras, particularly in public spaces, as posing major risks to individual and collective freedoms.

The Spanish Data Protection Agency (AEPD) has also raised concerns about the risk of bias in decision-making systems and the discrimination of natural persons, commonly referred to as algorithmic discrimination, as well as risks relating to the social context and the collateral effects that may derive from data processing activities that incorporate AI.

The AEPD highlighted three factors that affect the accuracy of data: errors that occur in the implementation of the AI system, whether caused by external elements or by programming or design errors; errors contained within the training or validation data; and the biased evolution of the AI model.

There are other business risks relating to compliance and ethical issues. The legal and regulatory framework requires customers to ensure AI is developed in line with the various laws and regulations that govern data privacy and security. Compliance is challenging, as this area of law is complex and constantly evolving. If errors are made or inaccurate results are produced, the company implementing the surveillance could be liable for harm or damages caused by subsequent false identifications, wrongful accusations or violations of privacy rights.

The uncertain ethical landscape may also result in reputational damage for businesses that use AI software in surveillance. It is difficult to insist that suppliers develop their AI model in accordance with another organisation’s principles, and there is no single set of principles to be used when developing software. Customers of AI solutions face the risk that the supplier has not adopted a transparent approach and aligned the software with the customer’s values.

Contractual solutions to protect users of AI-powered surveillance software

In response to the risks highlighted, businesses should first ensure that the deployment of smart cameras complies with data protection regulations and consider establishing safeguards to reduce the risks for individuals.

The practical considerations for compliance include checking the legal basis for the processing and the proportionality of the processing, carrying out a data protection impact assessment, and informing individuals of their right to object. Possible safeguards include no use of biometric data, no interconnection with other processing and no automated decision-making.

It is important for customers to understand what principles the supplier’s AI model has been trained in accordance with. The customer can then assess whether such principles are sufficient and appropriate and use this as an opportunity to educate suppliers on the regulatory framework.

There are also certain contractual and drafting techniques that businesses could adopt when drafting agreements for the provision of AI surveillance software. They include:

  • Setting out the customer’s needs and values in conjunction with developed principles, such as the OECD’s AI principles;
  • Implementing outcomes-based warranties around bias and discrimination, so that the consequences of any discrimination or data protection breach are addressed, as opposed to just placing an obligation on the supplier around the process and the steps to be taken;
  • Reviewing provisions against emerging industry standards to ensure the supplier has trained, designed and developed the software in accordance with responsible business principles;
  • Involving data protection expertise to ensure that the storage and collection of personal data is lawful, fair and transparent and appropriate accountability is placed on the supplier;
  • Setting suitable liability caps for the customer to mitigate their exposure to claims where the software produces biased results, through no fault of their own; and
  • Fairly allocating responsibility for monitoring and preventing the issues discussed, by ensuring that both the supplier and customer acknowledge the ways the software may go wrong and implement drafting to protect against this.

Recent developments of AI legal framework in different jurisdictions

  • The UK

    There is currently no statutory AI Act in the UK. However, the UK government has set out its stance in response to its AI white paper proposals from last year. The UK’s “pro-innovation approach” sets out five non-statutory principles – safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress – and regulators are expected to interpret and apply these principles within their individual remits. The UK’s non-statutory approach contrasts with the more prescriptive EU AI Act, although the government plans to adopt new legislation once its understanding of the risks AI poses has matured. The EU AI Act sets a precedent for robust regulation in this field and represents a shift for businesses developing, supplying and using AI; UK businesses operating in the EU on a cross-border basis will need to be familiar with it.

    There has been some progress in case law. In R (Bridges) v Chief Constable of South Wales Police, the UK Court of Appeal considered human rights in the context of facial recognition systems, which can be powered by AI. The court held that the police force had not taken reasonable steps to investigate whether the technology had a racial or gender bias, as required under the public sector equality duty that applies in the UK.

    International standards are another area of consideration for businesses in the UK. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) recently published a new set of international standards on AI. These include planning and proactive management, such as comprehensive risk and impact evaluation, in line with the data protection impact assessment requirement of Article 35 GDPR.

    The UK’s Information Commissioner’s Office (ICO) has provided guidance to clarify requirements for fairness in AI, in the context of personal data. This guidance is a useful indicator for the direction of further legislation.

    The Financial Conduct Authority (FCA) has also published its AI guidance, which asks firms to take a centralised approach to setting governance standards for AI. Under the centralised structure, primary responsibility should sit with one or more senior managers, with business areas being accountable for the output, compliance and execution of the governance standards. The centralised body should have a complete view of all AI models and projects to enable it to set the standard or policy for managing AI models and associated risks. This is expected to result in greater education, training and relevant information on the risks of the technology.
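
    As a rough illustration of what a "complete view of all AI models and projects" might look like in practice, the sketch below defines a minimal model-inventory record. The field names and risk tiers are assumptions made for illustration, not requirements set by the FCA.

        # Illustrative sketch: a minimal central inventory of AI models and projects.
        # Field names and risk tiers are assumptions, not regulatory requirements.
        from dataclasses import dataclass, field

        @dataclass
        class AIModelRecord:
            name: str                  # e.g. "crowd-density-estimator"
            senior_manager: str        # senior manager with primary responsibility
            purpose: str               # what the model is used for
            risk_tier: str             # e.g. "high", "medium", "low"
            data_sources: list[str] = field(default_factory=list)
            last_bias_review: str = "not yet reviewed"

        # A central governance body could keep one record per deployed model,
        # giving it the overview needed to set standards and monitor risk.
        registry = [
            AIModelRecord(
                name="crowd-density-estimator",
                senior_manager="Head of Security Operations",
                purpose="real-time crowd monitoring at station concourses",
                risk_tier="high",
                data_sources=["CCTV feed - concourse A"],
            ),
        ]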

    A recent settlement made by Uber Eats to a driver who experienced continuous difficulties with the company’s verification checks serves as a reminder of the cost implications of AI-related disputes. The case highlights the importance of ensuring that employers understand the systems they have in place. In addition to financial costs, being unprepared for a cyber-attack or data breach can also damage a company’s reputation and customer trust.

  • France

    The French data protection regulator CNIL launched a public consultation in January 2022 concerning the conditions for the deployment of smart cameras in public spaces. In its response to the consultation, the CNIL stressed the inherent risks to people's rights and freedoms associated with the deployment of smart cameras in public spaces. It said that the provisions of the French Internal Security Code (CSI), which apply to the installation of CCTV in public spaces or in places and establishments open to the public, are not adapted to smart cameras, but do not prohibit their deployment. The CNIL confirmed that the use of smart cameras is possible, but where smart cameras process personal data, data protection regulations must be complied with.

    In a bid to keep the Paris Olympic Games 2024 safe, France has launched an experimental framework allowing AI surveillance to be used at sports, recreational or cultural events. This experimental framework will last until 31 March 2025, and the images undergoing algorithmic processing will be collected from video protection systems or aircraft cameras in places hosting these events, in vehicles and public transport, and in their surroundings.

    The algorithmic video surveillance can detect suspicious activity and events in real time, such as the presence of abandoned objects, weapons, unexpected crowds, a person or vehicle in a prohibited area, and fights. However, the law explicitly bans the use of facial recognition technology and the processing of biometric data.

    The European Union adopted the EU AI Act on 13 March 2024, and, as of May 2024, the regulation is due to be enacted in its agreed final version imminently. It sets rules for AI systems according to their potential risks and level of impact. Real-time remote biometric identification systems in public areas for law enforcement purposes are prohibited, but exceptions exist for situations such as searching for specific victims or missing persons, preventing imminent threats to life or safety, or locating or identifying suspects for the purposes of conducting a criminal investigation.

  • Spain

    While there is no specific legal framework concerning smart cameras in Spain, provisions in different laws are applicable to video surveillance. The country has also been playing a very active role in AI since 2020, implementing several initiatives for the promotion and development of an “inclusive, sustainable, and citizen-centred AI”, which is one of the key pillars of the 2026 Spanish Digital Agenda.

    Key initiatives of the Spanish AI National Strategy (part of the 2026 Spanish Digital Agenda) include launching the EU’s first AI Regulatory Sandbox and creating the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which is the first AI regulatory body appointed in the EU under the new EU AI Act. The AI Regulatory Sandbox aims to connect innovators and regulators and provide a controlled environment for them to cooperate in AI. This facilitates the development, testing and validation of innovative AI systems with a view to ensuring compliance with the requirements of the EU AI Act. The newly created AESIA does not replace the role currently played by the data protection regulator AEPD with respect to AI, but both entities will collaborate to ensure compliance with the EU AI Act and the GDPR.

    In terms of AI surveillance, data protection needs to be carefully considered. The AEPD published guidelines on GDPR compliance for data processing activities that use AI in February 2020, and guidelines on audit requirements for personal data processing activities involving AI in 2021. These guidelines are essential for businesses that use and provide products and services with AI components. The AEPD’s guidelines on the use of video surveillance cameras for security and other purposes form another key part of the regulatory framework concerning AI in surveillance.


Co-written by Brendan Hatch, Concetta Dalziel, Lidia Vidal Vallmaña and Guillaume Morat of Pinsent Masons
