South Africa does not yet have AI-specific legislation. However, as we have already examined, South African employment laws would govern AI deployment in the workplace and, as we explore below, the country’s privacy regime would also play an important role in governing the deployment of AI-driven systems in the workplace.
Where there are otherwise gaps in South African legislation, employers can look to international frameworks for help in implementing best practice governance frameworks calibrated to the South African legal environment. This will help position their organisations to adapt efficiently as domestic regulation on AI develops. Indeed, South Africa’s draft national AI policy (draft SA AI policy), which we have already examined, references various international trends and instruments that could be considered when determining South Africa’s next AI regulation steps.
POPIA and the regulation of automated decision-making
South Africa's Protection of Personal Information Act (POPIA) is the closest that South African law currently comes to directly regulating the use of AI. Its provisions are particularly significant in the workplace context.
POPIA regulates the processing of personal and special personal information. In drafting POPIA, the South African legislature drew heavily on the EU's 1995 Data Protection Directive. As a result, the EU General Data Protection Regulation (GDPR), which replaced the 1995 directive, is a useful interpretive tool for POPIA's provisions. POPIA is principles-based legislation, which makes it more open to interpretation, flexible, and well-suited to application in rapidly evolving technological contexts, including AI-assisted HR processes.
POPIA establishes eight conditions for lawful processing: accountability; processing limitation; purpose specification; further processing limitation; information quality; openness; security safeguards; and data subject participation.
The draft SA AI policy reinforces the importance of these conditions, specifically endorsing the application of data protection by design and default, data minimisation, and purpose limitation as principles that should govern AI systems that process personal information. These are considerations that South African employers would be well served to embed at the system design stage rather than address retrospectively.
Unlike the GDPR, POPIA does not prohibit an employer from relying on an employee's consent as a lawful basis for processing in the employment context. South African employment contracts frequently include broad consent provisions, often bundled with the processing notice required by POPIA. This practice is currently lawful, though employers should keep it under review as regulatory practice in this area develops.
Section 71 of POPIA directly regulates automated decision-making (ADM). It provides that a data subject may not be subjected to a decision that results in legal consequences or affects them to a substantial degree, where that decision is based solely on automated processing of personal information intended to create a profile of the individual – including their performance at work, reliability, personal preferences or conduct.
This ADM restriction does not apply where the decision:
- is taken in connection with the conclusion or execution of a contract and either the data subject's request under the contract has been met, or appropriate measures have been taken to protect the data subject's legitimate interests; or
- is governed by a law or code of conduct that specifies appropriate measures for protecting data subjects' legitimate interests.
The appropriate measures exclusion requires that data subjects be given the opportunity to make representations about the automated decision, and that the responsible party provide sufficient information about the logic of the automated processing to enable those representations to be made. In practice, this requirement aligns closely with the procedural fairness obligations that South African employment law already imposes, meaning that well-structured AI deployment processes can satisfy both regimes simultaneously, without duplicating effort.
Considering POPIA or South Africa's employment laws in isolation would give a distorted picture of the legal environment in which AI-driven workplace systems will operate in South Africa, and would miss the nuance that each regime is designed to address. A responsible employer, focused on sustainable risk management and governance, needs a more holistic approach. It is the combination of employment law and privacy law – each closing the gaps that the other leaves open – that produces a coherent and workable domestic framework against which AI systems can be deployed and responsibly governed in South Africa's workplaces.
South Africa's employment laws, for example, do not expressly prohibit fully automated employment decisions, but non-compliance with section 71 of POPIA would render those decisions unlawful. Conversely, compliance with section 71 would weigh strongly in favour of a finding of fairness.
Ultimately, when assessing the deployment of AI systems into the workplace in South Africa, employers must consider both employment law and privacy law: each of them relevant, neither of them providing all the answers, but each of them, properly approached, complementing the other.
The EU AI Act
The EU AI Act is the world's first comprehensive legal framework regulating the development, deployment and use of AI. It entered into force in August 2024 and provides for a risk-based approach to AI regulation. Under the AI Act, some types and uses of AI are prohibited altogether, while ‘high-risk’ AI systems are subject to strict regulatory requirements. Pinsent Masons has developed a guide to help businesses understand which AI systems might be classified as high-risk.
Certain employment-related AI systems – those used for hiring, worker evaluation or allocation of work – are classified as high-risk and subject to stringent governance obligations. High-risk systems must implement a risk-management system, maintain properly governed datasets and extensive technical documentation, enable automatic logging for traceability, meet requirements for accuracy, robustness and cybersecurity, and be deployed with meaningful human oversight that allows humans to understand, intervene in and override AI-driven decisions.
The EU AI Act also prohibits emotion recognition AI systems in the workplace and manipulative or exploitative AI systems that deploy subliminal techniques or exploit specific vulnerabilities in a manner that materially distorts behaviour or risks causing harm.
For South African employers, the EU AI Act is relevant on two levels. Where it applies directly – to multinational organisations with operations in the EU – compliance is mandatory. More broadly, it sets the global benchmark for responsible AI governance and is a credible indicator of the direction in which South African regulation will travel. Aligning with its principles now is both prudent and commercially far-sighted.
The draft SA AI policy strongly suggests that South Africa will follow a risk-based approach to AI regulation similar to the EU AI Act's, with different applications attracting different levels of regulation – or prohibition – depending on their specifics. While South Africa may identify slightly different ‘high-risk’ areas, based on its goals and constitutional framework, there will inevitably be important lessons from the EU AI Act that employers can already begin to anticipate.
OECD AI principles
The draft SA AI policy aligns with, and draws from, the OECD AI principles. Although adopted as a government-level instrument, the OECD AI principles are directed at all AI actors across the lifecycle, including private sector deployers. They are relevant for deployers of AI systems in South Africa because they provide a strong indication of the direction of South Africa’s regulatory travel and expectations. They can also provide high-level, principled guidance on the difficult questions that often arise when determining how to govern the deployment of AI systems within the workplace.
In terms of transparency and explainability, for example, the OECD AI principles state that AI actors should commit to transparency and responsible disclosure regarding AI systems and should, to this end, provide meaningful information appropriate to the context:
- to make stakeholders aware of their interactions with AI systems, including in the workplace;
- where feasible and useful, to provide plain and easy-to-understand information on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output; and
- to provide information that enables those adversely affected by AI systems to challenge their output.
ISO/IEC 42001 and the NIST AI risk management framework
The draft SA AI policy recognises ISO/IEC 42001 and the NIST AI risk management framework (NIST AI RMF) as examples of global best practice in targeted regulatory approaches, and identifies the NIST AI RMF as a key instrument against which to benchmark South Africa’s approach.
ISO/IEC 42001 is an international standard, developed jointly by the International Organization for Standardization and the International Electrotechnical Commission, that provides organisations with a structured and repeatable framework for governing AI systems. It is certifiable, globally recognised and trusted for its rigour and neutrality.
ISO/IEC 42001 requires organisations to establish a formal AI management system: a framework of policies, processes and controls governing how AI is designed, procured, deployed, monitored and decommissioned. This includes allocating roles and responsibilities, assessing AI-related risks before adoption, ensuring transparency and accountability, managing data quality, and monitoring system performance throughout the AI lifecycle.
For South African employers, ISO/IEC 42001 is a practical and immediately usable asset. In the current absence of AI-specific domestic legislation, it provides a credible, auditable framework for demonstrating responsible governance. Employers that build ISO/IEC 42001-aligned governance now will find that as South African regulation develops, adaptation requires refinement rather than reconstruction.
The NIST AI RMF, developed by the US National Institute of Standards and Technology, is a voluntary but widely adopted structure for identifying, understanding, assessing and managing the risks associated with AI systems across their full lifecycle. It is organised around four recurring functions: govern, map, measure and manage.
- ‘Govern’ establishes the foundation for AI risk management through policies, accountability and oversight structures that make risk ownership clear and decisions transparent;
- ‘Map’ deepens risk understanding by clarifying the purpose of an AI system, identifying those affected by it, and mapping potential impacts and risk sources across the system and its supply chain;
- ‘Measure’ uses qualitative and quantitative assessments to evaluate reliability, bias, safety and trustworthiness;
- ‘Manage’ ensures that risk mitigation, monitoring and corrective action occur continuously throughout deployment – not only at launch.
For South African employers, the NIST AI RMF provides HR teams and risk functions with a credible, globally aligned methodology for assessing and controlling AI tools used in hiring, performance evaluation, employee monitoring and workplace decision-making. Because the framework is flexible and non-prescriptive, it can be adopted as a foundation for responsible AI governance even in the absence of local AI-specific regulation.
The value of leaning on international frameworks
South Africa's employment and privacy laws, international regulatory developments and global governance frameworks together provide a coherent and workable foundation for AI deployment in South African workforces.
While the draft SA AI policy’s call to localise international standards for South Africa should be answered, international frameworks remain valuable in the current environment. They fill gaps where domestic law has not yet caught up with the technology, they signal where South African law is heading, and they can provide guidance to employers when dealing with difficult questions around how to responsibly govern the deployment of AI systems into South African workplaces. Two examples illustrate this point:
- South African law does not yet mandate a risk-based approach to AI deployment in the workplace, as the EU AI Act does. Voluntarily adopting such an approach is both sensible and practical, and anticipates South Africa’s indicated legislative direction. An AI system that anonymously monitors stationery use by employees and places new orders carries fundamentally different risk from an AI-driven system that issues disciplinary warnings for late coming, and treating them identically would be inefficient and unnecessary;
- Unlike their counterparts in the UK or EU, South African employers are not subject to explicit prohibitions on relying on employee consent to process personal information, or on deploying emotion recognition AI systems in the workplace to monitor, for example, performance. That those prohibitions do not exist in South Africa does not, however, diminish the value of drawing on international best practice to guide responsible processing and deployment.
Governance and risk management – grounded in a sound understanding of South African law and informed by international best practice – is the answer. It is a more efficient, more accurate and more commercially mature approach than searching for legal certainty that, in this area, does not yet fully exist.
Co-written by Annelle Kamper and Alex du Plessis of Pinsent Masons.