OUT-LAW ANALYSIS 10 min. read

Navigating AI’s rapid expansion within South African workplaces

AI is increasingly embedded in workplace decision making. Photo: pixdeluxe/iStock


Artificial intelligence (AI) has moved rapidly from experimentation to everyday use in workplace decision-making. For organisations operating in South Africa, this shift presents both opportunities and new responsibilities.

Organisations now rely on automated tools to screen CVs, assess performance, monitor productivity and, in some cases, inform disciplinary action. These systems increasingly shape outcomes across the employment lifecycle, altering both how decisions are made and how they are experienced by employees.

AI can enable faster, more consistent and more data‑driven people decisions, ease managerial workloads and improve operational efficiency. Deployed effectively, it allows HR functions to scale in ways that were not previously possible. To fully realise these benefits within a South African workforce, AI must be used with a clear understanding of the legal framework and supported by appropriate governance and risk management frameworks.

The stakes are particularly high for multinational employers rolling out global HR technologies into South Africa. Many AI systems are designed around assumptions drawn from foreign legal environments that do not necessarily align with South African constitutional protections, employment law or data protection principles.

AI systems have no legal personality. In the workplace context, responsibility for the decisions they inform or generate rests with employers and, in certain circumstances, with the providers of the technology. This responsibility cannot be outsourced or avoided through contractual arrangements, and employers remain accountable even where decisions are heavily automated or depend on third‑party vendors.

South African employment law is anchored in principles of fairness, equality and dignity. Those principles are technology‑neutral and apply whether the decision is made by an algorithm or a human decision‑maker. Where AI‑assisted decisions are unfair, discriminatory or involve unlawful processing of personal information, liability follows regardless of the sophistication or opacity of the system.

In the absence of AI‑specific legislation, the appropriate response is not to avoid the technology, but to deploy it responsibly. Attempting to anticipate every legal risk on a case‑by‑case basis is neither practical nor sustainable. A more mature approach is to build governance and risk management frameworks that are grounded in existing law, informed by international best practice and capable of adapting as regulation develops.

Governance principles

With South Africa’s AI regulatory landscape only in its infancy, the law applicable to AI‑enabled people decisions is derived from constitutional protections, employment legislation and general regulatory principles rather than binding AI‑specific statutes or regulatory models. In this environment, the legal certainty that employers would often seek before deploying a new technology into their people management approach cannot be achieved by waiting for comprehensive regulation or by seeking definitive answers to every hypothetical scenario.

Instead, organisations are best served by responsible enablement. This means embracing AI while embedding controls that ensure transparency, accountability and human oversight. Effective governance goes beyond technical compliance. It requires regular testing for bias, clear explanations of how automated decisions are made, robust escalation paths for human review and careful due diligence of vendors and datasets.
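The bias testing described above can be given concrete form. The sketch below is a minimal, illustrative adverse-impact check on screening outcomes. The 0.8 threshold is the US-origin "four-fifths" heuristic, used here only as an example trigger for further human review, not a South African legal standard, and the function names are hypothetical.

```python
# Illustrative adverse-impact check on AI screening outcomes.
# The 0.8 ("four-fifths") threshold is a heuristic, not a South
# African legal standard; real audits need legal and statistical input.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(outcomes):
    # Ratio of each group's selection rate to the highest-rate group.
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

def flag_groups(outcomes, threshold=0.8):
    # Groups whose ratio falls below the threshold warrant review.
    return [g for g, ratio in impact_ratios(outcomes).items() if ratio < threshold]
```

A flagged group is not proof of unfair discrimination; it is a signal that the system's outcomes need scrutiny and, where appropriate, escalation to human decision-makers.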

Innovation and legal responsibility are not inherently in tension. Properly governed AI systems can improve consistency in decision‑making and reduce arbitrary treatment. The core question is not whether AI should be used in employment contexts, but how its use is structured and supervised.

Constitutional foundations

Any assessment of workplace AI in South Africa begins with the Constitution. The Bill of Rights protects dignity, equality and privacy. These rights apply within the employment relationship.

Section 23 guarantees the right to fair labour practices, forming the constitutional basis for fairness in dismissals, discipline and employment treatment. Section 14 guarantees privacy, including protection against unlawful intrusion into communications. This is directly relevant to AI‑driven monitoring tools, analytics based on employee communications and systems that infer behaviour or performance from digital activity. Section 32 guarantees the right of access to information that is held by another person and that is required for the exercise or protection of any rights. This right is likely to assume particular significance in the context of AI-driven decisions, underpinning the ability of affected employees to obtain meaningful explanations of automated outcomes and to hold employers accountable for the fairness and transparency of those processes.

South Africa’s constitutional protections underpin all labour and equality legislation and provide the framework against which AI deployment in workplaces must be assessed.

The Labour Relations Act

The Labour Relations Act (LRA) governs the employment relationship in South Africa and remains central to assessing the legality of AI‑assisted decisions. Under the LRA, dismissals must be procedurally and substantively fair. Substantive fairness requires a valid reason relating to misconduct, incapacity or operational requirements. Dismissals are automatically unfair where they are for reasons that infringe fundamental rights, including reasons linked to unfair discrimination or the exercise of constitutionally protected rights. The use of AI does not alter this analysis. An unfair reason for a dismissal does not become permissible because it is identified or acted upon by a system, rather than a manager.

The LRA also establishes an ‘unfair labour practice’ regime. Employees may refer disputes to the Commission for Conciliation, Mediation and Arbitration (CCMA) where they believe they were subjected to unfair conduct by their employer relating to promotion, demotion, probation, training, benefits, suspension or disciplinary action short of dismissal. In such cases, the central enquiry is whether the employer’s conduct was fair.

AI tools increasingly influence these areas, whether through performance scoring, ranking systems or automated warnings. Where AI shapes outcomes in these processes, employers must be able to demonstrate that human discretion has not been displaced without appropriate oversight and control, and that the resulting decisions meet the LRA's fairness requirements.

The Employment Equity Act

The Employment Equity Act (EEA) will play a critical role in regulating AI‑driven people decisions in South Africa. It is specifically concerned with eliminating unfair discrimination and promoting equality in employment.

The Act prohibits both direct and indirect unfair discrimination on a wide range of listed grounds, as well as on any arbitrary ground. Its reach extends to all employment policies and practices, including recruitment, selection, appointments, performance evaluation, disciplinary measures, job assignments and promotions.

For AI governance, two aspects of the EEA are particularly significant. First, liability turns on outcome rather than intent. Employers may be liable even where discrimination arises inadvertently through, for example, biased training data, proxy variables or model design. Second, employers carry a positive obligation not only to avoid unfair discrimination but also to take steps to eliminate it.

Designated employers must also implement affirmative action measures. These are recognised under both the EEA and the Constitution as fair discrimination, underscoring that equality in South African law is substantive rather than formal. AI systems used in hiring or promotions in South Africa that were designed and configured abroad to be neutral may in practice need to be retrained or even reconstituted to actively advance affirmative action objectives. Systems that undermine those objectives may expose employers to risk.

PEPUDA and third-party liability

Outside the employment relationship, and particularly relevant to technology providers, the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA) may apply. PEPUDA fills gaps where the EEA does not apply, including in relation to vendors, platform providers and other third parties that could contribute to discriminatory outcomes.

Under PEPUDA, discrimination is defined broadly to include any act or omission, including policies or practices, that disadvantages a person on prohibited grounds. Claims may be brought in the Equality Court by individuals, public interest litigants or representative bodies.

Where unfair discrimination is established, the court may order audits of relevant systems or practices, award damages or issue an order stopping the conduct. This has implications for both employers and vendors involved in the deployment or development of AI systems that affect individuals’ rights.

Occupational health and safety considerations

The Occupational Health and Safety Act (OHSA) requires South African employers to maintain a working environment that is safe and without risk to employees’ health. The OHSA does not limit risk to physical hazards. Psychosocial risks arising from workplace practices may also fall within its scope.

In 2025, the Department of Labour issued guidance on work in the digital economy, recognising psychosocial hazards associated with digitisation, including work intensity, surveillance‑related stress, isolation and detachment. Call centres were identified as a prominent example.

AI‑driven monitoring, productivity tracking and behavioural analytics can exacerbate these risks if implemented without appropriate safeguards. Employers deploying these tools must consider not only productivity gains but also potential impacts on employee wellbeing.

Emerging AI policy

South Africa’s regulatory landscape is also evolving. On 10 April 2026, the Department of Communications and Digital Technologies (DCDT) published the Draft South Africa National Artificial Intelligence (AI) Policy for public comment. The draft should be seen as a point of departure and an indication of the government's current thinking, rather than South Africa's final position on AI regulation. While not yet binding, it is likely to shape expectations around AI governance even before formal legislation is introduced. Employers should not delay action pending the introduction of binding obligations, as South Africa's existing legal framework already provides the foundation against which AI-driven decisions will be tested.

The DCDT envisions a staged implementation approach. Year 1 focuses on finalising the National AI Policy and identifying key draft regulatory requirements to address unacceptable risks; year 2 on implementing key regulatory requirements for high-risk use cases; and year 3 on full implementation. The policy includes built-in monitoring and evaluation provisions, signalling that AI governance in South Africa will remain a moving target. Employers who delay building governance frameworks until this cycle is complete are likely to find themselves behind the curve and exposed under laws already in force.

Several aspects of the policy are notable for employers.

The draft policy identifies six objectives, including the establishment of an AI Ethics Board, a National AI Commission/Office and an AI Regulatory Authority to oversee AI development, implementation and compliance, together with the development of localised ethical standards aligned with international norms. It is structured around six strategic pillars: Capacity and Talent Development; AI for Inclusive Growth and Job Creation; Responsible Governance; Ethical and Inclusive AI; Cultural Preservation and International Integration; and Human-Centred Deployment.

In terms of the policy’s governance framework section, organisations are required to provide sufficiently explainable and transparent AI outputs, particularly in high-risk contexts, and must establish traceable lines of responsibility with an accountable official or entity. The policy makes explicit that AI-driven employment decisions require fairness and transparency as entrenched in the LRA and EEA.

The policy also adopts a risk-based approach, categorising AI systems according to levels of potential harm and drawing inspiration from the EU AI Act. Notably, the policy does not identify which areas are to be classified as high-risk. Under the EU AI Act, employment-related AI systems, including those used for hiring, worker evaluation or allocation of work, are classified as high-risk and subject to stringent governance obligations. South African employers should anticipate a similar classification and begin mapping their current deployments against the risk framework.

Key risk mitigation strategies identified in the policy include regular scenario-based risk planning; human rights impact assessments and regulatory impact assessments; enhanced data governance through POPIA-aligned frameworks; and bias detection and mitigation protocols, including mandatory testing of high-stakes systems.
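As a practical starting point for the deployment-mapping exercise described above, an employer might maintain a simple risk register. The sketch below assumes an EU AI Act-style classification in which employment-related uses (hiring, worker evaluation, allocation of work) are treated as high-risk; the category names, tiers and field names are illustrative, not drawn from the draft policy.

```python
# A minimal sketch of an AI-deployment risk register. The assumption
# (borrowed from the EU AI Act) is that employment-related use cases
# are high-risk; other uses are parked for case-by-case review.
HIGH_RISK_USES = {"hiring", "worker_evaluation", "work_allocation", "discipline"}

def classify(deployment):
    """deployment: dict with 'name' and 'use_case' keys."""
    tier = "high" if deployment["use_case"] in HIGH_RISK_USES else "review"
    return {**deployment, "risk_tier": tier}

def build_register(deployments):
    # Classify every known deployment so nothing escapes the register.
    return [classify(d) for d in deployments]
```

Even a register this simple forces an organisation to enumerate where AI currently touches people decisions, which is the precondition for any of the impact assessments and bias testing the policy contemplates.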

The policy emphasises fairness, transparency, accountability, inclusivity and human autonomy, supported by an independent AI Ethics Board tasked with enforcing ethical governance standards relating to bias and fairness. It also places strong emphasis on localisation as a central driver of bias mitigation.

Employers deploying AI tools sourced from overseas, that is, tools trained predominantly on datasets from other jurisdictions and demographics, should interrogate whether those tools have been tested against South Africa's demographic realities. A tool that performs well in its country of origin may produce discriminatory outcomes when applied to a South African workforce. The EEA's prohibition on unfair discrimination, and South African employment law generally, makes no allowance for the provenance of the system that generated the outcome.

The policy explicitly requires that foreign-based AI providers and systems meet local accountability standards and be locally configured, and that citizens' data be protected in third-party procurement arrangements. Employers relying on overseas AI platforms should assess those platforms against this clear regulatory signal rather than relying passively on vendor assurances.

The policy also emphasises human control in key decision-making processes. This includes ‘human-in-the-loop’ mechanisms and reinforcement learning with human feedback, with predetermined points of human intervention even in high-risk areas involving sensitive data. The policy recognises the provisions of section 71 of POPIA as a useful tool for ensuring explainability and transparency in automated decision-making. AI may inform decisions across the employment lifecycle, but it cannot replace human judgment, and affected employees must be able to understand and challenge automated outcomes. These are standards that the LRA and EEA already demand.
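One way to operationalise predetermined points of human intervention is a routing gate that prevents certain automated outcomes from being finalised without human sign-off. The sketch below is illustrative only; the outcome categories, confidence threshold and field names are assumptions for the example, not requirements drawn from the policy or from section 71 of POPIA.

```python
# A minimal sketch of a 'human-in-the-loop' gate: certain automated
# outcomes are never finalised without human review. Categories and
# the confidence threshold are illustrative assumptions.
REQUIRES_HUMAN = {"dismissal", "final_warning", "demotion"}

def route(decision):
    """decision: dict with 'employee', 'outcome' and 'confidence' keys."""
    if decision["outcome"] in REQUIRES_HUMAN or decision["confidence"] < 0.9:
        # High-impact or low-confidence outcomes are queued for a person.
        return {**decision, "status": "queued_for_human_review"}
    return {**decision, "status": "auto_approved"}
```

The design point is that the intervention thresholds are fixed in advance and auditable, rather than left to the discretion of the system or its operator after the fact.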

The policy does not specifically address the use of AI in the workplace as between employer and employee, which is a significant omission given its detailed treatment of AI's interaction with the labour market and its emphasis on the need to upskill, reskill and prepare for a labour market transition. It is likely that targeted interventions in this regard will follow in due course, particularly given the policy’s alignment with the EU AI Act.

Finally, the policy calls for a code of conduct for AI professionals and the integration of ethics training into professional development. Building internal AI competency now is not only sound governance but also preparation for an environment in which those responsible for AI-driven decisions may be held to an accredited professional standard.

The challenge

South Africa’s existing legal framework already provides clear principles for assessing AI‑enabled people decisions, with employers remaining accountable for outcomes. Automated systems do not displace obligations of fairness, equality and dignity, nor does technological complexity excuse unlawful conduct.

The difficulty lies not in identifying the principles, but in applying them to novel, fact‑specific scenarios. Addressing those challenges through reactive legal analysis alone is inefficient and unsustainable. A governance‑led approach, grounded in law and informed by ethical and operational considerations, offers a more durable solution.

In the employment context, AI governance is not a defensive exercise. It is a means of enabling innovation while preserving trust, legitimacy and legal compliance. Organisations that invest in governance now will be better placed to navigate future regulatory change and to ensure that technology enhances, rather than undermines, fairness at work.

Co-written by Annelle Kamper and Alex du Plessis of Pinsent Masons.
