OUT-LAW ANALYSIS

Finding balance in using AI for workforce management in South Africa

AI may help provide consistency in dealing with employee disciplinary issues – if used correctly. Photo: Getty Images


AI is increasingly used in South African workplaces to monitor productivity, generate performance scores and trigger warnings. In high-volume environments, these tools can offer real operational value by improving consistency, reducing inappropriate subjectivity and enabling more data-driven assessments of employee contribution, thus enhancing fairness.

Performance management is often a human-centred exercise, orientated around a managerial assessment of an employee’s work. Procedural fairness in this area requires that employees understand the concerns raised about them and are given a genuine opportunity to respond, and that their explanations are meaningfully considered.

Performance assessments are by their nature partly subjective, and South Africa’s employment law permits employers to make managerial assessments of their employees. It also, however, imposes constraints on that subjectivity: assessments must be grounded in objective facts. AI can helpfully inform and support those assessments. Legal risk arises, however, when AI is called upon to substitute its own output for human value judgments without adequate human involvement.

Performance management decisions can give rise to unfair labour practice claims, discrimination claims and unfair dismissal claims. Even where AI materially impacts or makes the decisions giving rise to these claims, employers must ensure that outcomes remain compliant with South Africa's domestic laws (considered in a previous piece). Given the infancy of South Africa’s regulation of AI, responsible deployment of these sorts of AI systems in South Africa may also require alignment with international best practice.

Using AI and emotion recognition in performance assessment

Consider a customer-facing employee whose interactions with customers are rated in real time by an AI system that assesses both the content of the conversation and the employee’s tone, including what the system characterises as emotional indicators. The employee's ratings are consolidated weekly and reviewed by their manager, who then assigns the final performance score for that period.

In this configuration, POPIA's ADM restriction, discussed in a previous piece, is likely not engaged: the final decision is not made solely by automated means, because the manager reviews and finalises the score. The deployment of such a system in a South African workplace would be permissible, subject to the employer processing its employees’ personal and special personal information in accordance with POPIA, with employees likely providing an unqualified consent to such processing in their employment contract or a workplace privacy consent form. Nor would this arrangement be inherently objectionable in terms of South Africa’s employment laws. Employees who receive unfavourable ratings will nevertheless be able to challenge these performance assessments as unfair or even discriminatory. Where employers are called upon to meet such claims, they will need to be able to defend the impugned performance assessments, including the AI-driven assessment of the employee’s emotional state or tone that underpins them.
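For illustration, the human-in-the-loop element of this configuration can be made concrete in code. The following is a minimal sketch in Python; all names (WeeklyScore, finalise and so on) are hypothetical and do not refer to any real product. The structural point is that the AI rating is only ever provisional: no final score exists until a manager has reviewed it and recorded reasons.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class WeeklyScore:
        employee_id: str
        ai_provisional_score: float        # consolidated AI rating for the week
        manager_score: Optional[float] = None
        manager_reasons: Optional[str] = None

        @property
        def final_score(self) -> float:
            # The AI rating is advisory only: no final score can be read
            # until a manager has reviewed it and recorded reasons.
            if self.manager_score is None or not self.manager_reasons:
                raise ValueError("No final score: manager review outstanding")
            return self.manager_score

    def finalise(score: WeeklyScore, manager_score: float, reasons: str) -> WeeklyScore:
        # Recording reasons, especially where the manager departs from the
        # AI rating, creates the audit trail needed to defend the assessment
        # if it is later challenged as unfair or discriminatory.
        score.manager_score = manager_score
        score.manager_reasons = reasons
        return score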

In determining how to responsibly deploy such systems, it would be prudent for South African employers, particularly those with international operations or exposure, to assess such deployments not only against South Africa’s still-developing AI regulatory frameworks, but also against international best practice.

Under the EU AI Act, for example, the use of AI systems to infer the emotions of a natural person in the workplace is outright prohibited, as such tools are assessed to present an unacceptable risk. This prohibition reflects well-founded concerns about the reliability of AI systems in inferring emotions, the risk of discriminatory outcomes and intrusions into the rights and freedoms of the employees concerned. In the absence of an equivalent South African prohibition, the appropriate response is not to proceed without governance controls, but rather to apply the kind of risk-based governance framework that international best practice recommends.

Automated warnings and AI in progressive discipline

Managing workplace discipline consistently and at scale can also present operational challenges, with managers having to dedicate large amounts of time to the process.

Consider an AI system that automatically issues warnings to employees for late-coming. The employee reports late, the AI system requests an explanation and, based on the explanation provided, issues a warning calibrated to the circumstances as it assesses them. Such a system, particularly where it issues final warnings, would likely engage POPIA's ADM restriction.

To comply with section 71 of POPIA, the employer would need to satisfy the Appropriate Measures Exclusion, as discussed in our previous piece. In practice, this would likely require an appeal mechanism enabling the employee to make representations about the AI-generated warning after it has been issued. This is not an abnormal requirement: employers are, in any event, required to give employees an opportunity to respond to an intended warning where they do not accept it. Structuring such a process to satisfy POPIA’s requirements would complement compliance with South African labour law, and vice versa, illustrating how well-designed governance can serve both regimes simultaneously.
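A minimal sketch of such an appeal mechanism, again in Python with entirely hypothetical names, might look as follows. The design point is that an AI-issued warning is never a closed matter: the employee's representations reopen it, and only a human reviewer can move it to a final state.

    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        ISSUED = "issued"                  # AI-generated, employee notified
        UNDER_APPEAL = "under_appeal"      # employee has made representations
        CONFIRMED = "confirmed"            # human reviewer upheld the warning
        WITHDRAWN = "withdrawn"            # human reviewer set it aside

    @dataclass
    class Warning:
        employee_id: str
        reason: str
        status: Status = Status.ISSUED
        representations: list[str] = field(default_factory=list)

    def appeal(warning: Warning, representation: str) -> None:
        # The appeal step contemplated by the Appropriate Measures Exclusion:
        # the employee's explanation is captured and the warning is routed to
        # a human decision-maker rather than resolved by the system itself.
        warning.representations.append(representation)
        warning.status = Status.UNDER_APPEAL

    def human_review(warning: Warning, upheld: bool) -> None:
        # Only a human reviewer may move a contested warning to a final state.
        warning.status = Status.CONFIRMED if upheld else Status.WITHDRAWN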

From an employment law perspective, at least two further legal issues could arise when considering AI-issued disciplinary warnings.

The first is an AI system making factual findings. Under South African law, disputes of fact are resolved against the probabilities and the credibility and reliability of the respective versions and witnesses. How the law will accommodate disputes of fact determined by AI systems is something still to be determined.

The second is whether South Africa’s employment laws would permit disciplinary sanctions to be meted out by AI systems. Determining a disciplinary sanction requires consideration of all relevant circumstances, including submissions in mitigation and aggravation, the personal circumstances of the employee and the imposition of a sanction that is fair to both employee and employer. This is not a mechanical exercise and the law recognises and protects the discretion of the presiding officer to apply a value judgment as to what fairness requires in the circumstances. Whether the law would show such deference to an AI-determined sanction is also something still to be determined.

Automated disciplinary sanctions may improve consistency, but they risk sacrificing the inherently value-laden judgment that fairness demands. AI systems used in discipline must therefore be designed with controls that identify when value judgments are required and escalate those determinations to appropriately authorised human decision-makers.
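One way to implement such a control is an escalation gate that inspects each disciplinary case for the features that call for a value judgment. The sketch below is illustrative only: the triggering factors and the set of sanctions that may proceed without human sign-off are assumptions that would, in practice, be defined by the employer's disciplinary policy.

    from dataclasses import dataclass

    @dataclass
    class DisciplinaryCase:
        employee_id: str
        proposed_sanction: str          # e.g. "verbal_warning", "final_warning"
        facts_disputed: bool
        mitigation_raised: bool

    # Assumption: the disciplinary policy defines which low-level sanctions
    # may be issued without human sign-off.
    AUTOMATABLE_SANCTIONS = {"verbal_warning"}

    def requires_human(case: DisciplinaryCase) -> bool:
        # Anything calling for a value judgment is escalated to an
        # authorised human decision-maker; only routine, undisputed,
        # low-level outcomes may be disposed of automatically.
        return (
            case.facts_disputed
            or case.mitigation_raised
            or case.proposed_sanction not in AUTOMATABLE_SANCTIONS
        )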

What employers need to consider

The following governance measures operationalise the principle that AI systems should inform, but not decide, issues arising in performance management and progressive discipline:

  • restrict emotion recognition features in the workplace and avoid automated credibility scoring;
  • prefer content-based or outcome-anchored metrics that can be substantiated;
  • publish clear decision boundaries for each workflow – what AI may flag and what humans must decide (see the sketch after this list) – and require reviewers to document reasons when accepting or departing from AI suggestions;
  • validate systems before use and on a fixed cadence for accuracy, stability and bias, retaining version logs and test reports for traceability;
  • only permit fully automated warnings or sanctions where appropriate governance frameworks are in place, and require human validation of any disciplinary outcome, including consideration of context and mitigation;
  • provide notices explaining how deployed performance or discipline related AI-systems operate, including explainers on inputs, oversight and appeal rights; and
  • monitor for psychosocial risk where real-time evaluation is used and engage the workforce where feelings of isolation or detachment may arise.
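
The decision-boundary measure, in particular, lends itself to a simple technical expression: the boundary is published configuration rather than logic buried in a model, so the rule can be read, audited and challenged. The sketch below uses hypothetical workflow and action names.

    # Per-workflow decision boundaries: what the AI may flag, and what a
    # human must decide. All workflow and action names are illustrative.
    DECISION_BOUNDARIES = {
        "performance_scoring": {
            "ai_may": ["flag_low_scores", "suggest_provisional_rating"],
            "human_must": ["assign_final_score", "document_reasons"],
        },
        "progressive_discipline": {
            "ai_may": ["flag_late_coming", "draft_warning_text"],
            "human_must": ["confirm_sanction", "consider_mitigation"],
        },
    }

    def is_permitted(workflow: str, action: str, actor: str) -> bool:
        # Actions reserved for humans are refused when attempted by the system.
        boundary = DECISION_BOUNDARIES[workflow]
        if actor == "ai":
            return action in boundary["ai_may"]
        return True  # human actors are not restricted by this control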

Employers who build these governance structures into their processes and procedures will find that many of the individual legal questions that AI-assisted discipline or performance management might otherwise generate simply do not arise, as the framework addresses the underlying risks before they can materialise.