Used well, AI can increase consistency, speed up hiring, reduce administrative burdens and ease the pressure on HR and recruitment functions. It can also help identify talent that might otherwise be overlooked. These benefits, however, sit alongside well-documented legal and governance risks that require active management.
Recruitment AI systems are trained or calibrated against data, and that data often defines both their utility and their risk. Where the data reflects existing inequalities or systemic bias, the system can reproduce and amplify those outcomes, often at scale and without clear visibility. Systems trained or calibrated on data drawn from contexts foreign to the environment in which they are deployed can produce unintended and unfair results, and uncalibrated filters that rely on oversimplified demographics or poorly constructed datasets may disadvantage qualified applicants.
International experience shows what can go wrong in the absence of proper controls. In one widely cited case, a recruitment tool trained on data from a male-dominated workforce consistently downgraded applications from women. The system simply learned from past decisions and repeated them.
The lesson is not that AI should be excluded from recruitment, but that it must be tested, calibrated and governed before and during its use. Nor does the fact that a decision is made or informed by an AI system reduce the employer's responsibility for it.
Automated interviews and accent‑based bias
Consider the following situation: An employer advertises a team manager role and invites 20 internal team members to apply. To manage scale and resource constraints, the employer uses an automated interview system. Candidates respond to prompts, and the AI scores and ranks them based on both content and delivery. The highest‑ranked candidate is then appointed. Several unsuccessful candidates challenge the decision, alleging unfair discrimination and an unfair labour practice. They argue that the AI system disadvantaged them because of their regional South African accents or command of the interview language.
This type of automated interview process involves the processing of voice and other biometric data, which is classed as ‘special personal information’ under the Protection of Personal Information Act (POPIA). The employer would therefore have to ensure that it processed this information in accordance with POPIA’s requirements for the lawful processing of such data. If the decision was fully automated, the employer would also need to ensure that it fell within one of POPIA’s exceptions permitting automated decision-making. In practice, this would likely mean reopening the process and giving affected candidates an opportunity to make representations, which could itself create employment law risk.
From an employment law perspective, drawing adverse inferences from a candidate’s accent or command of a language could amount to indirect and possibly direct unfair discrimination on listed grounds such as race, ethnic origin or culture; or to unfair discrimination on an arbitrary ground. South Africa’s Employment Equity Act prohibits such unfair discrimination, and any argument that the absence of a regional accent or a particular command of a language is an inherent requirement of the role would face scrutiny.
The unfair labour practice claim would be assessed against a different standard, with fairness as the ultimate test. An unsuccessful employee’s challenge would succeed if they demonstrated that the employer acted arbitrarily, capriciously or in bad faith. To answer such a claim, the employer would likely need to explain how the process worked and what safeguards were in place, which is most feasible where the AI-based recruitment exercise was conducted within robust governance structures and controls.
The localisation gap
A further risk in AI-assisted recruitment in South Africa is the localisation gap. AI systems developed and trained in foreign markets may not adequately account for South Africa’s linguistic and cultural diversity. Although South Africa’s draft Artificial Intelligence (AI) Policy has been withdrawn, its recognition of localisation as a central driver of bias mitigation is likely to be carried forward in the next iteration.
An automated CV review system designed and trained abroad could unfairly screen out candidates because the language or structure of their CVs does not align with built-in metrics that were never calibrated for, or are inappropriate to, the South African context. This can create a structural barrier that closely mirrors the forms of indirect discrimination that South Africa’s employment law is intended to address.
This is where governance and risk management are more effective than legal strategy alone. A structured risk‑mapping exercise would require employers to identify who is likely to be affected by an AI recruitment or promotion system and how. That analysis would surface localisation risks and allow employers to build in controls before deployment – including, where necessary, a decision that the process cannot be fully automated.
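For illustration only, the sketch below shows what a single entry in such a risk-mapping exercise might record in code. Every field name and value here is hypothetical rather than drawn from any standard; it simply makes concrete the idea of documenting who is affected, what data the system relies on, and whether the process may run fully automated.

```python
from dataclasses import dataclass, field

@dataclass
class RecruitmentAIRiskEntry:
    """One hypothetical entry in an AI recruitment risk map."""
    system: str                 # the tool being assessed
    purpose: str                # what it decides or informs
    affected_groups: list[str]  # who the output touches
    data_sources: list[str]     # what it was trained or calibrated on
    localisation_checked: bool  # validated against South African data?
    fully_automated: bool       # is there no meaningful human review?
    controls: list[str] = field(default_factory=list)

entry = RecruitmentAIRiskEntry(
    system="CV screening tool (vendor X)",
    purpose="shortlisting applicants for interview",
    affected_groups=["external applicants", "internal candidates"],
    data_sources=["vendor training corpus (foreign market)"],
    localisation_checked=False,
    fully_automated=True,
)

# A simple rule of the kind the text describes: if localisation has not
# been verified, the process should not run fully automated.
if entry.fully_automated and not entry.localisation_checked:
    entry.controls.append("require human review before deployment")
    entry.fully_automated = False
```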
Governance considerations
Responsible use of AI in recruitment requires deliberate design and oversight. Practical governance measures include:
- maintaining an inventory of recruitment AI tools and documenting ownership, purpose, data used, affected groups and decision impact;
- treating recruitment systems as high‑risk and requiring formal approval before deployment, including confirmation of human oversight and bias testing;
- localising models and keyword libraries for South Africa’s linguistic and demographic context, and validating them on local data;
- running pre‑deployment and periodic adverse‑impact testing, with results documented and acted upon (a simple illustration follows this list);
- ensuring meaningful human involvement and providing candidates with a way to challenge automated assessments;
- embedding vendor obligations in contracts, including transparency around training data, support for bias testing, access to logs and notice of material model changes;
- giving candidates clear information about how AI is used in recruitment and what review rights they have; and
- establishing processes to respond to discriminatory outputs or data issues, including investigation, remediation and reporting.
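To make the adverse-impact testing point concrete, here is a minimal sketch of one common way such a check can be run over screening outcomes. The group labels and numbers are hypothetical, and the 0.8 flagging threshold borrows the US ‘four-fifths rule’ as a convenient convention; it is not a South African statutory test.

```python
from collections import Counter

def adverse_impact_ratios(outcomes, threshold=0.8):
    """Compute each group's selection rate as a ratio of the highest
    group's rate, flagging ratios below `threshold`.
    `outcomes` is a list of (group_label, selected_bool) pairs."""
    applicants = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, hired in outcomes if hired)
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values()) or 1.0  # avoid division by zero
    return {g: {"selection_rate": round(r, 3),
                "impact_ratio": round(r / best, 3),
                "flagged": r / best < threshold}
            for g, r in rates.items()}

# Hypothetical screening outcomes for three applicant groups.
sample = ([("group_a", True)] * 30 + [("group_a", False)] * 70
          + [("group_b", True)] * 18 + [("group_b", False)] * 82
          + [("group_c", True)] * 29 + [("group_c", False)] * 71)

for group, stats in adverse_impact_ratios(sample).items():
    print(group, stats)  # group_b is flagged at an impact ratio of 0.6
```

A flagged ratio is not proof of unfair discrimination; it is a signal that the output needs investigation and, where appropriate, recalibration before the tool continues to be used.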
Employers who adopt these measures are better placed to reduce legal risk. More importantly, they can demonstrate that their recruitment processes were designed, tested and governed responsibly. That is what turns AI in recruitment from a potential liability into a defensible and sustainable operational tool.
Co-written by Annelle Kamper and Alex du Plessis of Pinsent Masons.