Out-Law Analysis 7 min. read

Employers should look at AI’s impact on workforce through ESG lens


Employers can get ahead of new reporting requirements, and position themselves to attract and retain talent, by looking through the lens of 'ESG' at how adopting artificial intelligence (AI) systems within their organisation will impact workers.

The ESG (environmental, social and corporate governance) agenda reflects the growing focus of policymakers, regulators, investors and other stakeholders on not just what businesses do but how they do it – including the impact they have on people and the environment they operate in.

The World Economic Forum's (WEF) May 2023 Future of Jobs report identified frontier technologies, like AI, as potentially more transformational to businesses than the transition to a 'green' economy. There are, however, differing views on the extent to which AI will impact the labour market.

Elon Musk’s prediction, at an event hosted at the time of the UK’s AI safety summit in November 2023 – that “[t]here will come a point when no job is needed” – was the subject of significant media coverage and commentary, demonstrating the potential for debate, and the inherent public interest, around the impact of AI on workforces. However, a recent report by the European Central Bank claimed that reports of AI ending human labour may be greatly exaggerated. Although the report noted that it is too soon to reach a conclusive verdict, its research showed that AI-enabled automation was associated with an increase in employment in Europe. While some jobs may be replaced by AI, there is growing recognition of the jobs it will create in the tech sector and of how it will free up worker time to spend on other tasks.

If the jury is still out on 'macro' trends around AI and employment levels, company-specific assessments of the impact of AI on the workforce will be the best evidence for all stakeholders.

Workforce impacts and double materiality

ESG brings the workforce impact of AI into view primarily through the lens of double materiality. Double materiality is an ESG concept that encourages businesses to look at strategies from an external perspective – how the business impacts stakeholders – and an internal perspective – how the business itself is impacted.

Impacts can be positive and negative. A double materiality analysis of AI would look at how AI strategies positively and negatively impact the workforce, and how workforce issues related to AI positively and negatively impact the business. 

For most potentially positive workforce impacts of AI, a corollary negative impact can usually be identified: while new tech jobs may be created, some manual jobs may be lost; while a well-tested algorithm may help remove bias and discrimination from decision-making, a poorly designed one may introduce unintended bias and discrimination risks; and while AI decision-making may reduce human involvement and contact, it can free up people to spend more time on decisions that benefit most from uniquely human qualities.

For the business itself, replacing large sections of the workforce with AI may lead to risks or opportunities, for example, around publicity: positive publicity may be generated if jobs are created, or existing workers are retrained and redeployed; negative publicity may be generated if there are large-scale job losses. AI applications that replace workers with chatbots may also create risks and opportunities for the business around customer service satisfaction: positive impacts may be felt if AI makes interaction slicker and faster; negative impacts may be felt if AI produces pro-forma, imprecise and impersonal responses.

An ESG approach to AI requires a business to assess which positive and negative impacts are relevant to its own operations. That the workforce impact of AI will be business-specific is borne out by the WEF Future of Jobs report: nearly 75% of companies surveyed expect to adopt AI, with many expecting this to lead to high churn in the workforce – 50% of organisations expect AI to create job growth, while 25% expect it to create job losses.

Employers that understand the workforce impact of a specific AI application will be better placed to manage the effect of AI on workers and the business. They can also consider in advance whether the workforce impact aligns with broader corporate purpose statements and risk strategies. This kind of analysis promotes good governance, as there will be greater senior visibility of the potential impacts and, therefore, accountability for them.

Ideally, this assessment would be built into the initial decision-making around the adoption of AI applications, so that it is forward-looking. However, businesses that are new to ESG thinking may still find a retrospective analysis helpful where AI has already been rolled out and impacts have already been felt.

Mitigating the workforce impact of AI

If negative impacts are likely, an ESG approach promotes proactive mitigation of those risks. In an AI context, this may mean retraining and redeploying staff, or offering enhanced redundancy packages and support in finding new roles at other organisations. Workforce training and development is, in itself, a key indicator of good ESG. Communication strategies will also be an important risk mitigation tool, and media communications may need to be planned. The impact of AI on workers is a topic of current interest in the press, with many positive and negative stories already in the public domain – whether about possible mass redundancies or the retraining of staff for 'better jobs' once AI can be relied upon to perform routine tasks.

Evidence of the effective mitigation of negative AI-related workforce impacts could be a powerful ESG success story for a business to tell.

Employee engagement around AI 

Employee engagement is a core aspect of taking an ESG approach, and that approach therefore necessitates engaging with staff on the workforce impact of AI.

Employers may want to consider if existing channels of employee engagement are well suited to engaging on AI topics. If not, a new channel may need to be established, such as a worker focus group on AI.

Engaging with staff can also help identify whether, and what, training is needed to ensure workers are equipped to perform new or existing tasks.

Engagement needs to be effective. Ideally, employers will be able to demonstrate the influence that engagement has had on the business's overall AI strategy. In this regard, engagement on AI should operate as a feedback loop, with employee feedback reaching the board and board feedback being relayed back to employees. An effective employee engagement strategy may dispel fear and rumour around the introduction of AI, build trust, and avoid disputes: employees can suggest how best to mitigate any negative workforce impacts, and they are also well placed to help the business assess where AI would best improve business processes.

Whistleblowing and AI

Providing for effective whistleblowing channels is also a core component of good ESG.

Existing laws, such as the GDPR and equality legislation, already apply to AI systems, but more bespoke AI legislation and regulatory guidance is expected: the AI Act is anticipated in the EU, and the UK government is expected to provide more detail soon on how it will integrate cross-sector principles for the use of AI with a sectoral approach to regulation, which UK regulators are already applying within their own remits and existing frameworks. Businesses may, however, want to ensure that employees feel empowered to raise legal concerns around AI, and to consider whether existing whistleblowing channels are effective in addressing AI-related disclosures. Employees may also speak up with concerns about whether new AI legal requirements are being complied with.

There may also be whistleblowers who report AI applications breaching existing laws – such as equality legislation, where an algorithm is biased, or health and safety rules, where the use of AI in medical or transport applications raises safety concerns if it is not subject to appropriate testing.

Reporting workforce impacts of AI

Many employers already report voluntarily on ESG, including workforce matters, under global reporting standards such as the Global Reporting Initiative Standards. Given the global prominence of AI, investors are likely to want companies to report on AI and its associated ESG angles.

Although the initial focus of international reporting obligations has been on climate matters, new obligations are beginning to emerge that shine a light on social matters too, with the EU Corporate Sustainability Reporting Directive (CSRD) the leading global example. The CSRD requires businesses to consider, and report on, matters relating to their own workforce and workers in their wider value chain. The first CSRD reports for large public interest entities will be published in 2025, with compliance then filtering down to SMEs, which will first report in 2027.

Although the CSRD is an EU directive, its impact will be wider, because non-EU corporates with certain connections to the EU are caught by it. So, while the directive does not form part of UK law post-Brexit, some UK companies could be brought within its scope as a result of its extraterritorial reach. The directive is also expected to drive up global standards of workforce reporting, the effects of which could well be felt in the UK where, aside from some specific employee topics, existing legal obligations to report on employee matters and the interests of employees are expressed in very broad terms.

The CSRD contains specific examples of how climate change strategies may impact the workforce and how these should be reported as part of non-financial reporting obligations. The impact of AI on the workforce, however, needs to be taken as seriously as climate change in its capacity to reshape workforces. Businesses with an AI strategy that has a workforce impact will need to report on it under the granular requirements of the CSRD's social reporting standards, which look for double materiality analysis, mitigation of negative impacts, training and development, whistleblowing channels and employee engagement.

With increasing mandatory non-financial reporting ahead, businesses may want to get ahead of regulation and combine their ESG and AI strategies.

Co-written by Gemma Herbertson and Anthony Convery of Pinsent Masons.
