Out-Law News

AI in Australia must be human-rights-centred says Commission


Emma Lutwyche tells HRNews about the development of AI regulation in Australia



  • Transcript

    Artificial intelligence has the potential to have a profoundly positive impact on the workplace in Australia, but safeguards are needed to protect human rights. That is the central message from the Australian Human Rights Commission in a paper it has submitted to the Australian government. We’ll speak to a Sydney-based lawyer about the current state of AI regulation in Australia.

    The Commission was responding to a consultation by the Australian government’s Department of Industry, Science and Resources, which opened on 1 June. The government’s discussion paper provides an overview of global approaches to the regulation of AI and invited views from stakeholders on whether the existing regulatory framework, such as it is, caters adequately for the rapid emergence of AI-driven technologies and whether enhanced regulation is necessary. The Commission’s response, a paper called ‘The Need for Human Rights-centred Artificial Intelligence’, raises a number of concerns about the risk of AI interfering with a wide range of human rights, including privacy, algorithmic discrimination, automation bias and misinformation.

    The Commission sets out no fewer than 47 recommendations to the Australian government, which are best summarised as a request for a complete review of the current regulatory landscape and the human rights risks associated with using AI.

    At this point, like most countries across the globe, Australia is feeling its way. As you’d expect, Australia was represented at the Bletchley Park Summit hosted by Rishi Sunak three weeks ago, and it was one of the 28 countries that signed up to the Bletchley Declaration, pledging to cooperate to promote the safe use of artificial intelligence tools.

    The Bletchley Summit was all about international cooperation, with each country still working out how to shape its own domestic regulatory framework. So what does Australia’s look like right now? Emma Lutwyche is an employment lawyer based in Sydney and earlier she joined me by video-link. First question – right now, how is AI in Australia regulated?

    Emma Lutwyche: “It isn’t currently. There's no legislation in place other than bits and pieces that may apply, for example, to privacy or data, but there is no fit-for-purpose legislation for this. The government is aware that it is an issue and is starting to put together papers, and receive submissions from stakeholders, but it's not at the point of draft legislation. It's not at the point of having decided what it needs to do about this yet.”

    Joe Glavina: “I gather one of the issues you’ve been flagging with clients is the impact AI might have in a performance management setting. So, when someone uses AI to help them do their job, how a manager takes that into account in judging the individual’s performance. Can you tell me about that?” 

    Emma Lutwyche: “Yes, so this is an issue that we've certainly started talking to clients about, and clients are concerned about. The issue is around how the use of AI will become increasingly part of people's jobs on the ground and how the employer understands, monitors, and then reviews how their employees are using that AI. So for things like performance management, or review of work product – the drafting of documents, for example, or the preparation of a slide pack for a client, or any real work product that the employee is employed to prepare – if they've started using AI for the preparation of that work product, how is the employer then going to monitor how they have done that, that it’s not plagiarised, for example, and that it's correct and factual? And then, when there is a performance review or an assessment of that work product, can the employer rightly take performance management steps against that employee where they have used AI and it's not technically their fault, for example, that the information is not correct?”

    Joe Glavina: “That's interesting. So I guess that comes down to really understanding how the AI works and how the employee is using it? So if managers are making judgments about an employee’s performance they need to understand exactly how the machine affects that performance?” 

    Emma Lutwyche: “Yes, that's absolutely right. There can't be a gap between the employer and the employee in understanding what information has been fed to the AI and what has been developed by the AI, and where you're talking about, for example, quite a long chain of management, or generational gaps between employees and management, that's going to be difficult.”

    Joe Glavina: “Finally, Emma, what's the advice to HR professionals watching this?”

    Emma Lutwyche: “So one of the key things that we're advising our HR representative clients to do is to remain really involved in the evolution of AI within their business, because they really need to understand what data is being fed into the AI so that decisions are made transparently, objectively and without bias. So, it's really important, from our perspective, to be talking to our HR managers about being involved, and at the table, for those discussions and, again, really understanding how it's going to be developed and used.”

    The Australian Human Rights Commission’s submission paper is called ‘The Need for Human Rights-centred AI’ and we’ve included a link to it in the transcript of this programme. We’ve also included a link to an Out-Law analysis piece looking at the current state of AI regulation globally and some of the opportunities and challenges facing HR in this area.

    LINKS
    - Link to Out-Law analysis: ‘Artificial intelligence offers opportunities and challenges for HR’
    - Link to submission by the Australian Human Rights Commission in response to the Australian government’s Discussion Paper ‘Supporting Responsible AI’
