
Out-Law News

Uber case a reminder of dangers of potentially discriminatory AI


The UK’s Equality and Human Rights Commission (EHRC) has issued a reminder to employers to be mindful of the way in which they use artificial intelligence (AI), to prevent inadvertent bias or discrimination.

It follows a case in which Uber Eats made a financial settlement to one of its drivers, Pa Edrissa Manjang, in response to his claims that the AI facial recognition checks required to access the Uber Eats platform were racially discriminatory under the Equality Act 2010.

Manjang, who had worked as an Uber Eats driver since 2019, experienced persistent difficulties with the company’s verification checks, which use AI facial detection and facial recognition software. Manjang was removed from the platform following a failed recognition check and a subsequent automated process.

Anne Sammon, employment law expert at Pinsent Masons, said: “The case highlights the importance of ensuring that employers understand the systems that they have in place.”

Uber Eats claimed it had found “continued mismatches” in the photos of his face that Manjang had taken in order to access the platform. The EHRC and the App Drivers and Couriers Union (ADCU) were concerned by the use of AI and automated processes in this case, particularly the way they could be used to permanently suspend a driver’s access to the app, depriving them of an income.

“Employers and those responsible for AI within firms should take note of some of the issues arising from this case,” Sammon added. For instance, during the preliminary hearing it appeared that the reasons given to Manjang for his treatment were different from those advanced in the defence of the claim, “which seems to have been a result of a lack of clear processes in place”, Sammon said.

As the use of AI becomes increasingly prevalent in the workplace, businesses and those in charge of them now have a responsibility to ensure the technologies used are transparent, fair, and non-discriminatory, she said.

Where AI systems are responsible for discrimination or bias, resulting in unfair treatment, this could lead to regulatory scrutiny. This is particularly relevant in the financial services context given the Financial Conduct Authority (FCA) and Prudential Regulation Authority’s (PRA) focus on non-financial misconduct, she said.

“Chief technology officers and those senior managers responsible for AI will want to ensure that they properly understand the systems that their firms are deploying so that they are able to answer any regulatory challenges that subsequently arise,” Sammon said.

Pinsent Masons, in partnership with the Edinburgh Future Institute and the University of Edinburgh Business School, is delivering a 1.5-day course for senior managers in financial services focusing on developing competence and training in understanding and addressing AI legal and regulatory risk. Registration for the event on 4 and 5 June is open now.
