Out-Law News

Agentic AI raises transparency questions for retailers



Online retailers have been urged to consider how they might meet their obligations around transparency to consumers if those consumers use agentic AI systems to shop.

Malcolm Dowden and Angelique Bret of Pinsent Masons said a new report published by the UK Information Commissioner’s Office (ICO) highlighted the impact agentic AI could have on compliance.

Agentic AI systems are set up to act autonomously on the basis of dynamic reasoning, with little or no human input. This works through a process known as ‘chaining’, where agentic AI systems start with a specific prompt and a set goal and deliver their end result by performing a series of sequential tasks, with the process being informed by the outputs – and the systems’ interpretation of those – at each stage.

Agentic AI systems determine how to act by perceiving the information they receive and adapting to their environment without needing precise human instruction. In operation, agentic AI systems leverage ‘static’ AI agents and other tools and resources to build additional capabilities, such as reasoning, planning and self-evaluation.
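The ‘chaining’ process can be pictured, in very simplified terms, as a loop in which the system plans a task, acts through a tool, and feeds the output back into the next planning step. The sketch below is purely illustrative; the function and class names are hypothetical and do not reflect any particular agentic AI product.

```python
# Minimal, illustrative sketch of agentic 'chaining'. Names such as
# plan_next_step and call_tool are hypothetical placeholders, not any
# vendor's API.

from dataclasses import dataclass, field


@dataclass
class AgentState:
    goal: str                                     # the original prompt / objective
    history: list = field(default_factory=list)   # outputs from earlier steps
    done: bool = False


def plan_next_step(state: AgentState) -> str:
    """Decide the next task from the goal and the outputs gathered so far."""
    # A real system would call a reasoning model here; this stub simply
    # stops after three steps so the example terminates.
    return "finish" if len(state.history) >= 3 else f"step-{len(state.history) + 1}"


def call_tool(task: str, state: AgentState) -> str:
    """Invoke a 'static' agent or external tool (search, price check, payment...)."""
    return f"result of {task} towards goal '{state.goal}'"


def run_agent(goal: str) -> AgentState:
    state = AgentState(goal=goal)
    while not state.done:
        task = plan_next_step(state)      # planning, informed by prior outputs
        if task == "finish":
            state.done = True             # self-evaluation decides the goal is met
            continue
        output = call_tool(task, state)   # act via tools
        state.history.append(output)      # each output shapes the next step
    return state


if __name__ == "__main__":
    final = run_agent("buy running shoes under £80")
    print(final.history)
```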

In its recent report, the ICO said it wants to “encourage responsible development and use of agentic AI”, but it warned public trust in this area of evolving technological development could be harmed if “organisations fail to demonstrate adequate compliance of their AI agents”.

While much of the report focused on data protection compliance issues that could arise from organisations deploying agentic AI themselves, the ICO also highlighted how consumers might use such systems, including for online shopping. Dowden said this example raises compliance questions for retailers.

According to the ICO, there are already some agentic AI systems that enable consumers to select retailers from whom they might buy products. Based on budget limits and preferences set by the consumer, and “with appropriate permissions”, those systems can then arrange for payment and delivery of goods.
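Purely as an illustration of the kind of instructions the ICO describes, the budget limits, preferences and permissions a consumer sets for a shopping agent might be represented along the following lines; the structure and field names are hypothetical and do not reflect any specific product.

```python
# Hypothetical sketch of the constraints a consumer might give a shopping agent.
# Field names are illustrative only.

from dataclasses import dataclass


@dataclass
class ShoppingMandate:
    budget_limit_gbp: float          # hard spending cap set by the consumer
    preferred_retailers: list[str]   # retailers the agent may consider
    delivery_postcode: str
    may_place_order: bool            # explicit permission to form the contract
    may_authorise_payment: bool      # explicit permission to pay


def within_mandate(price_gbp: float, retailer: str, mandate: ShoppingMandate) -> bool:
    """Check whether a candidate purchase stays inside the consumer's instructions."""
    return (
        price_gbp <= mandate.budget_limit_gbp
        and retailer in mandate.preferred_retailers
        and mandate.may_place_order
    )


mandate = ShoppingMandate(
    budget_limit_gbp=80.0,
    preferred_retailers=["RetailerA", "RetailerB"],
    delivery_postcode="EC1A 1BB",
    may_place_order=True,
    may_authorise_payment=True,
)
print(within_mandate(64.99, "RetailerA", mandate))  # True
```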

Dowden said: “High-level EU-funded research into ‘non-deterministic’ AI – a concept now more routinely referred to as agentic AI – identified no legal barrier to AI validly forming contracts, but it did flag practical concerns focused on information requirements under consumer law, in particular regarding requirements in consumer transactions for the provision of information to the consumer and, for certain types of transaction, for a cooling off period.”

“In the scenario of a retailer engaging with an agentic AI system deployed by a consumer, it seems clear that the retailer would not simply be able to rely on providing the information required under consumer law to the agentic AI system. This is because that information must be ‘read’ and ‘understood’ at the point of contract formation, which suggests it must be provided to the consumer,” he said.

“While this by no means bars the use of agentic AI in the context of regulated consumer contracts, it does introduce an element of friction, along with a need to code-in cooling off provisions to meet statutory requirements. It is an issue retailers should consider as developments with agentic AI evolve, given that they might find themselves interacting with shopping bots rather than shoppers,” Dowden said.
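As a purely illustrative sketch of the ‘friction’ Dowden describes, a retailer responding to an agent-initiated order might need to attach the statutory pre-contract information and cooling-off details so they can be passed on to the consumer personally. The fields and values below are hypothetical and are not a compliance template.

```python
# Hypothetical sketch of the information a retailer might attach to an
# agent-initiated order so it can be surfaced to the consumer personally.
# Field names, URL and periods are illustrative, not legal advice.

from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class ConsumerInformation:
    order_id: str
    total_price_gbp: float
    key_terms_url: str                    # where the pre-contract information lives
    cooling_off_days: int                 # statutory cooling-off period, where one applies
    requires_consumer_confirmation: bool  # information must reach the consumer, not just the agent


def cooling_off_deadline(order_date: date, info: ConsumerInformation) -> date:
    """Date until which the consumer can cancel under the cooling-off period."""
    return order_date + timedelta(days=info.cooling_off_days)


info = ConsumerInformation(
    order_id="ORD-0001",
    total_price_gbp=64.99,
    key_terms_url="https://retailer.example/terms",
    cooling_off_days=14,
    requires_consumer_confirmation=True,
)
print(cooling_off_deadline(date(2025, 1, 10), info))  # 2025-01-24
```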

Angelique Bret, competition and consumer law expert at Pinsent Masons, said: “Agentic AI could raise concerns in respect to the protection of consumers from mis-selling and/or pricing transparency, including when addressing unfair commercial practices under the DMCC Act or unfair terms under the Consumer Rights Act.”

“For example, the use of agentic AI could raise complexities when ascertaining how an average consumer’s ‘transactional decision’ may have been influenced by a seller’s actions or omissions if the consumer’s decision-making was in effect outsourced to an AI agent. Likewise, agentic AI could conceivably raise various compliance considerations for retailers, for example whether and how their online choice architecture or pricing transparency practices would need to be adapted to meet consumer law requirements to take into account consumers’ use of AI agents for their purchasing activities,” she said.

Transparency obligations also arise for retailers in the context of data protection and privacy law.

Under the GDPR, personal data must be processed lawfully, fairly and in a transparent manner in relation to the data subject. Certain types of information need to be provided to data subjects when either collecting personal data directly from them or obtaining such data about them indirectly, such as when data is sourced from a third party.

For example, where organisations obtain personal data indirectly they must provide information about who they are, their contact details, the purposes of the processing and the legal basis for it, the categories of personal data they intend to process, and at least the categories of other organisations they intend to share the data with. They must also explain how long they intend to retain the data and the rights data subjects can exercise in respect of it. The information provided must be concise, transparent and intelligible, presented in an easily accessible format, and written in plain language.
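As a minimal, hypothetical sketch of how a retailer might capture these information items in a structured form, the following mirrors the requirements summarised above; the field names and example values are illustrative only and do not constitute a privacy notice.

```python
# Illustrative structure for the transparency information a controller must
# provide when personal data is obtained indirectly. Field names and values
# are hypothetical; the requirements are those summarised in the article.

from dataclasses import dataclass


@dataclass
class IndirectCollectionNotice:
    controller_name: str
    controller_contact: str
    purposes: list[str]              # purposes of the processing
    legal_basis: str                 # lawful basis relied on
    data_categories: list[str]       # categories of personal data processed
    recipient_categories: list[str]  # categories of organisations data is shared with
    retention_period: str            # how long the data will be kept
    data_subject_rights: list[str]   # rights the individual can exercise


notice = IndirectCollectionNotice(
    controller_name="Example Retailer Ltd",
    controller_contact="privacy@example-retailer.co.uk",
    purposes=["order fulfilment", "fraud prevention"],
    legal_basis="performance of a contract",
    data_categories=["contact details", "delivery address", "payment token"],
    recipient_categories=["delivery partners", "payment processors"],
    retention_period="6 years after the last transaction",
    data_subject_rights=["access", "rectification", "erasure", "objection"],
)
```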

In its report, the ICO warned that a lack of transparency over the processing of personal data in the context of agentic AI could result in “unintended harms to the people affected”.

It added that agentic AI systems could in future become so sophisticated and autonomous as to “generate new ways of using personal information”. It further highlighted the potential for processing to be undertaken “beyond what developers anticipated”, without the knowledge of data subjects, in pursuit of objectives that do not align with data subjects’ expectations, or for purposes that differ from those the data was initially collected for.

The ICO said: “While the autonomous nature of future agentic systems could pose challenges to organisations for transparency, the organisations’ obligations as data controllers remain the same. To ensure that personal information is processed transparently in agentic systems, organisations must consider their obligations and how they will meet them before they begin processing.”
