OUT-LAW NEWS
18 Feb 2026, 12:09 pm
Online retailers and payment service providers may need to reconfigure their online payment processes to enable consumers to complete transactions using AI agents, an expert has said.
David Tilbury of Pinsent Masons said that while agentic AI offers consumers a new way to shop online, it poses new questions for retailers and payment providers around user experience and compliance.
Agentic AI is a term used to describe systems set up to act autonomously on the basis of dynamic reasoning, with little or no human input. Agentic AI is already impacting the way that some consumers shop online, as a recent report by the UK’s Information Commissioner’s Office (ICO) highlighted. At the time, experts at Pinsent Masons flagged consumer law and data protection law issues that arise in the retail context.
Some payment service providers (PSPs) and online retail businesses have already acted in response to the emergence of agentic AI in commerce. For example, eBay has restricted the use of certain AI agents while Worldpay has moved to enable agent-driven transactions, including by adjusting its practices around fraud and chargebacks.
Tilbury said agentic AI raises specific issues for how retailers and payment providers enable compliance with UK and EU payment rules on ‘strong customer authentication’.
The strong customer authentication requirements are broadly designed to ensure that the person whose funds are to be drawn for a transaction is who they say they are and consents to the payment. They entail ensuring that at least two of three possible elements – something the account holder knows, something they possess and something they are – are present and independent of one another.
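By way of illustration, the minimal sketch below models that two-of-three rule in Python. It is a simplified picture of the regulatory requirement, not any provider's actual implementation, and all names in it are hypothetical.

```python
from enum import Enum

class Factor(Enum):
    KNOWLEDGE = "something the account holder knows"   # e.g. a PIN or password
    POSSESSION = "something they possess"              # e.g. a registered device
    INHERENCE = "something they are"                   # e.g. a fingerprint

def sca_satisfied(verified_factors: set[Factor]) -> bool:
    """Strong customer authentication requires at least two of the
    three independent element categories to be present."""
    return len(verified_factors) >= 2

# A PIN plus a check on a registered device passes; a PIN alone does not.
assert sca_satisfied({Factor.KNOWLEDGE, Factor.POSSESSION})
assert not sca_satisfied({Factor.KNOWLEDGE})
```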
Tilbury said: “Agent‑initiated payments challenge traditional assumptions about the identity that needs to be authenticated – the consumer’s or the agent’s delegated authority – and how consent is captured. The strong customer authentication rules require that the payer is made aware of the payment amount and the payee, and satisfying the different elements often requires a human to be present – such as, for example, where there is verification via biometrics.”
According to Tilbury, there are ways in which the strong customer authentication requirements can be met in the context of agentic AI use under the current rules.
He said: “For example, credentials denoting an initial consumer authentication can be tokenised for an AI agent to use to execute future payments. Alternatively, a consumer could complete strong customer authentication processes on their device at the point of being onboarded to services, which could then serve to authorise an AI agent to act on their behalf thereafter. The ‘trusted beneficiaries’ exemption may also be relied upon, which allows customers to ‘whitelist’ specific merchants, e.g., preferred retailers, so that any future payments to those merchants do not require further authentication.”
“In those scenarios, the AI agent would use the authentication already provided by a human, becoming the one that decides what to purchase, initiates transactions and triggers the payment flow,” Tilbury said.
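A minimal sketch of how those routes might combine, assuming a hypothetical PSP interface: the consumer completes strong customer authentication once at onboarding, a scoped token is issued for the agent to present for later payments, and payments to whitelisted ‘trusted beneficiaries’ skip further authentication. The class, function and field names below are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class DelegationToken:
    """Issued after the consumer completes SCA at onboarding;
    the AI agent presents it for subsequent payments."""
    consumer_id: str
    agent_id: str
    trusted_beneficiaries: set[str] = field(default_factory=set)  # whitelisted merchants

def authorise_agent_payment(token: DelegationToken, merchant_id: str) -> str:
    if merchant_id in token.trusted_beneficiaries:
        # 'Trusted beneficiaries' exemption: no further authentication needed.
        return "approved"
    # Otherwise fall back to a fresh SCA challenge on the consumer's device.
    return "step_up_required"

token = DelegationToken("consumer-1", "agent-1", trusted_beneficiaries={"preferred-retailer"})
print(authorise_agent_payment(token, "preferred-retailer"))  # approved
print(authorise_agent_payment(token, "unknown-merchant"))    # step_up_required
```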
User experience needs to be factored into any reconfiguring of payment processes alongside regulatory compliance, according to Tilbury.
“In an agentic AI context, the aim would be to minimise human input,” Tilbury said. “This means reducing friction – if an AI agent seeks user confirmation for every small action, agentic commerce essentially becomes the same as traditional manual checkout. It also means adopting payment protocols that can accurately interpret the AI agent’s actions as the user’s intent, so that fewer user approvals are required, and developing shared protocols across merchants so that AI agents can transact seamlessly with any merchant without each one building bespoke integrations. On top of that, security is paramount – authentication, credential management, tokenisation, and fraud detection must operate behind the scenes so that human input is not required every time.”
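One way that kind of low-friction flow could be structured is sketched below, assuming a hypothetical mandate granted by the consumer at onboarding: actions inside the mandate proceed as the user's intent without confirmation, while anything outside it escalates to the user. The mandate fields and thresholds are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class AgentMandate:
    """Limits the consumer grants at onboarding; agent actions
    inside them proceed without further human input."""
    per_payment_cap: float
    allowed_merchants: set[str]

def requires_human_confirmation(mandate: AgentMandate, merchant: str, amount: float) -> bool:
    # Inside the mandate: treat the agent's action as the user's intent.
    within_mandate = merchant in mandate.allowed_merchants and amount <= mandate.per_payment_cap
    return not within_mandate

mandate = AgentMandate(per_payment_cap=50.0, allowed_merchants={"grocer", "pharmacy"})
print(requires_human_confirmation(mandate, "grocer", 20.0))   # False: proceeds silently
print(requires_human_confirmation(mandate, "grocer", 500.0))  # True: escalate to the user
```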
“Retailers and financial services providers will need to design flows where agents can trigger payments without degrading conversion – the user experience risk is real: poorly designed consent journeys will lead to failed payments, abandoned carts, and potential regulatory exposure,” he said.
According to Tilbury, thought also needs to be given to whether enabling agentic AI requires changes to be made to the way liability is allocated currently.
“Allocation of liability usually falls between three actors: consumers, PSPs and merchants; in the context of agentic AI, there will be four actors – the merchant, PSP, the model developer, and the deploying business – each with different capabilities and risk profiles,” Tilbury said. “At the moment, there is no guidance on allocation of liability for when an AI agent is involved. So, it is unclear who would be responsible if an agent over‑orders, pays the wrong merchant, or misinterprets a consumer instruction, for example.”
“Financial services and retail contracts will need to: define when an agent is deemed to be ‘acting on behalf of’ the consumer; set out loss‑sharing frameworks for unauthorised or erroneous agent actions; and build in indemnities and service levels that reflect model behaviour and don’t simply focus on availability and settlement timeframes. Without this, disputes will increase and recovery routes will be unclear,” he said.
Further issues around data governance need to be considered too, Tilbury added.
“AI agents ingest more data, from more sources, and at greater speed – that creates governance demands well beyond typical payment flows,” according to Tilbury, who said retailers and PSPs that want to facilitate agentic AI in commerce will typically need to undertake a data protection impact assessment first.
Tilbury said the way retailers and PSPs approach compliance with rules on purpose limitation and data minimisation in data protection law may need to change to account for AI agents combining behavioural, transactional, and contextual data.
Other risks also need to be considered, he said, including the international transfer of personal data by downstream model providers, the use of customer data in agent training, and data leakage where agents interface with multiple merchants or platforms.
Retailers and PSPs need to have an appreciation of rules around auditability when enabling agent-driven payments, Tilbury said: “Organisations will need to maintain detailed logs capturing: the agent’s input/output; the payment instruction generated; and timestamped traceability of every decision step. Provenance matters too – regulators will expect evidence of how an agent reached a particular recommendation or action. Technical and contractual ‘kill-switches’ will also be essential, so businesses can immediately suspend agent activity if behaviour becomes risky or anomalous.”
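As a rough illustration of those auditability expectations, the sketch below pairs a timestamped, append-only decision log with a kill-switch flag. It is a minimal model of the logging and suspension behaviour Tilbury describes, with hypothetical names, not a compliance-ready design.

```python
import json
import time

class AgentAuditLog:
    """Append-only log of each decision step an agent takes, giving
    timestamped traceability from agent input to payment instruction."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self.suspended = False  # the 'kill-switch' flag

    def record(self, agent_input: str, agent_output: str, payment_instruction: dict) -> None:
        self.entries.append({
            "timestamp": time.time(),
            "agent_input": agent_input,
            "agent_output": agent_output,
            "payment_instruction": payment_instruction,
        })

    def kill_switch(self) -> None:
        # Immediately suspend agent activity if behaviour becomes anomalous.
        self.suspended = True

log = AgentAuditLog()
log.record("buy milk", "order placed with grocer", {"payee": "grocer", "amount": 2.50})
print(json.dumps(log.entries, indent=2))
```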