Agentic AI describes systems designed to act autonomously on the basis of dynamic reasoning, with little or no human input. Agentic AI is already changing the way that some consumers shop online, as a recent report by the UK’s Information Commissioner’s Office (ICO) highlighted. At the time, experts at Pinsent Masons flagged consumer law and data protection law issues that arise in the retail context.
Some payment service providers (PSPs) and online retail businesses have already acted in response to the emergence of agentic AI in commerce. For example, eBay has restricted the use of certain AI agents while Worldpay has moved to enable agent-driven transactions, including by adjusting its practices around fraud and chargebacks.
Tilbury said agentic AI raises specific issues for how retailers and payment providers ensure compliance with UK and EU payment rules on ‘strong customer authentication’.
The strong customer authentication requirements are broadly designed to ensure that the person whose funds are to be drawn for a transaction is who they say they are and consents to the payment. They entail ensuring that at least two of three possible elements – something the account holder knows, something they possess and something they are – are present and independent of one another.
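As a rough illustration of the ‘two of three elements’ test described above, the following sketch checks whether an authentication attempt satisfies the requirement. The element names and the `AuthenticationAttempt` structure are hypothetical, for illustration only – they do not correspond to any real PSP API:

```python
from dataclasses import dataclass, field

# The three recognised categories of authentication element.
KNOWLEDGE = "knowledge"    # something the account holder knows, e.g. a PIN
POSSESSION = "possession"  # something they possess, e.g. a registered device
INHERENCE = "inherence"    # something they are, e.g. a fingerprint

@dataclass
class AuthenticationAttempt:
    verified_elements: set = field(default_factory=set)

    def satisfies_sca(self) -> bool:
        # SCA requires at least two distinct, independent categories.
        categories = self.verified_elements & {KNOWLEDGE, POSSESSION, INHERENCE}
        return len(categories) >= 2

# A registered device plus a biometric meets the two-of-three threshold;
# a PIN on its own does not.
print(AuthenticationAttempt({POSSESSION, INHERENCE}).satisfies_sca())  # True
print(AuthenticationAttempt({KNOWLEDGE}).satisfies_sca())              # False
```

The independence requirement (that compromising one element does not compromise another) is not modelled here; in practice it is a property of how each element is implemented, not a simple set check.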
Tilbury said: “Agent‑initiated payments challenge traditional assumptions about the identity that needs to be authenticated – the consumer’s or the agent’s delegated authority – and how consent is captured. The strong customer authentication rules require that the payer is made aware of the payment amount and the payee, and satisfying the different elements often requires a human to be present – such as, for example, where there is verification via biometrics.”
According to Tilbury, there are ways in which the strong customer authentication requirements can be met in the context of agentic AI use under the current rules.
He said: “For example, credentials denoting an initial consumer authentication can be tokenised for an AI agent to use to execute future payments. Alternatively, a consumer could complete strong customer authentication processes on their device at the point of being onboarded to services, which could then serve to authorise an AI agent to act on their behalf thereafter. The ‘trusted beneficiaries’ exemption may also be relied upon, which allows customers to ‘whitelist’ specific merchants, e.g. preferred retailers, so that any future payments to those merchants do not require further authentication.”
“In those scenarios, the AI agent would use the authentication already provided by a human, so that it becomes the party that decides what to purchase, initiates transactions, and triggers the payment flow,” Tilbury said.
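The delegation patterns Tilbury outlines – a one-off consumer authentication tokenised for later agent use, and a ‘trusted beneficiaries’ whitelist – could be sketched as follows. Every class and method name here is an illustrative assumption, not any real payment provider’s API:

```python
import secrets

class DelegatedPaymentService:
    """Hypothetical PSP-side service handling agent-initiated payments."""

    def __init__(self):
        self._tokens = {}   # delegated-authority token -> consumer id
        self._trusted = {}  # consumer id -> set of whitelisted merchants

    def onboard(self, consumer_id: str) -> str:
        # Assumes the consumer has just completed SCA on their own device;
        # the resulting credential is tokenised for the AI agent's later use.
        token = secrets.token_hex(16)
        self._tokens[token] = consumer_id
        return token

    def whitelist(self, consumer_id: str, merchant: str) -> None:
        # 'Trusted beneficiaries': merchants the consumer has pre-approved.
        self._trusted.setdefault(consumer_id, set()).add(merchant)

    def agent_payment(self, token: str, merchant: str, amount: float) -> str:
        consumer = self._tokens.get(token)
        if consumer is None:
            # No valid delegated credential: a fresh human SCA would be needed.
            return "declined: fresh SCA required"
        if merchant in self._trusted.get(consumer, set()):
            return "approved: trusted beneficiary exemption"
        return "approved: delegated SCA token"

svc = DelegatedPaymentService()
token = svc.onboard("consumer-1")
svc.whitelist("consumer-1", "preferred-retailer")
print(svc.agent_payment(token, "preferred-retailer", 25.00))
print(svc.agent_payment(token, "new-merchant", 25.00))
```

In a real deployment the token would carry scope limits (amounts, merchants, expiry) and the consumer would still need to be made aware of amount and payee, per the rules Tilbury describes; the sketch only shows where the human authentication sits relative to the agent’s later actions.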
The user experience needs to be factored into any reconfiguring of payment processes as well as regulatory compliance, according to Tilbury.
“In an agentic AI context, the aim would be to minimise human input,” Tilbury said. “This means reducing friction – if an AI agent seeks user confirmation for every small action, agentic commerce essentially becomes the same as traditional manual checkout. It also means adopting payment protocols that can accurately interpret the AI agent’s actions as the user’s intent, so that fewer user approvals are required, and developing shared protocols across merchants so that AI agents can transact seamlessly with any of them, without each merchant having to build bespoke integrations. On top of that, security is paramount – authentication, credential management, tokenisation, and fraud detection must operate behind the scenes so that human input is not required every time.”
“Retailers and financial services providers will need to design flows where agents can trigger payments without degrading conversion – the user experience risk is real: poorly designed consent journeys will lead to failed payments, abandoned carts, and potential regulatory exposure,” he said.