Out-Law / Your Daily Need-To-Know

OUT-LAW ANALYSIS

Retailers need to respond to the rise of agentic AI in commerce



The fact that consumers can now use agentic AI to make online purchases on their behalf directly from within the AI chat interface should spur retailers to reconsider how they sell goods online.

Whilst retailers will be familiar with the concept of 'customer not present' transactions, applying that concept to transactions initiated by agentic AI systems on a customer's behalf stretches it beyond its intended purpose. That, together with the way agentic AI works in practice, will challenge retail industry norms around retailers' systems, processes and practices, raising legal and contractual risks that require retailers to act.

The growing trend of agentic AI in retail

Agentic AI is a term describing systems set up to act autonomously based on dynamic reasoning, with little or no human input. For some, it may seem futuristic, but it is already shaping how people shop. Consumers can build their own agentic AI tools – or use those provided by retailers – and instruct those systems to continuously browse products from thousands of online stores, monitor and be notified of price changes in real time, select products to review, and even complete purchases of goods on their behalf. The consumer sets the objective – such as buying a specific product at or below a specified price – while the AI agent autonomously determines how that objective is achieved, typically using large language models to drive decision‑making.

A study by consultancy Bain found that while around half of US consumers said they are not ready to let an agentic AI tool handle an end-to-end transaction on their behalf without their involvement, between 30% and 45% of US consumers already use generative AI tools to research and compare products. Some analysts believe this serves as a catalyst for significant growth in agentic AI shopping over the next few years. Moreover, consumers currently place three times more trust in retailers' on‑site agents than in third‑party agents, although that lead may narrow as more consumers experiment with third‑party agents.

AI agent-led shopping could account for a material share of online retail. Morgan Stanley, for example, predicts that between 10% and 20% of all US e-commerce spend could be attributable to agentic shoppers by 2030. An estimated 23% of US consumers reported making an AI‑assisted purchase in recent months, signalling growing commercial relevance of agent‑driven shopping.

Deloitte has cited analysts that believe the impact could be even greater – that 25% of global e-commerce sales will be enabled by AI agents by 2030. Retailers whose platforms are not set up to integrate effectively with AI agents risk losing competitive advantage and visibility within the market.

Some early adopters are already seeing its impact. In announcing financial results recently, Amazon reported that, on part-year figures, its agentic AI shopping assistant 'Rufus', which offers a 'Buy For Me' function, contributed nearly $12 billion in annualised sales in 2025. It said that Rufus has been used by more than 300 million customers.

Issues retailers should consider

In the context of this fast-growing market, UK authorities have been keen to position themselves at the forefront of debate and discussion on agentic AI commerce.

The Digital Regulation Cooperation Forum, which brings together the UK's Competition and Markets Authority (CMA), the Financial Conduct Authority, the Information Commissioner's Office (ICO) and Ofcom, published a report on the future of agentic AI at the end of last month with a view to enabling innovation but in a compliant way. It follows research on the topic published by the CMA earlier in March and a report published in January by the ICO, which Pinsent Masons colleagues highlighted at the time.

In an earlier article, I raised some of the key considerations from a regulatory perspective relating to agentic commerce. However, agentic AI commerce also raises a range of issues retailers need to consider:

Contractual authority and the notion of agency

Under English law, AI systems do not have legal personality and cannot themselves be parties to contracts; contracts for sale concluded by AI agents will therefore need to rely on existing principles of agency, under which a contract is only binding where the agent acts within actual or apparent authority. Retailers will need to consider how they verify the scope of authority granted to an AI agent in order to ensure contracts for sale are enforceable.

As agent ecosystems involve complex chains of delegation, establishing authority after the fact may be difficult. Retailers should update their standard terms of purchase to expressly address AI agent-executed transactions, including provisions on authority, ratification and liability. Granular evidence may need to be provided of the AI agent's decision-making processes throughout the transaction to defend any potential disputes.

Furthermore, a payment is only authorised where the payer has given informed consent to the transaction, in the form agreed with their payment service provider. It should also be clear who bears liability where an AI agent exceeds its mandate or misinterprets instructions.

Consumer protection

UK consumer protection legislation requires retailers to communicate – and ensure consumers understand – key information, such as cancellation rights and returns policies, at the point of purchase. Those obligations apply in the same way whether the consumer interacts with a human or an AI agent. Where a transaction is executed by an AI agent, there is a risk that this information is delivered to the AI agent rather than the human consumer. Simply providing the required information to the AI agent is unlikely to be sufficient, as consumer law requires that the information is 'understood' by the consumer at the point of contract formation.

Regulators may take the view that compliance requires the consumer's actual awareness, not merely the agent's receipt. Retailers should therefore rethink their disclosure workflows accordingly, for example mandating agents feed back confirmation from the end consumer that terms have been understood.
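A disclosure workflow of the kind described above can be sketched in code. The following is a minimal, hypothetical illustration – the class and disclosure names are my own, not from any retailer's system – of the idea that contract formation is gated on confirmations relayed from the human consumer rather than on delivery of information to the agent:

```python
from dataclasses import dataclass, field

@dataclass
class CheckoutSession:
    """Tracks which mandatory disclosures the human consumer has confirmed.

    Hypothetical sketch: the disclosure names below are illustrative
    placeholders, not a statutory list.
    """
    required: set = field(default_factory=lambda: {"cancellation_rights",
                                                   "returns_policy"})
    confirmed: set = field(default_factory=set)

    def record_consumer_confirmation(self, disclosure: str) -> None:
        # Only confirmations relayed from the human consumer are recorded;
        # the AI agent merely transports them back to the retailer.
        self.confirmed.add(disclosure)

    def may_form_contract(self) -> bool:
        # The contract can only be concluded once every required
        # disclosure has been confirmed by the consumer.
        return self.required <= self.confirmed
```

On this design, an agent-initiated checkout stalls until the agent has surfaced each disclosure to its principal and returned the consumer's confirmation, giving the retailer a record to evidence 'understanding' at the point of contract formation.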

Currently, retailers retain overall responsibility for compliance with UK consumer protection law, even where issues are caused by the AI agent – for example, failing to identify additional fees or miscalculating return deadlines.

Fraud, chargebacks and card scheme requirements

Card scheme rules are built around the assumption of direct cardholder involvement, and agent-initiated transactions do not sit comfortably within existing authorisation frameworks. This leaves retailers exposed to elevated chargeback rates, which can in turn have consequences under retailers' arrangements with payment service providers and card schemes. There is also a risk of consumers exploiting the agent framework to reverse purchases they later regret.

Retailers should engage their acquirers and payment service providers proactively to understand how existing chargeback rules apply and what evidence of authorisation they should be retaining, including whether they will need additional evidence of delegated intent and AI agent identity for potential disputes. Industry initiatives are already emerging to support this, including schemes focused on verifying AI agent legitimacy and recording cardholder intent, which may affect retailer dispute handling and the type of evidence required.

Agentic AI payments may accelerate and scale familiar retail fraud patterns, including returns and refund abuse: agents can automate the workflow of purchasing and lodging complaints, increasing the volume and speed of disputes faced by retailers.

Retailers should also anticipate that traditional anti-fraud systems may flag legitimate agent activity as bot behaviour, increasing false positives and lost sales unless fraud strategies are adapted. Large-scale agent-driven dispute activity can also result in retailers being flagged or blacklisted by payment providers due to unusually high or suspicious chargeback volumes, even where behaviour is driven by automated abuse rather than merchant misconduct.

Age verification for restricted goods

Retailers remain responsible for preventing underage sales online and are required to take all reasonable precautions and have effective systems capable of verifying a purchaser's age. AI agents cannot physically verify the age or identity of the underlying purchaser, and there is a risk that AI agents could inadvertently circumvent verification steps if those steps are not designed to interact meaningfully with agentic systems. Retailers should audit their age verification processes to ensure they are robust against agentic interactions and tie verification to the human consumer's account or identity, rather than the agent session alone.

Retailers should not rely solely on self-certification by the AI agent and should factor in emerging evasion techniques, such as circumvention with a VPN, when assessing whether current controls are robust.
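The principle that agent self-certification should carry no weight can be made concrete. Below is a deliberately simple, hypothetical sketch (the types and field names are assumptions for illustration) in which a restricted sale turns only on whether the human account behind the agent has completed the retailer's own age check:

```python
from dataclasses import dataclass

@dataclass
class CustomerAccount:
    """The human account on whose behalf an AI agent transacts."""
    account_id: str
    age_verified: bool  # set only after a retailer-run verification check

def may_sell_restricted_item(account: CustomerAccount,
                             agent_claims_adult: bool) -> bool:
    """Permit a restricted sale only where the *account* holds a completed
    age check. The agent's own self-certification is deliberately ignored."""
    del agent_claims_adult  # never treated as evidence of age
    return account.age_verified
```

Tying the decision to the verified account rather than the agent session means an agent cannot talk its way past the check, whatever it asserts about its principal.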

Data protection and compliance

Agentic transactions will introduce new data flows involving consumer data being accessed and transmitted by third-party AI systems, requiring retailers to review their records of processing, data sharing agreements and privacy notices. Questions will also arise as to whether the AI agent operator is acting as a processor, joint controller or independent controller, each of which carries different compliance obligations. The ICO has highlighted that determining controller/processor responsibilities can be challenging in agentic AI supply chains. Retailers must ensure that the legal basis for processing remains valid where data is handled by an automated agent rather than the consumer directly.

Product and inventory visibility

For retailers to benefit from the advantages offered by agentic commerce, their products must be accessible and intelligible to the AI systems conducting purchase searches on behalf of consumers, raising questions about compatibility with existing product data feeds and structured data formats. Retailers that are not discoverable by AI agents risk being excluded from a growing share of purchasing decisions. Investment in product data quality and agent-friendly catalogue formats may become a competitive necessity. Some retailers are also exploring interoperability approaches that make inventory and data accessible within third‑party AI environments, supported by emerging standards intended to help AI agents discover and act on retailer inventory across platforms.

For example, Carrefour integrated grocery shopping into a ChatGPT interface. This enables AI agents, via ChatGPT, to discover its products, check real‑time availability and build shopping baskets using its live product catalogue, with completed baskets then passed to Carrefour's existing website for payment and delivery.
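One widely used 'agent-friendly' catalogue format is schema.org structured data, which search crawlers already parse and which AI agents can consume in the same way. As a minimal sketch – the function and its fields are illustrative, not any particular retailer's feed schema – a catalogue record might be rendered as schema.org/Product JSON-LD like this:

```python
import json

def product_jsonld(name: str, sku: str, price: float,
                   currency: str, in_stock: bool) -> str:
    """Render one catalogue record as schema.org/Product JSON-LD,
    a structured format that crawlers and AI agents can parse reliably.

    Illustrative sketch only: a real feed would carry many more
    properties (brand, GTIN, images, shipping details, etc.).
    """
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": ("https://schema.org/InStock" if in_stock
                             else "https://schema.org/OutOfStock"),
        },
    }, indent=2)
```

Publishing availability and price in a machine-readable form like this is what makes real-time checks of the kind Carrefour exposes feasible for third-party agents.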

Price scraping and stock availability

Large numbers of AI agents conducting automated product research could place significant strain on retailer infrastructure and may constitute price scraping at scale. Beyond infrastructure, the ability of AI agents to rapidly reserve or purchase stock could distort availability and disadvantage human shoppers, particularly if AI agent activity drives bursts of demand at scale rather than normal human purchasing patterns. Retailers should review their website terms to restrict unauthorised scraping and consider rate-limiting or access controls where necessary, including bot management measures that distinguish permitted AI agent activity from abusive automation.
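Rate-limiting of the kind suggested above is commonly implemented with a token bucket per client. The sketch below is a generic illustration, not a recommendation of any particular product or threshold; a retailer could assign declared, well-behaved agents a higher rate than anonymous automation:

```python
import time

class TokenBucket:
    """Per-client token bucket: `rate` requests refill per second,
    with bursts allowed up to `capacity`.

    Illustrative sketch: production bot management would combine this
    with client identification, allow-lists and behavioural signals.
    """
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full: allow an initial burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1            # spend one token on this request
            return True
        return False                    # over the limit: reject or queue
```

Keeping one bucket per API key or agent identity lets a retailer throttle scraping bursts while leaving permitted agent traffic, and ordinary human browsing, unaffected.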

In addition, retailers should factor in agent security risks such as prompt injection – malicious instructions embedded in content that can manipulate an AI agent's behaviour – and ensure controls exist to detect and respond to abusive or unsafe AI agent interactions.

Website and e-commerce terms

Many existing website terms were not drafted with agentic transactions in mind and may contain provisions – such as restrictions on automated access – that unintentionally prohibit or invalidate AI agent-initiated purchases. Retailers should review their terms to assess whether they need to be updated to expressly permit, govern and set conditions for agentic transactions, including provisions addressing liability where an agent acts outside its authority.

Other things to think about

Regulators are beginning to engage with how existing frameworks apply to AI-driven financial transactions. Retailers should monitor developments closely, as guidance or rule changes could affect transaction flows, liability allocation and disclosure obligations relatively quickly.

In addition, as standard insurance policies and supplier contracts may not contemplate liability arising from AI agent errors – such as duplicate orders or out-of-hours transactions without human oversight – retailers should review insurance arrangements and consider how liability is allocated between themselves, the consumer, the agent operator and the payment provider.

Another issue to follow is the emergence of new standards. Standardised frameworks for agent identity and delegated authorisation are likely to emerge through industry bodies or regulatory intervention. Retailers should therefore engage with payment industry working groups and monitor emerging technical standards that may shape how agent identity and authority must be verified at the point of transaction.

Further, not all consumers will have equal access to AI agent technology, and over-reliance on agentic channels could disadvantage certain consumer groups, attracting scrutiny under consumer duty and fairness frameworks. Retailers should ensure that non-agentic purchasing routes remain fully functional and accessible.

An actions list for retailers

The emergence of agentic AI presents an opportunity for retailers that can adapt to this new way of shopping, with the potential for additional sales driven by increased, scalable AI‑led access to retailers' websites. Those that fail to adapt risk their margins being squeezed as shopping habits change, as well as potential non-compliance with legal, regulatory and contractual obligations. There is therefore an imperative on retailers to act.

Actions retailers should consider include:

  • Reviewing and updating contract terms to expressly address agent-initiated transactions, including provisions on authority, liability and consumer rights compliance;
  • Engaging payment service providers and acquirers to understand how card scheme rules apply to agentic transactions and what records of authorisation should be maintained;
  • Undertaking an audit of data flows to ensure processing of consumer data by or through AI agents is captured in data protection frameworks and that appropriate agreements are in place;
  • Reviewing age verification processes to ensure they cannot be circumvented by agentic systems and that compliance obligations for restricted goods remain met;
  • Assessing human‑centric checkout controls, such as CAPTCHAs and 3D Secure, which may introduce friction or prevent AI agent‑led purchasing unless processes are adapted;
  • Assessing product visibility and technical infrastructure to ensure products are discoverable by AI agents and that systems are protected against unauthorised scraping or rapid stock depletion;
  • Monitoring regulatory developments, including FCA guidance, CMA activity and emerging standards on agent identity and delegated authorisation;
  • Reviewing insurance and liability provisions across the supply and payments chain to ensure adequate cover for agentic transaction risks;
  • Implementing consumer-focused governance for integrated AI agents, including clear transparency where appropriate, testing, and ongoing monitoring, including of complaints; and
  • Preparing policies for fraud, disputes and returns operations for higher-velocity AI agent-driven activity, including evidencing delegated consent and adapting controls to reduce false positives from legitimate AI agent traffic.