AI shopping agents are buying for you - can UK consumer law keep up?

AI shopping agents are hitting the UK, and liability gets messy when bots buy. DMCC and CCRs will bite: clear info, no tricks, fair terms - or expect action.

Categorized in: AI News, Legal
Published on: Mar 04, 2026

AI shopping agents: How will UK consumer law apply?

England and Wales - 03/03/2026

AI shopping agents are moving from demo to distribution. Walmart is already letting US customers browse and buy within ChatGPT and Gemini. Amazon blocks that model, but it offers "Buy for Me" to make purchases from third-party sites without leaving Amazon. UK consumers will meet similar features soon - and with OpenClaw's rise, fully autonomous, personal agents that transact end-to-end are edging closer.

That creates a legal gap. There's no UK statute aimed squarely at AI, and regulators haven't issued guidance on AI shopping agents. But the existing consumer regime is principles-based and broad. It will bite. The question is how.

Where AI fits in the customer journey

Assist

Answer engines - ChatGPT, Gemini and others - gather the user's requirements, search, compare, and recommend. The human still places the order, even if it happens inside the same interface.

Execute

Agents complete the transaction for the user. That capability has been tightened in the UK after a brief window in 2025, but execution looks inevitable as agent ecosystems mature. A personal agent could confer with other agents, select a product, and pay - all without human review of the final choice. That distinction matters for risk and responsibility.

Which UK consumer laws apply?

Unfair commercial practices - DMCC Act 2024

The Digital Markets, Competition and Consumers Act 2024 (DMCC Act) outlaws unfair commercial practices. "Commercial practice" is drafted widely enough to capture how AI agents are designed, trained, prompted, and deployed in the course of promoting or supplying products to consumers. In short: agent providers are in scope.

  • Missing required info when promoting a product. If an agent surfaces "CoffeeMaster 5000 for £200 - want me to buy it?" it should also provide key details: main characteristics, total price, seller identity, and any withdrawal/cancellation right. Omitting these can breach the DMCC Act even without proven consumer harm.
  • Misleading acts or omissions. Hallucinated features, scraped data read wrongly, or misinterpreted user instructions that push a purchase the average consumer wouldn't have made can be unlawful - especially if sponsored placement or paid ranking isn't disclosed clearly.
  • Dark patterns in agent flows. Fake countdowns, false scarcity, nudges that obscure cheaper options, or behind-the-scenes tactics that bias an executing agent toward higher commissions can cross the line. These practices erode trust and invite enforcement.

Expect scrutiny of agent UX, recommendation logic, and any monetisation that could distort outcomes.


Pre-contract information - Consumer Contracts Regulations 2013

The Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 (CCRs) require clear, prominent pre-contract information for distance sales, and an explicit acknowledgement that placing the order creates an obligation to pay (e.g., "Buy now"). These duties fall on the trader of record - usually the retailer or marketplace - not the agent provider.

But design choices by agent providers can still create exposure under unfair commercial practices if they prevent traders from complying. If an agent assists a purchase, safe practice is to surface the trader's information clearly, confirm the consumer's intent, and use an order confirmation that signals payment unambiguously.


The hard question: executing agents

Do the CCRs fit when a consumer authorises rules, not a final SKU? Example: "Buy a blue men's XL t-shirt from any store for £30-£40. Deliver home. Use my Amex." With prepaid cards and agent wallets becoming common, frictionless execution is realistic - especially for routine or low-value items.

The current framework assumes a human reviews the last step. If mainstream UK usage shifts to high-level instructions without final approval, what then? One view: if the agent acts as the consumer's personal representative, regulators may accept that the agent - and not the human - receives the mandated information pre-purchase, provided the retailer supplies it to the agent. Another view: consumers must still be put in possession of core information before committing. This is untested territory.

When purchases go wrong

The practical risk is simple: the agent buys something the consumer wouldn't have chosen if the information had been correct and complete. Returns will often soften the blow, but they don't erase legal exposure for misleading practices or information failures. Providers could face enforcement and consumer claims.

Contract terms and liability

Agent-provider terms must pass the fairness test under the Consumer Rights Act 2015. Blanket exclusions of liability to consumers are likely unfair and unenforceable. Limitations must be fair and transparent, and they won't save a provider from liability for unlawful practices.

What regulators might do next

Expect a mix of guidance, case-by-case enforcement, and - for the biggest players - potential conduct requirements if designated with strategic market status under the DMCC Act. Clear rules would help, but until then, providers should build to existing consumer standards.

Practical playbook for legal and product teams

For assist-mode agents

  • Surface key product details with price, seller identity, and cancellation/withdrawal rights alongside any "buy" prompt.
  • Disclose sponsored placements, affiliate links, and ranking factors in plain language.
  • Avoid pressure tactics: no fake timers, faux scarcity, or manipulative nudges.
  • Present the pay obligation clearly on any confirmation action (e.g., "Buy now" or "Pay now").
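The assist-mode checklist above can be sketched as a simple pre-purchase disclosure gate: the agent refuses to offer a purchase until the key details (main characteristics, total price, seller identity, cancellation rights) are available to show the consumer. All class and field names here are illustrative assumptions, not drawn from any statute or real API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of the pre-contract details an assist-mode agent
# should surface alongside any "buy" prompt. Field names are assumptions
# for the sketch only.
@dataclass
class Offer:
    name: str
    main_characteristics: str = ""
    total_price_gbp: Optional[float] = None  # total price incl. taxes/delivery
    seller_identity: str = ""
    cancellation_info: str = ""              # withdrawal/cancellation right

REQUIRED = ["main_characteristics", "total_price_gbp",
            "seller_identity", "cancellation_info"]

def missing_disclosures(offer: Offer) -> list:
    """Return the required details the agent has not yet surfaced."""
    return [f for f in REQUIRED if not getattr(offer, f)]

def buy_prompt(offer: Offer) -> str:
    """Only offer a purchase once all key details can be shown."""
    gaps = missing_disclosures(offer)
    if gaps:
        return "Cannot offer purchase yet; missing: " + ", ".join(gaps)
    return (f"{offer.name} - {offer.main_characteristics}. "
            f"Total £{offer.total_price_gbp:.2f} from {offer.seller_identity}. "
            f"{offer.cancellation_info} Buy now?")
```

The point of the gate is that omitting required information can itself breach the DMCC Act, so the safe default is to block the "buy" prompt rather than degrade it.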

For execute-mode agents

  • Let users set guardrails (price caps, brands, delivery speed, sustainability preferences) and confirm them periodically.
  • Keep an accessible audit trail of each decision: sources checked, options considered, why a product was selected, and any payments from sellers.
  • Ensure the retailer provides required pre-contract information to the agent, and consider concise post-purchase summaries to the user.
  • Offer easy undo/return flows and escalation paths when the agent misfires.
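The guardrails and audit-trail bullets above could look something like the following sketch, using the t-shirt example from earlier. Every name and threshold is a hypothetical assumption; a real implementation would be far richer:

```python
import json
import time
from dataclasses import dataclass, asdict

# Illustrative user-set guardrails for an execute-mode agent.
@dataclass
class Guardrails:
    max_price_gbp: float
    min_price_gbp: float = 0.0
    allowed_colours: tuple = ()   # empty tuple means "any colour"

# One product option the agent found while shopping.
@dataclass
class Candidate:
    store: str
    description: str
    price_gbp: float
    colour: str

def within_guardrails(c: Candidate, g: Guardrails) -> bool:
    """Check a candidate against the consumer's stated rules."""
    if not (g.min_price_gbp <= c.price_gbp <= g.max_price_gbp):
        return False
    return not g.allowed_colours or c.colour in g.allowed_colours

def audit_entry(c: Candidate, g: Guardrails, chosen: bool, reason: str) -> str:
    """One JSON line per decision: what was considered and why."""
    return json.dumps({
        "ts": time.time(),
        "candidate": asdict(c),
        "guardrails": asdict(g),
        "chosen": chosen,
        "reason": reason,
    })
```

Logging the rejected options alongside the chosen one is what makes the trail useful if a purchase is later challenged as misleading or out of mandate.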

Governance and risk

  • Test for hallucinations, bias toward monetised outcomes, and data-quality gaps; gate any auto-purchase features behind risk thresholds.
  • Adopt clear policies on paid promotion and ranking inputs; log and label them consistently.
  • Map trader-of-record responsibilities and confirm CCR/CRA alignment in partner contracts.
  • Design for vulnerable consumers: throttled spend, extra confirmations for higher-value items, and transparent explanations.
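Gating auto-purchase behind risk thresholds, as suggested above, can be as simple as a rule that escalates higher-value orders, or orders for users flagged as potentially vulnerable, to an explicit human confirmation. The thresholds and flag below are invented for illustration:

```python
# Hypothetical risk gate: silent execution is only allowed for low-value,
# low-risk purchases; everything else needs explicit confirmation.
AUTO_PURCHASE_CAP_GBP = 50.0   # assumed per-item ceiling for auto-buy

def requires_confirmation(price_gbp: float,
                          vulnerable_user: bool,
                          daily_spend_gbp: float,
                          daily_cap_gbp: float = 200.0) -> bool:
    """Return True when the agent should pause for human approval."""
    if vulnerable_user:
        return True                       # extra friction by design
    if price_gbp > AUTO_PURCHASE_CAP_GBP:
        return True                       # higher-value item
    return daily_spend_gbp + price_gbp > daily_cap_gbp  # throttled spend
```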

Bottom line

AI shopping agents can deliver speed and convenience. But UK consumer law already sets the standards: clear information, no deception, no manipulation, fair terms. Build to those now and you'll reduce enforcement risk while earning consumer trust - the asset that decides who wins as agent-first commerce takes hold.


