From Audit to Payments, Agentic AI Rolls Out Faster Than the Governance Around It

As agentic AI moves into payments, audit and customer-facing systems, questions remain over oversight, intent and recovery.


Agentic AI has moved quickly from buzzword to commercial forecast. Gartner expects most brands to use it for one‑to‑one interactions by 2028, and Forrester has placed agentic commerce among its top emerging technologies for 2026. What was, until recently, mostly a talking point is now showing up in audit platforms, payments infrastructure, enterprise software rollouts and the governance teams being built to oversee them.

One branch of that shift is agentic commerce: software completing routine steps on a customer’s behalf once it has permission, whether that means comparing options, filling a basket, renewing a service or triggering a payment.
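The model described above — an agent acting only within an explicit, limited grant of permission — can be sketched in a few lines. The names here (`Mandate`, `Agent`) are illustrative assumptions for this article, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """Explicit, limited permission granted by the customer."""
    allowed_actions: set
    spend_limit: float
    spent: float = 0.0

    def permits(self, action: str, amount: float = 0.0) -> bool:
        return action in self.allowed_actions and self.spent + amount <= self.spend_limit

class Agent:
    """Acts only inside its mandate and records every decision for review."""
    def __init__(self, mandate: Mandate):
        self.mandate = mandate
        self.log = []  # audit trail: every action, permitted or refused

    def act(self, action: str, amount: float = 0.0) -> bool:
        if not self.mandate.permits(action, amount):
            self.log.append(("refused", action, amount))
            return False
        self.mandate.spent += amount
        self.log.append(("done", action, amount))
        return True

agent = Agent(Mandate(allowed_actions={"compare", "renew", "pay"}, spend_limit=50.0))
agent.act("compare")      # permitted, no spend
agent.act("pay", 30.0)    # permitted, within the limit
agent.act("pay", 30.0)    # refused: would exceed the 50.00 mandate
```

The point of the sketch is the audit trail: a refused action is logged just like a completed one, which is exactly the visibility that, as the research below suggests, many deployments currently lack.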

Speaking at the Innovate Finance Global Summit this week, Jessica Rusu, the FCA’s chief data, information and intelligence officer, said agentic commerce “will change how financial decisions and transactions are made” and argued that AI is already challenging “accountability, trust and control” in financial services. The FCA’s next AI Live Testing cohort will include firms working on agentic payments, indicating that the model is now entering practical supervision.

Companies are rolling these systems out, but the supporting controls are less mature. Recent research suggests many firms still lack clear visibility into what their agents are doing, or reliable ways to reverse actions once they have been taken.

Rubrik Zero Labs, the security research company, says 86% of organisations expect AI agents to outpace their security guardrails within a year, while only 23% report full visibility into the agents operating in their environments. Its data also shows 88% say they cannot roll back agent actions without disruption.

A survey from KTSL, conducted with BMC Helix among 400 senior IT leaders at large enterprises, found that 88% were already deploying AI agents, yet only 20% described those deployments as fully established and delivering measurable impact. Only 27% had a comprehensive formal security policy in place, while 25% said returns had fallen short of expectations.

Gartner adds another warning, forecasting that more than 40% of agentic AI projects will be cancelled by the end of 2027 because of escalating costs, unclear business value or inadequate risk controls.

Rubrik’s chief transformation officer, Kavitha Mariappan, outlined the problem: “AI adoption is outpacing our ability to control it.” That line captures the mood of the market. Firms are not just experimenting with agents; they are trying to place them inside live environments where decisions need to be explained, actions need to be monitored and mistakes need to be unwound.

From pilot to platform

The commercial prize helps explain the pace of the rollout. McKinsey estimates that agentic commerce could orchestrate up to $1 trillion in US B2C retail revenue by 2030, with global projections of $3 trillion to $5 trillion. That is enough to ensure the market does not sit around waiting for every control question to be settled before moving ahead.

Different sectors are taking different steps. EY is embedding agentic tools into its audit platform, including a rollout across the MENA region covering more than 10,000 engagements. Visa has introduced a route for businesses to accept agent-initiated payments through a single integration, opening the door for agents to complete purchases on behalf of consumers.

AWS has launched an agent-driven application for drug discovery that links model selection, experiment design and lab testing in one workflow, while Sage and PwC are using agents to automate parts of ERP implementation in an effort to reduce the manual work that slows deployments. Together, those moves show agentic systems turning up in functions much closer to execution than the first wave of chat-based enterprise AI.

Payments are also raising specific operational questions. Visa’s launch is designed to help merchants, agent builders and enablers connect to AI-driven commerce through a single integration, with tokenisation, authentication and spend controls built in. The logic is straightforward: if AI agents are going to browse, select and buy on a consumer’s behalf, the payment layer has to accommodate them.
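The spend controls that layer describes can be sketched as a gate the agent's payment must pass before it reaches the rails. Field names and checks here are assumptions for illustration, not Visa's actual integration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentToken:
    token_id: str              # network token standing in for the card number
    per_txn_limit: float       # cap on any single agent-initiated payment
    daily_limit: float         # cap on total agent spend per day
    allowed_merchants: frozenset

def authorise(token: AgentToken, merchant: str, amount: float,
              spent_today: float) -> tuple:
    """Apply the token's spend controls before the payment is processed."""
    if merchant not in token.allowed_merchants:
        return (False, "merchant_not_permitted")
    if amount > token.per_txn_limit:
        return (False, "per_transaction_limit_exceeded")
    if spent_today + amount > token.daily_limit:
        return (False, "daily_limit_exceeded")
    return (True, "approved")

token = AgentToken("tok_123", per_txn_limit=100.0, daily_limit=250.0,
                   allowed_merchants=frozenset({"grocer", "utility"}))
authorise(token, "grocer", 40.0, spent_today=0.0)    # (True, 'approved')
authorise(token, "grocer", 150.0, spent_today=0.0)   # refused: per-transaction cap
```

Each refusal returns a machine-readable reason, which matters later: when a transaction is disputed, the record of which control fired, and when, is part of the evidence trail.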

What that still leaves open is what happens afterwards. Monica Eaton, founder and chief executive of dispute resolution and chargeback prevention company Chargebacks911, argues that “Visa is building the front end of agentic commerce but the dispute infrastructure will need to evolve just as quickly.” In other words, the mechanics of enabling agent-led payments are moving faster than the systems for handling intent, liability and recourse once a transaction is challenged.

Other parts of the payments market are starting to frame the same issue in terms of trust architecture. Fime this week launched FACT, a new framework for agentic commerce trust, built around the idea that existing payment and trust infrastructures were not designed to govern decisions made by autonomous agents.

Its proposal is a neutral trust layer between AI systems and payment rails, focused on intent validation, compliance monitoring and transaction-level trust signals. Whether FACT is adopted at scale remains to be seen. Its launch suggests that attention is shifting from enabling agent-led transactions to the systems needed to verify, monitor and audit them.
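Transaction-level intent validation of the kind that proposal describes can be sketched with standard cryptographic primitives: the agent's stated intent is signed, and the trust layer checks that the transaction it sees stays within that intent. The key handling and field names here are assumptions, not part of Fime's published FACT design:

```python
import hashlib
import hmac
import json

def sign_intent(intent: dict, key: bytes) -> str:
    """Produce a tamper-evident signature over the agent's stated intent."""
    payload = json.dumps(intent, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def validate(intent: dict, signature: str, txn: dict, key: bytes) -> bool:
    """Trust-layer check: is this transaction covered by a validly signed intent?"""
    # 1. The stated intent must carry a valid signature (no tampering).
    if not hmac.compare_digest(sign_intent(intent, key), signature):
        return False
    # 2. The transaction must not exceed what the intent authorised.
    return txn["merchant"] == intent["merchant"] and txn["amount"] <= intent["max_amount"]

key = b"shared-secret"  # assumption: a key provisioned between agent and trust layer
intent = {"merchant": "grocer", "max_amount": 25.0}
sig = sign_intent(intent, key)

validate(intent, sig, {"merchant": "grocer", "amount": 19.99}, key)  # within intent
validate(intent, sig, {"merchant": "grocer", "amount": 40.00}, key)  # exceeds intent
```

The design choice worth noting is that the signal is per transaction, not per agent: a compromised or over-reaching agent fails the check on the specific payment, which is the granularity a dispute process would need.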

The fraud conversation is evolving in parallel. Visa’s own Payment Ecosystem Risk and Control team has identified a more than 450% increase in dark web posts mentioning ‘AI Agent’ over a six-month period, suggesting criminal interest is rising alongside commercial enthusiasm. That does not change the commercial interest in agentic commerce, but it does underline that new transaction models are likely to bring familiar fraud risks into new settings.

Building the governance layer

If the payments side of the story exposes questions around intent and disputes, the banking and audit side shows how institutions are trying to prepare for them. Lloyds Banking Group has expanded its Responsible AI function with hires in AI safety, threat intelligence, model assurance and ethical impact assessment.

Group head of AI Rohit Dhawan said “Responsible AI is what makes innovation at scale possible,” a line that captures where some large institutions now see the real work. New functionality may grab the headlines, but the harder operational effort sits in policy, assurance, escalation and fallback.

EY is making a similar point in a different way. Its rollout of agentic tools in Assurance comes with a global training programme for audit and technology-risk teams, updated as regulation and methodology evolve. That matters because audit is not a peripheral enterprise use case. It is one of the systems through which trust is built in corporate reporting. Once agentic AI reaches that part of the stack, questions around evidence trails, professional judgement and accountability stop being abstract.

There are also signs that some markets are already thinking about this as a customer-experience issue, not just a technical one. In KPMG Singapore’s latest customer experience report, agentic AI is presented as both an ‘orchestrator’ working across systems behind the scenes and a ‘participant’ interacting directly with customers. The same report says 53% of respondents cite data security as one of their top three AI concerns, 45% cite incorrect responses and 40% cite the inability to reach human support, a useful reminder that trust in these systems will not be won by automation alone.

Taken together, the latest launches, warnings and regulatory comments suggest that agentic systems are moving into live environments before the surrounding standards and controls are fully settled. Companies are introducing new uses in payments, audit and customer-facing services, while regulators, risk teams and infrastructure providers work through how those systems should be authorised, monitored and reviewed. How quickly that framework comes together may determine how far and how fast agentic systems move into everyday use.