The Gap Between Spending and Value in the Era of AI Agents
The landscape of global artificial intelligence investments is rapidly accelerating, yet KPMG data highlights a concerning trend: the gap between enterprise AI spending and measurable business value is widening. KPMG's first quarterly Global AI Pulse survey reveals that, despite global organizations planning to invest a weighted average of $186 million in AI over the next 12 months, only 11% have reached the stage of deploying and scaling AI agents in ways that produce enterprise-wide business outcomes.
This finding does not imply a failure of AI itself; 64% of respondents state that AI is already delivering meaningful business outcomes. However, the definition of "meaningful" is often ambiguous. The distance between incremental productivity gains and compounding operational efficiency, capable of truly impacting margins, remains substantial for most organizations. This scenario underscores the need for a more strategic and less fragmented approach to AI adoption.
The Architecture of the Performance Gap: Comparing Strategies
KPMG's report distinguishes between "AI leaders" (organizations that are scaling or actively operating agentic AI) and everyone else. The gap in outcomes between these two groups is striking. Among AI leaders, 82% report that AI is already delivering meaningful business value, compared to 62% of their peers. This 20-percentage-point spread, while seemingly modest, compounds quickly when considering what it reflects: not just better tooling, but fundamentally different deployment philosophies.
Organizations in that 11% are deploying agents that coordinate work across functions, route decisions without human intermediation at every step, surface enterprise-wide insights from operational data in near real-time, and flag anomalies before they escalate into incidents. For example, in IT and engineering functions, 75% of AI leaders are using agents to accelerate code development versus 64% of their peers. In operations, where supply-chain orchestration is the primary use case, the split is 64% versus 55%. These are not marginal differences in tool adoption rates; they reflect different levels of process re-architecture.

Most enterprises have deployed AI by layering models onto existing workflows (e.g., a co-pilot here, a summarization tool there) without redesigning the process those tools sit inside. This produces incremental gains. Organizations closing the performance gap have inverted this approach: they are redesigning the process first, then deploying agents to operate within the redesigned structure. The difference in return on AI spend between these two approaches, over a three-to-five-year horizon, is likely to be the defining competitive variable in several industries.
Hidden Costs and Governance as an Enabler
KPMG's investment figures warrant scrutiny. A weighted global average of $186 million per organization sounds substantial, but regional variance tells a more interesting story. ASPAC leads at $245 million, the Americas at $178 million, and EMEA at $157 million. These figures represent planned spend across model licensing, compute infrastructure, professional services, integration, and the governance and risk management apparatus needed to operate AI responsibly at scale.
The question is not whether $186 million is too much or too little; it is what proportion of that figure is being allocated to the operational infrastructure required to derive value from the models themselves. Survey data suggests that most organizations are still underweighting this latter category. Compute and licensing costs are visible and relatively easy to budget for. The friction costs (the engineering hours spent integrating AI outputs with legacy ERP systems, the latency introduced by Retrieval-Augmented Generation (RAG) pipelines built on top of poorly structured data, and the compliance overhead of maintaining audit trails for AI-assisted decisions in regulated industries) tend to surface late in deployment cycles and often exceed initial estimates. Vector database integration, with providers like Pinecone, Weaviate, or Qdrant, is a useful example: building and maintaining this infrastructure adds meaningful engineering complexity and ongoing operational cost that rarely appears in initial AI investment proposals. For those evaluating on-premise deployments, these integration and local infrastructure management costs are critical components of the Total Cost of Ownership (TCO) and require careful planning.
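The split between visible budget lines and late-surfacing friction costs can be made explicit in a TCO model. The sketch below is purely illustrative; every cost category and figure is invented for the example and does not come from the KPMG survey.

```python
# Hypothetical illustration: a simple annual TCO model separating the
# "visible" AI budget lines (licensing, compute) from the friction costs
# that tend to surface late (integration, vector-DB operations, compliance).
# All figures are invented for illustration.

def annual_tco(visible: dict, friction: dict) -> dict:
    """Sum visible and friction cost categories (USD/year) and report
    what share of the total the friction costs represent."""
    visible_total = sum(visible.values())
    friction_total = sum(friction.values())
    total = visible_total + friction_total
    return {
        "visible_total": visible_total,
        "friction_total": friction_total,
        "total": total,
        "friction_share": friction_total / total,
    }

# Example: a mid-sized deployment (all numbers hypothetical)
visible = {
    "model_licensing": 1_200_000,
    "compute": 800_000,
}
friction = {
    "legacy_erp_integration": 600_000,   # engineering hours
    "vector_db_operations": 250_000,     # e.g. a managed vector store
    "compliance_audit_trails": 350_000,  # regulated-industry overhead
}

print(annual_tco(visible, friction))
```

In this invented scenario, friction costs account for more than a third of total spend, which is precisely the share that rarely appears in the initial investment proposal.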
Another crucial finding is the relationship between AI maturity and risk confidence. Among organizations still in the experimentation phase, just 20% feel confident in their ability to manage AI-related risks. Among AI leaders, that figure rises to 49%. Governance frameworks do not slow AI adoption among mature organizations; they enable it. The confidence to move faster (to deploy agents into higher-stakes workflows, to expand agentic coordination across functions) correlates directly with the maturity of the governance infrastructure surrounding those agents. This means that organizations treating governance as a retrospective compliance layer are doubly disadvantaged: they are slower to deploy and more exposed to operational risk. Organizations that have embedded governance into the deployment pipeline itself (e.g., model cards, automated output monitoring, explainability tooling, and human-in-the-loop escalation paths for low-confidence decisions) are the ones operating with the confidence that allows them to scale.
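A human-in-the-loop escalation path of the kind described above can be a small, explicit component of the deployment pipeline. The sketch below is a minimal illustration under assumed names and thresholds; it is not a specific vendor API, and the 0.85 confidence cutoff is an arbitrary example value.

```python
# Hypothetical sketch of a governance gate embedded in a deployment pipeline:
# agent decisions below a confidence threshold are routed to a human queue
# instead of being auto-executed, and every decision is logged for audit.
# Class names, the threshold, and the actions are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AgentDecision:
    action: str
    confidence: float  # model-reported confidence in [0, 1]

@dataclass
class GovernanceGate:
    threshold: float = 0.85
    audit_log: list = field(default_factory=list)
    escalation_queue: list = field(default_factory=list)

    def route(self, decision: AgentDecision) -> str:
        """Auto-execute high-confidence decisions; escalate the rest."""
        outcome = ("auto_execute" if decision.confidence >= self.threshold
                   else "escalate")
        if outcome == "escalate":
            self.escalation_queue.append(decision)
        # Audit trail: every decision is recorded regardless of outcome.
        self.audit_log.append((decision.action, decision.confidence, outcome))
        return outcome

gate = GovernanceGate(threshold=0.85)
print(gate.route(AgentDecision("approve_invoice", 0.97)))    # auto_execute
print(gate.route(AgentDecision("reorder_inventory", 0.62)))  # escalate
```

The design point is that the audit log captures auto-executed and escalated decisions alike, which is what makes AI-assisted decisions defensible in regulated contexts.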
Regional Divergence and Future Outlook
For multinationals managing AI programs across regions, KPMG data flags material differences in deployment velocity and organizational posture that will affect global rollout planning. ASPAC is advancing most aggressively on agent scaling; 49% of organizations there are scaling AI agents, compared with 46% in the Americas and 42% in EMEA. ASPAC also leads on the more complex capability of orchestrating multi-agent systems, at 33%.
The barrier profiles also differ in ways that carry real operational implications. In both ASPAC and EMEA, 24% of organizations cite a lack of leadership trust and buy-in as a primary barrier to AI agent deployment. In the Americas, that figure drops to 17%. Agentic systems, by definition, make or initiate decisions without per-instance human approval. In organizational cultures where decision accountability is tightly concentrated at the senior level, this can generate institutional resistance that no amount of technical capability resolves. The fix is governance design: specifically, defining in advance what categories of decision an agent is authorized to make autonomously, what triggers escalation, and who carries accountability for agent-initiated outcomes. For those evaluating on-premise deployments, the ability to customize and control these governance aspects is a significant advantage.
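The governance design described above, defining in advance what an agent may decide autonomously, what triggers escalation, and who is accountable, can be expressed as an explicit authorization matrix. The following is a minimal sketch; the decision categories, value limits, and owners are all invented for illustration.

```python
# Hypothetical sketch of an agent authorization matrix: which decision
# categories an agent may take autonomously, the value threshold that
# triggers escalation, and the named accountable owner for outcomes.
# All categories, limits, and owners are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionPolicy:
    autonomous: bool        # may the agent act without per-instance approval?
    value_limit: float      # escalation trigger: values above this escalate
    accountable_owner: str  # who carries accountability for agent outcomes

# category -> policy (all entries invented for the example)
AUTHORIZATION_MATRIX = {
    "routine_supply_reorder": DecisionPolicy(True, 50_000, "VP Operations"),
    "customer_refund":        DecisionPolicy(True, 500, "Head of Support"),
    "credit_limit_change":    DecisionPolicy(False, 0, "CFO"),
}

def route_decision(category: str, value: float) -> str:
    """Return how a proposed agent decision should be handled."""
    policy = AUTHORIZATION_MATRIX.get(category)
    if policy is None or not policy.autonomous:
        return "human_approval_required"
    return "autonomous" if value <= policy.value_limit else "escalate"

print(route_decision("routine_supply_reorder", 12_000))  # autonomous
print(route_decision("customer_refund", 900))            # escalate
print(route_decision("credit_limit_change", 1))          # human_approval_required
```

Writing the matrix down in advance is what converts "leadership trust" from a cultural variable into an auditable artifact: resistance can then attach to specific rows rather than to agentic AI as a whole.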
One data point in the KPMG survey that deserves particular attention from CFOs and boards: 74% of respondents say AI will remain a top investment priority even in the event of a recession. This is either a sign of genuine conviction about AI's role in cost structure and competitive positioning, or it reflects a collective commitment that has not yet been tested against actual budget pressure. Probably both, in different proportions across different organizations. What it does indicate is that the window for organizations still in the experimentation phase is not indefinite. If the 11% of AI leaders continue to compound their advantage, the question for the remaining 89% is not whether to accelerate AI deployment, but how to do so without compounding the integration debt and governance deficits that are already constraining their returns. AI-RADAR offers analytical frameworks on /llm-onpremise to evaluate trade-offs between different deployment strategies, including TCO and data sovereignty aspects.