Databricks reports a shift in enterprise AI adoption toward "agentic" systems: workflows in which models plan and execute tasks autonomously rather than simply answering questions.
The first wave of generative AI promised significant transformations, but it was often limited to isolated chatbots and stalled pilot programs. Databricks' telemetry data, however, indicates a change of course.
Agent systems: a new era for AI
Data from over 20,000 organizations, including 60% of Fortune 500 companies, highlights a rapid shift to "agentic" architectures, where models not only retrieve information, but also plan and execute workflows independently. Between June and October 2025, the use of multi-agent workflows on the Databricks platform increased by 327%, signaling that AI is becoming a fundamental component of the system architecture.
The "Supervisor Agent" is driving this growth. Instead of relying on a single model to handle every request, a supervisor acts as an orchestrator, breaking down complex queries and delegating tasks to specialized sub-agents or tools. Since its launch in July 2025, the Supervisor Agent has become the leading use case, accounting for 37% of usage by October. Technology companies are at the forefront of this adoption, developing almost four times more multi-agent systems than any other sector.
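The orchestration pattern described above can be sketched in a few lines. This is an illustrative toy, not Databricks' implementation: the class names, the keyword-based planner, and the lambda sub-agents are all assumptions standing in for LLM-backed components.

```python
# Minimal sketch of the supervisor/orchestrator pattern: one coordinator
# decomposes a request and delegates sub-tasks to specialized sub-agents.
# All names and the keyword-based planner are illustrative assumptions.
from typing import Callable, Dict, List, Tuple

class SupervisorAgent:
    """Routes sub-tasks to registered sub-agents and collects their results."""

    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, task_type: str, agent: Callable[[str], str]) -> None:
        self._agents[task_type] = agent

    def plan(self, request: str) -> List[Tuple[str, str]]:
        # A real supervisor would use an LLM to decompose the request;
        # a trivial keyword match is used here purely for illustration.
        steps = []
        if "sales" in request:
            steps.append(("sql", "SELECT region, SUM(amount) FROM sales GROUP BY region"))
        if "report" in request:
            steps.append(("summarize", "Summarize the query results for executives"))
        return steps

    def run(self, request: str) -> List[str]:
        return [self._agents[kind](payload) for kind, payload in self.plan(request)]

supervisor = SupervisorAgent()
supervisor.register("sql", lambda q: f"[sql-agent] executed: {q}")
supervisor.register("summarize", lambda p: f"[summary-agent] {p}")

results = supervisor.run("Build a sales report")
```

The key design point is that the supervisor holds no domain logic itself; adding a new capability means registering a new sub-agent, not retraining or re-prompting a monolithic model.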
Traditional infrastructure under pressure
As agents move from answering questions to executing tasks, the underlying data infrastructure faces new demands. Traditional OLTP databases were designed for human-speed interactions with predictable transactions and infrequent schema changes. Agent workflows invert these assumptions. AI agents now generate continuous, high-frequency read and write patterns, creating and tearing down environments programmatically to test code or run scenarios. Two years ago, AI agents created only 0.1% of databases; today, that figure stands at 80%. Furthermore, 97% of database testing and development environments are now built by AI agents.
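The "create, test, tear down" lifecycle that agents impose on databases can be illustrated with a throwaway environment. This is a hypothetical sketch using an in-memory SQLite database as a stand-in for a programmatically provisioned test environment.

```python
# Sketch of an agent-driven ephemeral database environment: provisioned
# programmatically, used for one scenario, then destroyed. SQLite's
# in-memory mode stands in for a real provisioning API (an assumption).
import sqlite3
from contextlib import contextmanager

@contextmanager
def ephemeral_db(schema: str):
    """Provision a throwaway database, yield it, and destroy it afterwards."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)
        yield conn
    finally:
        conn.close()  # teardown: the environment disappears entirely

# An agent spins up an environment, runs a test scenario, and tears it down:
with ephemeral_db("CREATE TABLE events (id INTEGER, payload TEXT);") as db:
    db.execute("INSERT INTO events VALUES (1, 'test-run')")
    count = db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```

At agent speed, this whole cycle can run hundreds of times per hour, which is exactly the access pattern traditional OLTP systems were not designed for.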
The multi-model standard
Vendor lock-in remains a persistent risk for business leaders. Data indicates that organizations are actively mitigating this by adopting multi-model strategies. As of October 2025, 78% of companies used two or more Large Language Model (LLM) families, such as ChatGPT, Claude, Llama and Gemini. The percentage of companies using three or more model families increased from 36% to 59% between August and October 2025. Retail companies are setting the pace, with 83% employing two or more model families to balance performance and costs. The report highlights that 96% of all inference requests are processed in real time.
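One way such a multi-model strategy works in practice is per-request routing across model tiers. The sketch below is an assumption for illustration: the family names, tiers, and complexity thresholds are invented, not taken from the report.

```python
# Illustrative multi-model routing: the model family is chosen per request
# to balance cost and quality, avoiding lock-in to a single provider.
# Family names and thresholds are hypothetical.
ROUTES = {
    "cheap": "family-a-small",     # low-cost model for routine tasks
    "balanced": "family-b-medium",
    "frontier": "family-c-large",  # highest quality, highest cost
}

def pick_model(task_complexity: float) -> str:
    """Map a 0..1 task-complexity score to a model-family tier."""
    if task_complexity < 0.3:
        return ROUTES["cheap"]
    if task_complexity < 0.7:
        return ROUTES["balanced"]
    return ROUTES["frontier"]
```

Because the routing layer owns the provider choice, swapping one family for another is a configuration change rather than an application rewrite, which is precisely the lock-in mitigation the data reflects.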
Governance accelerates enterprise AI deployments
Organizations that use AI governance tools put over 12 times more AI projects into production than those that do not. Similarly, companies that use evaluation tools to systematically test model quality achieve nearly six times more production deployments. Governance provides the necessary protections, such as defining how data is used and setting rate limits, which gives stakeholders the security needed to approve deployment.
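One governance control the report names, rate limiting, can be sketched as a token bucket in front of the model endpoint. This is a minimal illustration, not a specific product's mechanism; capacity and refill rate are placeholder values.

```python
# Minimal token-bucket rate limiter, the kind of guardrail that governance
# tooling puts in front of model endpoints. Parameters are illustrative.
import time

class TokenBucket:
    """Allows bursts up to `capacity` requests, refilling at `rate` per second."""

    def __init__(self, capacity: int, rate: float) -> None:
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=2, rate=0.0)  # refill disabled for the demo
decisions = [bucket.allow() for _ in range(3)]  # third request is rejected
```

Controls like this give stakeholders a verifiable bound on agent behavior, which is why governed teams can approve deployments faster.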
The current value of agentic AI lies in automating routine but necessary tasks. The leading use cases vary by sector but share a focus on solving specific business problems.
For those evaluating on-premise deployments, there are trade-offs to consider; AI-RADAR offers analytical frameworks at /llm-onpremise to evaluate them.