Salesforce emphasizes that scaling AI at the enterprise level requires careful architectural planning that goes beyond simply selecting a model.

The problem of "pristine islands"

Many AI pilot projects fail because they originate in controlled environments that do not reflect the complexity of real data. Franny Hsiao, EMEA Leader of AI Architects at Salesforce, explains that the most common mistake is the lack of a production-grade data infrastructure with end-to-end governance integrated from the start.

Pilot projects often use curated datasets and simplified workflows, ignoring the integration, normalization, and transformation needed to handle the volume and variability of real data. When these projects are scaled without addressing the underlying data issues, the systems break down, producing data gaps and performance problems.

Companies that overcome this challenge integrate observability and guardrails throughout the AI lifecycle, gaining visibility and control over the effectiveness of the systems and user adoption.
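As a minimal sketch of what that looks like in code, the snippet below wraps every agent call in a guardrail check and an in-memory trace. The callModel() function, the policy regex, and the TraceEvent shape are hypothetical stand-ins, not a Salesforce API; the point is simply that every input and output passes through the same observability layer, which is what later feeds effectiveness and adoption metrics.

```typescript
// Minimal sketch: every agent call goes through a guardrail and leaves a trace.
// callModel(), violatesPolicy(), and TraceEvent are illustrative, not a real API.

interface TraceEvent {
  timestamp: string;
  step: string;
  detail: string;
}

const trace: TraceEvent[] = [];

function record(step: string, detail: string): void {
  trace.push({ timestamp: new Date().toISOString(), step, detail });
}

// Hypothetical model call; stands in for whatever LLM backend is used.
async function callModel(prompt: string): Promise<string> {
  return `draft answer for: ${prompt}`;
}

// Guardrail: block text that references data the agent must not expose.
function violatesPolicy(text: string): boolean {
  return /social security|credit card/i.test(text);
}

async function guardedCall(prompt: string): Promise<string> {
  record("input", prompt);
  if (violatesPolicy(prompt)) {
    record("blocked", "input failed guardrail");
    return "Request blocked by policy.";
  }
  const output = await callModel(prompt);
  record("output", output);
  return violatesPolicy(output) ? "Response withheld by policy." : output;
}

// Usage: each call leaves an auditable trail.
guardedCall("Summarize open support cases").then(() => console.table(trace));
```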

Perceived responsiveness and offline intelligence

Salesforce addresses the latency problem, caused by the high computational load of reasoning models, by focusing on "perceived responsiveness" through Agentforce Streaming. This allows AI-generated responses to be delivered progressively, even while the reasoning engine works in the background. Transparency, with progress indicators and animations, helps manage user expectations and build trust.
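A minimal sketch of the streaming idea, assuming a toy async generator in place of the real reasoning engine: partial output reaches the user as soon as it is available, so the wait feels shorter even though the total compute time is unchanged. streamAnswer() and the delays below are illustrative, not the Agentforce Streaming API.

```typescript
// Minimal sketch of "perceived responsiveness": stream partial output while
// a slower reasoning step runs in the background.

function delay(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Simulated reasoning engine that produces the answer in chunks.
async function* streamAnswer(question: string): AsyncGenerator<string> {
  yield "Looking into that";               // immediate acknowledgement
  await delay(300);                        // stands in for retrieval / planning
  yield `: checking records for "${question}"`;
  await delay(600);                        // stands in for the heavy reasoning step
  yield " ... found 3 relevant cases. Drafting a summary now.";
}

async function render(question: string): Promise<void> {
  process.stdout.write("Agent is working");
  for await (const chunk of streamAnswer(question)) {
    // Each chunk reaches the user as soon as it is ready,
    // so the wait feels shorter than the total compute time.
    process.stdout.write(`\n${chunk}`);
  }
  process.stdout.write("\nDone.\n");
}

render("Why did order 1042 ship late?");
```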

For industries with field operations, such as utilities and logistics, continuous cloud connectivity is not always possible. On-device intelligence allows technicians to identify faulty components and solve problems even offline, with data synchronization occurring automatically when the connection is restored.
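A minimal sketch of that offline-first pattern, with an in-memory queue standing in for durable on-device storage: actions recorded while disconnected are held locally and flushed when onReconnect() fires. The names and types are hypothetical.

```typescript
// Minimal sketch of offline-first field work: queue actions while disconnected,
// sync automatically when connectivity returns.

interface FieldAction {
  id: number;
  type: "diagnosis" | "part_replacement";
  payload: Record<string, string>;
}

const pending: FieldAction[] = [];
let isOnline = false;

function upload(action: FieldAction): void {
  console.log(`synced action ${action.id} (${action.type})`);
}

function recordAction(action: FieldAction): void {
  if (isOnline) {
    upload(action);
  } else {
    pending.push(action); // keep working offline; sync later
  }
}

// Called when the connection is restored.
function onReconnect(): void {
  isOnline = true;
  while (pending.length > 0) {
    upload(pending.shift()!);
  }
}

// A technician diagnoses a faulty component with no signal, then returns to coverage.
recordAction({ id: 1, type: "diagnosis", payload: { component: "valve-7", status: "faulty" } });
recordAction({ id: 2, type: "part_replacement", payload: { component: "valve-7" } });
onReconnect();
```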

High-stakes gateways and standardization

To ensure governance, Salesforce requires human oversight for critical actions, such as creating, updating, or deleting data, and for decisions that could be manipulated. This creates a feedback loop in which agents learn from human expertise. Salesforce uses a Session Tracing Data Model (STDM) to track every interaction and provide visibility into the agent's logic.
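The gateway-plus-tracing combination can be sketched as follows, assuming a hypothetical requestApproval() step in place of a real review workflow: read-only actions run autonomously, create/update/delete actions block on human approval, and every step lands in a session trace. The field names are illustrative, not the STDM schema.

```typescript
// Minimal sketch of a high-stakes gateway with session tracing.
// requestApproval() and SessionEntry are illustrative stand-ins.

type ActionKind = "read" | "create" | "update" | "delete";

interface SessionEntry {
  at: string;
  actor: "agent" | "human";
  note: string;
}

const sessionTrace: SessionEntry[] = [];

function log(actor: "agent" | "human", note: string): void {
  sessionTrace.push({ at: new Date().toISOString(), actor, note });
}

async function requestApproval(description: string): Promise<boolean> {
  // Stand-in for a real approval UI or workflow step.
  log("human", `reviewed: ${description}`);
  return true;
}

async function execute(kind: ActionKind, description: string): Promise<void> {
  log("agent", `proposed ${kind}: ${description}`);
  const highStakes = kind !== "read";
  if (highStakes && !(await requestApproval(description))) {
    log("agent", "action rejected by reviewer, not executed");
    return;
  }
  log("agent", `executed ${kind}: ${description}`);
}

// Read-only work proceeds on its own; the delete waits for a human.
execute("read", "list open invoices")
  .then(() => execute("delete", "remove duplicate contact record"))
  .then(() => console.table(sessionTrace));
```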

For interoperability between agents from different vendors, standardization is needed on two levels: orchestration and semantics. For orchestration, Salesforce adopts open standards such as the Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol; to unify semantics, it co-founded the Open Semantic Interchange (OSI) initiative.
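To illustrate the two levels rather than either protocol's actual wire format, the hypothetical envelope below keeps orchestration metadata (routing, task correlation) separate from a semantic payload that references a shared schema; the shared schema is what lets two vendors' agents interpret the same fields the same way. The schema identifier and field names are invented for the example.

```typescript
// Hypothetical two-level message: orchestration envelope around a semantic payload.
// Not the MCP or A2A wire format; only the separation of concerns is the point.

interface SemanticPayload {
  schema: string;                    // shared definition both vendors agree on
  fields: Record<string, unknown>;
}

interface AgentEnvelope {
  from: string;                      // sending agent
  to: string;                        // receiving agent
  taskId: string;                    // orchestration-level correlation id
  payload: SemanticPayload;
}

function handleMessage(msg: AgentEnvelope): void {
  // Orchestration layer: routing and task tracking.
  console.log(`task ${msg.taskId}: ${msg.from} -> ${msg.to}`);
  // Semantic layer: interpretation depends on the shared schema, not the vendor.
  if (msg.payload.schema === "open-semantic/customer-order@1") {
    console.log("order total:", msg.payload.fields["totalAmount"]);
  }
}

handleMessage({
  from: "billing-agent",
  to: "support-agent",
  taskId: "task-8841",
  payload: {
    schema: "open-semantic/customer-order@1",
    fields: { orderId: "1042", totalAmount: 129.5 },
  },
});
```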

The future challenge will be to make enterprise data "agent-ready" through searchable and context-aware architectures, replacing traditional ETL pipelines.