Multi-agent systems built on large language models (LLMs) require efficient and secure communication protocols. The paper presents the LLM Delegate Protocol (LDP), designed to overcome limitations of existing approaches such as A2A and MCP, which do not treat model properties as first-class protocol elements.

Key Features of LDP

LDP introduces five main mechanisms:

  1. Rich delegate identity cards: delegates identify themselves with quality hints and reasoning profiles.
  2. Progressive payload modes: negotiated data transfer with fallback.
  3. Governed sessions: persistent context across communication sessions.
  4. Structured provenance tracking: management of confidence and verification status.
  5. Trust domains: security boundaries enforced at the protocol level.
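The paper's wire format is not reproduced here, but the first and fifth mechanisms can be sketched together. The following is an illustrative Python sketch, not LDP's actual schema: all class, field, and function names (`DelegateCard`, `quality_hints`, `can_delegate`, etc.) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegateCard:
    """Hypothetical delegate identity card (field names are illustrative)."""
    delegate_id: str
    quality_hints: dict      # e.g. {"summarization": 0.9} — per-task quality estimates
    reasoning_profile: str   # e.g. "deliberate" or "fast"
    trust_domain: str        # security boundary the delegate belongs to

def can_delegate(caller: DelegateCard, callee: DelegateCard) -> bool:
    """Enforce a trust boundary at the protocol level:
    delegation is only allowed within the same trust domain."""
    return caller.trust_domain == callee.trust_domain

# Usage: an internal planner may call an internal coder, but not an external tool.
planner = DelegateCard("planner-1", {"planning": 0.95}, "deliberate", "internal")
coder = DelegateCard("coder-1", {"codegen": 0.90}, "fast", "internal")
search = DelegateCard("search-1", {"search": 0.80}, "fast", "external")

assert can_delegate(planner, coder)       # same trust domain: allowed
assert not can_delegate(planner, search)  # crosses the boundary: blocked
```

The point of carrying quality hints in the identity card is that a router can pick a specialized delegate per task instead of sending everything to one general model, which is what the evaluation attributes the latency gains to.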

Implementation and Evaluation

LDP was implemented as a plugin for the JamJet agent runtime and evaluated against A2A and random baselines using local Ollama models. Delegate specialization yields roughly 12x lower latency on simple tasks. Semantic frame payloads reduce token count by 37% with no loss in quality, and governed sessions cut token overhead by 39% at 10 rounds. Simulated analysis shows architectural advantages in attack detection (96% vs. 6%) and failure recovery (100% vs. 35% task completion).

For teams evaluating on-premise deployments, these trade-offs deserve careful consideration; AI-RADAR offers analytical frameworks at /llm-onpremise for assessing them.