AriadneMem is a memory system designed to improve how Large Language Model (LLM) agents manage long-term conversations. The central challenge it addresses is maintaining factual accuracy under tight token budgets.
## Existing Problems
Current memory systems struggle with two persistent issues: disconnected evidence, where answering a question requires linking facts scattered across time, and state updates, where evolving information conflicts with older static log entries.
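A toy example makes both failure modes concrete. The log below and its lookup functions are illustrative sketches, not code from the AriadneMem paper: a naive first-match retriever returns stale state, and answering a question like "where was Alice's employer headquartered?" requires linking two facts recorded months apart.

```python
from datetime import date

# Hypothetical flat, append-only conversation log: (timestamp, subject, attribute, value)
log = [
    (date(2024, 1, 5), "Alice", "employer", "Acme"),
    (date(2024, 3, 2), "Acme", "headquarters", "Paris"),   # needs linking to the fact above
    (date(2024, 6, 9), "Alice", "employer", "Globex"),     # state update, conflicts with Jan 5
]

def naive_lookup(subject, attribute):
    """Return the first matching fact -- ignores later updates."""
    for _, s, a, v in log:
        if s == subject and a == attribute:
            return v

def latest_lookup(subject, attribute):
    """Return the most recent matching fact, respecting state updates."""
    hits = [(t, v) for t, s, a, v in log if s == subject and a == attribute]
    return max(hits)[1] if hits else None

print(naive_lookup("Alice", "employer"))   # Acme (stale)
print(latest_lookup("Alice", "employer"))  # Globex
```

The gap between the two lookups is the "state update" problem; the Acme/Paris chain is the "disconnected evidence" problem.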
## AriadneMem Architecture
AriadneMem adopts a two-phase approach:
- Offline Construction Phase: This phase uses entropy-aware gating to filter noise and low-information messages. It also applies conflict-aware coarsening to merge static duplicates while preserving state transitions as temporal edges.
- Online Reasoning Phase: Instead of expensive iterative planning, AriadneMem executes algorithmic bridge discovery to reconstruct missing logical paths between retrieved facts, followed by single-call topology-aware synthesis.
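The entropy-aware gating step in the offline phase can be sketched as a simple Shannon-entropy filter over a message's token distribution. This is an assumption about the mechanism (the paper does not specify the exact formula here); the function names and threshold are hypothetical.

```python
import math
from collections import Counter

def token_entropy(message):
    """Shannon entropy (bits per token) of the message's token distribution."""
    tokens = message.lower().split()
    counts = Counter(tokens)
    n = len(tokens)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gate(messages, threshold=1.5):
    """Keep only messages carrying enough information; drop repetitive filler."""
    return [m for m in messages if token_entropy(m) > threshold]

msgs = ["ok ok ok ok", "Alice joined Globex as a staff engineer in June"]
print(gate(msgs))  # the low-entropy filler message is dropped
```

A repetitive acknowledgment like "ok ok ok ok" has zero entropy and is gated out, while an informative sentence with many distinct tokens passes.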
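The online phase's algorithmic bridge discovery can be illustrated as a graph search: given two retrieved facts, find the shortest chain of entities connecting them instead of invoking the LLM iteratively. The entity graph and BFS below are a minimal sketch under that assumption; the structure is not taken from the paper.

```python
from collections import deque

# Hypothetical entity graph from offline construction; edges link entities
# that co-occur in a retained fact.
graph = {
    "Alice": {"Globex"},
    "Globex": {"Alice", "Paris"},
    "Paris": {"Globex", "Louvre"},
    "Louvre": {"Paris"},
}

def bridge(graph, start, goal):
    """BFS for the shortest entity path linking two retrieved facts."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(bridge(graph, "Alice", "Louvre"))  # ['Alice', 'Globex', 'Paris', 'Louvre']
```

The recovered path supplies the missing logical bridge, which a single topology-aware synthesis call can then turn into an answer.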
## Experimental Results
In experiments with GPT-4o on the LoCoMo benchmark, AriadneMem improved Multi-Hop F1 by 15.2% and Average F1 by 9.0% over existing baselines, while cutting total runtime by 77.8% and using only 497 context tokens. The code is available on GitHub.