# CogCanvas: Enhanced Memory for Long LLM Conversations
## CogCanvas: A New Approach to Memory in LLMs
Large language models (LLMs) struggle to maintain high information fidelity in long conversations due to context window limitations. Existing techniques, such as truncation or summarization, often sacrifice important details.
CogCanvas addresses this problem with a training-free framework that extracts cognitive artifacts (decisions, facts, reminders) from conversation turns and organizes them into a temporal graph for more effective retrieval.
## Performance and Advantages
In tests on the LoCoMo benchmark, CogCanvas achieved an overall accuracy of 34.7%, outperforming RAG (25.6%) and GraphRAG (13.7%). The improvement is particularly evident in temporal reasoning, with a +530% relative improvement over RAG and GraphRAG. In multi-hop causal reasoning, CogCanvas also led, with a pass rate of 81.0% versus 40.0% for GraphRAG.
Furthermore, CogCanvas demonstrated a high recall rate (97.5%) and exact match preservation (93.0%) in controlled benchmarks, highlighting its ability to preserve key information.
## Implications
While highly optimized approaches achieve higher absolute scores through dedicated training, CogCanvas offers an immediately deployable alternative that significantly outperforms standard baselines. Code and data are available on [GitHub](https://github.com/tao-hpu/cog-canvas).