A-MEM: Agentic Memory for LLM Agents

In this Passion Academy, we study A-MEM, a dynamic memory system for LLM-based agents. Traditional fixed storage systems struggle with growing, evolving data. A-MEM stores self-contained chunks, links them by content similarity, and continuously updates those connections. Queries retrieve a small number of relevant chunks and their neighbours, improving performance, reducing noise, and saving both tokens and prompting costs. This evolving memory system organises knowledge by meaning rather than rigid rules and can scale to large datasets with careful pruning and merging strategies.
Large Language Models are powerful reasoners, but they are notoriously bad at remembering.
Most agent systems today rely on static memory designs: fixed databases, rigid schemas or simple chunk-and-retrieve pipelines. These approaches work at small scales but they struggle as agents interact with the world, accumulate knowledge and need to adapt over time.
In this session, Fabio, our senior ML engineer, introduces A-MEM, an agentic memory system designed to evolve, reorganise itself and scale with the agent that uses it.
Rather than treating memory as a passive store, A-MEM treats memory as an active, living structure.
Most LLM agents today rely on predefined memory structures. Developers decide in advance how memories are structured, where they are stored, and how they are retrieved.
This creates two fundamental issues: the structure cannot adapt to new kinds of information, and the memory cannot reorganise itself as knowledge accumulates.
As agents become more capable (e.g. planning, reasoning, interacting over long horizons), these rigid memory systems start to work against them rather than supporting them.
Modern RAG-style systems usually work like this: documents are split into chunks, each chunk is embedded, the vectors are stored in a database, and at query time the top-k most similar chunks are retrieved and injected into the prompt.
While effective, this approach has well-known limitations: chunks are isolated from one another, the structure of the knowledge stays implicit in the embedding space, and the store never reorganises itself as new information arrives.
In short: retrieval works, but memory doesn’t organise itself.
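To make the contrast concrete, here is a minimal sketch of such a chunk-and-retrieve pipeline. A bag-of-words count vector stands in for a real embedding model, and all names and data are illustrative:

```python
import math
import re
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank stored chunks by similarity to the query and return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
]
print(retrieve("capital of France", chunks, k=1))
# → ['Paris is the capital of France.']
```

Note what is missing: the chunks never reference each other, and nothing in the store changes after ingestion. That is exactly the gap A-MEM addresses.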
A-MEM rethinks memory from the ground up.
Instead of treating memory as isolated chunks, A-MEM stores self-contained memory units and explicitly links them based on meaning. Over time, those links are continuously updated, strengthened, weakened, or removed.
The result is a semantic memory graph that evolves organically as the agent interacts with the world.
This idea is inspired by the Zettelkasten method, a human knowledge system based on small, self-contained notes, explicit links between related notes, and structure that emerges bottom-up rather than being imposed in advance.
A-MEM applies the same philosophy to LLM agents.
A-MEM operates as an autonomous loop with four main stages, each powered by LLM-based agents.
Incoming information is converted into a memory unit containing the original content, a timestamp, and LLM-generated keywords, tags and a short contextual description, together with an embedding.
Each memory is designed to be self-contained and atomic.
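A unit like this can be sketched as a small data structure. The field names below are illustrative rather than the paper's exact schema:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryNote:
    content: str                    # the raw information itself
    timestamp: str                  # when the memory was formed
    keywords: list                  # key terms extracted by an LLM
    tags: list                      # categories assigned by an LLM
    context: str                    # LLM-written description of the situation
    links: set = field(default_factory=set)  # ids of related notes

note = MemoryNote(
    content="User asked for vegetarian restaurants in Rome.",
    timestamp="2025-05-01T10:00:00",
    keywords=["vegetarian", "restaurants", "Rome"],
    tags=["food", "travel"],
    context="Trip planning conversation about dining options.",
)
print(note.links)  # starts empty; links are added by the linking stage
```

Keeping each note atomic means any note can be retrieved, linked or rewritten on its own, without dragging in surrounding text.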
When a new memory is created, its embedding is compared against the existing store, the nearest neighbours are retrieved, and an LLM decides which of them the new memory should link to.
This creates an explicit network of meaning rather than implicit proximity alone.
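The linking step can be sketched as follows. In A-MEM an LLM judges which neighbours are genuinely related; here, as an assumption for illustration, a cosine-similarity threshold over bag-of-words vectors stands in for both the embedding model and that judgement:

```python
import math
import re
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_new_memory(new_text, memories, k=2, threshold=0.2):
    # Retrieve the k nearest existing memories, then keep only those
    # similar enough to justify an explicit link.
    q = embed(new_text)
    scored = [(i, cosine(q, embed(m))) for i, m in enumerate(memories)]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [i for i, s in scored[:k] if s >= threshold]

memories = [
    "User prefers vegetarian restaurants.",
    "User booked a flight to Rome in May.",
    "User is allergic to peanuts.",
]
print(link_new_memory("User asked for vegetarian food without peanuts", memories))
# → [0, 2]
```

The point is that the links are stored explicitly, so later retrieval can follow them instead of relying on embedding proximity alone.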
This is where A-MEM becomes truly agentic.
For memories in the neighbourhood of a new note, the system can update their contextual descriptions, keywords and tags in light of the new information, strengthening, weakening or removing links as needed.
Memory doesn’t just grow… it evolves.
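A toy sketch of that evolution step: when a new note relates to an existing neighbour, the neighbour's metadata is refreshed. In A-MEM an LLM rewrites the neighbour's context, keywords and tags; here a simple keyword-overlap check and tag merge stand in for that, purely to show the direction of information flow:

```python
def evolve_neighbour(neighbour, new_note):
    # If the new note shares keywords with the neighbour, record the
    # link on the neighbour and absorb the new note's tags.
    shared = set(neighbour["keywords"]) & set(new_note["keywords"])
    if shared:
        neighbour["links"].add(new_note["id"])
        neighbour["tags"] = sorted(set(neighbour["tags"]) | set(new_note["tags"]))
    return neighbour

old = {"id": 1, "keywords": ["diet", "vegetarian"], "tags": ["food"], "links": set()}
new = {"id": 7, "keywords": ["vegetarian", "allergy"], "tags": ["food", "health"]}
print(evolve_neighbour(old, new))
# old now links to note 7 and carries the "health" tag
```

Crucially, the write path touches existing memories, not just the new one: ingestion is also reorganisation.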
When a query arrives, it is embedded, the most relevant memory units are retrieved, and their linked neighbours are pulled in to supply surrounding context.
This drastically reduces noise while preserving relevant context.
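Under the same stand-in similarity function as above, retrieval can be sketched as "best match plus its linked neighbours":

```python
import math
import re
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_with_neighbours(query, notes):
    # notes: dict id -> {"text": str, "links": set of ids}.
    # Take the single best match, then follow its explicit links.
    q = embed(query)
    best = max(notes, key=lambda i: cosine(q, embed(notes[i]["text"])))
    return [notes[i]["text"] for i in [best] + sorted(notes[best]["links"])]

notes = {
    0: {"text": "User prefers vegetarian food.", "links": {2}},
    1: {"text": "User booked a flight to Rome.", "links": set()},
    2: {"text": "User is allergic to peanuts.", "links": {0}},
}
print(retrieve_with_neighbours("what food does the user like", notes))
# → ['User prefers vegetarian food.', 'User is allergic to peanuts.']
```

The allergy note is not the closest match to the query, but it rides in through the explicit link, which is exactly the kind of related-but-not-similar context a flat top-k search would miss.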
A-MEM delivers consistent gains across key dimensions: reasoning quality, token and prompting cost, and the cleanliness of the retrieved context.
In short: better reasoning, lower cost, cleaner context.
A-MEM is not just conceptual: the paper evaluates it empirically against existing memory baselines.
That said, the system is still evolving: questions such as long-term pruning, merging of redundant memories, and scaling the link structure remain open.
These are active research directions rather than fundamental blockers.
As LLM agents move from short-lived tools to long-running systems, memory becomes the bottleneck.
A-MEM shows that memory doesn’t have to be static, rigid or developer-defined. It can be self-organising, continuously evolving, and structured by meaning rather than by a fixed schema.
This shift (from storage to agentic memory) is a key step toward more capable, autonomous AI systems.
Reference
A-MEM: Agentic Memory for LLM Agents, https://arxiv.org/pdf/2502.12110