
A-MEM: Agentic Memory for LLM Agents

Research
Dr Fabio Rodriguez
Senior ML Engineer

In this Passion Academy, we study A-MEM, a dynamic memory system for LLM-based agents. Traditional fixed storage systems struggle with growing, evolving data. A-MEM stores self-contained chunks, links them by content similarity, and continuously updates those connections for efficiency. Queries retrieve a small number of relevant chunks and their neighbours, improving performance, reducing noise, and cutting both token usage and prompting costs. This evolving memory system organises knowledge by meaning rather than rigid rules and can scale to large datasets with careful pruning and merging strategies.


Large Language Models are powerful reasoners, but they are notoriously bad at remembering.

Most agent systems today rely on static memory designs: fixed databases, rigid schemas or simple chunk-and-retrieve pipelines. These approaches work at small scales but they struggle as agents interact with the world, accumulate knowledge and need to adapt over time.

In this session, Fabio, our senior ML engineer, introduces A-MEM, an agentic memory system designed to evolve, reorganise itself, and scale with the agent that uses it.

Rather than treating memory as a passive store, A-MEM treats memory as an active, living structure.

The Problem: Rigid Memory in Smart Agents

Most LLM agents today rely on predefined memory structures. Developers decide in advance:

  • what gets stored
  • how it’s indexed
  • how it’s retrieved

This creates two fundamental issues:

  1. Static structure: memory layouts don’t change as new information arrives.
  2. Bottlenecks: when agents learn something new, the system cannot autonomously reorganise its internal knowledge.

As agents become more capable (e.g. planning, reasoning, interacting over long horizons) these rigid memory systems start to work against them rather than supporting them.

Why Chunk-Based Retrieval Falls Short

Modern RAG-style systems usually work like this:

  • split data into chunks
  • embed each chunk
  • retrieve the k nearest chunks for a query
  • feed them into the prompt
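
To make this baseline concrete, here is a minimal Python sketch of such a pipeline. It is illustrative only: the embed function is a hash-seeded stand-in for a real sentence-embedding model, and the document and query strings are made up so the snippet runs on its own; none of it is tied to A-MEM or any particular library.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder embedding: hash-seeded random vector, only so the sketch
        # runs end to end without a real embedding model.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(64)
        return v / np.linalg.norm(v)

    def split_into_chunks(document: str, size: int = 200) -> list[str]:
        # Naive fixed-size splitting; real pipelines split on sentences or sections.
        return [document[i:i + size] for i in range(0, len(document), size)]

    def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
        # Rank chunks by similarity to the query and keep the top k.
        q = embed(query)
        return sorted(chunks, key=lambda c: float(q @ embed(c)), reverse=True)[:k]

    document = "Some long text the agent has accumulated over time. " * 50
    chunks = split_into_chunks(document)
    context = "\n".join(retrieve("What did we decide about pruning?", chunks))
    prompt = f"Context:\n{context}\n\nQuestion: What did we decide about pruning?"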

While effective, this approach has well-known limitations:

  • information is often repeated across chunks
  • relationships between chunks are weak or implicit
  • knowledge is static — it doesn’t evolve as the agent learns
  • performance degrades badly at large database sizes

In short: retrieval works, but memory doesn’t organise itself.

The Core Idea Behind A-MEM

A-MEM rethinks memory from the ground up.

Instead of treating memory as isolated chunks, A-MEM stores self-contained memory units and explicitly links them based on meaning. Over time, those links are continuously updated, strengthened, weakened, or removed.

The result is a semantic memory graph that evolves organically as the agent interacts with the world.

This idea is inspired by the Zettelkasten method, a human knowledge system based on:

  • atomic notes
  • flexible linking by meaning, not hierarchy
  • structure that emerges rather than being imposed top-down

A-MEM applies the same philosophy to LLM agents.

How A-MEM Works

A-MEM operates as an autonomous loop with four main stages, each powered by LLM-based agents.

1. Note Construction

Incoming information is converted into a memory unit containing:

  • content
  • context
  • tags and keywords
  • embeddings
  • metadata (timestamps, links, etc.)

Each memory is designed to be self-contained and atomic.
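
As a rough illustration, such a memory unit can be modelled as a small record holding these fields. The field names below are our own approximation of the attributes listed above, not the schema used by the paper or its open-source implementation.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    import numpy as np

    @dataclass
    class MemoryNote:
        note_id: str                  # unique identifier, used for linking
        content: str                  # the self-contained piece of information
        context: str                  # short description of where/why it arose
        tags: list[str]               # keywords used for linking and retrieval
        embedding: np.ndarray         # vector representation of the content
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        links: set[str] = field(default_factory=set)   # ids of semantically related notes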

2. Link Generation

When a new memory is created:

  • a vector search retrieves the top-k most similar existing memories
  • an LLM evaluates whether meaningful semantic links exist
  • relevant memories are connected

This creates an explicit network of meaning rather than implicit proximity alone.
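
Continuing the MemoryNote sketch above, link generation might look roughly like this. The llm_judges_related function stands in for the LLM call that decides whether two notes are genuinely related; here it is replaced by a trivial tag-overlap check so the snippet runs on its own.

    def top_k_similar(new_note: MemoryNote, store: dict, k: int = 5) -> list[str]:
        # Vector search: score every stored note against the new one.
        scores = {nid: float(new_note.embedding @ note.embedding)
                  for nid, note in store.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    def llm_judges_related(a: MemoryNote, b: MemoryNote) -> bool:
        # Placeholder for the LLM judgement of whether a meaningful link exists.
        return bool(set(a.tags) & set(b.tags))

    def generate_links(new_note: MemoryNote, store: dict, k: int = 5) -> None:
        for nid in top_k_similar(new_note, store, k):
            candidate = store[nid]
            if llm_judges_related(new_note, candidate):
                new_note.links.add(nid)                 # connect both directions
                candidate.links.add(new_note.note_id)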

3. Memory Evolution

This is where A-MEM becomes truly agentic.

For memories in the new note’s neighbourhood:

  • links are strengthened, weakened or removed
  • memory content, context and tags are updated
  • the system maintains a bounded number of links (k) for efficiency

Memory doesn’t just grow… it evolves.
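
A minimal sketch of the evolution step, again building on the MemoryNote structure above: re-score a note’s neighbourhood and keep only the k strongest links. In the full system an LLM would also rewrite the note’s content, context and tags at this point; that part is only noted in a comment here.

    def evolve_neighbourhood(note: MemoryNote, store: dict, k: int = 5) -> None:
        # Re-rank the note's current links by similarity and keep the k strongest.
        ranked = sorted(note.links,
                        key=lambda nid: float(note.embedding @ store[nid].embedding),
                        reverse=True)
        kept, dropped = set(ranked[:k]), set(ranked[k:])
        note.links = kept
        for nid in dropped:
            # When a link is removed, drop the reverse link as well.
            store[nid].links.discard(note.note_id)
        # An LLM pass would additionally update content, context and tags here.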

4. Memory Retrieval

When a query arrives:

  • retrieve the k most relevant memories
  • expand to their neighbours
  • use this compact, semantically coherent subgraph to answer

This drastically reduces noise while preserving relevant context.
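
Sketched against the same structures, retrieval takes the k notes closest to the query embedding, pulls in their linked neighbours, and hands that compact subgraph to the model as context.

    def retrieve_subgraph(query_embedding: np.ndarray, store: dict,
                          k: int = 3) -> list[MemoryNote]:
        # Step 1: the k notes most similar to the query.
        ranked = sorted(store.values(),
                        key=lambda n: float(query_embedding @ n.embedding),
                        reverse=True)
        seeds = ranked[:k]
        seed_ids = {n.note_id for n in seeds}
        # Step 2: expand to their linked neighbours (excluding the seeds themselves).
        neighbour_ids = {nid for n in seeds for nid in n.links} - seed_ids
        return seeds + [store[nid] for nid in neighbour_ids]

Because the subgraph is small and semantically coherent, the prompt stays short while still carrying the related context.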

Why This Matters: Results & Benefits

A-MEM delivers consistent gains across key dimensions:

  • Higher accuracy, F1, and recall compared to strong baselines
  • Significant improvements on multi-hop and long-term reasoning tasks
  • Link generation and memory evolution are critical — removing them causes major performance drops
  • Efficient retrieval: moderate memory sizes achieve the best accuracy–noise trade-off
  • 85–93% fewer tokens per query
  • Costs below $0.0003 per operation
  • Fast inference and strong scalability across models and datasets

In short: better reasoning, lower cost, cleaner context.

Practical Impact and Current Limitations

A-MEM is not just conceptual:

  • an open-source implementation is available
  • it can be deployed in real agent pipelines (e.g. Farm-GPT-style systems)

That said, the system is still evolving.

Known limitations

  • memory redundancy is not yet explicitly handled
  • large, densely connected clusters can limit new link formation

Proposed improvements

  • relevance checks to decide whether to create new memories or evolve existing ones
  • merging oversized clusters into higher-level synthesised memories

These are active research directions rather than fundamental blockers.

The Bigger Picture

As LLM agents move from short-lived tools to long-running systems, memory becomes the bottleneck.

A-MEM shows that memory doesn’t have to be static, rigid or developer-defined. It can be:

  • adaptive
  • semantic
  • self-organising
  • cost-efficient
  • and agent-driven

This shift (from storage to agentic memory) is a key step toward more capable, autonomous AI systems.

Reference

https://arxiv.org/pdf/2502.12110
