Your agent already thinks. We give it a memory.

Memory that
feels alive

Your AI agent already knows what matters. Deep Recall gives it a memory that strengthens with use, fades when stale, catches its own contradictions, and never needs a second LLM to decide what's worth keeping.

# Python
from deeprecall_ai import DeepRecall

dr = DeepRecall(api_key="ec_live_...")

# Before -- pull context
ctx = dr.agent_context(messages, person="alice")

# During -- your agent stores what matters
dr.store("Alice prefers dark mode", person="alice")

# After -- biology takes over
dr.agent_learn(messages, signal_health=0.8)
// TypeScript
import { DeepRecall } from '@zappaidan/deeprecall';

const dr = new DeepRecall({ apiKey: "ec_live_..." });

// Before -- pull context
const ctx = await dr.agentContext(messages, { person: "alice" });

// During -- your agent stores what matters
await dr.store("Alice prefers dark mode", { person: "alice" });

// After -- biology takes over
await dr.agentLearn(messages, { signalHealth: 0.8 });
# cURL
# Store a memory
curl -X POST https://api.deeprecall.dev/v1/memories \
  -H "X-API-Key: ec_live_..." \
  -H "Content-Type: application/json" \
  -d '{"content": "Alice prefers dark mode."}'

# Search memories
curl "https://api.deeprecall.dev/v1/memories/search?q=preferences" \
  -H "X-API-Key: ec_live_..."

What makes this different

Other memory APIs intercept your conversations and run them through a separate LLM to decide what matters. That's like having a stranger take notes at your therapy session. Your agent already understands the conversation — it just needs somewhere to put the memories, and intelligence to manage them.

No LLM Tax

Zero extra API calls for extraction. Your agent stores what it thinks matters. No hidden OpenAI bills, no per-memory LLM costs. 41ms search, not 300ms.

Intelligent Forgetting (Only us)

Memories decay based on access patterns, not just age. Frequently recalled memories resist decay. Based on ACT-R cognitive architecture.
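The idea can be sketched with ACT-R's base-level activation equation, where each past access at lag t contributes t^(-d) to a memory's activation. This is a conceptual illustration of the cognitive model, not Deep Recall's exact implementation:

```python
import math

def activation(access_lags_secs, d=0.5):
    """ACT-R base-level activation: A = ln(sum of t^-d over past accesses).
    Recent and frequent accesses raise activation; d is the decay rate."""
    return math.log(sum(t ** -d for t in access_lags_secs))

# A memory recalled a minute, an hour, and a day ago stays "warm"...
recent = activation([60, 3600, 86400])
# ...while one touched once a month ago has faded.
stale = activation([86400 * 30])
assert recent > stale
```

The key property: a memory's survival depends on its access history, so frequently recalled facts resist decay even when they're old.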

Hebbian Reinforcement (Only us)

When your agent cites a memory, it gets stronger. Uncited memories in failed responses get weaker. Your memory store self-optimizes.
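A minimal sketch of that update rule, assuming a per-memory strength in [0, 1] (the function and its learning rate are illustrative, not Deep Recall's API):

```python
def reinforce(strength, cited, response_succeeded, lr=0.1):
    """Hebbian-style update: memories cited in successful responses
    strengthen; memories surfaced but uncited in failed responses weaken."""
    if cited and response_succeeded:
        return min(1.0, strength + lr * (1.0 - strength))
    if not cited and not response_succeeded:
        return max(0.0, strength - lr * strength)
    return strength  # ambiguous signal: leave strength unchanged

s = reinforce(0.5, cited=True, response_succeeded=True)
assert s > 0.5  # useful memory got stronger
```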

Contradiction Detection (Only us)

When a new memory conflicts with an existing one, we flag it. Your agent can resolve it, not silently serve stale facts.
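The mechanism, reduced to a toy example over (subject, attribute, value) facts. This is a conceptual illustration only; the real system works over free-text memories:

```python
def detect_contradiction(store, subject, attribute, value):
    """Flag a new fact that conflicts with a stored fact about the
    same subject and attribute; return the conflicting prior value."""
    old = store.get((subject, attribute))
    if old is not None and old != value:
        return old  # surfaced to the agent to resolve, not silently overwritten
    store[(subject, attribute)] = value
    return None

facts = {}
detect_contradiction(facts, "alice", "theme", "dark mode")   # stored, no conflict
conflict = detect_contradiction(facts, "alice", "theme", "light mode")
assert conflict == "dark mode"  # the agent decides which fact wins
```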

Emotional Context (Only us)

Store how users felt, not just what they said. Mood-congruent retrieval surfaces memories encoded in similar emotional states.
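One way to picture mood-congruent retrieval: each memory carries a VAD (valence, arousal, dominance) vector, and memories close to the current mood get a scoring boost. A conceptual sketch, with made-up numbers and an illustrative weight:

```python
import math

def vad_distance(a, b):
    """Euclidean distance between two (valence, arousal, dominance)
    vectors, each component in [0, 1]."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mood_boost(memory_vad, current_vad, weight=0.2):
    """Multiplier > 1 for memories encoded in a similar emotional state."""
    max_dist = math.sqrt(3)  # farthest possible distance in the unit VAD cube
    return 1.0 + weight * (1.0 - vad_distance(memory_vad, current_vad) / max_dist)

frustrated = (0.2, 0.8, 0.3)   # low valence, high arousal
calm = (0.9, 0.2, 0.7)         # high valence, low arousal
# A memory encoded while frustrated surfaces first for a frustrated user.
assert mood_boost(frustrated, frustrated) > mood_boost(calm, frustrated)
```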

Hybrid Search

Full-text keywords + semantic vectors, fused with Reciprocal Rank Fusion. 41ms median latency at scale. Finds memories by meaning and by exact words.
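Reciprocal Rank Fusion itself is simple: each result list votes 1/(k + rank) for every memory it contains, and the votes are summed. A self-contained sketch (the constant k = 60 is the value from the original RRF paper, not necessarily what Deep Recall uses):

```python
def rrf(rankings, k=60):
    """Fuse multiple ranked lists of memory ids with Reciprocal Rank Fusion:
    score(m) = sum over lists of 1 / (k + rank of m), using 1-based ranks."""
    scores = {}
    for ranking in rankings:
        for rank, mem_id in enumerate(ranking, start=1):
            scores[mem_id] = scores.get(mem_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword = ["m1", "m3", "m2"]   # full-text ranking
semantic = ["m2", "m1", "m4"]  # vector ranking
fused = rrf([keyword, semantic])
assert fused[0] == "m1"  # ranked near the top of both lists
```

Because RRF only needs ranks, not raw scores, it fuses BM25 and cosine similarity without any score normalization.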

Honest comparison

We only claim what we actually ship.

Feature                          Deep Recall   Mem0              Zep               Letta
Extra LLM calls for extraction   None          Required          Required          Required
Search latency (p50)             41ms          ~200ms            ~200ms            ~300ms
Semantic search                  Yes           Yes               Yes               Yes
Entity tracking                  Yes           Yes               Yes               No
Graph / topology                 $19/mo        Yes               No
Intelligent forgetting           ACT-R         No                No                No
Hebbian reinforcement            Yes           No                No                No
Emotional context (VAD)          Yes           No                No                No
Contradiction detection          Yes           No                No                No
Agent decides what to store      Yes           No — LLM decides  No — LLM decides  Partial
GDPR export / delete             Yes           ?                 ?                 ?

Simple pricing

Start free. Core intelligence (decay, reinforcement, and contradiction detection) ships on every plan.

Free

$0/mo
  • 10,000 memories
  • 30 requests/min
  • Hybrid search
  • Decay + reinforcement
  • Contradiction detection
Start Free

Builder

$19/mo
  • 100,000 memories
  • 120 requests/min
  • Everything in Free
  • Graph topology
  • Emotional context
Subscribe $19/mo

Enterprise

$149/mo
  • 10,000,000 memories
  • 3,000 requests/min
  • Everything in Builder
  • Dedicated support
  • Custom integrations
Subscribe $149/mo

Your agent already thinks.
Give it a memory.

No credit card required. Free tier includes 10,000 memories.