Your AI agent already knows what matters. Deep Recall gives it a memory that strengthens with use, fades when stale, catches its own contradictions, and never needs a second LLM to decide what's worth keeping.
```python
from deeprecall_ai import DeepRecall

dr = DeepRecall(api_key="ec_live_...")

# Before -- pull context
ctx = dr.agent_context(messages, person="alice")

# During -- your agent stores what matters
dr.store("Alice prefers dark mode", person="alice")

# After -- biology takes over
dr.agent_learn(messages, signal_health=0.8)
```
```typescript
import { DeepRecall } from '@zappaidan/deeprecall';

const dr = new DeepRecall({ apiKey: "ec_live_..." });

// Before -- pull context
const ctx = await dr.agentContext(messages, { person: "alice" });

// During -- your agent stores what matters
await dr.store("Alice prefers dark mode", { person: "alice" });

// After -- biology takes over
await dr.agentLearn(messages, { signalHealth: 0.8 });
```
```bash
# Store a memory
curl -X POST https://api.deeprecall.dev/v1/memories \
  -H "X-API-Key: ec_live_..." \
  -H "Content-Type: application/json" \
  -d '{"content": "Alice prefers dark mode."}'

# Search memories
curl "https://api.deeprecall.dev/v1/memories/search?q=preferences" \
  -H "X-API-Key: ec_live_..."
```
Other memory APIs intercept your conversations and run them through a separate LLM to decide what matters. That's like having a stranger take notes at your therapy session. Your agent already understands the conversation — it just needs somewhere to put the memories and the intelligence to manage them.
Zero extra API calls for extraction. Your agent stores what it thinks matters. No hidden OpenAI bills, no per-memory LLM costs. 41ms search, not 300ms.
Memories decay based on access patterns, not just age. Frequently recalled memories resist decay. Based on ACT-R cognitive architecture.
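The ACT-R base-level learning rule behind this is easy to sketch: each past recall at age `t` contributes `t^-d` to a memory's activation, so memories that are recalled often and recently stay strong while untouched ones fade. The function below is an illustrative sketch of that equation, not Deep Recall's actual internals; the decay exponent `d = 0.5` is the value commonly used in the ACT-R literature.

```python
import math

def activation(recall_ages_hours, d=0.5):
    """ACT-R base-level activation: ln(sum of t^-d over past recalls).

    Each recall at age t (hours) contributes t^-d, so frequent,
    recent recalls keep activation high; a single old recall decays
    toward negative activation (effectively forgotten).
    """
    return math.log(sum(t ** -d for t in recall_ages_hours))

# Recalled three times in the last day -> stays strong.
fresh = activation([1, 5, 24])
# Recalled once, a month ago -> fades.
stale = activation([720])
assert fresh > stale
```

Because every recall adds a term to the sum, retrieval itself is what resists decay — no cron job deciding what to delete.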
When your agent cites a memory, it gets stronger. Uncited memories in failed responses get weaker. Your memory store self-optimizes.
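That reinforcement loop can be sketched in a few lines. The function name, learning rate, and clamping below are illustrative assumptions, not the shipped implementation — the point is the rule: citation strengthens, going uncited in a failed response weakens.

```python
def reinforce(strength, cited, response_succeeded, lr=0.1):
    """Hebbian-style update on a memory's strength in [0, 1]."""
    if cited:
        # Cited memories get stronger.
        return min(1.0, strength + lr)
    if not response_succeeded:
        # Uncited memories in a failed response get weaker.
        return max(0.0, strength - lr)
    # Uncited in a successful response: no evidence either way.
    return strength
```

Applied over many turns, the store drifts toward the memories your agent actually uses.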
When a new memory conflicts with an existing one, we flag it. Your agent can resolve it, not silently serve stale facts.
Store how users felt, not just what they said. Mood-congruent retrieval surfaces memories encoded in similar emotional states.
Full-text keywords + semantic vectors, fused with Reciprocal Rank Fusion. 41ms median latency at scale. Finds memories by meaning and by exact words.
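Reciprocal Rank Fusion itself is small: each result list contributes `1 / (k + rank)` per memory, so items that rank well in both the keyword list and the vector list rise to the top. The sketch below is illustrative, not the production query path; `k = 60` is the constant from the original RRF formulation, and the memory IDs are made up.

```python
def rrf(rankings, k=60):
    """Fuse multiple ranked lists with Reciprocal Rank Fusion.

    Each list contributes 1 / (k + rank) for every item it contains;
    items appearing high in several lists accumulate the most score.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["m3", "m1", "m2"]   # full-text ranking
vector_hits  = ["m1", "m2", "m4"]   # semantic ranking
fused = rrf([keyword_hits, vector_hits])
# "m1" wins: it ranks well in both lists, beating "m3",
# which topped only one of them.
```

RRF needs no score normalization across the two retrievers — only ranks — which is why it fuses BM25-style keyword scores and cosine similarities cleanly.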
We only claim what we actually ship.
| Feature | Deep Recall | Mem0 | Zep | Letta |
|---|---|---|---|---|
| Extra LLM calls for extraction | None | Required | Required | Required |
| Search latency (p50) | 41ms | ~200ms | ~200ms | ~300ms |
| Semantic search | Yes | Yes | Yes | Yes |
| Entity tracking | Yes | Yes | Yes | No |
| Graph / topology | $19/mo | $249/mo | Yes | No |
| Intelligent forgetting | ACT-R | No | No | No |
| Hebbian reinforcement | Yes | No | No | No |
| Emotional context (VAD) | Yes | No | No | No |
| Contradiction detection | Yes | No | No | No |
| Agent decides what to store | Yes | No — LLM decides | No — LLM decides | Partial |
| GDPR export / delete | Yes | ? | ? | ? |
Start free. Every plan includes all intelligence features.
No credit card required. Free tier includes 10,000 memories.