MCP-native semantic memory

Memory for AI agents

Single binary. Zero config. Sub-25ms recall.

Zero dependencies

No Docker. No Python. No cloud. One Rust binary with local embeddings. macOS (Intel + ARM) and Linux. All data stays in ~/.sediment/.

Intelligent recall

Semantic search with memory decay, trust scoring, relationship graph, and auto-consolidation. Not just vector search.

MCP native

Works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, JetBrains — any MCP client.

How Sediment compares

The simplest path to persistent AI memory.

Sediment

Single Rust binary, zero config

  • Single binary install
  • Zero dependencies
  • 5 focused MCP tools
  • Local embeddings
  • Relationship graph
  • Memory decay & trust scoring

OpenMemory MCP

Mem0's local MCP server

  • Docker + Postgres + Qdrant
  • 3 services required
  • 10+ MCP tools
  • API-dependent embeddings
  • No relationship graph
  • No memory decay

mcp-memory-service

Python MCP memory server

  • Python + pip install
  • Python runtime + dependencies
  • 24 MCP tools
  • API-dependent embeddings
  • No relationship graph
  • No memory decay

Performance

Sub-25ms recall at 10K items. All local, no network round-trips.

  • ~10ms at 100 items
  • ~15ms at 1K items
  • ~23ms at 10K items

Measured with full graph features enabled on an Apple M3 Max. See the methodology for details.

5 tools. That's it.

A focused API that LLMs can actually use well.

store

Save content with title, tags, metadata, expiration, scope, and related item links

recall

Semantic search with decay scoring, trust weighting, graph expansion, and co-access suggestions

list

Browse stored items by scope with tag filtering

forget

Delete an item from the vector store and relationship graph

connections

Explore the relationship graph for any item
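As a concrete illustration, a store call from an MCP client could carry a payload like the following. The field names are inferred from the tool descriptions above; the exact schema is an assumption, not Sediment's published API.

```json
{
  "name": "store",
  "arguments": {
    "content": "Prefer tokio over async-std for this project",
    "title": "Async runtime decision",
    "tags": ["rust", "architecture"],
    "scope": "project"
  }
}
```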

Under the hood

Three-database hybrid

LanceDB for vectors, plus two SQLite databases: one for the relationship graph, one for access tracking. All embedded, zero config.

Memory decay

30-day half-life freshness scoring combined with log-scaled access frequency. Old memories rank lower but are never deleted.
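The half-life scoring can be sketched as follows. This is a minimal illustration of the two ingredients named above; how Sediment actually combines them is an assumption.

```python
import math

def freshness(age_days: float, half_life_days: float = 30.0) -> float:
    # Exponential decay: a 30-day-old memory scores 0.5, a 60-day-old one 0.25.
    return 0.5 ** (age_days / half_life_days)

def frequency_boost(access_count: int) -> float:
    # Log scaling: frequent access helps, but not linearly.
    return math.log1p(access_count)

def decay_score(age_days: float, access_count: int) -> float:
    # Hypothetical combination of freshness and access frequency;
    # Sediment's exact weighting may differ.
    return freshness(age_days) * (1.0 + frequency_boost(access_count))
```

Because the score only shrinks toward zero, stale items sink in the ranking without ever being deleted.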

Trust-weighted scoring

Validated and well-connected memories score higher. The more you use a memory, the more trustworthy it becomes.
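One way to picture this is a trust multiplier layered on top of similarity and freshness. The heuristic below is hypothetical; the real formula is not published here.

```python
import math

def trust(access_count: int, connection_count: int, validated: bool) -> float:
    # Hypothetical trust heuristic: usage and graph connectivity raise trust,
    # explicit validation adds a fixed boost.
    base = 1.0 + math.log1p(access_count) + 0.1 * connection_count
    return base * (1.5 if validated else 1.0)

def ranked_score(similarity: float, freshness: float, trust_score: float) -> float:
    # Final recall ranking: semantic similarity modulated by freshness and trust.
    return similarity * freshness * trust_score
```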

Auto-consolidation

Near-duplicates auto-merged. Similar items linked. Runs in the background, non-blocking.
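The merge/link split can be sketched with two cosine-similarity thresholds. The threshold values here are illustrative assumptions, not Sediment's.

```python
def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def consolidate(items, merge_at=0.95, link_at=0.80):
    # items: list of (id, embedding) pairs. Pairs above merge_at are
    # near-duplicates to merge; pairs above link_at are related items to link.
    merges, links = [], []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            sim = cosine(items[i][1], items[j][1])
            if sim >= merge_at:
                merges.append((items[i][0], items[j][0]))
            elif sim >= link_at:
                links.append((items[i][0], items[j][0]))
    return merges, links
```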

Project scoping

Automatic context isolation per project. Same-project items boosted, cross-project results flagged.

Local embeddings

all-MiniLM-L6-v2 via Candle. 384-dim vectors, no API keys, no network calls.

Type-aware chunking

Intelligent splitting for markdown, code, JSON, YAML, and plain text. Long content is chunked with individual embeddings.
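For markdown, type-aware splitting might look like the sketch below: break at headings first, then fall back to paragraph breaks for oversized sections. The structure and limit are assumptions for illustration.

```python
def chunk_markdown(text: str, max_chars: int = 500):
    # Split at markdown headings so each chunk covers one section.
    chunks, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    # Split any chunk still over the limit at blank lines (paragraph breaks).
    out = []
    for c in chunks:
        if len(c) <= max_chars:
            out.append(c)
        else:
            out.extend(p for p in c.split("\n\n") if p)
    return out
```

Each resulting chunk would then be embedded individually, as the text above describes.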

Cross-project recall

Results from other projects are surfaced and flagged with provenance metadata. Knowledge flows across your work.

Auto-tagging

Items stored without tags automatically inherit tags from similar existing items. Your memory organizes itself.
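A minimal version of tag inheritance: copy tags from the nearest stored neighbor if it is similar enough. The similarity threshold is an assumption, not Sediment's value.

```python
def inherit_tags(new_embedding, existing, min_sim=0.6):
    # existing: list of (embedding, tags) pairs. Returns the nearest
    # neighbor's tags when similarity clears the threshold, else no tags.
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)

    best_sim, best_tags = 0.0, []
    for emb, tags in existing:
        sim = cosine(new_embedding, emb)
        if sim > best_sim:
            best_sim, best_tags = sim, tags
    return best_tags if best_sim >= min_sim else []
```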

Get started

Install

brew install rendro/tap/sediment

Or install via cargo or the shell installer

Add to your MCP client

{
  "mcpServers": {
    "sediment": {
      "command": "sediment"
    }
  }
}

Works with Claude Code, Claude Desktop, Cursor, VS Code, Windsurf, JetBrains

CLI included

Manage your memory from the terminal.

$ sediment          # Start MCP server
$ sediment init     # Set up project integration
$ sediment stats    # Show database statistics
$ sediment list     # List stored items