Memory that survives the session.

CtxOne is the persistent, searchable, accountable memory layer for AI agents. Write a fact once — Claude, Cursor, and every other tool you use remembers it forever, across sessions, across branches, with full provenance.

$ curl -sSL https://raw.githubusercontent.com/ctxone/ctxone/main/install.sh | sh

BSL-1.1 → Apache 2.0 · Self-hosted · MCP native · Zero telemetry

Why context keeps dying

Every AI coding session starts cold. You re-explain the same architectural decisions, the same licensing rules, the same reasons you picked SQLite over Postgres. The model burns most of its context window re-learning what you already told it yesterday, and by afternoon it's still confidently wrong about basic facts.

CtxOne fixes this by giving your AI tools a real memory layer — stored locally, versioned like git, attributable like git blame, and cheap to query because it returns only the facts relevant to this question.

Read the full argument →

Three verbs. That's the whole product.

CtxOne boils down to three things an AI tool does to memory. Everything else — branches, priming, session stats, provenance — falls out of these.

Remember

Write a fact once. It survives sessions, branches, and tool switches. Importance maps to a confidence score so high-value facts aren't drowned out by chatter.

$ ctx remember "BSL-1.1 for all new repos" \
    --importance high --context licensing

# from an LLM:
remember(fact="BSL-1.1 for all new repos",
         importance="high", context="licensing")
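One way to picture the importance-to-confidence mapping is as a ranking key at recall time: higher-importance facts surface first, so chatter sinks. A minimal sketch; the label-to-score values below are assumptions for illustration, not CtxOne's actual internals:

```python
# Hypothetical mapping from importance labels to confidence scores.
SCORE = {"low": 0.25, "medium": 0.5, "high": 0.9}

facts = [
    ("ran tests, all green", "low"),
    ("BSL-1.1 for all new repos", "high"),
    ("renamed a local variable", "low"),
]

# Rank facts by confidence so high-value facts come back first.
ranked = sorted(facts, key=lambda f: SCORE[f[1]], reverse=True)
print(ranked[0][0])  # BSL-1.1 for all new repos
```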

Recall

Ask a topic. Get pinned context plus relevant facts, capped at a token budget. Every response includes the live savings ratio so the 60× claim is provable in real time.

$ ctx recall "licensing" --budget 1500

# from an LLM:
recall(topic="licensing", budget=1500)

Blame

Every commit carries an agent ID, a timestamp, an intent, and optional reasoning. ctx blame traces a fact back to the tool, user, and session that wrote it.

$ ctx blame /memory/legal/bsl

# who wrote it, when, and why:
agent:  claude-code
when:   2026-03-14T09:17:23Z
intent: Observe
reason: user preference

The 60× number, provable in real time

Every recall response includes a _ctxone_stats field showing exactly how many tokens were sent versus how many would have been sent if the model had seen the full memory graph. This isn't marketing; it's the live delta the Hub computes on every request.
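Reading the stats field is plain JSON access. A minimal sketch, assuming a response carrying sent and full-graph token counters; the field names inside _ctxone_stats are illustrative, not a documented schema:

```python
# Hypothetical recall response shape; only _ctxone_stats is named
# by the docs, the counter fields inside it are assumptions.
response = {
    "facts": [
        {"fact": "BSL-1.1 for all new repos", "context": "licensing"},
    ],
    "_ctxone_stats": {
        "tokens_sent": 200,          # what this recall actually returned
        "tokens_full_graph": 12000,  # what a flat dump would have cost
    },
}

stats = response["_ctxone_stats"]
ratio = stats["tokens_full_graph"] / stats["tokens_sent"]
print(f"savings ratio: {ratio:.0f}x")  # savings ratio: 60x
```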

Flat memory (what the model sees without CtxOne):
    12,000 tokens, every session, every time

CtxOne recall (what the model sees with CtxOne):
    200 tokens, topic-matched + pinned

Savings ratio: 60×

Shown: a mature graph of a few hundred facts, recalled for a specific topic. On a fresh graph the ratio is close to 1× — it grows with the graph, because that's how selective retrieval works. See the math →
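The growth claim can be made concrete with toy numbers: a flat dump costs every fact on every request, while a budget-capped recall stays roughly constant. Both constants below are assumptions for illustration, not CtxOne defaults:

```python
TOKENS_PER_FACT = 40   # assumed average fact size
RECALL_BUDGET = 200    # assumed token cap on a topic-matched recall

def savings_ratio(n_facts: int) -> float:
    flat = n_facts * TOKENS_PER_FACT   # cost of dumping the whole graph
    sent = min(flat, RECALL_BUDGET)    # recall can't exceed the graph
    return flat / sent

print(savings_ratio(5))    # fresh graph: 1.0
print(savings_ratio(300))  # mature graph: 60.0
```

With five facts the whole graph fits inside the budget, so the ratio is 1×; at a few hundred facts the flat cost keeps climbing while the recall stays capped, which is where the 60× figure comes from.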

Works with the tools you already use

CtxOne exposes an MCP server for AI coding tools, native plugins for chat UIs, and direct client libraries for everything else. ctx init auto-detects your tools and wires them up in one command.
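For tools that read a standard MCP server config, the hand-written equivalent of what ctx init automates might look like the fragment below. The `mcpServers` shape is the common MCP client convention; the `ctx mcp` subcommand name is an assumption, not documented CtxOne CLI:

```json
{
  "mcpServers": {
    "ctxone": {
      "command": "ctx",
      "args": ["mcp"]
    }
  }
}
```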

Install it. Use it. See the ratio climb.

There is no signup. No server to rent. No SaaS bill. Just a binary that runs on your laptop and a graph file under ~/.ctxone.

$ curl -sSL https://raw.githubusercontent.com/ctxone/ctxone/main/install.sh | sh

Source-available under BSL-1.1. Every release converts to Apache-2.0 four years after it ships — full story.