Between the Map and the Memory

Tom & Jakub · Fri, 20 Feb 2026

Why enterprise agent infrastructure needs both code intelligence and agent memory — and why that combination is new.

In our last post, we looked at what OpenAI's harness experiment proved: that context engineering is real infrastructure work, not a prompting exercise. That one team, one repo, five months of careful curation was enough to demonstrate the concept.

Now the question enterprises are actually facing is different. Not "can we build a harness?" but "how do we build one that works across hundreds of repos, thousands of agents, and an institutional knowledge base that has accumulated for decades?"

To answer that, it helps to understand two technical worlds that have been evolving in parallel — and why Ctx| sits between them.


The map and the memory

There are two types of tools worth understanding as reference points: code intelligence tools like Sourcegraph and knowledge graph platforms like Cognee. Together they define the shape of the gap we're filling.

Sourcegraph is code intelligence. It gives humans and AI agents a structural map of a codebase: search across millions of lines, navigate cross-repo dependencies, understand what exists and where. Their tagline of "code understanding for humans and agents" is accurate. If you want to know where every call to a deprecated API lives, or how a module boundary is drawn across 50 repositories, Sourcegraph is the tool. It's fundamentally a read layer over your code: precise, structural, enormously valuable.

What it isn't is a learning system. It doesn't watch what agents do and update its model of the world accordingly. It doesn't govern which instructions reach which agents. It doesn't know that last Tuesday three agents independently made the same architectural mistake because none of them had access to the ADR that prohibited it.

Cognee is agent memory. Where Sourcegraph maps what is, Cognee remembers what happened. It ingests data — documents, logs, session traces — and builds a persistent knowledge graph with vectors and relationships, so agents can query it semantically across sessions. It's the infrastructure that stops agents from being goldfish: every session isolated, every lesson lost.

What it isn't is code-aware. Cognee is domain-agnostic by design: it works for credit card portfolios, policy documents, Hacker News threads. That generality is a strength for their market. But for software engineering specifically, you need a system that understands the ontology of code — repos, modules, functions, types, dependencies, ownership, domains — not just "documents in a graph." Additionally, Cognee has no governance model. No concept of instruction hierarchies, no promotion and demotion of patterns, no way to say "this memory applies to agents working in the payments domain but not the auth domain."



The problem neither solves

Both tools were built with a primary mental model of one agent, one session, or one team navigating one codebase. That's the world of 2023 and most of 2024–2025.

The world we're rapidly entering is different. Organisations aren't deploying one agent. They're deploying hundreds, soon thousands — running concurrently, touching multiple repos, crossing domain boundaries, making decisions that compound on each other. When one agent introduces a pattern, thirty others may replicate it before anyone realises it contradicts a decision the architecture team made six months ago.

This is the scenario neither code intelligence nor agent memory was designed for. Sourcegraph can tell you the code is inconsistent after the fact. Cognee can store what each agent did in its session. But neither prevents the divergence from happening, neither governs what context reaches which agents, and neither learns across the fleet to make every subsequent agent smarter from the mistakes and insights of all the others.

The OpenAI team named this problem precisely: "When the agent struggles, we treat it as a signal: identify what is missing — tools, guardrails, documentation — and feed it back into the repository." That's a manual feedback loop run by a small elite team on a single product. At org scale, you need that loop to close automatically.


What Ctx| is

Ctx| is the knowledge graph infrastructure layer that sits between your agents and your entire software estate — not one repo, not one session, but the full organisational picture.

It takes what's valuable from both worlds and builds the layer companies actually need:

From the code intelligence world: deep, structural understanding of your software estate. Repos roll up to domains, domains roll up to org. Typed entities — functions, modules, services, owners, ADRs, dependencies. Scalable code search. Traversable relationships. The kind of graph that lets an agent ask "what is the blast radius if I change this interface?" and get a real answer.
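As a toy illustration — this is not Ctx|'s API, and the module names and `blast_radius` helper are entirely hypothetical — a blast-radius question is, at its core, a transitive-dependents traversal over the code graph:

```python
from collections import defaultdict, deque

# Hypothetical dependency edges: (api, consumer) means "consumer depends on api".
DEPENDENTS = defaultdict(set)
for api, consumer in [
    ("payments/api", "checkout/service"),
    ("payments/api", "billing/worker"),
    ("checkout/service", "web/frontend"),
]:
    DEPENDENTS[api].add(consumer)

def blast_radius(start: str) -> set[str]:
    """All transitive dependents of `start` — everything that could
    break if its interface changes. Simple breadth-first traversal."""
    seen, queue = set(), deque([start])
    while queue:
        for dependent in DEPENDENTS[queue.popleft()]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(blast_radius("payments/api")))
# → ['billing/worker', 'checkout/service', 'web/frontend']
```

A real code graph would of course carry typed edges (imports, calls, ownership) rather than a bare adjacency set, but the shape of the answer — a reachable set over explicit relationships — is the same.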

From the agent memory world: persistent, self-learning knowledge that accumulates as agents work. Memories that survive sessions. Patterns that generalise across the fleet. A feedback loop where every agent interaction enriches the graph, and the graph makes every subsequent agent interaction better.

And the layer neither provides: governance. An instruction hierarchy — AGENTS.md, skills, MCPs — versioned in git, reviewed in PRs, with promotion and demotion logic so the right context reaches the right agent at the right time. Visibility into what agents can do and what they're allowed to do. Control mechanisms that work at the same scale as the agents themselves.
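To make the scoping idea concrete, here is a minimal sketch — with entirely hypothetical names, not Ctx|'s implementation — of the core rule: only memories that have been promoted, and whose scope covers the agent's domain, ever reach an agent's context:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    scope: str    # domain the memory applies to; "*" means org-wide
    status: str   # lifecycle: "candidate" -> "promoted" -> possibly "demoted"

def context_for(agent_domain: str, memories: list[Memory]) -> list[str]:
    """Filter the shared memory pool down to what this agent should see."""
    return [
        m.text for m in memories
        if m.status == "promoted" and m.scope in ("*", agent_domain)
    ]

memories = [
    Memory("Use the idempotency-key header on all charge calls", "payments", "promoted"),
    Memory("Never log raw tokens", "auth", "promoted"),
    Memory("Prefer UUIDv7 for new IDs", "*", "candidate"),  # not yet reviewed
]

print(context_for("payments", memories))
# → ['Use the idempotency-key header on all charge calls']
```

The point of versioning this in git is that promotion and demotion become reviewable diffs: a candidate memory graduates to fleet-wide context through a PR, not through silent accumulation.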

One MCP connection. Every agent — Cursor, Claude Code, Copilot, custom workspaces — connected to the same graph, the same governance, the same accumulated institutional knowledge.


Why this is especially important for enterprises

Sourcegraph and Cognee both serve teams with relatively contained contexts. One product, one platform, one department. The enterprise challenge is categorically different.

Enterprises have decades of accumulated decisions spread across Confluence pages, SharePoint, Slack threads, Linear tickets, the heads of people who left two years ago. They have regulatory boundaries that agents cannot cross without creating compliance risk. They have platform teams whose job is to set standards that product teams are supposed to follow — and currently have no way to enforce those standards when agents are doing the building.

Birgitta Böckeler asked the right question in her analysis of the OpenAI experiment: "Will harnesses become the new service templates?" Our answer is yes — but only if there's infrastructure to govern them at scale. Service templates get forked and drift. Harnesses will too, unless the knowledge graph underneath them is shared, governed, and self-correcting.

That's the infrastructure Ctx| provides. Not a harness for one team's one product. The infrastructure layer that makes harnesses possible at org scale.


The combinatorial problem

Here's the way to think about why this matters now, not in two years.

Every enterprise already has a code intelligence problem (Sourcegraph exists because navigating large codebases is genuinely hard) and an agent memory problem (Cognee exists because stateless agents are genuinely limited). Both problems existed before autonomous agents went mainstream.

But those two problems were manageable when humans were doing the work. Humans remember context across sessions. Humans route themselves to the right knowledge source. Humans notice when a pattern contradicts an earlier decision.

Agents don't do any of those things natively. And when you have thousands of agents running concurrently, the gaps compound multiplicatively. Every missing piece of context, every repeated mistake, every pattern replicated across thirty codebases before anyone notices — these aren't individual errors anymore. They're systemic failures at machine speed.

Ctx| is built for that world. The map and the memory, combined, with governance — delivered through a single MCP to every agent in your organisation.


Ctx| is being built by Tom & Jakub. It has an open-source core, so you can deploy within your own infrastructure or use our managed hosting.

Join the waitlist