Blog

Latest announcements and updates

Context is the bottleneck. Not the model.

Frontier models and bigger context windows make agents more capable — but they don't make them more knowledgeable about how your organisation builds things. The limit is what context exists, whether it's right, and whether it travels.

Infinite context is theoretically possible. That's just the start.

Recursive Language Models push token-level working memory to millions of tokens — and make it obvious why bigger context windows are necessary but not sufficient for real engineering organisations.

Proactive context and memory for AI agents

A research paper gives the field a shared vocabulary for agent memory — forms, functions, dynamics. Where the engineering stands, where it falls short, and what we think must be built next.

AGENTS.md is the wrong conversation

A paper dropped this week that tested AGENTS.md files across multiple models and real GitHub issues. Context files reduced task success rates and inflated inference costs. The debate is useful — but it's pointing at the wrong solution.

Agent memory at scale

A primer on the memory types agents depend on — and why the difference matters when you have thousands of them running at once.

Between the Map and the Memory

Why enterprise agent infrastructure needs both code intelligence and agent memory — and why that combination is new.

Systemising an agent-agnostic harness

What OpenAI's agent experiment teaches us about the real infrastructure problem, and why one repo is just the start.

Hello from ctx|

Why we're building knowledge graph infrastructure for the agentic era, and what we learned from Appear.