Context engineering
A growing field focused on curating and structuring context for humans and AI. Here’s what it is, why you’re seeing it more, and how we approach it.
What is context engineering?
Context engineering is the practice of treating context as a first-class concern: designing, curating, and maintaining the knowledge and structure that both people and AI use to work effectively. Instead of ad-hoc prompts or scattered docs, context is organised in a way that’s reusable, traceable, and versioned — so your team and your tools share the same source of truth.
That means knowledge stores, clear doc hierarchies, runbooks, templates, and explicit links between requirements, decisions, and outputs. The goal is context that’s easy to find, update, and feed into AI (skills, MCP, RAG, or prompts) without reinventing it each time.
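The "explicit links" idea can be checked mechanically. Below is a minimal sketch, assuming a hypothetical layout where requirement docs live in `docs/requirements/` and trace to decision records in `docs/decisions/` via ordinary markdown links; the paths and link convention are illustrative, not prescribed.

```python
from pathlib import Path
import re

# Hypothetical convention: each requirement doc links to at least one
# decision record, e.g. [ADR-003](../decisions/adr-003.md).
LINK = re.compile(r"\]\((\.\./decisions/[^)]+\.md)\)")

def untraced_requirements(docs_root: str) -> list[str]:
    """Return requirement docs that link to no decision record."""
    missing = []
    for doc in sorted(Path(docs_root, "requirements").glob("*.md")):
        if not LINK.search(doc.read_text(encoding="utf-8")):
            missing.append(doc.name)
    return missing
```

A check like this can run in CI, so traceability is enforced rather than hoped for.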
Why the term is taking off
As more teams use AI assistants, agents, and agentic workflows, “context” has become the bottleneck: what the model sees, what rules it follows, and what knowledge it can use. The industry is converging on skills, MCP, plugins, and RAG — but without a discipline for how to structure and maintain that context, teams end up with duplication, drift, and one-off prompt engineering.
Context engineering names that discipline. It’s still early: the term is used more in practice than in formal definitions. What’s common is the idea that context should be curated, structured, and reusable — and that it’s worth investing in the same way you invest in code or documentation.
How we practice it
We treat context as code: knowledge and process live in the repo, versioned and installable. We use a numbered doc hierarchy (concept of operations → requirements → architecture → components) so there’s a clear order and traceability. We maintain a knowledge store — structured content consumed by answers, runbooks, and AI tools alike — and we package that into packs you can install into your own codebase.
Runbooks, templates, and central context (e.g. user problems, goals) keep everything aligned. The same files feed both human workflows and AI: Cursor, Claude, or MCP read from the same store we use. No separate “AI context” that drifts from the docs.
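"The same files feed both human workflows and AI" can be as simple as assembling a prompt block from the store. A minimal sketch, assuming a hypothetical flat store of markdown files (the names and layout are illustrative):

```python
from pathlib import Path

def assemble_context(store: str, names: list[str]) -> str:
    """Concatenate named docs from the knowledge store into one prompt block.

    The same markdown a human reads is what the assistant receives,
    so there is no separate "AI context" to drift from the docs.
    """
    parts = []
    for name in names:
        doc = Path(store, f"{name}.md")
        parts.append(f"## {name}\n{doc.read_text(encoding='utf-8').strip()}")
    return "\n\n".join(parts)
```

A tool-specific rules file (for Cursor, Claude, or an MCP server) can call the same function, keeping every consumer on one source of truth.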
Tools and ecosystem
Context engineering doesn’t depend on one tool. Repos, markdown, and a clear folder structure are enough to start. The ecosystem around context is growing; these categories help map it.
- Repo and markdown. Your docs in git are the foundation. Note-taking and knowledge apps that use local markdown and linking (e.g. Obsidian) let you point a vault at your knowledge store and get a graph view and backlinks while keeping the same files in git. Some tools index git markdown and serve it to AI via MCP, so assistants get fast, version-pinned documentation with no cloud dependency. Others use markdown-in-git with simple commands (/ingest, /ask) and multi-IDE support.
- Shareable context. Products that package markdown (or instructions) into shareable links work across many AI clients — one link, many tools. Updates propagate so everyone sees the latest version. Useful when you want to brief an AI quickly without per-tool setup.
- Context engines and enterprise. Larger platforms position “context engineering” or “context engine” as a product: they handle retrieval, memory, reranking, and assembly so agents get the right data from enterprise search or unified data stores. These are typically platform-centric (your context runs inside their stack) and suit teams already on that platform.
- MCP and memory. The Model Context Protocol (MCP) is an open standard for connecting AI apps to tools and data — “USB-C for AI.” Context can be exposed as MCP resources so any compliant client (Cursor, Claude, etc.) can read from your store. Separate from that, memory layers (e.g. persistent knowledge graphs or session memory) help agents remember across turns; they complement canonical context (docs, runbooks) rather than replace it.
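The "context as MCP resources" idea boils down to giving each doc a stable, addressable URI. This is an illustrative stdlib-only sketch of that mapping, not the real MCP SDK; a real server would register these through an MCP library, and the `docs://` scheme and store path here are assumptions.

```python
from pathlib import Path

def list_resources(store: str) -> dict[str, Path]:
    """Expose each markdown doc in the store under a stable docs:// URI."""
    return {
        f"docs://{p.relative_to(store).as_posix()}": p
        for p in sorted(Path(store).rglob("*.md"))
    }

def read_resource(resources: dict[str, Path], uri: str) -> str:
    """Resolve a URI back to the doc's current contents in git."""
    return resources[uri].read_text(encoding="utf-8")
```

Because the URIs mirror repo paths, any compliant client reads the same versioned files the team edits, rather than a copied-out snapshot.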
We practice context engineering with a strong bias toward independent context: repo-owned, versioned, and consumable by multiple workflows and agents via rules, skills, MCP, or RAG. For more on who’s in the ecosystem, see Ecosystem & roadmap and Movers and shakers.