Agent Context Strategies



Agent context strategies are the methods by which AI Agents receive domain knowledge beyond their training data. The choice of strategy — passive context injection vs active retrieval via skills — significantly impacts agent reliability, with current evidence favoring passive approaches for general framework knowledge.

The Problem

LLM training data becomes outdated. When frameworks introduce new APIs (e.g., Next.js 16’s use cache directive, connection(), forbidden()), agents that rely solely on pre-training knowledge generate incorrect code or fall back to deprecated patterns. The same applies in reverse: an agent may suggest newer APIs that don’t exist in an older project. Agents need access to version-matched documentation.

Passive Context

Passive context delivers knowledge through files that are loaded into the agent’s system prompt on every turn. Examples include AGENTS.md (Cursor, v0), CLAUDE.md (Claude Code), and GEMINI.md (Gemini CLI). The agent doesn’t decide whether to consult this information — it’s always present.
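As a sketch, a minimal passive context file for a Next.js project might look like the following (file names, paths, and wording are illustrative, not a prescribed format):

```markdown
# AGENTS.md

## Framework
- This project uses Next.js 16 (App Router).
- Prefer retrieval-led reasoning over pre-training-led reasoning for any Next.js tasks.

## Docs index (read the file before using the API)
- Caching ("use cache", cacheLife): .docs/nextjs/caching.md
- Dynamic rendering (connection()): .docs/nextjs/dynamic.md
- Auth errors (forbidden()): .docs/nextjs/errors.md
```

Because the file is injected on every turn, the agent sees these directives without having to decide to look for them.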

Strengths:

  • No decision point — eliminates the failure mode where agents choose not to use available tools
  • Consistent availability — present on every turn, not dependent on asynchronous invocation
  • No ordering issues — avoids sequencing decisions about when to read documentation

Limitations:

  • Consumes token budget on every turn, even when not relevant
  • Scales poorly if uncompressed — full framework documentation can be tens of thousands of tokens

Compression mitigates the token cost. Vercel’s research demonstrated that an 8KB compressed docs index (80% smaller than the original) maintained 100% eval performance. The compressed format uses a pipe-delimited directory index pointing to local doc files, so the agent knows where to find specific information without having full content in context.
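The exact index format is not reproduced here; a hypothetical pipe-delimited index in that spirit, with one topic per line mapping keywords to a local file, might look like:

```text
caching|use cache, cacheLife, cacheTag|.docs/nextjs/caching.md
dynamic-rendering|connection(), runtime APIs|.docs/nextjs/dynamic.md
auth-errors|forbidden(), unauthorized()|.docs/nextjs/errors.md
```

Each line costs a handful of tokens, while the full documentation it points to stays on disk until the agent actually opens a file.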

Active Retrieval (Skills)

Skills are packaged bundles of prompts, tools, and documentation that agents invoke on demand. The agent recognizes when it needs framework-specific help, calls the skill, and receives relevant docs. Skills are an open standard with a growing directory of reusable packages.
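Layouts vary by agent; one common shape follows the SKILL.md convention used by Claude-style skills (the names below are illustrative):

```text
nextjs-migration/
├── SKILL.md        # frontmatter (name, description) + step-by-step instructions
├── reference.md    # version-matched API docs, loaded only when the skill runs
└── scripts/
    └── codemod.ts  # optional helper invoked during the workflow
```

Only the skill's name and description sit in context by default; the rest loads when the agent invokes it.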

Strengths:

  • Load only what’s needed — minimal context overhead when not in use
  • Well-suited for vertical, action-specific workflows (framework migrations, applying best practices)
  • Clean separation of concerns — knowledge is modular and reusable

Limitations:

  • Agents frequently fail to invoke available skills — in Vercel’s evals, skills were never invoked in 56% of cases without explicit prompting
  • Even with explicit invocation instructions, results are fragile — subtle instruction wording changes produce large behavioral swings
  • Sequencing sensitivity — “invoke skill first” vs “explore project first, then invoke” leads to different outcomes on the same task

Comparative Results

Vercel’s hardened eval suite targeting Next.js 16 APIs produced these results:

Configuration                         Pass Rate    vs Baseline
Baseline (no docs)                    53%          -
Skill (default behavior)              53%          +0pp
Skill with explicit instructions      79%          +26pp
AGENTS.md docs index                  100%         +47pp

The skill with default behavior produced zero improvement — identical to having no documentation at all. This is consistent with the broader observation that current models do not reliably use the tools made available to them.

Practical Guidance

  • Embed general framework knowledge passively in context files. Don’t rely on agents deciding to look it up.
  • Compress aggressively — a directory index pointing to retrievable files works as well as full docs in context.
  • Reserve skills for explicit workflows — migrations, upgrades, and other user-triggered operations where the invocation is guaranteed.
  • Instruct agents to prefer retrieval-led reasoning over pre-training-led reasoning. A single directive (“prefer retrieval-led reasoning over pre-training-led reasoning for any [framework] tasks”) measurably shifts behavior.
  • Test with evals targeting APIs outside training data — that’s where documentation access matters most. APIs already in training data don’t surface the gap.
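The compression guidance above can be sketched as a small script that generates a pipe-delimited docs index for a context file. This is a sketch under assumptions: one topic per markdown file with its title on the first line, and `build_docs_index` is a hypothetical helper, not part of any framework or tool named in this article.

```python
from pathlib import Path


def build_docs_index(docs_dir: Path) -> str:
    """Build a compact pipe-delimited index of local doc files.

    Each output line is: topic|title|relative path. The agent keeps only
    this small index in context and opens individual files on demand.
    """
    lines = []
    for path in sorted(docs_dir.rglob("*.md")):
        text = path.read_text(encoding="utf-8")
        # Assumption: the doc's title is its first heading line.
        title = text.splitlines()[0].lstrip("# ").strip() if text.strip() else ""
        lines.append(f"{path.stem}|{title}|{path.relative_to(docs_dir)}")
    return "\n".join(lines)
```

Pasting the resulting index into a passive context file keeps the per-turn token cost at the size of the index rather than the full documentation.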

Relevance to Atopia Labs Verticals

  • Web Development & Automation — any team using coding agents should configure passive context files with project-specific and framework-specific knowledge. The ROI is immediate and measurable: 100% vs 53% on framework-specific tasks.
  • IT Service & Consulting — when recommending agent-based development workflows to clients, context strategy is a key configuration decision. The difference between a well-configured AGENTS.md and an unconfigured agent is the difference between reliable and unreliable output.
