Context Engineering: The Next Frontier in Generative AI (Part 2)

Manu Mulaveesala | Monday, November 10, 2025

Large Language Models (LLMs) like Claude and Copilot are transforming the development lifecycle. They are no longer just auto-completing code; they're generating features, debugging complex issues, and even performing code reviews. But there’s a secret to unlocking this power, and it goes far beyond a simple prompt: Context Engineering. 

 In Part 1 of this series, we explored the fundamentals of guiding AI with context. 

Now, in Part 2, we’re diving into the specific patterns that development teams are using right now. These are the techniques that turn a simple AI assistant into a deeply integrated, highly effective development partner. The core challenge is this: how do we give an AI tool enough information to be helpful without drowning its context window in noise? The answer is not to dump everything into the prompt, but to engineer structured, layered, and timely context. 

Let's look at the patterns that make AI a true extension of your engineering team, starting with how to teach the AI what to build from a robust set of requirements.

Context Engineering Patterns for Developers

Spec-Driven Development: Context from Requirements 

 Spec-driven development (SDD) is gaining traction because it aligns well with how AI tools operate, especially inside large codebases. Instead of describing what we want in loose natural language, we provide structured specifications that serve as both documentation and AI context. 

 The “spec” pattern:

  1. Write a spec file (Markdown, YAML, or specialized format) describing the feature 
  2. Include technical constraints, dependencies, acceptance criteria 
  3. Point your AI tool at the spec
  4. Let it generate implementation that matches the specification 

 This works because specs provide dense, structured context. A good spec tells the AI not just what to build, but how it fits into your system, what patterns to follow, and what to avoid.
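As a sketch, a minimal spec file might look like the following. Every name, constraint, and criterion here is hypothetical, included only to show the shape of the pattern:

```markdown
# Feature: Password Reset (spec)

## Goal
Allow users to reset their password via a time-limited email link.

## Technical constraints
- Reuse the existing `EmailService` abstraction
- Tokens expire after 30 minutes and are single use
- No new third-party dependencies

## Dependencies
- `auth` module (token issuing), `users` table

## Acceptance criteria
- [ ] POST /auth/reset-request sends an email for known addresses
- [ ] Invalid or expired tokens return 400, never 500
- [ ] Unit tests cover token expiry and token reuse
```

Pointed at a file like this, an AI tool has the goal, the boundaries, and a definition of done in one dense artifact.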

Spec-Driven Development Workflow

Workspace-First Context: Teaching AI Your Project

Instead of explaining your architecture every time, create workspace context files that AI tools can reference. Think of these as "onboarding docs for AI developers." 

 Create an .ai-context/ directory with: 

  • architecture.md: High-level system design, key patterns, boundaries 
  • conventions.md: Coding standards, naming patterns, common idioms 
  • dependencies.md: Key libraries, why you chose them, how you use them 
  • patterns.md: Common solutions to recurring problems in your codebase 

Modern tools like Claude Code and Cursor can read these files as foundational context. When you ask them to add a feature, they reference your conventions automatically. 
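For example, a trimmed `conventions.md` might read as follows. The specific rules are illustrative, not prescriptive; the point is that conventions are stated concretely enough for a tool to apply them:

```markdown
# Coding Conventions (.ai-context/conventions.md)

- TypeScript strict mode; no `any` in new code
- React components: function components, one per file, PascalCase filenames
- Errors: throw typed `AppError` subclasses, never bare strings
- Tests live next to source as `*.test.ts`; reuse the existing test helpers
```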

AI tools read context from, in layers:

  • .ai-context/ → base understanding of the project
  • specs/ → feature requirements
  • docs/adr/ → historical decisions
  • src/ → actual implementation

Progressive Context: Start Small, Expand as Needed 

 One of the biggest context engineering mistakes is trying to give the AI agent everything upfront. This floods the context window with noise, making it hard for the model to find the truly important signal. 

 Better approach: start with minimal context and let the AI ask for more. The AI progressively builds context by exploring your codebase instead of having everything dumped in advance. This also matches how human developers naturally work.

Progressive Context Assembly

Context grows organically based on need, not dumped upfront.

Contextual Boundaries: What to Include, What to Ignore

Not all code is equally relevant. Good context engineering means drawing explicit boundaries and assigning asymmetric importance, so AI tools know which parts of the codebase matter for the task at hand. 

 Example: Use project structure to signal importance

  • Core business logic: src/core/ - always relevant
  • Feature implementations: src/features/ - relevant per-feature
  • Generated code: src/generated/ - skip for context (auto-generated)
  • Config files: Include when touching infrastructure, skip for feature work
  • Tests: Include when writing code, reference for patterns 

Tips for Context Constraints: 

  1. Quality > Quantity: Less context, more signal
  2. Create a file to specify context exclusion (some tools like Copilot support this)
  3. Provide direct references to key folders and files when possible 

 This teaches AI tools to focus on what matters and skip noise. Claude Code in particular respects workspace configuration to avoid polluting context with irrelevant files.

Starting a New Feature: Context Setup

Before you write any code, set up context that makes AI tools more effective:

  • Create a feature spec in `specs/features/` describing what you're building
  • Update workspace context if this introduces new patterns or conventions
  • Identify reference implementations - similar features to use as examples
  • Define boundaries - what files should/shouldn't be modified 

 Then, when you start coding with an agentic tool, point it at the latest spec and let it explore the codebase to build its own localized context.
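The setup steps above can be scripted. This is a minimal sketch assuming the `specs/features/` layout described earlier; the feature name and section headings are placeholders to fill in:

```shell
#!/bin/sh
# Scaffold context for a new feature (names are illustrative)
FEATURE="password-reset"

mkdir -p specs/features .ai-context

# Stub a spec file the agent can be pointed at
cat > "specs/features/${FEATURE}.md" <<'EOF'
# Feature: password-reset

## Goal

## Technical constraints

## Acceptance criteria
EOF

echo "Spec created at specs/features/${FEATURE}.md"
```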

Debugging: Context from Error States

When debugging with AI assistance, context needs to shift from "how things should work" to "what's actually happening." 

Provide diagnostic context:

  • Error messages and stack traces
  • Relevant log output
  • Current vs. expected behavior
  • Recent changes (git diff)
  • Environment details (versions, config) 

Modern AI tools debug far more effectively when given the right diagnostic context. Instead of just saying "it's broken," show them the failure mode.

 Debugging Context Checklist:

  • Error message (full stack trace)
  • When it started (commit/time)
  • Environment details (node version, OS, etc.)
  • Recent changes (git diff)
  • Logs (console output, server logs)
  • Current behavior vs. expected behavior
  • Steps to reproduce
  • What you've tried already 
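Much of this checklist can be gathered automatically. This is a sketch, not a definitive tool: the output filename is arbitrary, and the guards keep it runnable even outside a git repository:

```shell
#!/bin/sh
# Collect debugging context into one file to paste into an AI tool
OUT="debug-context.md"

{
  echo "## Environment"
  uname -sm
  command -v node >/dev/null && node --version

  echo "## Recent changes"
  git diff --stat HEAD~1 2>/dev/null || echo "(no git history available)"

  echo "## Recent commits"
  git log --oneline -5 2>/dev/null || true
} > "$OUT"

echo "Wrote $OUT"
```

Append the error message, logs, and expected-vs-actual behavior by hand; those are the parts only you know.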

 Example: Good Debugging Context 

"The authentication endpoint returns 500 after commit abc123. 

Error: 'Cannot read property userId of undefined' in auth.js:42. 

This started after adding the password reset feature. 

Logs show the JWT token is missing the userId claim. 

Expected: Should extract userId from token. 

Tried: Verified token generation, it includes userId there."

Refactoring: Context from Code History

For refactoring work, historical context matters. Help AI tools understand: 

  • Why code exists in current form (git history, comments)
  • What's been tried before (check old PRs, issues)
  • What constraints exist (performance requirements, backwards compatibility)
  • What's safe to change vs. frozen (legacy integrations) 

 For example, Claude Code and GitHub Copilot can read git history to understand code evolution. Use this by referencing commits or branches that show previous refactoring attempts.
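Standard git commands surface the history you can then reference in a prompt. The file path and commit hash below are placeholders, and the guards keep the snippet runnable outside a real repository:

```shell
# Evolution of a single file, including renames
git log --follow --oneline -- src/core/auth.js 2>/dev/null || true

# Who last touched each line, and in which commit
git blame src/core/auth.js 2>/dev/null || true

# Full diff of a past refactoring attempt
git show abc123 2>/dev/null || true
```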

Code Review: Context from Standards

When using AI to help with code reviews, context also comes from your team's standards:

  • Coding conventions (formatting, linting, naming, structure, best practices)
  • Security requirements (input validation, auth patterns)
  • Performance budgets (bundle size, render time)
  • Accessibility standards (ARIA, keyboard nav) 

Create a `code-review-checklist.md` that AI tools can reference when reviewing PRs. This gives them the same lens your human reviewers use.
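A starting point for such a checklist might look like this; the items are generic examples to adapt to your team's actual standards:

```markdown
# code-review-checklist.md (example starting point)

## Security
- [ ] All user input validated and sanitized
- [ ] Auth checks on every new endpoint

## Performance
- [ ] No N+1 queries introduced
- [ ] Bundle size impact checked for frontend changes

## Accessibility
- [ ] Interactive elements are keyboard-reachable
- [ ] Images have alt text; ARIA roles where needed

## Conventions
- [ ] Naming and structure match team conventions
- [ ] Tests added or updated
```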

Wrapping Up

We’ve moved far beyond the initial days of "Prompt Engineering," where the goal was to write one perfect, magical sentence. 

Today, the frontier is Context Engineering, which treats the AI’s input not as a single command, but as a dynamic, layered, and architectural component of the project. By adopting patterns like Spec-Driven Development, creating a comprehensive .ai-context/ workspace for foundational knowledge, and leveraging diagnostic context for debugging, you’re not just using AI; you’re integrating it as an essential, knowledgeable member of your team. 

The future of software development involves structuring your project not only for human readability but also for AI comprehension. This deliberate effort to manage what the AI sees, when it sees it, and what it should focus on is what separates the effective AI-augmented developer from the rest. 

In the next and final part of this series, we will focus on one of the biggest pitfalls: context engineering mistakes. We'll cover how to avoid overwhelming the AI, what to exclude, and how to maintain quality over quantity to ensure your AI agent delivers high-signal, high-quality code every time.


Ascendient Learning, part of Accenture, offers private, customized Generative AI training for teams across your organization, from leaders to developers to staff at every level. Contact us to speak with one of our AI learning specialists.

Foundations of Prompt Engineering
Applied Context Engineering for Agentic AI