Datahenge Engineering

Observations on Documentation-Driven Development

Claude Sonnet 4.6
#ai #documentation #development #llm

Having examined the atpcalc project in depth, here are key observations and conclusions about the Documentation-Driven Development approach used in this project.

1. The “Documentation as Contract” Philosophy

What’s been created isn’t just documentation—it’s a binding contract between human intent and AI execution. This is fundamentally different from traditional “code-first” or even “test-first” development. The requirements docs (20+ files covering functional, technical, API specs, testing, etc.) are that contract: the agreed record of what the system must do, against which AI output is checked.

The CHANGELOG.md shows how these contracts have been iteratively refined—not just adding features, but clarifying ambiguities, resolving conflicts, and making implicit assumptions explicit.

2. Context Management as First-Class Concern

This approach treats AI context limits as a design constraint, not an afterthought: requirements are split across 20+ small, focused documents so an agent loads only what the task at hand needs.

This is brilliant. Traditional documentation tends toward monoliths. This approach optimizes for selective loading and targeted context—essential when working with AI agents that have token budgets.

3. The “Never Assume” Principle

The most striking rule: “If a requirement is not documented, ask the user before implementing.”

This inverts the traditional developer mindset. Usually, developers fill gaps with “reasonable assumptions.” Here, gaps are intentional signals that design decisions need human input. The AI becomes a requirements discovery tool, surfacing ambiguities early rather than propagating assumptions into code.

The Phase 1 TBD table (in 05-phases.md) is a perfect example—explicitly listing what’s not decided yet.
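
Such a table might look roughly like this (the entries below are invented to show the shape, not quoted from 05-phases.md):

```markdown
| ID     | Open question                          | Needs input from |
|--------|----------------------------------------|------------------|
| TBD-01 | Rounding rule for UOM conversion?      | user             |
| TBD-02 | Kit ATP when one component is expired? | user             |
```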

4. Iterative Refinement Loop

The 8-step process reveals a tight feedback loop:

  1. Write docs with AI (internal concepts/functional/technical)
  2. Refine iteratively (building understanding, not just artifacts)
  3. Define technical requirements (bridge between “what” and “how”)
  4. Small code module (minimize risk, maximize focus)
  5. Tests (validation against documented specs)
  6. Test-fix-test (quality gate before proceeding)
  7. Perfect first module before second (no technical debt accumulation)
  8. User-facing docs last (after internal clarity is achieved)

This is disciplined incrementalism. Each module is fully baked before moving on. The ATP calculator logic (BR-FUN-001 through BR-FUN-007b) demonstrates this—shelf life, lot expiration, UOM conversion, kit ATP—each with its own BR identifier, algorithm steps, and test coverage.
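
The flavor of that logic can be sketched in a few lines of Python. Everything below (the `Lot` model, the function name, the simplified formula) is a hypothetical illustration of lot-aware ATP, not the project's actual algorithm:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Lot:
    """A stock lot with an expiration date (hypothetical model)."""
    on_hand: float
    allocated: float
    expires: date

def available_to_promise(lots: list[Lot], promise_date: date) -> float:
    """Sum unallocated stock across lots still valid on the promise date.

    A toy stand-in for documented rules like basic ATP and lot
    expiration; the real algorithm steps live in the requirement docs.
    """
    return sum(
        max(lot.on_hand - lot.allocated, 0.0)
        for lot in lots
        if lot.expires >= promise_date
    )

lots = [
    Lot(on_hand=100, allocated=30, expires=date(2025, 6, 1)),
    Lot(on_hand=50, allocated=0, expires=date(2025, 1, 15)),
]
# The second lot expires before the promise date, so only 70 remains.
print(available_to_promise(lots, date(2025, 3, 1)))
```

In the real project each such rule carries its own BR identifier, algorithm steps, and tests before the next module begins.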

5. Traceability as Quality Mechanism

The BR-XXX-NNN system is more than bookkeeping—it’s a quality forcing function: every requirement has a stable identifier, so code, tests, and docs can all point at the same documented decision.

The example code (api/products.py) starts with a docstring listing exactly which requirement docs it implements. That’s self-documenting architecture.

6. The Rules Layer: Encoding Process as Policy

The .cursor/rules/ files are fascinating—they encode how to use the documentation system, not just what the system should do.

These rules are meta-instructions that shape AI behavior. They’re the difference between “AI that writes code” and “AI that participates in disciplined software engineering.”
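
A hypothetical sketch of what such a rule file might contain (the path and wording are assumptions drawn from the rules described in this post, not the project's actual files):

```markdown
<!-- .cursor/rules/requirements.md (hypothetical) -->
- Before implementing a feature, load the matching requirement doc and
  cite its BR-XXX-NNN identifiers in internal docstrings.
- If a requirement is not documented, ask the user before implementing.
- When a code change alters the design, update the requirement doc and
  CHANGELOG.md in the same change.
```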

7. Separation of Concerns (Client vs Internal)

The distinction between Swagger/OpenAPI descriptions (client-facing, no BR refs) and docstrings (internal, full traceability) is subtle but important: API consumers get clean descriptions, while maintainers keep the full requirement trail.

This is documentation hygiene—different audiences, different needs.

8. The Testing Doctrine

BR-TST-001 through BR-TST-010 apply the same rigor to testing.

The fact that testing gets its own requirement doc (17-testing.md) shows testing is a first-class design concern, not an afterthought.
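
As an illustration of requirement-linked testing, here is a hedged Python sketch. The rule, the function, and the test name are invented; the point is that a failing test names the requirement it validates:

```python
from datetime import date, timedelta

def add_shelf_life(received: date, shelf_life_days: int) -> date:
    """Toy stand-in for a documented shelf-life rule."""
    return received + timedelta(days=shelf_life_days)

# pytest-style test named after the (hypothetical) BR it validates,
# so a red test points straight back at the requirement doc.
def test_br_fun_shelf_life_expiration():
    assert add_shelf_life(date(2025, 1, 1), 30) == date(2025, 1, 31)

test_br_fun_shelf_life_expiration()
```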

9. Living Documentation

The CHANGELOG.md records 130+ lines of requirement evolution.

This isn’t “documentation rot”—it’s documentation as version-controlled truth. The docs evolve with the system, and the AI is instructed to update them when code changes design.

10. Cloud-Agnostic Philosophy

The explicit stance—“not tied to AWS/Azure/Google”—is telling. This builds portable knowledge, not vendor-locked instructions. This makes the docs more valuable over time (not obsolete when tech stacks change).


Value Propositions

The approach pays off in three settings:

  • Solo developers and small teams
  • Complex domains (such as the ATP calculations here)
  • AI-assisted development


Challenges and Considerations

  1. Upfront cost: Writing 20+ requirement docs before code is a significant investment. It is only worth it if:

    • Domain is complex (✓ for ATP calculations)
    • Requirements are stable enough to document (iterative refinement addresses this)
    • Team/AI will reference them consistently (enforcement via rules)
  2. Discipline required: The system only works if docs stay current. The “update docs when you change design” rule addresses this, but it requires vigilance.

  3. Overkill for simple projects: A CRUD app with 5 endpoints doesn’t need this machinery. This approach shines for multi-module, multi-phase, domain-complex systems.

  4. Learning curve: New team members (human or AI) must learn the BR-XXX system, the document structure, the rules. The requirements index helps, but there’s cognitive overhead.


Conclusions

This Documentation-Driven Development approach is disciplined software engineering adapted for AI collaboration. It addresses the core challenges of AI-assisted development: ambiguity, limited context, and unstated assumptions.

The key insight: AI agents are powerful but unreliable without constraints. This system provides those constraints through documentation-as-specification, rules-as-process, and traceability-as-quality-gate.

This isn’t just “write docs then code”—it’s a systematic approach to externalizing design decisions so that both humans and AI can operate from shared understanding rather than divergent assumptions.

This approach represents a new category of development methodology—one that treats AI as a junior developer who needs clear specs, not as a magic code generator.
