Having examined the atpcalc project in depth, here are the key observations and conclusions about its Documentation-Driven Development approach.
What’s been created isn’t just documentation; it’s a binding contract between human intent and AI execution. This is fundamentally different from traditional “code-first” or even “test-first” development. The 20+ requirements docs (covering functional specs, technical specs, API specs, testing, and more) are the written form of that contract.
The CHANGELOG.md shows how these contracts have been iteratively refined—not just adding features, but clarifying ambiguities, resolving conflicts, and making implicit assumptions explicit.
This approach treats AI context limits as a design constraint, not an afterthought.
This is brilliant. Traditional documentation tends toward monoliths. This approach optimizes for selective loading and targeted context—essential when working with AI agents that have token budgets.
The most striking rule: “If a requirement is not documented, ask the user before implementing.”
This inverts the traditional developer mindset. Usually, developers fill gaps with “reasonable assumptions.” Here, gaps are intentional signals that design decisions need human input. The AI becomes a requirements discovery tool, surfacing ambiguities early rather than propagating assumptions into code.
The Phase 1 TBD table (in 05-phases.md) is a perfect example—explicitly listing what’s not decided yet.
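A TBD table of that kind might look like the following sketch; the IDs, columns, and questions are invented for illustration, not taken from 05-phases.md:

```markdown
| TBD ID  | Open question                        | Needs decision from | Status |
| ------- | ------------------------------------ | ------------------- | ------ |
| TBD-101 | How should expired lots affect ATP?  | Product owner       | Open   |
| TBD-102 | Which UOM conversions are in scope?  | Product owner       | Open   |
```

The point is that each row is a visible, trackable gap rather than a silent assumption.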
The 8-step process reveals a tight feedback loop.
This is disciplined incrementalism. Each module is fully baked before moving on. The ATP calculator logic (BR-FUN-001 through BR-FUN-007b) demonstrates this—shelf life, lot expiration, UOM conversion, kit ATP—each with its own BR identifier, algorithm steps, and test coverage.
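The kit ATP piece, for instance, reduces to a scarcest-component calculation. A minimal sketch, assuming a simple data shape; the function name, BOM structure, and BR mapping are illustrative, not taken from atpcalc:

```python
from math import floor

def kit_atp(component_atp: dict[str, float], bom: dict[str, int]) -> int:
    """Kit ATP is bounded by the scarcest component (BR-FUN-007b-style logic, assumed).

    component_atp maps component id -> units available to promise;
    bom maps component id -> units required per kit.
    """
    return min(floor(component_atp[c] / qty) for c, qty in bom.items())

# A kit needing 2 bolts and 1 bracket, with 10 bolts and 4 brackets on hand,
# can promise min(10 // 2, 4 // 1) = 4 kits.
```

Each such algorithm getting its own BR identifier is what makes the “fully baked before moving on” discipline checkable.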
The BR-XXX-NNN system is more than bookkeeping; it’s a quality forcing function.
The example code (api/products.py) starts with a docstring listing exactly which requirement docs it implements. That’s self-documenting architecture.
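That traceability also lends itself to mechanical checking. The sketch below is a hedged illustration, not atpcalc code; the regex, directory layout, and file extensions are assumptions. It flags BR IDs cited in code that no requirements doc defines:

```python
import re
from pathlib import Path

# Matches IDs like BR-FUN-001 or BR-FUN-007b (pattern inferred from the examples above).
BR_RE = re.compile(r"\bBR-[A-Z]{3}-\d{3}[a-z]?\b")

def br_ids(text: str) -> set[str]:
    """Collect every BR identifier mentioned in a blob of text."""
    return set(BR_RE.findall(text))

def untraced_ids(code_dir: Path, docs_dir: Path) -> set[str]:
    """BR IDs referenced in code but absent from every requirements doc."""
    in_code = br_ids("\n".join(p.read_text() for p in code_dir.rglob("*.py")))
    in_docs = br_ids("\n".join(p.read_text() for p in docs_dir.rglob("*.md")))
    return in_code - in_docs
```

A check like this, run in CI, turns the BR system from a convention into an enforced invariant.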
The .cursor/rules/ files are fascinating: they encode how to use the documentation system, not just what the system should do.
These rules are meta-instructions that shape AI behavior. They’re the difference between “AI that writes code” and “AI that participates in disciplined software engineering.”
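A rule file in that spirit might read roughly like this; the wording is a sketch, and only the “ask the user” rule is quoted from the project:

```markdown
- Before implementing a module, load only the requirement docs it traces to.
- If a requirement is not documented, ask the user before implementing.
- When a code change alters a design decision, update the affected
  requirement doc and record the change in CHANGELOG.md.
```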
The distinction between Swagger/OpenAPI descriptions (client-facing, no BR refs) and docstrings (internal, full traceability) is subtle but important.
This is documentation hygiene—different audiences, different needs.
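In code, the two layers can sit side by side. A hedged sketch, in which the names, IDs, and doc references are illustrative rather than taken from api/products.py:

```python
# Client-facing OpenAPI metadata: plain language, no BR references.
OPENAPI_DOC = {
    "summary": "Get available-to-promise quantity for a product",
    "description": "Returns the quantity that can be promised to new orders.",
}

def get_product_atp(product_id: str) -> float:
    """Return the ATP quantity for one product.

    Internal traceability (hypothetical IDs): implements BR-FUN-001
    (base ATP) and BR-FUN-004 (UOM conversion); see the functional
    requirements doc for the algorithm steps.
    """
    raise NotImplementedError  # endpoint body omitted in this sketch
```

The external surface stays readable to API consumers while the docstring carries the full audit trail.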
BR-TST-001 through BR-TST-010 show the same rigor applied to testing.
The fact that testing gets its own requirement doc (17-testing.md) shows testing is a first-class design concern, not an afterthought.
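Carried into the test suite, each case can cite the requirement it verifies. A sketch in which the function, values, and BR reference are invented for illustration:

```python
def convert_uom(qty: float, factor: float) -> float:
    """Convert a quantity between units of measure (BR-FUN-style logic, assumed)."""
    return qty * factor

def test_uom_conversion_cases_to_eaches():
    """Verifies BR-FUN-003 (hypothetical ID): 2 cases of 12 yield 24 eaches."""
    assert convert_uom(2, 12) == 24
```

Naming and docstring conventions like this let a failing test point straight back to the requirement it covers.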
The CHANGELOG.md records 130+ lines of requirement evolution.
This isn’t “documentation rot”—it’s documentation as version-controlled truth. The docs evolve with the system, and the AI is instructed to update them when code changes design.
The explicit stance—“not tied to AWS/Azure/Google”—is telling. It builds portable knowledge rather than vendor-locked instructions, which keeps the docs valuable over time instead of obsolete when the tech stack changes.
Upfront cost: Writing 20+ requirement docs before any code is a significant investment, worthwhile only when the project’s scope and domain complexity justify it.
Discipline required: The system only works if docs stay current. The “update docs when you change design” rule addresses this, but it requires vigilance.
Overkill for simple projects: A CRUD app with 5 endpoints doesn’t need this machinery. This approach shines for multi-module, multi-phase, domain-complex systems.
Learning curve: New team members (human or AI) must learn the BR-XXX system, the document structure, the rules. The requirements index helps, but there’s cognitive overhead.
This Documentation-Driven Development approach is disciplined software engineering adapted for AI collaboration. It addresses the core challenges of AI-assisted development: limited context, ambiguous intent, and drift between code and design.
The key insight: AI agents are powerful but unreliable without constraints. This system provides those constraints through documentation-as-specification, rules-as-process, and traceability-as-quality-gate.
This isn’t just “write docs then code”—it’s a systematic approach to externalizing design decisions so that both humans and AI can operate from shared understanding rather than divergent assumptions.
This approach represents a new category of development methodology—one that treats AI as a junior developer who needs clear specs, not as a magic code generator.