The Documentation Efficiency Problem
In Phase 4.6, I split the monolithic repository into modular commons libraries to prevent AI from touching unwanted code. While this solved the physical boundaries problem, it created a new challenge: AI doesn't inherently know about custom libraries.
Asking Claude to read JAR files or external documentation disrupts the development workflow. At the same time, providing the full commons source to AI is inefficient: it's a heavy context load full of implementation details AI doesn't need for most operations.
The solution became clear: compile commons signatures and examples into AI-friendly documentation that provides exactly what AI needs, when it needs it.
The Compilation Strategy
What AI Actually Needs
Through experimentation, I discovered AI needs different levels of detail for different tasks:
For Commons Libraries: Class signatures, method interfaces, and comments - but not implementation details. AI needs to understand what's available and how to use it, not how it works internally.
For Implementation Examples: Full code with all implementation details, comments, and test scenarios. When AI is building similar functionality, it needs complete patterns to follow.
For Constants and Configuration: All possible values and their meanings. These are reference data AI should have complete access to.
The Compilation Projects
I created two key repositories to handle this documentation strategy:
Framework Prompts: Contains compilation instructions for different types of documentation, split by purpose:
- `commons.md`: Instructions for compiling commons libraries to signatures only
- `service-impl-main.md`: Full implementation examples for AI to follow
- `service-impl-test.md`: Complete test scenarios and patterns
- `service-objects-main.md`: Object structures with empty method stubs for TDD
- `service-objects-test.md`: Test utilities, fakes, and object mothers
Framework Examples: Stores the compiled markdown files that AI can easily read and reference during development.
The Compilation Process in Action
Commons Compilation Example
Here's how commons compilation transforms verbose library code into AI-friendly documentation:
Before (Full Implementation):
```kotlin
/**
 * Validates commands using a list of validation rules with async execution.
 * Supports parallel validation for improved performance.
 */
open class CommandValidator(
    private val rules: List<ValidationRule<*>>,
    coroutines: Int = 5,
) {
    private val validationDispatcher = Dispatchers.IO.limitedParallelism(
        parallelism = coroutines,
        name = "command-validator"
    )

    /**
     * Validates command and throws ValidationException if errors found
     */
    suspend fun validate(command: Command) {
        val errors = validateAndGetErrors(command)
        if (errors.isNotEmpty()) throw ValidationException(errors)
    }

    /**
     * Validates command and returns list of validation errors without throwing
     */
    suspend fun validateAndGetErrors(command: Command): List<ValidationError> =
        withContext(validationDispatcher) {
            rules
                .filter { it.isApplicable(command) }
                .map { async { it.validateOne(command) } }
                .awaitAll()
                .filterNotNull()
        }

    @Suppress("UNCHECKED_CAST")
    private fun ValidationRule<*>.validateOne(command: Command): ValidationError? {
        val input: Any = command.toInput() ?: return null
        val typedRule: ValidationRule<Any> = this as ValidationRule<Any>
        return typedRule.validate(input)
    }
}
```
After (Compiled for AI):
```kotlin
/**
 * Validates commands using a list of validation rules with async execution.
 * Supports parallel validation for improved performance.
 */
open class CommandValidator(
    private val rules: List<ValidationRule<*>>,
    coroutines: Int = 5,
) {
    /**
     * Validates command and throws ValidationException if errors found
     */
    suspend fun validate(command: Command)

    /**
     * Validates command and returns list of validation errors without throwing
     */
    suspend fun validateAndGetErrors(command: Command): List<ValidationError>
}
```
The compilation process:
- Extracts class signatures and public interfaces
- Preserves all comments (critical for AI understanding)
- Removes implementation details that AI doesn't need
- Groups by package for easy navigation
- Includes all constants with their values
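In the actual workflow Claude performs this extraction by following `commons.md`, but the core transformation is mechanical enough to sketch. The function below is my own illustration of just one step, dropping a function body while keeping the signature; a real tool would parse the Kotlin AST rather than walk braces:

```kotlin
// Illustrative sketch only: approximates the "remove implementation details"
// step of commons compilation. The function name is hypothetical; a robust
// version would use a Kotlin parser instead of brace counting.
fun stripFunctionBody(declaration: String): String {
    val open = declaration.indexOf('{')
    if (open < 0) return declaration.trim() // already just a signature
    var depth = 0
    for (i in open until declaration.length) {
        when (declaration[i]) {
            '{' -> depth++
            '}' -> if (--depth == 0) return declaration.substring(0, open).trim()
        }
    }
    return declaration.trim() // unbalanced braces: leave untouched
}
```

For example, `stripFunctionBody("suspend fun validate(command: Command) { doWork() }")` returns `suspend fun validate(command: Command)`, which is exactly the shape the compiled documentation keeps.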
Examples Compilation Strategy
For implementation examples, the strategy varies by purpose:
- `service-objects-main.md`: Prepared object structures with empty methods ready for TDD
- `service-objects-test.md`: Utility classes, fakes, and object mothers for testing
- `service-impl-test.md`: Full test scenarios showing expected patterns
- `service-impl-main.md`: Complete working implementations that AI can learn from
Each has specific instructions about what to include, what to remove, and how to format the results for maximum AI comprehension.
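To give a sense of what such instructions look like, here is a hypothetical sketch of the shape a compilation prompt might take. This is my guess, not the actual contents of `commons.md`:

```markdown
# Commons compilation (hypothetical sketch, not the real prompt)

For every Kotlin file under src/main/kotlin:
1. Keep class/object declarations, constructor parameters, and public
   function signatures.
2. Keep every KDoc comment verbatim.
3. Delete private members and all function bodies.
4. Keep `const val` declarations together with their values.
5. Group the output by package and write it to the examples repository.
```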
Service Objects Compilation Example
The `service-objects-main.md` compilation transforms complete implementations into TDD-ready structures:
Before (Full Implementation):
```kotlin
class CreateUserUseCase(
    private val timeProvider: TimeProvider,
    private val idProvider: IdProvider<UUID>,
    private val validator: CommandValidator,
    private val storagePort: UserStoragePort,
) : SaveCommandUseCase<CreateUserCommand, User> {

    /**
     * This is an entry point. Always implements SaveCommandUseCase<COMMAND, MODEL>
     */
    override suspend fun execute(command: CreateUserCommand): Model<User> {
        validator.validate(command)
        val user = Model(
            // NOTE: Carefully read requirements how to generate id: a random one or based on params
            id = idProvider.generate(command.email),
            version = 0,
            createdAt = timeProvider.now(),
            updatedAt = timeProvider.now(),
            data = User(
                email = command.email,
                name = command.name,
            )
        )
        return storagePort.create(user)
    }
}
```
After (Compiled for TDD):
```kotlin
class CreateUserUseCase(
    private val timeProvider: TimeProvider,
    private val idProvider: IdProvider<UUID>,
    private val validator: CommandValidator,
    private val storagePort: UserStoragePort,
) : SaveCommandUseCase<CreateUserCommand, User> {

    /**
     * This is an entry point. Always implements SaveCommandUseCase<COMMAND, MODEL>
     */
    override suspend fun execute(command: CreateUserCommand): Model<User> {
        TODO("Not Implemented Yet")
    }
}
```
This compilation approach:
- Preserves class structure and dependencies
- Keeps all comments that guide AI understanding
- Replaces implementation with appropriate stubs (`TODO`, `null`, `emptyList()`)
- Maintains type signatures for compilation success
- Provides implementation hints through comments
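The companion `service-objects-test.md` compilation collects the test-side counterparts of these stubs. The sketch below illustrates the kind of material it captures, a fake port and an object mother; the names (`UserMother`, `FakeUserStoragePort`) are my own illustration, not taken from the actual repositories:

```kotlin
// Hypothetical sketch of compiled test utilities: an object mother for
// building test data and an in-memory fake standing in for a storage port.
data class User(val email: String, val name: String)

object UserMother {
    fun default(
        email: String = "jane@example.com",
        name: String = "Jane Doe",
    ) = User(email = email, name = name)
}

class FakeUserStoragePort {
    val saved = mutableListOf<User>()

    fun create(user: User): User {
        saved += user
        return user
    }
}
```

In a TDD flow, tests drive the empty `execute` stub against fakes like these first, and the implementation is filled in until they pass.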
The Automation Solution
Project-Level Scripts
Each project gets simple bash scripts that execute Claude with the appropriate compilation prompt:
For Commons Projects (example: common-service scripts):
```bash
cd ../common-service-core || exit
claude "Read prompt: ~/VTProjects/framework/be/kotlin/prompts/compilation/commons.md"
```
For Example Projects (example: users service scripts):
```bash
cd ../../../../service/core/impl || exit
claude "Read prompt: ~/VTProjects/framework/be/kotlin/prompts/compilation/service-impl-main.md"
```
The Compilation Workflow
- Developer updates commons library or example code
- Runs compilation script (single bash command)
- Claude reads the prompt, extracts appropriate information
- Result is automatically placed in the examples project
- AI can now reference the compiled documentation efficiently
Integration with Development Workflow
Before Code Generation
Now when I ask AI to generate code based on specs, the process starts with:
- Read compiled commons: AI understands available utilities and patterns
- Read relevant examples: AI sees complete implementation patterns
- Generate following patterns: AI has concrete examples to follow
The Results So Far
The compiled documentation approach shows promising improvements:
Faster Context Loading: AI can quickly scan compiled signatures instead of parsing full implementations
Better Pattern Recognition: Complete examples help AI generate more accurate code
Reduced Invention: When AI sees established patterns, it's less likely to invent complex solutions for simple problems
Current Limitations
Still Need More Examples: AI generates better results with more reference examples. So far I've provided only one example (the users service), which isn't enough.
Test Modification Behavior: AI still occasionally modifies tests when implementation has issues, instead of fixing the implementation.
Next Steps: Beyond CRUD
The current system assumes basic CRUD operations, but my architecture requires event-driven patterns that are still missing. In the next phase I'll address this and add event support.
I'm also investigating Event Modeling as a framework that might naturally align with my architecture goals and make the specification-to-implementation process more seamless.
Key Insight: Documentation as Code Interface
This phase taught me that documentation for AI isn't the same as documentation for humans. AI needs:
- Structured, consistent formatting
- Complete examples, not partial snippets
- Signatures without implementation noise
- Reference data in easily scannable format
The compilation approach treats documentation as an interface between human-written code and AI consumption - optimized for machine reading while remaining human-understandable.
This compilation system demonstrates another VibeTDD principle: optimize the AI's context for the specific task at hand, rather than providing everything and hoping AI figures out what's relevant.