
Juhani Ränkimies

Bootstrapping a Project with AI and Specs

#ai

Theory is useful. Seeing it work on a real project is better.

This post walks through bootstrapping a project using spec-driven development with AI tooling. The project is real -- a SaaS product called SitePilot2 -- and the artifacts shown are the actual artifacts generated during the bootstrap. Nothing has been cleaned up for presentation.

The starting point

An empty repository. A product idea. An AI assistant (OpenCode with GPT 5.4) as the primary development tool.

The goal: establish a project baseline where AI has enough structured context to work reliably on future features, with change management and quality enforcement in place before writing any product code.

Step 1: Bootstrap the artifact structure

The first action isn't writing code. It's creating the context documents that AI will need for every future task.

A custom command (/sdd-init) handles this. It inspects the repository, discovers what exists, and creates the minimal artifact structure. The command follows explicit rules:

  1. If docs/changes/ or docs/specs/ exists, use docs/ as the artifact root
  2. If changes/ or specs/ exists at the repo root, use the repo root
  3. Otherwise, create under docs/
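Those discovery rules are simple enough to sketch in code. A hypothetical Rust version (function name and shape are mine, not the actual /sdd-init implementation):

```rust
use std::path::{Path, PathBuf};

/// Pick the artifact root for a repository, following the three
/// /sdd-init rules above (illustrative sketch, not the real command).
fn artifact_root(repo: &Path) -> PathBuf {
    let docs = repo.join("docs");
    if docs.join("changes").is_dir() || docs.join("specs").is_dir() {
        docs // rule 1: existing docs/ artifacts win
    } else if repo.join("changes").is_dir() || repo.join("specs").is_dir() {
        repo.to_path_buf() // rule 2: artifacts already live at the repo root
    } else {
        docs // rule 3: fresh project defaults to docs/
    }
}
```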

For a fresh project, this produces:

docs/
  product.md
  structure.md
  tech-stack.md
  architecture.md
  quality-policy.md
  specs/
  changes/
  planning/
  adr/
  .templates/
    spec.md
    verification.yaml
    planning-item.md

The AI populates these based on what it can discover. For a new project, most content is placeholder. For an existing project, it reads package files, source code, and CI config, then fills in what it finds.

Step 2: Fill in product context

The AI generates an initial product.md from a conversation about the product. Here's what came out for SitePilot2:

# Product: SitePilot2

## What

SitePilot2 is a SaaS product that helps non-technical business owners 
create and maintain professional business websites without external 
implementation help.

## For Whom

- Solo entrepreneurs operating locally
- Small agencies serving local clients
- Small teams that need multiple collaborators to maintain one or 
  more business websites

## Key User Problems

1. Launching a website is difficult for non-technical owners.
2. Ongoing maintenance is neglected because the team lacks web skills.
3. Long periods without updates lead to costly re-implementation 
   instead of incremental improvement.

This isn't documentation for documentation's sake. Every time AI touches this project in the future, it can load this file and know: simplicity for non-technical users is the primary constraint. That single piece of context prevents an entire class of over-engineering decisions.

Step 3: Make architecture decisions explicit

Before writing any code, twelve ADRs were created to capture key decisions:

| ADR | Decision |
| --- | --- |
| ADR-001 | Backend-driven UI with Datastar |
| ADR-002 | Hexagonal backend architecture |
| ADR-003 | Site-centered tenancy and collaboration model |
| ADR-004 | Passwordless email OTP authentication for v1 |
| ADR-005 | Defer Rust web framework selection until Datastar spike |
| ADR-007 | Playwright for UI interaction proof |
| ADR-008 | HTML-first server rendering with reusable fragments |
| ADR-009 | AWS SES for email delivery via port/adapter |
| ADR-010 | Minimal role-based access model for v1 |
| ADR-011 | Security scanning baseline |
| ADR-012 | Analytics via pluggable site-level integration |

Some are accepted. Some are proposed. ADR-005 explicitly defers a decision (framework selection) until evidence from a spike is available.

The point: when AI later implements authentication, it loads ADR-004 and knows the decision is email OTP, not passwords. When it structures code, ADR-002 tells it hexagonal architecture. These aren't suggestions in a long design doc -- they're discrete, loadable decisions.
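That loading step is deliberately trivial. A sketch, assuming ADRs live under the adr/ directory from Step 1 with lowercase, zero-padded filenames (a naming convention I'm assuming for illustration):

```rust
use std::fs;
use std::path::Path;

/// Resolve an ADR id to a file under <artifact root>/adr/ and read it.
/// Filename convention (adr-004.md) is assumed, not prescribed.
fn load_adr(root: &Path, id: u32) -> std::io::Result<String> {
    fs::read_to_string(root.join("adr").join(format!("adr-{id:03}.md")))
}
```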

Step 4: Plan the work

Five planning items were created to sequence the bootstrap:

| PLN | Goal | Status |
| --- | --- | --- |
| PLN-001 | Establish working Rust project and local workflows | Done |
| PLN-002 | Framework selection spike (Datastar compatibility) | Active |
| PLN-003 | Authentication (email OTP) | Proposed |
| PLN-004 | First-time site creation and onboarding | Proposed |
| PLN-005 | Ongoing site management | Proposed |

Planning items aren't feature specs. They answer "what work is worth doing and in what order" -- a different question from "what must this feature do." PLN-001 and PLN-002 must complete before feature work starts. PLN-003 through PLN-005 are sequenced by dependency.

Step 5: Execute through change records

Now implementation begins -- but through change records, not ad-hoc coding.

CR-001: Repository bootstrap

The first change record creates the Rust project baseline:

---
id: CR-001
title: repository bootstrap
status: implemented
change_type: additive
branch: cr/001-repository-bootstrap
planning:
  - PLN-001
---
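For illustration, that front matter maps to a small in-memory model. A hypothetical sketch (field names mirror the YAML keys; the types and Status variants are my assumptions, covering the statuses that appear in this post):

```rust
/// Change record status as used in this post.
#[derive(Debug, PartialEq)]
enum Status {
    Proposed,
    Implemented,
    Merged,
}

/// In-memory model of a change record's front matter.
#[derive(Debug, PartialEq)]
struct ChangeRecord {
    id: String,
    title: String,
    status: Status,
    change_type: String,
    branch: String,
    planning: Vec<String>, // planning items this change serves
}
```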

Scope is explicit:

In scope:

  • Create initial Rust project structure
  • Add a minimal runnable binary
  • Add initial test layout and smoke test
  • Update structure documentation

Out of scope:

  • Web framework selection
  • Real product features
  • Production deployment

The implementation was minimal by design:

  • Cargo.toml: Standard binary project, no dependencies
  • src/main.rs: Placeholder with comments indicating future architecture
  • tests/smoke_test.rs: Integration test verifying the binary builds and runs
  • docs/structure.md: Updated to reflect actual state
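The core idea of the smoke test fits in a few lines. A sketch (the helper is mine; the real integration test in tests/smoke_test.rs would point it at the CR-001 binary via `env!("CARGO_BIN_EXE_<name>")`, with the crate name filled in):

```rust
use std::process::Command;

/// Spawn a program and report whether it exited successfully.
/// A smoke test asserts this returns true for the project binary.
fn runs_ok(program: &str, args: &[&str]) -> bool {
    Command::new(program)
        .args(args)
        .status()
        .map(|status| status.success())
        .unwrap_or(false) // spawn failure counts as a failed smoke test
}
```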

A branch cr/001-repository-bootstrap was created, a PR opened, and the change merged. The change record documents what happened and why.

CR-002: Development quality baseline

The second change record establishes the quality enforcement baseline:

---
id: CR-002
title: development quality baseline
status: merged
change_type: internal-quality
branch: cr/002-development-quality-baseline
planning:
  - PLN-001
---

This change:

  • Added Playwright scaffolding with Bun as package manager
  • Created CI workflow running rustfmt, clippy, cargo check, cargo test, and Playwright
  • Defined local developer commands (bun run verify chains all checks)
  • Documented environment conventions
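Conceptually, `bun run verify` runs each check in order and fails fast on the first non-zero exit. A hypothetical sketch of that chaining (the real chain is defined in package scripts, not in Rust):

```rust
use std::process::Command;

/// Run each (program, args) check in order; stop at the first failure.
fn verify(checks: &[(&str, &[&str])]) -> Result<(), String> {
    for (program, args) in checks {
        let status = Command::new(program)
            .args(*args)
            .status()
            .map_err(|e| format!("{program}: {e}"))?;
        if !status.success() {
            return Err(format!("{program} failed")); // fail fast
        }
    }
    Ok(())
}
```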

The quality-policy.md was updated from aspirational to actual:

## Mandatory Automated Checks

### Formatting
- **Tool**: `rustfmt`
- **Command**: `cargo fmt --all -- --check`
- **When**: On every pull request and before merge

### Linting
- **Tool**: `clippy`  
- **Command**: `cargo clippy --all-targets --all-features -- -D warnings`
- **When**: On every pull request and before merge

After CR-002 merged, every future change has automated quality enforcement. The system verification lane from Part 1 is operational.

What the bootstrap produced

After two change records and one day of work, the project has:

  • Product context that AI can load to understand what the system is for
  • Engineering context (structure, tech stack, architecture, quality policy) that tells AI how to operate safely
  • 12 ADRs capturing decisions AI needs to respect
  • 5 planning items sequencing future work
  • 2 completed change records with full audit trail
  • CI pipeline enforcing formatting, linting, type checking, tests, and browser proof
  • Templates for future specs, verification maps, and planning items

And the actual implementation? A placeholder binary and a smoke test. Almost no code.

That's the point. The value of the bootstrap isn't code -- it's context and infrastructure. When the first real feature gets implemented, AI starts with structured knowledge about the product, architecture, tech choices, and quality requirements. It doesn't start cold.

What the AI did vs. what I did

Being honest about the division of labor:

AI generated:

  • Initial drafts of all context documents (product.md, structure.md, tech-stack.md, architecture.md, quality-policy.md)
  • ADR drafts based on architectural discussions
  • Change record structure and content
  • Cargo project scaffolding
  • Playwright setup and CI workflow
  • Documentation updates

I did:

  • Decided what the product is and who it's for
  • Made architectural decisions (hexagonal, Datastar, OTP)
  • Reviewed and corrected AI-generated artifacts
  • Decided the sequencing (what to do first, what to defer)
  • Decided when to stop (minimal bootstrap, not over-designed)

The human role was contract design and judgment. The AI role was execution and documentation. This is the division of labor from Part 1 playing out in practice.

What's missing

Two things this bootstrap deliberately doesn't have yet:

  1. No feature specs. The docs/specs/ directory is empty. Feature specs arrive when feature work begins (starting with PLN-002's framework spike and then PLN-003's authentication).
  2. No filled verification maps. Templates exist, but no concrete spec-to-test mapping because there are no features to map yet.

These aren't gaps -- they're sequencing. The bootstrap establishes context and infrastructure. Features build on that foundation.

The cost

Total overhead for the documentation approach versus just coding:

  • ~1 hour for initial context documents (product, structure, tech stack, architecture, quality policy)
  • ~30 minutes per ADR (most were quick decisions with clear rationale)
  • ~15 minutes per change record (problem, scope, verification delta)
  • ~15 minutes per planning item

For a bootstrap that took about a day total, roughly half was documentation and half was implementation. That ratio will shrink as features get built -- context documents are a one-time investment, and change records for feature work are smaller than bootstrap scaffolding.

Whether that investment pays off depends on what comes next. If the project dies after bootstrap, it was wasted ceremony. If it grows into a real product with months of AI-assisted development, the context documents will save far more time than they cost.
