Recently I added complete S3 support to Flywheel—import and export, multiple file formats, chunked processing for large datasets. From first prompt to deployed in production: 1 hour 51 minutes.
Not because I'm fast. Because my codebase is designed for AI to extend it.
Here's exactly what I typed and what happened.
## The Setup
Flywheel already had AWS support for DynamoDB. I wanted to add S3. Same provider, new service. In a traditional codebase, this would mean:
- Remembering how DynamoDB was implemented
- Finding all the files that need to change
- Copy-pasting and modifying boilerplate
- Hoping I didn't miss a registration somewhere
Instead, I have a /provider-setup skill—a structured prompt that knows our patterns, templates, and conventions.
## The Prompts

### Prompt 1: Kick it off

```
using /provider-setup lets plan out the work for reading from/to S3
```
Claude read the skill file, checked our existing AWS patterns (DynamoDB), and asked two clarifying questions: file formats and export mode. I answered "JSON + CSV, batch files." It created a checklist and showed me the implementation plan.
### Prompt 2: Ship it

```
lets go for it
```
Four words. Over the next ~25 minutes, Claude generated the full stack—Go backend (handlers, services, event handlers), TypeScript frontend (config forms, bucket selection), plus all the wiring.
- 10:31 PM — Preview endpoint working, listing S3 files in the UI
- 10:34 PM — Full import complete, 12 records pulled from S3
### Prompt 3: Bug fix
Export failed. I pasted the error:
```
PermanentRedirect: The bucket you are attempting to access
must be addressed using the specified endpoint
```
Claude identified it immediately: S3's ListBuckets is global, but writes need regional endpoints. The fix: call GetBucketLocation per bucket and auto-select the region.
- 10:40 PM — Export working
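For the curious, here's roughly what that fix looks like with aws-sdk-go-v2. This is a sketch, not Flywheel's actual code; the one real quirk it encodes is that S3 reports us-east-1 buckets as an empty LocationConstraint.

```go
package s3provider

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// bucketRegion resolves the region a bucket actually lives in so that
// subsequent reads and writes can go through a region-scoped client.
func bucketRegion(ctx context.Context, client *s3.Client, bucket string) (string, error) {
	out, err := client.GetBucketLocation(ctx, &s3.GetBucketLocationInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		return "", err
	}
	// S3 quirk: buckets in us-east-1 come back with an empty LocationConstraint.
	if out.LocationConstraint == "" {
		return "us-east-1", nil
	}
	return string(out.LocationConstraint), nil
}
```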
### Prompts 4-5: Refinements

```
If we already get the region from the bucket list why even show the region selector?
```

```
does buckets even need the region query param?
```
Claude agreed with both: it removed the redundant region selector and made the API param optional. Simpler UX, cleaner API.
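Sketched in Go (hypothetical handler and service names, not Flywheel's real API), the second refinement amounts to a fallback in the handler:

```go
package s3provider

import (
	"context"
	"encoding/json"
	"net/http"
)

// service stands in for the provider's service layer.
type service interface {
	BucketRegion(ctx context.Context, bucket string) (string, error)
	ListObjects(ctx context.Context, region, bucket string) ([]string, error)
}

type Handler struct{ svc service }

// ListObjects treats "region" as optional: when the client omits it,
// the region is resolved per bucket (via GetBucketLocation).
func (h *Handler) ListObjects(w http.ResponseWriter, r *http.Request) {
	bucket := r.URL.Query().Get("bucket")
	region := r.URL.Query().Get("region") // optional after the refinement
	if region == "" {
		resolved, err := h.svc.BucketRegion(r.Context(), bucket)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		region = resolved
	}
	objects, err := h.svc.ListObjects(r.Context(), region, bucket)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	_ = json.NewEncoder(w).Encode(objects)
}
```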
## The Timeline
| Time | Milestone |
|---|---|
| 10:00 PM | Started |
| 10:05 PM | Requirements gathered, plan approved |
| 10:31 PM | Preview endpoint working |
| 10:34 PM | Full import complete |
| 10:40 PM | Export complete |
| 10:45 PM | Refinements done |
| 11:44 PM | Passed /check, CI pipeline running |
| 11:51 PM | Deployed to production |
- Total prompts: 5
- S3 implementation: 45 minutes
- Time to production: 1 hour 51 minutes (including fixing an unrelated issue from a previous release)

*The finished S3 integration: import → transform → export, all at 100% success rate*
## Why This Works
This isn't about AI being magic. It's about the codebase being ready for AI.
### 1. Ruthless consistency
Every provider follows the same structure. Same files, same patterns, same naming conventions. Claude learned DynamoDB once—S3 was just "do that again for a different service."
For example, every provider package in our Go backend has the same shape:
```
backend/main/{provider}/
├── model.go          # Config structs, validation
├── service.go        # Business logic, import/export handling
├── handler.go        # HTTP endpoints
├── chunk_fetcher.go  # Pagination for imports
├── export_writer.go  # Batch writing for exports
├── subscriptions.go  # Event handlers
└── helpers.go        # Client creation, utilities
```
When Claude sees "add S3 support," it knows exactly what files to create and where they go. No guessing.
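One way to keep that shape honest is to make it a contract. A minimal sketch, assuming a shared interface like this (names are hypothetical), of what every provider's chunk_fetcher.go implements:

```go
package provider

import "context"

// Chunk is one page of records pulled from an external source.
type Chunk struct {
	Records []map[string]any // raw records, normalized downstream
	Cursor  string           // opaque pagination token; empty when exhausted
}

// ChunkFetcher is the contract each provider's chunk_fetcher.go satisfies,
// so imports paginate identically whether the source is DynamoDB or S3.
type ChunkFetcher interface {
	FetchChunk(ctx context.Context, cursor string) (Chunk, error)
}
```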
### 2. CLAUDE.md as the single source of truth
At the root of the repo, there's a CLAUDE.md file (~300 lines) that explains:
- Project architecture and conventions
- Common patterns (React Query keys, toast notifications, state management)
- Where to find detailed docs for specific domains
- Critical rules
Here's a snippet:
```markdown
## Core Principles

### Architecture Rules

- **Edge-to-Service Pattern**: Handlers and subscriptions MUST delegate to service layer
- **Dual Records**: External records from sources + canonical normalized records
- **EBAC Authorization**: Entity-based access control with role and entity-specific permissions
- **Event Constants**: Always use pubsub package constants, never strings

### React Query Keys

['organizations']                    // List
['organizations', orgId]             // Single
['organizations', orgId, 'members']  // Related
```
This isn't documentation for humans—it's context for AI. Claude reads it at the start of every session and immediately knows how we do things.
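The "Event Constants" rule, for instance, turns a class of silent bugs into compile errors. A minimal sketch, with hypothetical event names:

```go
package pubsub

// Event names live here and only here; every publisher and subscriber
// imports these constants, so a typo fails to compile instead of
// quietly publishing to a topic nobody listens to.
const (
	EventImportCompleted = "import.completed"
	EventExportCompleted = "export.completed"
)
```

Usage then looks like `bus.Publish(ctx, pubsub.EventImportCompleted, payload)` (again, hypothetical names), never a bare string.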
### 3. Skills for repeated workflows
Skills are markdown files in .claude/skills/ that encode multi-step workflows. The /provider-setup skill is about 500 lines and includes:
- A decision tree for gathering requirements
- Patterns for different provider types (databases vs. object storage vs. message queues)
- Checklists of all files that need to be created
- Templates showing our conventions
- References to existing implementations for context
Here's what the workflow section looks like:
```markdown
## Workflow

### Step 1: Gather Requirements

Ask the user to clarify:
- Provider name (e.g., "mongodb", "s3")
- Capabilities: Import only, Export only, or Both
- Connection credentials needed
- Data access pattern (table-based, file-based, query-based)

### Step 2: Check Existing Patterns

If adding to an existing provider (e.g., S3 to AWS):
- Check for shared patterns (auth, region selection)
- Reuse existing components, don't duplicate

### Step 3: Initialize Checklist

[... creates a todo list of all required files ...]
```
I'm constantly iterating on this skill—every integration teaches me something new to add. But now every new integration starts from the same playbook.
### 4. Small, focused prompts work
Notice I didn't write detailed specifications. I said "lets go for it" and let Claude execute the plan it had already created. When there was a bug, I described what I observed, not what I thought the fix should be.
The codebase has enough structure that Claude can fill in the gaps correctly.
### 5. A /check skill for validation
Before any commit, I run /check --run-all. This skill:
- Runs linting and type checks
- Runs tests
- Checks if documentation needs updating
- Flags when marketing content should be created (like "you just shipped a feature, maybe write about it")
One command catches issues before CI. It's not just about code quality—it's about making the whole workflow predictable.
## What I'd Tell Other Devs
If you're using AI coding tools daily, your codebase is now a prompt. You can fight that or optimize for it.
Start with:
- **Consistent file structure** - Same files in the same places
- **A CLAUDE.md** - Or whatever your AI tool uses for context
- **Document conventions explicitly** - Don't rely on tribal knowledge
- **Create skills for repeated work** - This is the multiplier. A `/provider-setup` skill means every new integration starts from the same playbook. A `/check` skill means code quality is one command. The upfront investment pays back every time.
The 45-minute S3 integration wasn't a one-time trick. It's how we work now. GCS took about the same. Firestore was similar. The pattern compounds.
## Try This Yourself
Next time you're about to build something repetitive:
- Document the pattern you're following
- Create a checklist of all the files/registrations needed
- Put it in a skill file or prompt template
- Let AI execute while you review
The first time takes longer. Every subsequent time is under 2 hours to production.
Building Flywheel in public. Follow along on X/Twitter or Indie Hackers.