I'm sharing this with the team as a summary of my personal workflow when working with AI on code. It's not an official framework, but rather a set of learnings from experience (polished with a little help from AI, of course). My main goal is to start a conversation. If you have a better or similar workflow, I'd genuinely love to hear about it.
Compass, Steering Wheel, Destination: A Framework for Working with AI on Code
AI can accelerate coding, but it can also drift, hallucinate requirements, or produce complex solutions without a clear rationale.
This framework provides the guardrails to keep AI-assisted development focused, deliberate, and well-documented.
Sailing Analogy (High-Level Intro)
Working with AI on code is like sailing:
- Compass → Keeps you oriented to true north (goals, requirements, assumptions).
- Steering Wheel → Lets you pivot, tack, or hold steady (decide continue vs. change).
- Destination → Ensures the journey is recorded (reusable, reproducible outcomes).
This framework grew out of real-world experience. It's not brand-new theory but a way to formalize a shared language for teams working with AI.
Step 1: Compass (Revalidation)
Purpose: keep alignment with goals and assumptions.
Template (copy/paste; a fill-in sketch follows the list):
- What’s the primary goal?
- What’s the secondary/nice-to-have goal?
- Which requirements are mandatory vs optional?
- What are the current assumptions? Which may be invalid?
- Has anything in the context changed (constraints, environment, stakeholders)?
- Are human and AI/system understanding still in sync?
- Any signs of drift (scope creep, contradictions, wrong optimization target)?
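To make the compass check cheap to run repeatedly, I keep the template in a small script and fill in project specifics before pasting it into a session. A minimal sketch in Python; all project details and field names below are invented for illustration:

```python
# Minimal sketch: render the Compass template with project specifics so it
# can be pasted into an AI session. All project details here are invented.
COMPASS_TEMPLATE = """\
Compass check for: {project}
- Primary goal: {primary_goal}
- Secondary / nice-to-have goal: {secondary_goal}
- Mandatory requirements: {mandatory}
- Current assumptions (flag any that may be invalid): {assumptions}
- Context changes since last check: {changes}
Questions:
- Is your understanding still in sync with the above?
- Any signs of drift (scope creep, contradictions, wrong optimization target)?
"""

def compass_check(**fields: str) -> str:
    """Return a filled-in compass-check prompt, ready to paste."""
    return COMPASS_TEMPLATE.format(**fields)

print(compass_check(
    project="log-dedup CLI",
    primary_goal="remove exact duplicate lines, preserving order",
    secondary_goal="keep memory under 2 GB",
    mandatory="single pass; UTF-8 input",
    assumptions="duplicates are exact, not fuzzy",
    changes="none",
))
```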
Step 2: Steering Wheel (Course Correction)
Purpose: evaluate whether to continue, pivot, or stop.
Template (copy/paste):
- For each assumption: what if it’s false?
- Does an existing tool/library already cover ≥80% of the need?
- Does this map to an existing framework/pattern (ADR, RFC, design template)?
Alternatives:
- Different algorithm/data structure?
- Different architecture (batch vs streaming, CPU vs GPU, local vs distributed)?
- Different representation (sketches, ML, summaries)?
- Different layer (infra vs app, control vs data plane)?
Trade-offs:
- Fit with requirements.
- Complexity (build & maintain).
- Time-to-value.
- Risks & failure modes.
Other checks:
- Overhead vs value: is the process slowing iteration?
- Niche & opportunity: is this idea niche or broadly useful? Where does it fit in the landscape?
Kill/Go criteria (a scoring sketch follows below):
- Kill if effort outweighs value or key assumptions are broken.
- Go if the results justify the effort or the approach's uniqueness adds value.
Next step options:
- Continue current path.
- Pivot to alternative.
- Stop and adopt existing solution.
- Run a 1-day spike to test a risky assumption.
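To make the kill/go call less hand-wavy, one option is to score each alternative against the trade-off checklist above. This is only a sketch of that idea; the criteria, weights, and thresholds are my assumptions, not part of the framework:

```python
# Illustrative only: score an option against the trade-off checklist and
# map the result to kill / spike / go. Weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    requirement_fit: int  # 0-5: fit with mandatory requirements (higher = better)
    time_to_value: int    # 0-5: how quickly it pays off (higher = better)
    complexity: int       # 0-5: cost to build and maintain (higher = worse)
    risk: int             # 0-5: exposure to failure modes (higher = worse)

def decide(option: Option) -> str:
    value = option.requirement_fit + option.time_to_value
    cost = option.complexity + option.risk
    if value <= cost:
        return "kill"   # effort outweighs value, or assumptions are broken
    if value - cost <= 2:
        return "spike"  # marginal: run a 1-day spike on the riskiest assumption
    return "go"

print(decide(Option("adopt existing library", 4, 5, 1, 1)))   # -> go
print(decide(Option("custom streaming engine", 5, 2, 4, 3)))  # -> kill
```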
Step 3: Destination (Reverse Prompt)
Purpose: capture the outcome in reusable, reproducible form.
Template (copy/paste):
Instructions
- Restate my request so it can be reused to regenerate the exact same code and documentation.
- Include a clear summary of the key idea(s), algorithm(s), and reasoning that shaped the solution.
- Preserve wording, structure, and order exactly — no “helpful rewrites” or “improvements.”
Reverse Prompt (regeneration anchor)
- Problem restatement (1–2 sentences).
- Key algorithm(s) in plain language.
- Invariants & assumptions (what must always hold true).
- Interfaces & I/O contract (inputs, outputs, error cases).
- Config surface (flags, environment variables, options).
- Acceptance tests / minimal examples (clear input → output pairs; see the doctest sketch below).
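For acceptance tests, executable input → output pairs are the least ambiguous form. Here is a minimal doctest sketch; `dedupe_keep_order` is a hypothetical function standing in for whatever the reverse prompt actually describes:

```python
# Sketch: acceptance tests as executable input -> output pairs (doctests).
# The function is hypothetical, standing in for the documented solution.
def dedupe_keep_order(lines):
    """Remove exact duplicates while preserving first-seen order.

    >>> dedupe_keep_order(["a", "b", "a", "c"])
    ['a', 'b', 'c']
    >>> dedupe_keep_order([])
    []
    """
    seen = set()
    return [x for x in lines if not (x in seen or seen.add(x))]

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```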
High-Level Design (HLD)
- Purpose: what the system solves and why.
- Key algorithm(s): step-by-step flow, core logic, choice of data structures.
- Trade-offs: why this approach was chosen, why others were rejected.
- Evolution path: how the design changed from earlier attempts.
- Complexity and bottlenecks: where it might fail or slow down.
Low-Level Design (LLD)
- Structure: files, functions, modules, data layouts.
- Control flow: inputs → processing → outputs.
- Error handling and edge cases.
- Configuration and options, with examples.
- Security and reliability notes.
- Performance considerations and optimizations.
Functional Spec / How-To
- Practical usage with examples (input/output).
- Config examples (simple and advanced).
- Troubleshooting (common errors, fixes).
- Benchmarks (baseline numbers, reproducible).
- Limits and gotchas.
- Roadmap / extensions.
Critical Requirements
- Always present HLD first, then LLD.
- Emphasize algorithms and reasoning over just the raw code.
- Clearly mark discarded alternatives with reasons.
- Keep the response self-contained — it should stand alone as documentation even without the code.
- Preserve the code exactly as it was produced originally. No silent changes, no creative rewrites.
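To make the template concrete, here is a deliberately tiny, fully invented example of the regeneration anchor filled in for a small utility:

```
Problem: Remove exact duplicate lines from a log stream while preserving
first-seen order.
Key algorithm: Single pass over the input, keeping a hash set of lines
already seen; emit a line only on its first occurrence.
Invariants: Output order equals first-occurrence input order; no line is
emitted twice.
I/O contract: UTF-8 lines on stdin -> deduplicated lines on stdout;
non-UTF-8 input is an error.
Config surface: --buffer-size (lines per batch; default 10000).
Acceptance test: ["a", "b", "a", "c"] -> ["a", "b", "c"].
```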
When & Why to Use Each
- Compass (Revalidation): use at the start, or whenever misalignment is suspected (context drift, new requirements).
- Steering Wheel (Course Correction): use at milestones or retrospectives to decide whether to continue, pivot, or stop.
- Destination (Reverse Prompt): use at the end of a cycle/project to capture reproducible documentation and handover artifacts.
References & Correlations
This framework is simple, but it builds on proven practices:
- Systems Engineering: Verification & Validation (build the right thing).
- Agile: Sprint reviews (revalidation), retrospectives (course correction).
- Lean Startup: Pivot vs. persevere decisions.
- Architecture Practices: ADRs (decision rationale, alternatives).
- AI Prompt Engineering: Reusable prompt templates & libraries.
- Human-in-the-Loop Design: Oversight to prevent drift in AI systems.
By combining them under a sailing metaphor, the framework becomes:
- Easy to remember.
- Easy to communicate inside teams.
- Easy to apply in AI-assisted coding where drift, misalignment, and reusability are everyday challenges.
Closing Note
Think of this as a playbook, not theory. Next time in a session, just say:
- “Compass check” → Revalidate assumptions/goals.
- “Steering wheel” → Consider pivot/alternatives.
- “Destination” → Capture reproducible docs.