AI coding assistants perform well on small, bounded tasks. Ask for a utility function with clear inputs and outputs, and you will usually get something close to what you need. The challenge is complex features - multi-step implementations that span multiple components, involve existing infrastructure, and require coordination across parts of the codebase the model has not seen.
For complex features, the prompting approach that works for simple tasks breaks down. The model does not have enough context to make good architectural decisions, and a single generic prompt produces output that needs to be completely restructured.
This guide covers how to decompose complex features into a sequence of bounded prompts, how to carry context between them, and how to validate each step before moving to the next.
Step 1: Map the Feature Into Bounded Subtasks Before Prompting
The single most effective thing you can do before prompting for a complex feature is to write out the implementation steps as you would plan them yourself. List the components that need to change, the new code that needs to be written, and the integration points between them.
This is planning work you would do regardless of whether you are using an AI coding assistant. The difference is that this plan becomes the structure for your prompts. Each discrete implementation step becomes its own prompt, with explicit context about where it fits in the larger feature.
A feature that touches five files and requires three new functions should be approached as five separate prompting sessions, not one. Each session focuses on one piece of the implementation, includes context about the adjacent pieces, and produces output you can verify before moving to the next step.
The time to do this decomposition is before you open your AI coding tool, not after you have received generic output and are trying to figure out what went wrong.
Step 2: Write the Data Layer Prompt First
For features that require new data access or transformation, start with the data layer. This gives you real type definitions and function signatures to include in all subsequent prompts.
Prompt for the data schema changes, the query functions, and the repository interface. Include your existing database schema context and your ORM conventions. Specify return types explicitly, including the null/error cases.
Once you have working data layer code - verified against your test suite - you have concrete interfaces to reference in every subsequent prompt. You can paste in the actual TypeScript types or Python dataclasses that were generated, and the model will use them accurately in the next layer.
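As a minimal sketch of what that verified data-layer output might look like, here is a hypothetical repository for an "orders" feature. The type names, fields, and error values are illustrative assumptions, not from any real schema; the point is that the null/error cases are explicit in the return type, so they carry into every subsequent prompt.

```typescript
// Hypothetical data-layer contract for an "orders" feature.
// All names here are illustrative, not from a real schema.

interface Order {
  id: string;
  userId: string;
  quantity: number;
  priceCents: number;
  status: "pending" | "paid" | "cancelled";
}

// Explicit result type: the null/error cases are part of the contract.
type RepoResult<T> =
  | { ok: true; value: T }
  | { ok: false; error: "not_found" | "db_error" };

interface OrderRepository {
  findById(id: string): Promise<RepoResult<Order>>;
  save(order: Order): Promise<RepoResult<Order>>;
}

// In-memory implementation, useful for verifying the contract in tests.
class InMemoryOrderRepository implements OrderRepository {
  private orders = new Map<string, Order>();

  async findById(id: string): Promise<RepoResult<Order>> {
    const order = this.orders.get(id);
    return order
      ? { ok: true, value: order }
      : { ok: false, error: "not_found" };
  }

  async save(order: Order): Promise<RepoResult<Order>> {
    this.orders.set(order.id, order);
    return { ok: true, value: order };
  }
}
```

Pasting the `Order` and `RepoResult` definitions into the business logic prompt is what keeps the next layer's types accurate.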
This is the key principle behind sequential prompting for complex features: each completed step provides real, verified context for the next one. You are not asking the model to imagine what the data layer might look like - you are showing it exactly what it produced.
Cursor and similar tools that index your codebase can surface some of this automatically. But for non-trivial features, explicitly including the verified output from earlier steps is more reliable than trusting automatic context retrieval.
Step 3: Write the Business Logic Prompt With the Data Types Included
With verified data layer types in hand, write the business logic prompt. Paste in the relevant data types and repository interface from the previous step. This is explicit context the model cannot generate on its own.
Specify the input to the business logic function (probably an event or user action) and the output (the resulting state change or response). Include any domain rules that are not obvious from the data types: "a user can only have one active session at a time," "the price must be recalculated whenever quantity changes," "admin users bypass the rate limit check."
These domain rules are the things the model cannot infer from your data schema. They are also the things most likely to be missing from the output if you do not specify them. Stating them explicitly produces code that enforces them correctly from the first iteration.
At 137Foundry, we have found that domain rule specification is the most commonly skipped element in developer prompts for business logic, and it is the most expensive omission - because domain rule errors often require significant restructuring to fix rather than simple edits.
Step 4: Write the API or Handler Layer With the Business Logic Interface Included
With verified business logic complete, prompt for the API layer or handler. Include the business logic function signature and return type from the previous step.
Specify the request shape (route parameters, request body, headers that matter), the response shapes for success and each error case, and the authentication or authorization requirements. If you have existing handler code in the project, include a short example of how your handlers are structured to establish the pattern.
The output at this layer should be structurally consistent with your existing handlers. Pasting in one similar handler from your codebase as an example is usually sufficient to establish the pattern - the model will follow it.
GitHub Copilot handles this kind of pattern-following well when the context is visible. For handlers in languages like TypeScript, including the relevant framework documentation or type imports in the prompt also helps anchor the output to your specific framework's conventions.
Step 5: Write Integration Points One at a Time
If the feature requires integration with external services - a payment processor, an email provider, an analytics platform - prompt for each integration separately. Do not combine multiple external integrations into a single prompt.
For each integration, include:
- The external service's interface or SDK type definitions (paste in the relevant parts)
- Your existing wrapper or client for the service, if one exists
- The specific method calls you need
- How errors from the external service should be handled and propagated
External service integrations are a common source of hallucinated APIs in AI-generated code. The model knows about many external APIs from its training, but API details change and specific method signatures can be wrong. Including the actual SDK types in your prompt gives the model the correct interface to work against.
If you do not have the SDK types available to paste, reference the official documentation homepage (Anthropic, for example, publishes SDK documentation) and avoid relying on the model to know the exact method signatures from memory.
Step 6: Validate Each Layer Before Moving to the Next
Sequential prompting only works if each layer is actually validated before you proceed. Running the tests for the data layer before prompting for business logic catches type mismatches and schema issues before they cascade into higher layers.
If you move to the next step before validating the previous one, errors compound. A wrong return type in the data layer produces wrong types in the business logic, which produces wrong types in the handler, and you end up debugging a cascade of type errors that all trace back to one early mistake.
The validation step does not need to be exhaustive - it needs to confirm that the interface is correct, the types align, and the basic behavior works. A minimal test covering the happy path and one error case is enough to verify the contract before moving on.
This discipline is what makes sequential prompting for complex features reliable. Each step builds on verified output from the previous one rather than assuming it. The output at each stage is real code you can read and test, not an optimistic assumption about what the model might have produced.
The full approach to structured prompting, including how to handle cases where the model produces output that does not align with your expectations, is covered in the guide on how to write effective prompts for AI coding assistants. The AI automation services at 137Foundry also include workflow support for teams building this kind of sequential prompting practice into their development process.