Part 1: Decision-Making, Architecture, and Problem Solving
Executive Summary
Modern software engineering has changed forever. The question isn’t whether to use AI—it’s how intentionally you use it while preserving engineering judgment.
As a senior backend engineer working with Laravel and distributed systems, I’ve spent the past couple of months developing a structured AI-assisted workflow. The outcome?
- ✔️ 70% faster execution on delegated tasks
- ✔️ Zero compromise on architectural or business logic integrity
- ✔️ Cleaner design decisions backed by structured reasoning
This framework is not about replacing human engineers. It’s about establishing a hybrid model where AI accelerates mechanical execution, and engineers lead architecture, decision-making, and correctness.
I Learned This the Hard Way
I wasn’t always this disciplined. When I first started using AI, I treated it like a magic wand. I pushed code to production blindly—without rigorous verification.
I paid dearly for it.
I introduced expensive N+1 queries that choked the database, shipped logic that violated our business rules, and deployed raw, unverified code that caused regressions. I learned that blind trust is expensive. Gradually, I refined my approach from "AI-generated" to "AI-assisted", moving from a passive user to an active architect.
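For the curious, the N+1 looked roughly like this (reconstructed with a hypothetical `Order` model and `items` relation):

```php
<?php

use App\Models\Order;

// The N+1 version: one query for the orders, then one MORE query per order.
$orders = Order::all();
foreach ($orders as $order) {
    $total = $order->items->sum('price'); // lazy-loads items for every single order
}

// The fix: eager-load the relation so everything arrives in two queries.
$orders = Order::with('items')->get();
foreach ($orders as $order) {
    $total = $order->items->sum('price'); // already in memory
}
```

Two queries instead of N+1. It's the kind of thing a human reviewer catches in seconds, and blind trust never does.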
This article explains the thinking layer of that refined workflow.
My Multi-AI Workflow Philosophy
I rely on a three-tier system:
1. The Thinking Layer: ChatGPT + Google Gemini
These tools are where my ideas take shape. I use them to:
- Clarify requirements
- Structure technical thoughts
- Stress-test assumptions
- Explore multiple architectural approaches
- Generate high-signal prompts for execution
The "Prompt Engineering" Gap:
This is often where the gap between junior and senior engineers becomes visible.
- A junior engineer might skip this layer and ask the IDE directly: "Build a registration form." The AI will guess the requirements, usually resulting in fragile, "happy-path" code.
- A senior engineer uses this Thinking Layer to generate a specification first. I ask ChatGPT: "Draft the strict technical requirements for a production-grade registration system, considering security, atomicity, and validation."
The output of this layer: ➡️ A clean, structured, context-rich prompt.
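To make "security, atomicity, and validation" concrete, here is a minimal sketch of the kind of controller such a spec tends to produce (the names and rules are illustrative, not from a real codebase):

```php
<?php

namespace App\Http\Controllers;

use App\Models\User;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Hash;

class RegistrationController extends Controller
{
    public function store(Request $request)
    {
        // Validation: reject bad input before touching the database.
        $validated = $request->validate([
            'name'     => ['required', 'string', 'max:255'],
            'email'    => ['required', 'email', 'max:255', 'unique:users'],
            'password' => ['required', 'string', 'min:12', 'confirmed'],
        ]);

        // Atomicity: user creation and any side effects commit together or not at all.
        $user = DB::transaction(function () use ($validated) {
            return User::create([
                'name'     => $validated['name'],
                'email'    => $validated['email'],
                'password' => Hash::make($validated['password']), // security: never store plaintext
            ]);
        });

        return response()->json($user, 201);
    }
}
```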
2. The Execution Layer: Windsurf (Cascade) + Laravel Boost
Once my thinking is refined, I move to Windsurf (or Cursor/Copilot).
- Windsurf: Provides full codebase awareness, stateful context, and implementation precision.
- Laravel Boost: I pair this with specific prompts to enforce modern Laravel patterns, respect service container conventions, and produce clean DTOs.
This pairing ensures speed AND correctness.
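As an example of the "clean DTOs" I ask for, here is a minimal sketch (the `PriceQuote` class is hypothetical):

```php
<?php

namespace App\DataTransferObjects;

// An immutable DTO: constructor promotion plus readonly keeps it a pure data carrier.
final readonly class PriceQuote
{
    public function __construct(
        public int $subtotalCents,
        public int $discountCents,
        public string $currency,
    ) {}

    // A named constructor keeps array-to-object mapping in one place.
    public static function fromArray(array $data): self
    {
        return new self(
            subtotalCents: $data['subtotal_cents'],
            discountCents: $data['discount_cents'],
            currency: $data['currency'],
        );
    }
}
```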
3. The Review Layer: Copilot + Gemini
These tools come in after implementation. I use them as a "second pair of eyes" to:
- Review PRs: Highlight potentially problematic changes at varying levels of complexity.
- Simulate: Run mental simulations of how the implementation handles race conditions or high load.
- Refine: Suggest logical improvements or optimizations I might have missed.
- Automate: Generate suitable, descriptive commit messages based on staged files.
This layer acts as my quality assurance gate before I even push the code.
(And yes—I still write “please” and “thank you” in my prompts. You know… just in case the AI uprising ever happens 😄.)
How AI Enhances Architectural Decision-Making
Good architecture requires exploring multiple solutions, assessing trade-offs, and understanding constraints. Here is my 6-step architectural workflow.
Step 1: Present Requirements with Context
The quality of the architectural output is capped by the quality of your context.
The "Junior" Prompt:
"How should I build a pricing system?"
- Result: Generic, brittle CRUD advice.
The "Senior" Prompt:
"I need to redesign our pricing system.
Constraints:
- Current State: Database-stored logic that is brittle.
- Requirements: Role-based pricing, configurable thresholds.
- Load: 10K+ daily orders; read-heavy.
- Stack: Team experienced with Laravel Service patterns."
- Result: Robust patterns (Pipeline/Strategy) tailored to the stack.
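To illustrate, here is a minimal Strategy-pattern sketch for the role-based pricing requirement (the roles and discounts are made up):

```php
<?php

namespace App\Pricing;

interface PricingStrategy
{
    public function price(int $baseCents): int;
}

final class WholesalePricing implements PricingStrategy
{
    public function price(int $baseCents): int
    {
        return (int) round($baseCents * 0.85); // illustrative wholesale discount
    }
}

final class RetailPricing implements PricingStrategy
{
    public function price(int $baseCents): int
    {
        return $baseCents; // list price, no discount
    }
}

// The caller picks a strategy from the user's role, so each pricing rule
// lives in its own small, independently testable class.
function strategyFor(string $role): PricingStrategy
{
    return $role === 'wholesale' ? new WholesalePricing() : new RetailPricing();
}
```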
Step 2: Request Multiple Architectural Options
I always ask:
"Give me 3 distinct architectural approaches."
AI usually proposes:
- Pipeline architecture
- Strategy pattern
- Rules engine
This expands the solution space immediately.
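As a taste of the first option, here is a minimal sketch built on Laravel's own `Illuminate\Pipeline\Pipeline` (the stage classes are hypothetical):

```php
<?php

namespace App\Pricing;

use Closure;
use Illuminate\Pipeline\Pipeline;

// One stage: adjusts the order's price, then hands off to the next stage.
class ApplyRoleDiscount
{
    public function handle(object $order, Closure $next): object
    {
        if ($order->user->role === 'wholesale') {
            $order->priceCents = (int) round($order->priceCents * 0.9); // illustrative 10% off
        }

        return $next($order);
    }
}

// Composing the pipeline: each requirement becomes its own small stage.
$pricedOrder = app(Pipeline::class)
    ->send($order)
    ->through([
        ApplyBasePrice::class,      // hypothetical
        ApplyRoleDiscount::class,
        ApplyThresholdRules::class, // hypothetical
    ])
    ->thenReturn();
```

Each stage is independently testable, which is exactly the property the next step's trade-off matrix tends to reward.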
Step 3: Ask for a Trade-Off Matrix
Next, I request a comparison.
"Compare options by testability, extensibility, performance, maintainability, and migration complexity."
This gives me a clearer, multi-dimensional view.
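An illustrative matrix for the pricing example (the ratings below show the shape of the output, not gospel):

| Criterion            | Pipeline | Strategy | Rules Engine |
| -------------------- | -------- | -------- | ------------ |
| Testability          | High     | High     | Medium       |
| Extensibility        | High     | Medium   | High         |
| Performance          | High     | High     | Medium       |
| Maintainability      | High     | High     | Medium       |
| Migration complexity | Medium   | Low      | High         |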
Step 4: Validate Against Real Constraints
AI doesn’t know your team’s strengths or your deployment risks. That’s where senior judgment comes in. I evaluate:
- Can my team maintain this?
- Does this introduce fragile dependencies?
- Is the operational overhead worth it?
Step 5: Break Into Phases
Once I choose an approach:
"Produce a 5-phase implementation plan. Each phase must be independently deployable and testable."
This reduces uncertainty and lets me review each phase independently, catching architectural drift early.
Step 6: Make the Final Decision
AI provides insights, not authority. I decide—and I am accountable.
Complex Problem-Solving: AI as a Debugging Partner
Debugging is where AI shines—not because it “fixes bugs” magically, but because it structures reasoning. My debugging loop now takes 15–20 minutes, not 60–90.
The Ranked Diagnostic Workflow
- Provide Context: I paste the error, stack trace, relevant code, and recent changes.
- AI Analysis: I prompt: "Based on this error, provide 3 likely root causes ranked by probability. For each, suggest a verification step."
- Execution: I test the highest-probability cause first (say, the one ranked at 70%).
- Validation: I apply the fix and verify.
This prioritization prevents rabbit holes and wasted time.
My Acceptance / Rejection Framework
Not all AI output is equal. My mental model for code review:
✅ Immediate Acceptance (Low Risk)
- Boilerplate
- DTO scaffolding
- Documentation structure
- Test class skeletons
⚠️ Acceptance After Review (Medium Risk)
- Service method logic
- Controllers
- Refactoring suggestions
- Review for: Business logic correctness and performance.
❌ Rejection or Heavy Revision (High Risk)
- Complex domain logic
- Authorization/Authentication
- Security-sensitive code
- Database migrations
Stress-Testing Ideas Before Implementation
One of AI’s hidden superpowers is pre-execution simulation.
"What can go wrong?"
"What failure modes should I expect with this payment design? Consider race conditions and timeouts."
"Find the Blind Spots"
"What edge cases am I missing for this coordinate validation?"
AI Result: Invalid formats, extreme ranges, null/empty cases, injection attempts.
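Those edge cases map almost one-to-one onto validation rules. A minimal sketch, assuming `lat`/`lng` request fields:

```php
<?php

use Illuminate\Support\Facades\Validator;

$validator = Validator::make($request->all(), [
    // 'required' covers the null/empty cases,
    // 'numeric' rejects malformed formats and injection strings,
    // 'between' catches extreme out-of-range values.
    'lat' => ['required', 'numeric', 'between:-90,90'],
    'lng' => ['required', 'numeric', 'between:-180,180'],
]);

if ($validator->fails()) {
    // Fail fast: raw coordinates never reach business logic.
    return response()->json(['errors' => $validator->errors()], 422);
}
```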
Closing Thoughts: The Hybrid Engineer
The engineers who excel in the next decade won't be those who avoid AI, nor those who blindly trust it. They'll be the ones who combine clarity, systems thinking, and human judgment with AI-assisted speed, structure, and exploration.
AI accelerates execution. You drive decisions.
About the Author
Senior Backend Engineer specializing in Laravel, distributed systems, and backend architecture. Focused on scalable systems, clean architecture, and AI-augmented workflows.
🚀 Part 2 Coming Up Next: Code Generation, Refactoring, Testing & Delivery Automation.