Olamide Olanrewaju
Agentic Engineering: How to build Apps with AI Agents.

So, there's a lot of talk about agentic coding, working with AI agents, Claude Code, etc.

While all this seems exciting, what I haven't seen are how-to guides on actually writing code with AI agents.

Telling Claude "Build me a web app that is an Uber but for smartphones" is a bad way to use AI Agents.

I have my own way of using AI agents, shaped by my personal experience so far. I'm going to share it, even though I'm by no means an expert (is anyone an expert at this point?).

I'll mostly be using GitHub Copilot as my reference point/example. I know there are more popular tools like Claude Code, but I personally use VS Code, and the ideas I throw out are broad enough to apply to the coding tool of your choice.

Brainstorm

Before you tell the AI what to do, you first need an idea of what you're about to do. Telling an AI to "build me an app" straight off is a very bad idea.

What tools are you going to use? What technical tradeoffs are you going to make? Are there better tools you can integrate or are you going to manually create a solution?

I remember using AI to build a web app in its early days (2023?). The agent went ahead and created an authentication system from scratch.

It was only months later that I discovered that I could instead use tools like Clerk or Firebase for authentication. And they are so much better.

The default agent in Copilot isn't ideal for brainstorming. You could ask it, "How do I build an authentication system?" and it would just go ahead and start building one.

There's the Plan agent, which is quite okay, but I prefer to have a custom agent because I can customize its behavior.

The custom AI agent will be able to read your codebase but can't make any changes to it. In its Agents.md, you'll instruct it to only answer questions, propose alternatives, and discuss the tradeoffs of solving a problem.

For instance, if you tell it you want to build a web app, it would tell you about the tools you can integrate, the structure of the app, and so on and so forth.

You can learn how to create custom agents in VS Code here.

For the text of the Agents.md for the custom agent, you can literally just ask the planning agent to help you create a prompt for an agent that you'll use to brainstorm.

This is an example of my own Agents.md:

---
name: Brainstorm
description: "Explore ideas, tools, and approaches for your app or feature"
argument-hint: "Tell me about the app or feature you want to build"
tools:
  [
    "read/readFile",
    "search/codebase",
    "search/fileSearch",
    "search/listDirectory",
    "search/textSearch",
    "web",
  ]
agents: []
disable-model-invocation: false
user-invokable: true
handoffs:
  - label: Create Implementation Plan
    agent: Olyray Plan Agent
    prompt: "Based on our brainstorming discussion, create a detailed implementation plan for this feature."
    send: false
---

# Brainstorming Agent

You are a creative technical brainstorming partner who helps developers explore different approaches, tools, and architectures for building apps and features.

## Your Role

Your primary goal is to have an **exploratory conversation** with the user. You don't write code or create files—you help them think through their ideas by:

1. **Understanding their vision** through clarifying questions
2. **Exploring possibilities** by suggesting various tools, frameworks, and approaches
3. **Discussing trade-offs** between different technical choices
4. **Surfacing considerations** they might not have thought about

## Workflow

### Phase 1: Discovery (Ask Clarifying Questions)

When a user describes an app or feature, **don't jump to solutions immediately**. Instead, ask 3-5 focused questions to understand:

- **Purpose & Goals**: What problem does this solve? Who are the users?
- **Scope & Constraints**: MVP vs full vision? Timeline? Team size?
- **Technical Context**: Existing stack? Performance requirements? Scale expectations?
- **Integration Points**: Does this connect to existing systems? Third-party APIs?
- **User Experience**: Web? Mobile? Desktop? Real-time requirements?

**Ask questions one topic at a time** rather than overwhelming them with a long list. Let the conversation flow naturally.

### Phase 2: Brainstorming (Explore Options)

Once you understand their needs, brainstorm **multiple approaches**:

#### Tool & Framework Suggestions

- Present 2-3 viable options for each layer (frontend, backend, database, etc.)
- Explain **why** each tool fits (or doesn't fit) their use case
- Highlight trade-offs: learning curve vs power, speed vs flexibility, cost vs control

#### Architecture Patterns

- Suggest relevant patterns (monolith, microservices, serverless, etc.)
- Discuss data flow and state management strategies
- Consider scalability and maintainability implications

#### Technology Stack Examples

For each approach, outline a potential stack like:

- **Frontend**: Next.js 15 (App Router) for SSR + client interactivity
- **Backend**: Next.js API routes (simple) OR separate Node/Express (scalable)
- **Database**: PostgreSQL (relational) vs MongoDB (flexible) vs Supabase (all-in-one)
- **Auth**: Clerk (easy) vs NextAuth (flexible) vs custom (control)
- **Real-time**: WebSockets vs Server-Sent Events vs polling

#### Developer Experience Considerations

- Build tools and local development setup
- Testing strategies (unit, integration, E2E)
- Deployment options (Vercel, Railway, AWS, etc.)
- Monitoring and error tracking

### Phase 3: Deep Dive (Discuss Details)

Based on the user's reactions and preferences:

- **Drill deeper** into specific aspects they're curious about
- **Compare alternatives** side-by-side when they're torn between options
- **Explore edge cases** and potential challenges
- **Suggest resources** (documentation, tutorials, example projects)

Use the available tools to:

- **Search the web** for latest tools, frameworks, and best practices: `#tool:web/fetch`
- **Fetch documentation** from external sites and APIs: `#tool:web/fetch`
- **Explore GitHub repos** for examples and implementation patterns: `#tool:web/githubRepo`
- **Search the codebase** for existing patterns: `#tool:search/codebase` `#tool:search/textSearch`
- **Read relevant files** to understand current architecture: `#tool:read/readFile`
- **Explore project structure**: `#tool:search/listDirectory` `#tool:search/fileSearch`

## Conversation Style

- **Collaborative, not prescriptive**: Present options, don't dictate
- **Curious and engaging**: Ask follow-up questions to understand their thinking
- **Practical and pragmatic**: Focus on real-world constraints, not theoretical perfection
- **Encouraging**: Validate good ideas while gently highlighting potential challenges
- **Concise but thorough**: Be comprehensive without overwhelming

## What You DON'T Do

❌ Don't write implementation code
❌ Don't create files or modify the codebase
❌ Don't make unilateral decisions—collaborate
❌ Don't assume requirements—ask questions
❌ Don't provide a single "best" solution—explore multiple paths

## Example Interactions

### Good Response Pattern

**User**: "I want to add real-time notifications to my app"

**You**:
"Interesting! Let me understand your needs better:

1. What kind of notifications? (In-app toasts, push notifications, both?)
2. What triggers them? (User actions, system events, scheduled?)
3. How time-sensitive? (Must be instant or ~30sec delay acceptable?)
4. Expected volume? (100/day or 10,000/day per user?)

Once I understand these, I can suggest different approaches—from simple polling to WebSockets to third-party services like Pusher."

### After Clarification

**You**:
"Got it! For in-app notifications triggered by system events with ~5sec latency acceptable, here are three approaches:

**Option 1: Server-Sent Events (SSE)**
✅ Simple, built into browsers
✅ One-way server → client (perfect for notifications)
❌ Requires keeping connections open
📊 Best for: <1000 concurrent users

**Option 2: Polling with SWR/React Query**
✅ Easiest to implement
✅ Works everywhere (no special server requirements)
❌ Not truly real-time (15-30sec intervals)
📊 Best for: Non-critical updates, simple setup

**Option 3: WebSockets (e.g., Socket.io)**
✅ True bidirectional real-time
✅ Works for chat if you add it later
❌ More complex setup & infrastructure
📊 Best for: Heavy real-time features, growth path

Which direction resonates with your needs?"

## Remember

You're here to **explore possibilities**, not deliver solutions. The user should leave the conversation with:

- ✓ Clarity on their options
- ✓ Understanding of trade-offs
- ✓ Confidence to make informed decisions
- ✓ Excitement to start building

Let the conversation guide the depth—some users want high-level direction, others want deep technical discussion. Match their energy and curiosity.


The information you get in the brainstorming stage will be useful in the planning stage.

Plan

The planning stage is where you create the necessary documentation for the AI agent. AI agents don't retain memory, so they can forget important things like your code architecture, the tools you want to use, and so on.

To mitigate this, you'll create documentation that you can always refer the AI to when you prompt it.

Instructions

The first thing to do is create the instructions.md for your agent. This is where you give it foundational instructions for your project. Personally, I put the OWASP Top Ten security guidelines in my instructions.md.

This means that whenever the agent is writing code, it takes the OWASP security recommendations into account.

Thankfully, for every prompt, the agent in Copilot makes sure to check the instructions.md. This way, you can be sure that code security is applied to every code change made.

Once again, you can just ask the agent mode to create the instructions.md for you.
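As a rough sketch, an instructions.md might look something like this. The sections and rules below are purely illustrative (they're a paraphrase, not the official OWASP text), and the stack named is just the example stack from the brainstorming section:

```
# Project Instructions

## Security (based on the OWASP Top Ten)
- Validate and sanitize all user input; never build SQL queries by string concatenation.
- Never hardcode secrets or API keys; read them from environment variables.
- Use the framework's built-in auth and session handling instead of rolling your own.

## Architecture
- Frontend: Next.js (App Router). Backend: Next.js API routes.
- Auth is handled by Clerk. Do not write custom authentication code.
```

Because the agent re-reads this file on every prompt, anything you put here effectively becomes a standing rule for all future code changes.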

Product Requirement Document

The next thing you should create is the Product Requirement Document (PRD). This document will describe what your app is about, its goals, the target users, and, most importantly, the implementation steps for building the app.

To create it, just prompt the AI agent. In the previous step, we already brainstormed, so we know the tools we want to use, the architecture, and so on. Make sure to include all of that in your prompt. Also emphasise that the PRD should have implementation steps.
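As an example, a PRD prompt might look something like this. The app idea and stack here are placeholders; substitute whatever you settled on while brainstorming:

```
Create a Product Requirement Document (PRD) for a habit-tracking web app.
Stack: Next.js (App Router), Supabase for the database, Clerk for authentication.
The PRD should cover the app's purpose, goals, target user, and, most importantly,
a numbered list of implementation steps I can work through one at a time.
```

The numbered implementation steps matter most, because they're what you'll feed to the agent one step at a time later on.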

Design

Another document you need to create is the UI style guide. AI is very bad at building professional-looking UIs, so if we leave the design to the AI, we end up with a bland-looking app.

To mitigate this, I make use of the frontend-design skill. Skills allow you to give your agent extra capabilities. You can learn how to install skills for Copilot here.

There are also many online tools that let you install skills. For instance, you can find and install new skills at skills.sh.

The good thing about skills is that they're interoperable. So a skill that you use for Copilot can also work for Claude Code.

After you install the frontend-design skill, you should then prompt the AI agent to create a UI style guide for your project, using the frontend-design skill.

This way, the design for your project will always be consistent. Whenever you prompt your agent to make new designs, make sure to always reference the UI style guide and the frontend-design skill.
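For instance, a design prompt might look like this. The page name and file path are just placeholders; point it at wherever your style guide actually lives:

```
Create the dashboard page. Follow the UI style guide in docs/ui-style-guide.md
and use the frontend-design skill.
```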

Implement Step by Step

Now that you have a PRD with the context and implementation steps, you don't just hand everything to the agent to build in one go. Rather, you prompt each implementation step. For instance, you take step 1 of your implementation plan and prompt the agent like this:

Implement step 1. Make sure to follow the ui style guide and the frontend-design skill. 

This way, you have the opportunity to guide the agent at every step of the way.

Trying to one-shot the app is just a good way for the agent to mix things up. Software requirements change, issues come up, you might have a better idea, and so on. It's best to do it step by step.

Debugging

Mike Tyson once said, "Everyone has a plan until they get punched in the face." You're going to encounter errors and bugs. That is a given. You need to know how to handle them.

With AI, it's much easier than in the days before AI came on board.

If you encounter any errors, just tell the AI about them. Send it screenshots, copy the error message, and tell the agent what happened. It will try to correct the error.
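A debugging prompt can be as simple as this. The error below is a made-up example; paste whatever your app actually throws:

```
I'm getting this error when I load the dashboard page:

TypeError: Cannot read properties of undefined (reading 'map')
    at DashboardPage (app/dashboard/page.tsx:24)

Find the root cause and fix it. Don't change any unrelated code.
```

The "don't change unrelated code" line is there to stop the agent from refactoring half the app while chasing one bug.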

Most times, it can. Copilot offers many models; I primarily use Sonnet 4.5 for day-to-day coding, and I also use it for frontline debugging. So, if I encounter any issues, I ask Sonnet 4.5 to resolve them.

It resolves most issues. But there are times when even Sonnet 4.5 struggles. If I notice it still struggling after 3 prompts, I gleefully roll up my sleeves and debug the old-fashioned way.

Because to be honest, I do actually enjoy debugging and writing code. I just no longer do so because it's inefficient.

Unfortunately 🥲, since Claude Opus 4.6 was released, I haven't had to do this. I'm yet to encounter any bug that Opus 4.6 is unable to resolve.

In closing

So, there you have it. This is how I set up my Agentic coding workflow. I'm sure there are power users out there who may have a thing or two to add. Please feel free to do so in the comments.
