Part 1 of 3: Getting Started
This series is for solo founders and small teams without developer resources. If you have front-end and back-end developers on your team, this probably isn’t for you.
---
This is a love story!
I’ve always wanted to be a developer. (Okay, a witch was my first choice, but “developer” was a close second.)
Back in high school, over 20 years ago, I let classmates convince me I could never be one because I was a girl, even though I was top of my computer science class. I made it into tech anyway, just via the QA, release manager, and product management route.
I’m still not a developer (nor a witch, though I’m still trying). But with the help of AI, I can finally build production-ready applications. This is my journey of closing the gap between what AI can generate from a prompt and what actually gets shipped.
What’s in this series:
Part 1 (this post): Getting started — tools, process, and mindset
Part 2: Debugging — when AI goes in circles and how to get unstuck
Part 3: After launch — maintenance, iteration, and keeping things running
---
The Reality Check: What AI Actually Changes
Here’s a summary of others’ experiences:
AI drastically lowers the barrier to coding, but the barrier to production remains. Testing, deployment, security, scaling, and maintainability still require human systems thinking. The AI can write the code; you still need to be the architect.
Non-developers can build production-grade systems, if they leverage their existing process skills.
Common failure points include: lack of architectural consistency, missing tests, insecure data handling, poor versioning, and no observability. Most people who give up do so because they treat AI like a magic button rather than a tool that needs guidance.
Success factors are: clear domain understanding, strong prompting skills, iterative QA mindset, and willingness to debug AI output. If you can write clear requirements and decompose problems, you can direct an AI to build software.
Taking this on board, here’s what I did.
---
Before You Start: The Non-Developer’s Toolkit
You’ll need the right tools. Here’s what I used and why:
IDE and Terminal
You need a code editor (usually part of an IDE, an integrated development environment) and a terminal to run commands.
I chose: Warp (paid tier) — it’s a 2-in-1 IDE and terminal with a built-in AI agent that can read your codebase.
Other options: VSCode, Cursor, Replit, or GitHub Codespaces. Cursor is popular for AI-assisted coding. VSCode is the industry standard. Choose based on whether you want the AI integrated (Warp, Cursor) or separate (VSCode + ChatGPT).
Version Control
You need GitHub for version control: tracking changes, managing different versions, and collaborating (even if it’s just you and the AI).
Why it matters: When something breaks (and it will), you need to be able to go back to the version that worked. GitHub also hosts your CI/CD pipeline.
Getting started: If you’re not familiar with Git, ask your AI to help you set up your account and explain branches, commits, and merges. You don’t need to memorize commands, just understand the concepts.
CI/CD Pipeline
This is a process that automatically tests, builds, and publishes your app every time you push code.
I used: GitHub Actions (built into GitHub, defined in a single YAML file your AI can write).
Why it matters: This is what catches bugs before they reach users. It’s the difference between “works on my machine” and “works in production.”
Backend & Database
If you’re saving data or managing user accounts, you need a backend service.
I used: Google Cloud and Supabase for different projects.
Other options: Firebase, Appwrite, AWS Amplify. I chose Supabase for projects needing a database because it handles auth, storage, and PostgreSQL in one place. For simpler projects, Firebase is easier to set up.
Cost consideration: Most have generous free tiers, but understand the pricing model before you start. Ask your AI: “What will it cost to run this with 100 users? 1,000 users?”
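To demystify what a backend service actually does for you, here’s a rough sketch of what talking to Supabase looks like from TypeScript. I’m hedging here: the table name, column names, and environment variable names are all placeholders, not something your project will magically have.

```typescript
import { createClient } from '@supabase/supabase-js';

// Keys come from environment variables, never hard-coded.
// SUPABASE_URL and SUPABASE_ANON_KEY are placeholder names.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// Create a user account; Supabase handles hashing and sessions for you.
const { error: signUpError } = await supabase.auth.signUp({
  email: 'founder@example.com',
  password: 'use-a-real-password-policy',
});

// Store and read data without writing raw SQL.
// 'tasks' is a hypothetical table.
const { data: openTasks, error } = await supabase
  .from('tasks')
  .select('id, title')
  .eq('done', false);
```

The point isn’t to memorize this. It’s that “auth, storage, and PostgreSQL in one place” translates to a handful of readable calls your AI can write and you can sanity-check.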
AI Agent(s)
I used: ChatGPT Projects (for planning and architecture) and Warp Agent (for implementation).
Why two: ChatGPT is better for big-picture thinking and research. Warp Agent can see and modify your actual codebase, making it better for implementation. Using both let me separate “what to build” conversations from “how to build it” execution, and it saved me Warp credits.
Alternative: You could use just one tool (Cursor has built-in Claude, ChatGPT Canvas works for some workflows). Experiment to find what works for you.
Task Management
This sounds like overkill for one person, but planning helps you find weak spots before they become problems.
I used: GitHub Issues (integrated with my repo, free, simple).
Why it matters: When the AI suggests adding five new features while you’re debugging login, having a task list prevents scope creep. Add it to the backlog, stay focused.
Alternatives: TickTick, Asana, Trello, Linear, or a spreadsheet. (Just don’t tell past-Andreea I said spreadsheets are acceptable.)
---
The Process: How It Actually Works
Here’s how I worked with AI to go from idea to deployed application.
Phase 1: Brainstorming and Architecture
I started by proposing my idea to ChatGPT. After some market research, we brainstormed with “what if” questions:
“What if I want user registration and login?”
“What if I need to store this data long-term?”
“What if I want to send email notifications?”
This helped me understand what I was actually building. Each “what if” revealed a technical decision I needed to make.
Once the idea was solid, I asked the AI for:
- A starter Git repository structure
- A CSV of tasks to get me started
- A recommended tech stack with rationale
Pro tip: Don’t just accept the first suggestion. Ask “why this framework over alternatives?” Understanding the tradeoffs helps you make better decisions later.
Phase 2: Building the Foundation
I moved into Warp and started with the UI mockup. I gave the Warp Agent specific design guidelines (colors, layout preferences, mobile-first or desktop-first) and let it build a rough prototype.
Why start with UI: It gave me something to see and click immediately. This rough architecture became my North Star. When adding backend features later, I always knew what they needed to connect to.
I also asked the AI to:
- Set up the project structure with all necessary config files
- Create placeholder components for each major feature
- Set up routing (how users navigate between pages); there’s a sketch below
At this stage, nothing worked — but the skeleton was there.
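For the curious, here’s roughly what that skeleton looks like in code. This is a sketch assuming a React app with react-router-dom; your stack may differ, and the page names are invented.

```tsx
import { createBrowserRouter, RouterProvider } from 'react-router-dom';

// Placeholder components: every major feature gets a stub up front.
const Home = () => <h1>Home (coming soon)</h1>;
const Dashboard = () => <h1>Dashboard (coming soon)</h1>;
const Settings = () => <h1>Settings (coming soon)</h1>;

// Routing: how users navigate between pages.
const router = createBrowserRouter([
  { path: '/', element: <Home /> },
  { path: '/dashboard', element: <Dashboard /> },
  { path: '/settings', element: <Settings /> },
]);

export function App() {
  return <RouterProvider router={router} />;
}
```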
Phase 3: Tackling the Hard Parts First
After the UI prototype, I tackled what felt most daunting: user authentication.
I needed specific permissions rules, so I gave Warp a detailed paragraph explaining:
- Who can register (email domain restrictions, invitation-only, etc.)
- Password requirements and recovery flow
- What login methods to support (email/password, social login, etc.)
- What happens after successful login
I chose Auth0 because it handles 2-factor authentication, password resets, and security compliance for me. One less thing to build and secure myself.
After a couple of hours configuring service accounts (with Warp guiding me through), we reached the exciting “Awesome! We now have authentication!” moment.
Reader, we did not have authentication.
It took another day of debugging to figure out what was wrong: redirect paths pointing to localhost, incorrect service account roles, and environment variables not being read properly. That login broke many more times during development and absolutely came back to haunt me in staging.
Lesson learned: When AI says “it’s working,” that means “it compiles.” Test it yourself. Click every button. Try to break it.
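For context, the client-side wiring for Auth0 is only a few lines, which is exactly why the failures are so sneaky: the code looks fine while the configuration is wrong. A hedged sketch using @auth0/auth0-spa-js; the domain, client ID, and callback path are placeholders.

```typescript
import { createAuth0Client } from '@auth0/auth0-spa-js';

const auth0 = await createAuth0Client({
  domain: 'YOUR_TENANT.auth0.com', // placeholder
  clientId: 'YOUR_CLIENT_ID',      // placeholder
  authorizationParams: {
    // This must exactly match an allowed callback URL in the Auth0
    // dashboard. A leftover localhost value here is what broke my login.
    redirect_uri: window.location.origin + '/callback',
  },
});

// Send the user to Auth0's hosted login page...
await auth0.loginWithRedirect();

// ...then, on the /callback page, complete the handshake:
await auth0.handleRedirectCallback();
const user = await auth0.getUser();
```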
Phase 4: The Iterative Build Loop
From there, I worked feature by feature. For each one:
- Define the feature clearly in plain English with acceptance criteria
- Let the AI implement it in code
- Test it manually by actually using the app
- Ask the AI to write automated tests for it (there’s an example after this list)
- Ask the AI to update documentation to reflect the new feature
- Commit to Git with a clear message about what changed
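To make the testing step concrete: a useful test doesn’t have to be fancy. Here’s the shape of one, assuming Vitest and a hypothetical validateEmail helper (both are illustrative, not prescriptions):

```typescript
import { describe, it, expect } from 'vitest';
import { validateEmail } from './validate-email'; // hypothetical helper

describe('validateEmail', () => {
  it('accepts a normal address', () => {
    expect(validateEmail('ada@example.com')).toBe(true);
  });

  it('rejects an address with no @', () => {
    expect(validateEmail('not-an-email')).toBe(false);
  });
});
```

If you can write an acceptance criterion, you can read (and request) a test like this.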
I also created rules and saved workflows for the Warp Agent to maintain consistency:
“Always use TypeScript for type safety”
“Follow this file naming convention: feature-name.component.tsx”
“Keep functions under 50 lines; break complex logic into smaller pieces”
“Check with me before creating new files”
This last rule is crucial. AI loves creating new files. Without guardrails, you’ll end up with duplicate components, scattered logic, and no one (including the AI) will be able to find anything.
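Put together, a file that follows those rules might look like this. Everything here is illustrative, assuming a React setup:

```tsx
// user-profile.component.tsx: follows the feature-name.component.tsx convention
import { useState } from 'react';

type UserProfileProps = {
  name: string;
  email: string;
};

// Small, typed, single-purpose: comfortably under the 50-line rule.
export function UserProfile({ name, email }: UserProfileProps) {
  const [expanded, setExpanded] = useState(false);

  return (
    <section>
      <h2>{name}</h2>
      <button onClick={() => setExpanded(!expanded)}>
        {expanded ? 'Hide details' : 'Show details'}
      </button>
      {expanded && <p>{email}</p>}
    </section>
  );
}
```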
I asked Warp to integrate security checks into my CI pipeline. I used GitHub Copilot to review code. I asked silly questions like “Why do we need this file?” and “What is this command supposed to do?”
Phase 5: When Things Break (Spoiler: They Will)
Running builds locally was harder than expected. Dependencies conflicted. Environment variables didn’t load. The app worked in dev but crashed in staging.
I got frustrated. I walked away from my laptop. I told the Warp agent I was “going mad” and that we were “going in circles.” I hugged a cat. Watered my plants.
Then I came back and we fixed it.
This revealed something important about working with AI: The AI is like an incredibly fast, infinitely patient junior developer who has read every textbook but has zero experience or common sense.
My role wasn’t Product Manager anymore. I became the AI Tech Lead. I wasn’t writing code; I was directing the architect and quality-checking the intern, often simultaneously.
---
Your Secret Weapon: The Product Manager Mindset
Here’s what surprised me most: my PM skills were more valuable than coding knowledge.
Problem Decomposition
When the AI went in circles (and it will), I had to stop and break down the problem:
Instead of: “Fix the login”
I learned to say:
“Let’s take a step back. The error is: [paste exact error]”
“Expected behavior: user clicks login, gets redirected to /dashboard”
“Current behavior: user clicks login, sees a blank page”
“Let’s check: Is the button triggering the function? Is the function calling the API? Is the API returning data? Is the redirect path correct?”
This is just writing user acceptance criteria, but for debugging. Breaking the problem into testable pieces helps the AI (and you) isolate where things break.
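In code terms, that checklist becomes logging at every hop, so you can see exactly where the chain breaks. A sketch with made-up names (the endpoint and redirect are illustrative):

```typescript
async function handleLogin(email: string, password: string) {
  console.log('1. Button triggered the function'); // is the handler wired up?

  const response = await fetch('/api/login', {     // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, password }),
  });
  console.log('2. API responded with status', response.status);

  const data = await response.json();
  console.log('3. API returned data', data);

  console.log('4. Redirecting to /dashboard');
  window.location.assign('/dashboard');
}
```

Whichever number never shows up in the console is where the problem lives.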
Systems Thinking Over Syntax
I didn’t need to know JavaScript syntax. I needed to understand:
- How does authentication flow through the system?
- Where is data stored and how is it retrieved?
- What happens when a user clicks this button?
- Which components depend on each other?
The AI can write the code. You need to guide it in designing the system.
Asking “Why” and “What If”
PMs are trained to ask:
- “Why are we building this?”
- “Why do we need [function]?”
- “What is the purpose of [function]?”
Apply this to code:
- “Why did you choose this approach over [alternative]?”
- “What if the API call fails?”
- “What could go wrong if two users try to edit this simultaneously?”
The AI will answer these questions — but only if you ask them.
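When I asked “what if the API call fails?”, the answer usually turned into code like this: a hedged sketch of a defensive wrapper (the function name and retry numbers are mine, not a standard):

```typescript
async function fetchWithRetry(url: string, retries = 3): Promise<unknown> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url);
      if (!response.ok) {
        // The server answered, but with an error. Don't pretend it worked.
        throw new Error(`HTTP ${response.status}`);
      }
      return await response.json();
    } catch (err) {
      if (attempt === retries) throw err; // out of retries: surface the error
      console.warn(`Attempt ${attempt} failed, retrying...`, err);
      await new Promise((resolve) => setTimeout(resolve, 500 * attempt)); // simple backoff
    }
  }
  throw new Error('unreachable'); // satisfies strict compilers
}
```

The simultaneous-edit question is the same idea at the database level; asking it is what prompts the AI to suggest safeguards like row versioning.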
Knowing When to Ship
PMs understand “good enough for v1.” You don’t need perfect code. You need code that:
- Solves the core problem
- Doesn’t expose user data
- Can be improved later
I shipped with bugs I knew about. I documented them. I prioritized them. Some got fixed; some didn’t matter.
This is not mediocrity — it’s agile methodology. Iterate based on real usage rather than imagined perfection.
---
What “Production Ready” Actually Means (For Non-Developers)
To me, “production ready” means:
Your code is tested. Not bug-free (that doesn’t exist), but tested. You’ve clicked through every flow. Your AI wrote automated tests. You’ve had friends try to break it.
You have security measures. You’re not storing passwords in plain text. You’re using HTTPS. You’ve set up proper authentication. API keys are in environment variables, not committed to Git.
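One concrete habit from that list: secrets live in environment variables, and the code fails loudly if they’re missing. A sketch (the variable name is a placeholder):

```typescript
// Loaded from a .env file locally (add .env to .gitignore!)
// and from your host's secret settings in staging and production.
const apiKey = process.env.PAYMENT_API_KEY; // placeholder name

if (!apiKey) {
  // Fail at startup, not halfway through a user's checkout.
  throw new Error('PAYMENT_API_KEY is not set; check your environment');
}
```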
You don’t push directly to production. You have three environments:
- Dev (your laptop, where you break things freely)
- Staging (a production-like environment where you test before launch)
- Production (what users see — you only push here when staging works)
You know when something breaks. Set up basic error monitoring. Ask your AI about Sentry, LogRocket, or similar tools. You need to know when users hit errors, even if you can’t fix them immediately.
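Wiring in error monitoring is genuinely small. A sketch assuming Sentry in a browser app (the DSN below is a placeholder; you get the real one from your Sentry project):

```typescript
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
});

// From here on, uncaught errors are reported automatically.
// You can also capture handled errors yourself:
const riskyOperation = () => { throw new Error('demo failure'); }; // stand-in
try {
  riskyOperation();
} catch (err) {
  Sentry.captureException(err);
}
```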
You have a rollback plan. If the new version breaks, can you quickly revert to the old one? Git makes this possible; learn how.
You understand the costs. Backend services charge based on usage. Know the pricing model. Set up billing alerts. A surprise $500 bill is not a fun way to learn about scaling costs.
You’ve documented how it works. Future you (or your first hire) needs to understand what you built. Ask the AI to maintain a README that explains the architecture, how to run the app locally, and where everything is.
---
The Checklist: Your Starting Framework
Copy this checklist and work through it with your AI at the very beginning:
Project Setup
[ ] Define your scope: Tell the AI your app type (web, mobile, desktop) and purpose (internal tool, consumer app, etc.). This helps it choose appropriate tools and plan for scale.
[ ] Define your stack: Ask for stack suggestions, or tell it your preferences. Specify any 3rd-party services (Supabase, Auth0, Stripe, etc.) you want to use.
[ ] Set up three environments: Tell your AI upfront you’ll have dev, staging, and production. This prevents the “works locally but breaks in production” nightmare.
Architecture and Standards
[ ] Establish architecture: Ask your AI to recommend an architecture pattern (MVC, component-based, serverless, etc.) and explain the tradeoffs.
[ ] Set coding standards: Define conventions for file naming, folder structure, code style. Ask the AI to enforce these. This can be a project-wide prompt.
[ ] Create architecture rules: Tell the AI your constraints (e.g., “functions under 50 lines,” “always use TypeScript,” “check before creating new files”).
Quality and Testing
[ ] Automate testing: Ask your AI to suggest, write, and update tests with each feature. You need: lint tests (code style), integration tests (features work together), API tests (backend calls work), and frontend tests (UI behaves correctly).
[ ] Enforce quality checks: Set up pre-commit tests (run automatically before code is saved to Git): linting, documentation checks, security scans.
[ ] Build your CI/CD pipeline: Ask your AI to set up GitHub Actions or similar. Include placeholder tests that will fail initially — they’ll remind you to add real tests later.
Development Practices
[ ] Use Git properly: Create a branch for each task. Merge to main when it works. If you don’t understand Git, ask your AI to explain branching strategy and enforce it.
[ ] Test manually: Always build and use the app yourself. Automated tests catch code errors; manual testing catches UX problems.
[ ] Stay focused: When you see something to fix, add it to your task list — don’t fix it immediately. It’s easy to go down rabbit holes.
Security and Reliability
[ ] Prioritize security: Tell your AI that security is your #1 priority. Ask it to recommend security best practices for your specific stack.
[ ] Use peer review: Have GitHub Copilot or another AI review your code. Different AI models catch different issues.
[ ] Set up monitoring: Ask your AI about error tracking tools (Sentry, LogRocket) and how to integrate them.
When You’re Stuck
[ ] Take a step back: When going in circles, tell your AI: “Let’s take a step back. Here’s the exact error. Here’s what should happen. Here’s what actually happens. Let’s try again.”
[ ] Rephrase the problem: If the AI isn’t getting it, you’re not explaining it clearly. Try explaining it differently.
[ ] Remember: Different errors = progress. As long as you’re getting new errors, you’re moving forward.
---
What This Makes Possible
Listen, I’m not going to tell you this is easy. It is not easy.
But it is doable. How technical you want to get is up to you. I let the Warp Agent run commands I already understood, but I’m not trying to become a “proper” developer. I’m trying to ship products.
This isn’t about making developers obsolete — their role as architects and scalability experts remains critical for large, complex systems. But the real insight is this:
If you’re a solo founder or small team wanting to build your own product, you now can.
You no longer need to find a technical co-founder or pay a developer thousands just to get your idea off the ground. The barrier between a great idea and a working, production-ready product has fundamentally shifted.
The code I ship isn’t perfect. It’s good enough to solve real problems for real users. And with each iteration, it gets better.
---
What’s Next
In Part 2, I’ll dive deep into debugging — the biggest challenge in this whole process. How to recognize when the AI is stuck, techniques for getting unstuck, and how to think about debugging when you don’t know how to code.
In Part 3, we’ll cover life after launch: handling production bugs, adding new features, managing technical debt the AI helped create, and deciding what to fix yourself vs. when to finally hire that developer.
Want to try this yourself? Start with something small — a personal tool, an internal dashboard, a simple web app. Pick one problem you want to solve and work through the checklist above with your AI of choice.
The worst that happens? You learn a lot and don’t finish. The best that happens? You ship something real.
---
Have questions or want to share your own AI building experience? I’d love to hear from you — especially the messy parts where things broke.