Smyekh David-West
How to Actually Use AI to Build Production Software, End to End

The stack, the tools, and the step-by-step workflow nobody hands you at the start.

The Blueprint

Before you build, here's the full plan. Bookmark it and come back to whatever stage you're at:

  • The Foundation
  • The Stack
  • The Build

Tools laid out. Time to build.


There's never been a better time to build software. You can open Claude or Gemini, describe an idea, and watch functional code appear in seconds. But here's the thing nobody tells you in the hype reels: the code is only as good as the decisions behind it.

With so many tools flooding the market (Lovable, Bolt, Cursor, and a dozen others), developers, and newcomers especially, are paralysed by choice. The real question isn't whether AI can write your code. It can. The question is what stack you point it at, and how you use the AI properly in the first place.

This matters more than most people realise. Mature, well-documented technologies are easier for AI models to work with because they've been trained on years of documentation, Stack Overflow threads, GitHub repos, and real-world usage. When you pick an obscure or newly released framework, you're not just fighting the learning curve yourself. The AI is fighting it too. And that means more hallucinations, more debugging, more time lost.

So here's a practical, opinionated guide on what to use when you're building something you actually intend to ship and how to use the AI properly while you do it.

Before You Write a Line of Code: Use the CLI, Not Just the Browser

Most people start with Claude or Gemini in a browser tab. That's fine for exploring ideas, generating boilerplate, or asking conceptual questions. But once you have an actual project, you're leaving significant capability on the table if you stay in the browser.

The real power comes from the Claude Code CLI and the Gemini CLI, command-line tools that let the AI agent read your actual codebase, understand the context of your project, and make changes across files with awareness of how everything fits together.

Here's why this matters: when you paste code into a browser chat, the AI only sees what you paste. When you use the CLI inside your project, the agent can navigate your file structure, read your existing modules, understand your schema, check your config files, and make decisions based on the whole picture, not just a fragment.

A note on terminals: VS Code has a built-in terminal, and it's useful for quick things like creating folders, or running a one-off command. But for your actual AI CLI sessions, download and use Warp. Warp is a modern terminal with a genuinely better experience than the default options on Windows or Mac. It has AI features built in, a clean interface, and it handles long-running CLI sessions much more comfortably than the cramped terminal panel inside VS Code. The habit to build is using VS Code for writing and editing code and using Warp for running your Claude Code or Gemini CLI sessions. They complement each other well.

Getting started with Claude Code:

  1. Make sure you have Node.js installed (version 18 or higher). You can check by running `node -v` in your terminal.

  2. Install Claude Code globally: `npm install -g @anthropic-ai/claude-code`

  3. Open Warp and navigate into your project folder: `cd your-project-name`

  4. Start a session: `claude`

  5. Authenticate with your Anthropic account when prompted. The first run opens a browser login; use the `/login` command inside a session if you need to switch accounts later.

Getting started with Gemini CLI:

  1. Make sure you have Node.js installed.

  2. Install it: `npm install -g @google/gemini-cli`

  3. Open Warp and navigate into your project: `cd your-project-name`

  4. Start a session: `gemini`

  5. Authenticate with your Google account when prompted. The first run walks you through login; use the `/auth` command inside a session to change the authentication method later.

Once you're inside a CLI session in Warp with your project loaded, you can ask the agent things like:

"What does the authentication flow look like in this codebase?"

or

"Add input validation to the user registration endpoint, consistent with the patterns already in this project."

That level of context-awareness is the difference between AI as a code generator and AI as an actual collaborator.

Think of the browser as your brainstorming and planning phase. Warp with the CLI is where you build.

Before you push anything to GitLab, add one more tool to your Warp workflow: CodeRabbit. CodeRabbit is an AI code review tool, and the habit worth building is running it locally against your uncommitted changes before they ever leave your machine.

Install the coderabbit CLI and run:

```bash
coderabbit review --plain --type uncommitted
```

What this does is review everything you've changed but not yet committed, the same way a senior engineer would if they were looking over your shoulder before you pushed. It flags logic issues, security concerns, inconsistent patterns, missing error handling, and things the AI that wrote the code may not have caught because it was focused on making the feature work rather than making it defensible. Running it before you commit means you're catching problems at the cheapest possible moment, before they're in the pipeline, before they're in a merge request, and before they've touched production.

The `--plain` flag keeps the output readable in the terminal rather than formatted for a browser. In Warp, the output renders cleanly and you can scroll through the review the same way you'd read a colleague's notes. If something CodeRabbit flags isn't clear, paste it into your Claude or Gemini CLI session and ask for an explanation or a fix.

The workflow in Warp then becomes: write code in VS Code, switch to Warp, run `coderabbit review --plain --type uncommitted`, address what it surfaces, then commit and push. That sequence gives you two sets of AI eyes on every change before it hits the pipeline: the model that wrote the code and the model reviewing it. They catch different things.

First, a Word on Vibecoding vs. Knowing What You're Doing

If you're a pure vibecoder, someone who delegates every architectural decision to the AI, this guide is especially for you. AI will make choices for you, and sometimes those choices are fine. But often, they're fine for a prototype. Not for production.

I built a smart plant sitter for a hackathon last year using Flet for the frontend. Got it working. Hacked my way through the bugs and was proud of it. But it was limited and not particularly polished by my standards.


Later, I tried to use Flet for an entirely separate production project, and it had so many issues I couldn't hack through them. No amount of debugging was going to get me where I needed to be. The issues weren't bugs in my code; they were limitations in the technology itself. Some frameworks and libraries simply aren't mature enough to survive in production. I moved to Next.js, and I still fall back on vanilla JS, HTML, and CSS, because the basics matter and I'm a proponent of Gall's Law.

Gall's Law, for those unfamiliar, states that all complex working systems evolved from simpler working systems. In other words, get the simple version working first, then build complexity on top of it. A framework that skips that foundation tends to crack under real-world pressure.

The lesson here is that pet projects and production apps are different species. Production demands polish, reliability, and critical decisions made upfront. An idea is no longer enough. Claude and Gemini can generate a hundred groundbreaking ideas before breakfast. What sets you apart is architecture, the decisions about how things fit together, why, and in what order.

You don't need to know everything. Experienced engineers still Google things constantly. But you do need to know enough to steer. That's what this guide is for.

1. Database and Cloud Infrastructure: Oracle Cloud (OCI), the Slept-On Giant

Let's start with the one that surprises people: Oracle Cloud Infrastructure, or OCI.

Most developers reaching for a cloud platform default to AWS, and some go with Google Cloud (GCP) or Azure. These are solid choices, but they come with significant complexity and cost, especially early on. Here's how they compare:

| Category | Oracle Cloud (OCI) | AWS | GCP | Azure |
|---|---|---|---|---|
| Free tier | Very generous (Always Free includes Autonomous DB, compute, storage) | Limited, expires after 12 months | Limited, some always-free | Limited, some always-free |
| Database maturity | Oracle DB has been in production since 1979 | RDS wraps third-party databases | Cloud SQL/AlloyDB are solid but younger | Azure SQL is solid, Microsoft-backed |
| Learning curve | Moderate | Steep | Moderate | Steep |
| AI familiarity | Good | Excellent, most AI training skews AWS | Good | Good |
| Cost at scale | Competitive, often cheaper | Expensive if not managed | Competitive | Competitive |

Pro Tip: In AWS, if your app goes viral and you serve 10 TB of data, you could be looking at a surprise bill of roughly $900. In OCI, that same bill is $0. Architecture is as much about the ledger as it is about the code.
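That figure is simple arithmetic you can sanity-check yourself. The sketch below assumes a ballpark AWS egress rate of $0.09/GB; real AWS pricing is tiered and varies by region, so treat this as an order-of-magnitude estimate, not a quote:

```python
# Back-of-envelope egress cost comparison.
# The $0.09/GB AWS rate is an assumed ballpark, not exact tiered pricing.
tb_served = 10
gb_served = tb_served * 1000                # decimal TB -> GB

aws_cost = gb_served * 0.09                 # charged from (nearly) the first GB
oci_cost = 0 if tb_served <= 10 else None   # within OCI's 10 TB monthly free allowance

print(f"AWS: ${aws_cost:,.0f}")   # AWS: $900
print(f"OCI: ${oci_cost}")        # OCI: $0
```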

Oracle's database has a legacy that AWS, GCP, and Azure simply can't match on the same terms. It's been battle-tested in some of the most demanding enterprise environments on the planet for over four decades. The Autonomous Database offering on OCI handles a lot of the operational burden for you, including patching, backups, and performance tuning, which is exactly what you want when you're a small team or solo developer trying to ship.

One concrete number worth knowing: OCI gives you the first 10 TB of outbound data transfer free every month. AWS starts charging after 100 GB. That is a 100x difference in your egress allowance, and for a startup serving real users, that gap shows up very quickly in your monthly bill.

One thing worth addressing directly: you may have seen YouTube videos comparing API performance benchmarks with flashy graphics of requests per second across different database and backend combinations. Those comparisons are often testing systems at a scale you won't touch for a long time, and they rarely reflect the architecture you'd actually be building. Paired with FastAPI on the backend, Oracle's database handles performance very comfortably for the vast majority of real-world production applications. You don't need to let those benchmarks drive your early decisions.

The OCI interface is surprisingly clean. If you get stuck, screenshot the console and paste it into Claude or Gemini to walk you through it step by step. It works.

One more thing worth saying upfront: the right way to set up OCI is not to click through the console and configure things manually. That approach works for a first look, but it doesn't scale, it doesn't reproduce cleanly across environments, and it leaves your security configuration undocumented and easy to get wrong. The better approach is to provision everything with Terraform from the start.

Terraform is an infrastructure-as-code tool that lets you define your entire cloud environment in configuration files, what compute to spin up, what networking rules to apply, which IAM policies to attach, which secrets to store in Vault, and what security lists to enforce. You write it once, apply it, and your infrastructure exists exactly as described. If you need a staging environment, you apply the same configuration with different variables. If something breaks, you have a complete record of what was provisioned and why.

Starting with Terraform on OCI means your security lists, IAM policies, and Vault secrets are codified alongside your application code, version-controlled in GitLab, and reviewable. That's a significantly more defensible position than manually clicking through the OCI console and hoping you remember what you configured six months later. The AI can write your Terraform configuration for you. Give it your architecture and ask it to scaffold the OCI provider setup, and you'll have a working starting point within minutes.
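As a flavour of what that looks like, here's a minimal sketch of an OCI provider setup with a single networking resource. The variable names, CIDR range, and display name are illustrative placeholders, not a recommended configuration:

```hcl
terraform {
  required_providers {
    oci = {
      source = "oracle/oci"
    }
  }
}

provider "oci" {
  region = var.region # e.g. "us-ashburn-1"
}

variable "region" {}
variable "compartment_ocid" {}

# One VCN as a starting point; security lists, IAM policies,
# and Vault secrets get codified the same way.
resource "oci_core_vcn" "app_vcn" {
  compartment_id = var.compartment_ocid
  cidr_blocks    = ["10.0.0.0/16"]
  display_name   = "app-vcn"
}
```

Run `terraform plan` to see what would be created, and `terraform apply` to provision it. From there, every additional resource lives in the same version-controlled files.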

2. Hosting and DNS: Cloudflare, Far More Than a Domain Registrar

A lot of people's first instinct for domain registration and hosting is GoDaddy. It's heavily marketed and familiar. But once you go with Cloudflare, it's genuinely hard to go back.

| Category | Cloudflare | GoDaddy |
|---|---|---|
| Primary strength | CDN, DDoS protection, edge hosting, DNS | Domain registration, basic hosting |
| Performance | Global CDN across 330+ cities worldwide | Standard hosting from a single region |
| Security | Built-in DDoS protection, WAF, SSL | Paid add-ons for most security features |
| Developer tools | Workers, Pages, R2, KV, D1, Wrangler CLI | Limited developer-facing tooling |
| Free tier | Generous: Pages, Workers, and CDN free to start | Minimal |
| AI familiarity | Excellent | Basic |

When Cloudflare says "edge," it means your site is served from whichever of their 330+ city locations is closest to the person loading it. When GoDaddy hosts your site, it's served from one place. If that one data centre is in Dallas and your user is in Lagos, they're waiting for the full round trip. With Cloudflare, a user in Lagos is likely hitting a nearby node. That's not a small difference in practice.

Cloudflare Pages lets you deploy static frontends and full-stack apps directly from a GitHub or GitLab repo. Cloudflare Workers lets you run serverless backend logic at the edge, meaning it executes close to your user's location, reducing latency significantly. Cloudflare R2 is object storage without egress fees, which is a major and often invisible cost with AWS S3.

A word on Wrangler: Cloudflare's command-line tool is called Wrangler, and it's how you develop, test, and deploy Cloudflare Workers and Pages locally before pushing them live. Instead of deploying blind, you run `wrangler dev` to simulate the edge environment on your own machine. Claude and Gemini both know Wrangler's syntax well, so you can ask the AI to write or debug your `wrangler.toml` configuration, generate Worker scripts, or set up bindings to R2 storage and KV namespaces. Run Wrangler from Warp for a cleaner experience, the same way you'd run your AI CLI sessions.
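For reference, a minimal `wrangler.toml` for a Worker with an R2 bucket and a KV namespace looks roughly like this; the project name, date, bucket, and IDs below are placeholders to adapt:

```toml
name = "my-worker"                 # placeholder project name
main = "src/index.js"
compatibility_date = "2024-01-01"

[[r2_buckets]]
binding = "ASSETS"                 # exposed to your Worker as env.ASSETS
bucket_name = "my-assets-bucket"

[[kv_namespaces]]
binding = "CACHE"                  # exposed as env.CACHE
id = "<your-kv-namespace-id>"
```

With that in place, `wrangler dev` simulates the bindings locally and `wrangler deploy` pushes the Worker live.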

For any production project, you want your site behind Cloudflare regardless. The DDoS protection alone is worth it, and the free SSL certificates mean you're not leaving security to chance.

GoDaddy is fine for buying a domain. That's about where its utility ends for serious development.

3. Backend: FastAPI, Python That Actually Moves

Backend choice is where a lot of decisions get religious. Let's cut through it.

FastAPI is a Python web framework built specifically for building APIs quickly and cleanly. It comes with automatic request validation, data serialisation, and interactive API documentation generated from your code, all out of the box.

Here's how it compares to the common alternatives:

| Category | FastAPI | Node.js / Express | .NET (C#) | Spring Boot (Java) | Laravel (PHP) | Ruby on Rails |
|---|---|---|---|---|---|---|
| Language | Python | JavaScript | C# | Java | PHP | Ruby |
| Speed | Very fast (async-first) | Fast | Very fast | Fast | Moderate | Moderate |
| Learning curve | Low-moderate | Low-moderate | Steep | Steep | Moderate | Moderate |
| Type safety | Built-in via Pydantic | Optional via TypeScript | Built-in | Built-in | Partial | No |
| Auto API docs | Yes (Swagger + ReDoc) | No | Partial (Swagger add-on) | Partial (Swagger add-on) | No | No |
| AI familiarity | Excellent | Excellent | Good | Good | Good | Moderate |
| Verbosity | Low | Low | High | Very high | Moderate | Low |
| Best for | APIs, microservices, AI-adjacent services | Full-stack JS apps, real-time | Enterprise systems, Windows ecosystems | Large enterprise backends | Content-heavy web apps, rapid prototyping | Full-stack web apps, startups |

A few notes on the alternatives worth calling out specifically:

.NET (C#) is a serious, high-performance framework backed by Microsoft. It's genuinely excellent for enterprise environments, especially where the rest of the organisation is already running Windows infrastructure. But it carries real weight. The ecosystem is verbose, the setup is heavier, and if you're building as a solo developer or small team, you'll spend more time on configuration than on your actual product. The AI can write C# competently, but the debugging surface is larger.

Spring Boot (Java) is one of the most widely deployed backend frameworks in the world, particularly in large enterprises and financial institutions. If you've ever applied for a job and seen "5 years Spring Boot experience required," this is why. It is genuinely battle-hardened. But it is also genuinely complex. Annotations, dependency injection, and the sheer volume of configuration it expects from you make it a steep climb for anyone not already comfortable in the Java ecosystem. For a solo developer building with AI assistance, the verbosity alone makes debugging slower.

Node.js with Express is a natural choice if your frontend is already in JavaScript and you want to stay in one language end-to-end. The AI handles it well and the ecosystem is enormous. The trade-off is that JavaScript's loose typing means errors that Python would catch at definition time often only surface at runtime. TypeScript helps, but it adds a build step and its own complexity. For teams already living in JavaScript, Express or its more opinionated cousin Fastify makes sense. If you're choosing a language from scratch, Python's readability and FastAPI's structure give you a cleaner starting point.

Laravel (PHP) is worth more respect than it typically gets in developer discourse. Modern PHP is not the PHP of 2008. Laravel ships with a clean ORM, authentication scaffolding, a task queue, and good documentation. It has a large community and deploys easily to almost any shared hosting environment, which matters when you're early and cost-conscious. The honest limitation is that the AI's training data for Laravel is thinner than for FastAPI or Express, which means you'll hit more friction when the generated code needs debugging. It remains a solid choice if you already know PHP.

Ruby on Rails pioneered a lot of what we now consider standard in web frameworks: convention over configuration, built-in ORM, database migrations as code, and scaffolding. It's still used in production by serious companies. The challenge today is momentum. The Rails community has shrunk relative to its peak, which means fewer recent training examples for the AI and a smaller pool of developers to hire from if your project grows. For solo projects and quick prototypes it remains genuinely pleasant to use.

FastAPI wins for this stack for a specific combination of reasons: Python's readability makes it easier to evaluate AI-generated code rather than just accepting it, the async-first design handles real-world I/O efficiently, the automatic documentation means your API is self-describing from day one, and the AI models are exceptionally well-trained on FastAPI patterns. When something goes wrong, the error messages are clear and the debugging path is short. That combination matters more than benchmark scores when you're shipping something real.

Go (Golang) deserves an honourable mention. It is genuinely fast, its concurrency model is elegant once you understand it, and production Go services tend to be lean and reliable. But you have to learn it first. Go's error handling, interface system, and approach to concurrency are different enough from most languages that you can't just point the AI at a problem and trust the output without understanding it. Once you have the foundation, it's a strong choice for high-throughput services. Without it, you're flying blind when something breaks unexpectedly.

The core point stands throughout: mature frameworks are easier for AI to work with because the models have been trained on years of real-world usage. FastAPI paired with Oracle DB handles production traffic comfortably for the kind of applications most developers are actually building. The YouTube benchmark videos are testing someone else's scale, not yours.

4. Schema Management: Liquibase vs. ORMs, and Why the Distinction Matters

This one trips people up, so let's define the terms first.

What is an ORM?

ORM stands for Object-Relational Mapper. It's a layer of code that lets you interact with your database using the programming language you're already writing in, rather than writing raw SQL. SQL, or Structured Query Language, is the language databases speak natively. Instead of writing `SELECT * FROM users WHERE id = 1` directly in SQL, an ORM lets you write something like `User.query.get(1)` in Python, and it translates that into SQL behind the scenes.

Popular ORMs include SQLAlchemy (Python), Prisma (Node.js), Hibernate (Java), and Django's built-in ORM. They feel natural to start with, especially when AI is generating the code, because they keep everything in one language. The problem is that they abstract away too much of the database. In production, you'll eventually hit a situation where the ORM's migration logic conflicts with your actual schema, or where you need precise control over how a change is applied across environments like development, staging, and production.
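The translation is easier to see with a concrete sketch. This uses Python's built-in sqlite3 module for the raw-SQL side; the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

# Raw SQL: you speak the database's native language yourself.
row = conn.execute("SELECT * FROM users WHERE id = ?", (1,)).fetchone()
print(row)  # (1, 'Ada')

# An ORM hides that translation. In SQLAlchemy the equivalent is roughly
# session.get(User, 1); in Prisma, prisma.user.findUnique({ where: { id: 1 } }).
# Both emit a SELECT much like the one above behind the scenes.
```

The convenience is real, but so is the distance it puts between you and the SQL that actually runs, which is exactly the gap that bites in production.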

That's where Liquibase earns its place.

| Category | Liquibase | ORM Migrations (SQLAlchemy, Prisma, etc.) |
|---|---|---|
| Control | Full control over every schema change | High-level abstraction, less manual control |
| Database-agnostic | Yes, works across Oracle, Postgres, MySQL, etc. | Often framework/language-specific |
| Rollback support | Built-in, explicit rollback scripts | Varies, often manual and error-prone |
| Audit trail | Yes, changelog tracks every change ever made | Partial |
| Learning curve | Moderate (free courses and certification available) | Low initially, painful at scale |
| AI familiarity | Good, Claude and Gemini understand changesets and changelogs well | Excellent |

Liquibase uses changelogs and changesets, a versioned history of every change ever made to your database schema. You can roll forward, roll back, and audit exactly what changed, when, and why. In a production environment, this is invaluable.

The courses are free on Liquibase's website and you can get certified, which is a genuine bonus.

It also means you can steer the AI confidently when it generates migration scripts, because Claude and Gemini both handle Liquibase XML, YAML, and SQL changelogs reliably.
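As a flavour of what that looks like, here's a small changeset in Liquibase's formatted-SQL style. The author name, table, and columns are placeholders; the Oracle-flavoured types assume you're on Oracle DB:

```sql
--liquibase formatted sql

--changeset smyekh:1
CREATE TABLE app_user (
    id    NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email VARCHAR2(255) NOT NULL UNIQUE
);
--rollback DROP TABLE app_user;
```

The explicit `--rollback` line is the point: it's what lets you undo this exact change later with a rollback command, and it lives in version control alongside the change itself.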

Oracle DB vs. PostgreSQL vs. MongoDB:

| Category | Oracle DB | PostgreSQL | MongoDB |
|---|---|---|---|
| Type | Relational (SQL) | Relational (SQL) | Document (NoSQL) |
| Maturity | 45+ years | 30+ years | ~15 years |
| Best for | Enterprise, complex transactions | General purpose, advanced queries | Flexible schema, rapid prototyping |
| ACID compliance | Full | Full | Partial |
| Cost | Free on OCI always-free tier | Free (open source) | Free tier, paid tiers on Atlas |
| AI familiarity | Good | Excellent | Excellent |

A quick explanation on ACID: ACID stands for Atomic, Consistent, Isolated, and Durable. It's the set of guarantees a database makes about your transactions. In plain terms, a transaction either fully completes or it doesn't happen at all.

Here's why this matters in practice: imagine you're transferring money from a Savings table to a Checking table. You deduct from Savings, then add to Checking. If the power cuts out between those two steps, what happens to the money? With a fully ACID-compliant database, it doesn't vanish. The whole transaction either lands or it rolls back. It's either in Savings or in Checking. There's no in-between state where it's nowhere. No vibes allowed in the ledger. For anything involving money, user accounts, orders, or sensitive records, this guarantee is non-negotiable.
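You can watch this guarantee in action with SQLite, which is fully ACID, in a few lines. The tables and the simulated power cut are contrived for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE savings  (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE checking (id INTEGER PRIMARY KEY, balance INTEGER);
    INSERT INTO savings  VALUES (1, 100);
    INSERT INTO checking VALUES (1, 0);
""")

def transfer(amount):
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("UPDATE savings SET balance = balance - ? WHERE id = 1", (amount,))
            if amount > 50:  # simulate the power cutting out mid-transfer
                raise RuntimeError("power cut")
            conn.execute("UPDATE checking SET balance = balance + ? WHERE id = 1", (amount,))
    except RuntimeError:
        pass

transfer(80)  # fails mid-way: the deduction is rolled back, nothing vanishes
transfer(30)  # succeeds: both steps land together
print(conn.execute("SELECT balance FROM savings").fetchone()[0])   # 70
print(conn.execute("SELECT balance FROM checking").fetchone()[0])  # 30
```

The failed transfer leaves both balances untouched; the successful one applies both steps or neither. That's atomicity doing its job.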

PostgreSQL is the community favourite for relational databases and genuinely excellent. But if you're already on Oracle Cloud, it's worth knowing that Oracle 23ai introduces native AI Vector Search. This means you don't need a separate vector database like Pinecone to build AI-powered search or recommendation features. You can keep your relational data and your AI embeddings in the same database. For anyone building AI-assisted features into their product, that's a meaningful simplification to your stack.

MongoDB's schema-less nature is appealing for prototypes because you don't have to define your structure upfront. But in production, that same flexibility becomes a liability when your data is inconsistent and your queries are slow. For any serious transactional application, go relational.

5. Version Control and CI/CD: GitLab Is Doing More Than You Think

Most people reach for GitHub by default, and it's a solid choice. But GitLab, especially if you're building a serious project, deserves a closer look, primarily because of how deeply integrated its CI/CD tooling is.

CI/CD stands for Continuous Integration and Continuous Deployment. In plain terms: every time you push code, a pipeline automatically runs, testing it, building it, and deploying it, so you're not doing those steps manually every single time. Once it's set up, it works in the background and you just make edits.

GitLab manages all of this through a single file in your repository called `.gitlab-ci.yml`. This file defines your pipeline stages: what runs, when, and in what order. And here's where AI becomes genuinely useful: Claude and Gemini can write these files for you.

Open Warp, navigate into your project, start a CLI session, and ask: "Write a GitLab CI/CD pipeline for this FastAPI project that runs tests, builds a Docker image, and deploys to OCI". You'll get a working draft. You can also paste in a failing pipeline log and ask the AI to diagnose it. The full version with Docker image building and registry pushes is covered in Step 4 of the walkthrough.

A basic .gitlab-ci.yml for a FastAPI project might look like this:

```yaml
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: python:3.11
  script:
    - pip install -r requirements.txt
    - pytest

build:
  stage: build
  script:
    - docker build -t my-app .

deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - main
```

Beyond the local review habit, CodeRabbit integrates directly with GitLab and can be configured to automatically review every merge request your team opens. Once connected, it posts a structured review as a comment on the MR, covering the same ground as the local review but scoped to the diff between your branch and main. For solo developers it's a useful second pass. For small teams it functions as an always-available reviewer who has read the entire codebase and never gets tired.

The combination worth aiming for is: `coderabbit review --plain --type uncommitted` locally before you commit, the GitLab MR review automatically when you push a branch, and the pipeline running tests and deployment after the review. By the time code reaches your production branch it has been looked at by the model that wrote it, the model that reviewed it locally, CodeRabbit on the MR, and your test suite. That is a meaningfully more robust process than most small teams run, and none of it requires a dedicated QA engineer.

The key habit to build: when a pipeline fails, copy the logs from GitLab and paste them directly as a prompt into Claude or Gemini. Don't try to debug from memory. The AI will read the exact error and tell you what went wrong. This alone saves hours.

GitLab also has a clean interface, project management tooling, and a container registry built in, meaning you're not cobbling together five different services to manage your project.

6. Frontend: Start Vanilla, Graduate When Ready

Here's my standing recommendation: vanilla HTML, CSS, and JavaScript first.

Not because frameworks are bad. React, Next.js, Vue, these are all excellent. But the fundamentals matter. Gall's Law applies here too. When you understand what a framework is abstracting, you become dramatically better at using it and debugging it.

For production, the progression looks like this:

| Use case | Recommendation |
|---|---|
| Static sites, simple pages | Vanilla HTML/CSS/JS |
| Content-heavy sites, SEO-critical | Next.js |
| Interactive dashboards | React |
| Full-stack with server-side rendering | Next.js |
| Rapid prototyping | Any, keep it simple |

The advantage of vanilla is zero build tooling, zero dependency conflicts, and full transparency about what your code is doing. The advantage of Next.js is server-side rendering, file-based routing, and first-class deployment on Vercel or Cloudflare Pages.

When you use AI to build a Next.js app, you'll hit some debugging. That's fine. Push through it and you'll understand the framework better than if it had just worked the first time.

7. Mobile: Flutter and Fastlane, With Realistic Expectations

For mobile app development, Flutter deserves its spot on this list. It's Google's cross-platform framework using the Dart language, and it lets you build iOS and Android apps from a single codebase. Claude and Gemini can generate significant amounts of working Flutter code with clear prompts. The widget system is well-documented and AI familiarity with it is solid.

The honest truth: building the app is the easy part. Getting it through Apple's App Store review process and Google Play's requirements is its own discipline. There are real hoops, developer accounts, code signing certificates, privacy policies, age ratings, and binary review rounds.

This is where Fastlane earns its place. Fastlane is an open-source automation tool specifically built for mobile app deployment. It automates the painful parts: building your app, running tests, managing code signing, taking screenshots, and submitting to the App Store or Play Store. You define your deployment workflow in a Fastfile, and Fastlane executes it.

Claude and Gemini both understand Fastlane's configuration structure well. You can ask the AI to generate a Fastfile for your Flutter project that handles both iOS and Android submission, then iterate from there. Combined with a GitLab CI/CD pipeline, you can set up a workflow where a push to your main branch automatically triggers a Fastlane deployment to both stores. Your release process becomes automated and repeatable rather than a manual scramble every time.
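As a flavour of the format, here's a hypothetical sketch of the Android side of a Flutter project's Fastfile; the lane name, build path, and release track are placeholders you'd adapt, and the iOS lane follows the same shape with `build_app` and `upload_to_app_store`:

```ruby
# fastlane/Fastfile — hypothetical sketch for a Flutter Android release
platform :android do
  desc "Build the Flutter app bundle and push it to the Play Store internal track"
  lane :internal_release do
    sh("flutter build appbundle --release")  # run from the Flutter project root
    upload_to_play_store(
      track: "internal",  # promote to production once it's verified
      aab: "../build/app/outputs/bundle/release/app-release.aab"
    )
  end
end
```

Running `fastlane android internal_release` then executes the whole sequence, and a CI job can run the same command on every push to main.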

The Bigger Picture: Architecture Is the Job Now

Here's what I keep coming back to: ideas are cheap now. Ask Claude or Gemini for startup ideas, feature ideas, marketing angles and you'll have a hundred in an hour. The bottleneck has shifted entirely to implementation and, more specifically, to architecture.

Architecture means knowing which database fits your data model, knowing how your frontend and backend will communicate, knowing where your bottlenecks will be before you hit them, knowing how to manage schema changes without breaking production, and knowing how your code gets from your laptop to a live server.

You don't need to know everything before you start. You need to know enough to make good decisions and recognise bad ones. The rest you learn as you go, and the AI helps fill the gaps when you have the baseline to evaluate what it's telling you.

Don't chase the $1M ARR, 24k-user story without asking yourself: if I wanted to build that, where would I actually start? Because the answer requires architecture. It requires picking a stack, understanding why, and knowing how to steer when things go sideways.


Step-by-Step: From Idea to Production

This is the part most guides skip. Here's how to actually start and finish.

Step 1: Open VS Code and Start in the Browser

Download and open Visual Studio Code from code.visualstudio.com. It's free and the most widely supported editor for AI-assisted development.

Before touching any terminal, start in the browser with Claude (claude.ai) or Gemini (gemini.google.com). Use this phase to describe your project, plan the architecture, generate your initial project structure and boilerplate, and decide on your folder layout before you write a single file.

If you've never done this before, just say so. Tell the AI:

"I want to build [your idea]. I'm new to this. What should my project structure look like, what commands do I need to run to set it up, and what should I do first?"

It will walk you through it step by step and tell you what to install if you're missing anything.

Step 2: Move to Warp for CLI Sessions

Download Warp from warp.dev and open it alongside VS Code. This is your AI CLI environment.

In Warp, navigate into your project folder:

cd your-project-name

Start a Claude Code or Gemini CLI session from here. From this point, the AI can see your entire codebase. Ask it context-aware questions like:

  • "What's the best way to add authentication to this project based on what's already here?"

  • "Review my current folder structure and suggest improvements."

  • "Write tests for the endpoints in users.py."

Keep VS Code open for editing and writing code, and use its built-in Source Control sidebar for staging changes and writing commit messages. Use Warp for AI CLI sessions, running builds, CodeRabbit reviews, and anything that benefits from a full-screen terminal. They work together; use each for what it's best at.

One Warp feature worth knowing about specifically is its tab panel. Warp lets you open multiple tabs in the same window and switch between them instantly, and once your project has a few moving parts this becomes genuinely useful. A practical setup is one tab per concern: one for your Claude or Gemini CLI session, one for running the Terraform workflow, one for your GitLab pipeline commands and log watching, one for CodeRabbit reviews, and one for general project commands like starting your FastAPI server locally or running tests. Everything lives in one window and you switch between contexts without losing your place in any of them.

This is a small thing that compounds over a full working session. The alternative is constantly interrupting your AI CLI session to run a different command, or juggling multiple terminal windows. The tab setup keeps each workflow isolated and visible.

Before you commit anything, make it a habit to run CodeRabbit in Warp first:

coderabbit review --plain --type uncommitted

Read what it surfaces, address the issues worth fixing, and then commit. This takes two minutes and consistently catches things that both you and the AI that generated the code missed. It is the easiest quality gate you can add to your workflow because it requires no configuration and runs entirely locally.

Step 3: Structure Your Backend as a Modular Monolith

When your AI starts generating backend code, guide it toward a modular monolith structure. This is an architecture pattern where your application lives in a single deployable unit, one backend service, but is organised internally into distinct modules, one per feature.

Each module contains three files.

  • schema.py defines the data structures, what the data looks like.

  • service.py contains the business logic, what the application does with the data.

  • controller.py handles the API endpoints, how the outside world interacts with the feature.

A project structure should follow this template:

app/
├── users/
│   ├── users_schema.py
│   ├── users_service.py
│   └── users_controller.py
├── products/
│   ├── products_schema.py
│   ├── products_service.py
│   └── products_controller.py
├── orders/
│   ├── orders_schema.py
│   ├── orders_service.py
│   └── orders_controller.py
└── main.py

You'll notice I use explicit names like users_schema.py instead of just schema.py. While traditional architecture often favours generic names inside a folder, explicit naming is an "AI-first" strategy. When you are working with an AI agent across multiple files, generic names like schema.py can lead to "context drift", where the AI confuses the User schema with the Product schema. By being explicit, you ensure that every file carries its own identity, making it easier for you and the AI to single out exactly what needs to change without ambiguity.

This structure does several things for you. Features are easy to identify and isolate. Debugging is faster because you know exactly which file to look in for any given problem. AI collaboration is cleaner because you can point the agent at a specific module without it touching unrelated code. And when your project grows, it gives you a blueprint to extract a module into its own microservice if you ever need to.

Tell the AI explicitly: "When generating backend code for this project, use a modular monolith structure with schema, service, and controller files per feature."

Step 4: Containerise Everything with Docker and Docker Compose

Once your backend structure is in place, the next question is how it actually runs, both on your machine during development and on the OCI VM in production. The answer is Docker, and for orchestrating multiple services together, Docker Compose.

Docker packages your application and everything it needs to run into a container. A container is a self-contained, isolated environment that behaves the same way on your laptop, on a teammate's machine, and on a server in OCI's data centre. No more "it works on my machine." If it runs in the container, it runs everywhere the container runs.

Docker Compose takes this further by letting you define and run multiple containers together using a single YAML file called docker-compose.yml. This is where it becomes genuinely useful for a modular monolith: instead of running one giant process, each concern gets its own container on the same VM. Your FastAPI backend runs in one container. Your Next.js frontend runs in another. Liquibase runs its migrations in another. A reverse proxy like Caddy or Nginx sits in front of all of them and routes incoming traffic to the right place.

This is not Kubernetes. It does not require a cluster, a cloud-native team, or a certification to operate. It is one VM, several containers, one Compose file. And given the compute and storage OCI provides on its always-free tier and low-cost instances, a single VM handles this comfortably for most production workloads.

It is also the building block you need before Kubernetes ever makes sense. Every concept you learn here (images, containers, networking between services, health checks, environment variables) carries directly over to Kubernetes when the time comes. But that time is not now. Start here.

A docker-compose.yml for a typical project on this stack looks like this:

services:
  proxy:
    image: caddy:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    depends_on:
      - backend
      - frontend
    healthcheck:
      test: ["CMD", "caddy", "version"]
      interval: 30s
      timeout: 10s
      retries: 3

  backend:
    image: your-app-backend
    build:
      context: ./backend
    expose:
      - "8000"
    env_file:
      - .env
    depends_on:
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  frontend:
    image: your-app-frontend
    build:
      context: ./frontend
    expose:
      - "3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 30s
      timeout: 10s
      retries: 3

  worker:
    image: your-app-worker
    build:
      context: ./worker
    env_file:
      - .env
    depends_on:
      - redis

  liquibase:
    image: liquibase/liquibase
    volumes:
      - ./liquibase:/liquibase/changelog
    env_file:
      - .env
    command: >
      --changelog-file=changelog/db.changelog-root.yaml
      --url=${DB_URL}
      --username=${DB_USER}
      --password=${DB_PASSWORD}
      update

  redis:
    image: redis:alpine
    expose:
      - "6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  caddy_data:

A few things worth noting in this file:

Caddy handles HTTPS automatically. Point your domain at the VM, configure a Caddyfile with your domain name, and Caddy requests and renews TLS certificates from Let's Encrypt without any manual steps. Nginx works equally well and the AI knows both, but Caddy requires significantly less configuration to get HTTPS working correctly. Ask the AI to generate a Caddyfile that routes your domain to your backend and frontend containers and it will produce a working starting point.

The expose keyword makes a port available only between containers on the same internal network. The ports keyword maps a container port to the host machine. Your backend and frontend use expose because they should never be directly reachable from the internet. Only Caddy uses ports because it is the only thing that should be.

Liquibase runs as a container too. It connects to your Oracle database, applies any pending changesets, and exits. In production you run it as part of your deployment pipeline before the backend starts. The AI can generate both the Liquibase container configuration and the migration files based on your Pydantic schemas.

The .env file holds your database credentials, API keys, and anything else that should never be committed. These values are injected into the containers at runtime. On the OCI VM, they come from OCI Vault via your startup scripts or a secrets management integration. Locally, they live in a .env file that is listed in your .gitignore. Deciding what belongs in your pipeline's CI/CD variables versus what belongs in OCI Vault is worth a moment's thought, and the AI can help you make that call — more on this in the CI/CD step.

When you SSH into your VM and run docker ps, a healthy setup looks something like this:

CONTAINER ID   IMAGE                  COMMAND                  CREATED        STATUS                  PORTS                                                         NAMES
a1b2c3d4e5f6   your-app-backend       "uv run uvicorn app…"    2 days ago     Up 2 days (healthy)     8000/tcp                                                      app_backend
b2c3d4e5f6a7   your-app-frontend      "docker-entrypoint.s…"   2 days ago     Up 2 days (healthy)     3000/tcp                                                      app_frontend
c3d4e5f6a7b8   your-app-worker        "uv run arq app.wor…"    2 days ago     Up 2 days                                                                             app_worker
d4e5f6a7b8c9   caddy:alpine           "caddy run --config…"    2 days ago     Up 2 days (healthy)     0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/udp            caddy_proxy
e5f6a7b8c9d0   redis:alpine           "docker-entrypoint.s…"   5 weeks ago    Up 5 days (healthy)     6379/tcp                                                      app_redis

Every service running, every health check green, Caddy the only thing exposed to the internet. That is what a clean deployment looks like.

How this connects to your GitLab pipeline

Your .gitlab-ci.yml builds the Docker images, pushes them to GitLab's built-in container registry, and then triggers a deployment on the OCI VM. A deploy stage for this setup looks like this:

stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: python:3.11
  script:
    - pip install -r backend/requirements.txt
    - pytest backend/

build:
  stage: build
  script:
    # Authenticate against GitLab's registry first, or the pushes below will fail.
    # CI_REGISTRY_USER, CI_REGISTRY_PASSWORD, and CI_REGISTRY are predefined by GitLab.
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t registry.gitlab.com/yourusername/your-project/backend:$CI_COMMIT_SHA ./backend
    - docker build -t registry.gitlab.com/yourusername/your-project/frontend:$CI_COMMIT_SHA ./frontend
    - docker push registry.gitlab.com/yourusername/your-project/backend:$CI_COMMIT_SHA
    - docker push registry.gitlab.com/yourusername/your-project/frontend:$CI_COMMIT_SHA
  only:
    - main

deploy:
  stage: deploy
  script:
    - ssh ubuntu@your-oci-vm "cd /app && docker compose pull && docker compose up -d"
  only:
    - main

The $CI_COMMIT_SHA tags each image with the exact commit that produced it. This means you always know which version of the code is running in each container, and rolling back is as simple as pulling an earlier tag and restarting. The deploy stage SSHes into the VM, pulls the newly built images, and recreates only the containers whose images changed, leaving the other running services untouched.

When something goes wrong during a deployment, the logs tell you exactly which container failed and why. Copy them from GitLab and paste them as a prompt. The AI will diagnose whether it is a build error, a misconfigured environment variable, a failed health check, or a networking issue between containers, and tell you what to change.

Step 5: Write Your Schemas in Python, Manage the Database with Liquibase

Define your data models in schema.py using Pydantic, which FastAPI uses natively. These Python classes describe what your data looks like:

from pydantic import BaseModel

class UserCreate(BaseModel):
    name: str
    email: str
    password: str

class UserResponse(BaseModel):
    id: int
    name: str
    email: str

For the actual database schema, the tables and columns in Oracle, use Liquibase changesets. Ask the AI to generate them based on your Pydantic models: "Based on these Pydantic schemas, write Liquibase changesets in YAML format to create the corresponding database tables."

Your Liquibase changelog keeps a versioned record of every schema change. Every time you add a column, rename a table, or add an index, it goes through a new changeset. This gives you full rollback capability and a clean audit trail.
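As a sketch of what the AI should hand back for the UserCreate schema above (the table and column names here are illustrative, and the Oracle types are worth double-checking against your database version):

```yaml
databaseChangeLog:
  - changeSet:
      id: 001-create-users-table
      author: yourname
      changes:
        - createTable:
            tableName: users
            columns:
              - column:
                  name: id
                  type: NUMBER
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: name
                  type: VARCHAR2(255)
                  constraints:
                    nullable: false
              - column:
                  name: email
                  type: VARCHAR2(255)
                  constraints:
                    nullable: false
                    unique: true
              - column:
                  name: password_hash
                  type: VARCHAR2(255)
                  constraints:
                    nullable: false
      rollback:
        - dropTable:
            tableName: users
```

Note the explicit rollback block: it's what makes `liquibase rollback` possible later, and it's worth asking the AI to include one in every changeset it generates.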

Step 6: Provision Oracle Cloud with Terraform

Create a free account at cloud.oracle.com. The Always Free tier gives you an Autonomous Database, compute instances, and object storage.

Before you touch the OCI console to configure anything meaningful, write your infrastructure as code. This is not the advanced step it might sound like. It is the right starting point, and the AI will help you get there.

Terraform works by reading .tf configuration files and applying them against your cloud provider. For OCI, you start by defining the provider and your credentials, then describe the resources you need. Ask Claude or Gemini in a Warp CLI session: "Write a Terraform configuration for an OCI project that provisions an Autonomous Database, a compute instance for my FastAPI backend, a VCN with security lists for HTTP, HTTPS, and SSH, IAM policies scoped to least privilege, and a Vault for storing secrets".

The AI will generate a set of files. The structure typically looks like this:

infra/
├── main.tf          # provider config and root module
├── variables.tf     # environment-specific values
├── outputs.tf       # values to export (e.g. DB connection string)
├── network.tf       # VCN, subnets, security lists
├── compute.tf       # your FastAPI server instance
├── database.tf      # Autonomous Database
├── iam.tf           # policies and dynamic groups
└── vault.tf         # OCI Vault and secrets

Note: Before Terraform can talk to OCI, you'll need a few values that only exist in the console: your tenancy OCID, user OCID, region, and a generated API key fingerprint. These go into your variables.tf or a local terraform.tfvars file and are never committed to GitLab. If you're not sure where to find them, ask the AI: "Walk me through finding my OCI tenancy OCID and setting up an API signing key for Terraform". It will give you the exact navigation path through the console, and you can screenshot anything confusing and paste it directly into the chat.
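Wired together, the provider block in main.tf consumes those values. The variable names below are illustrative, but the provider argument names themselves come from the OCI Terraform provider:

```hcl
provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.api_key_fingerprint
  private_key_path = var.private_key_path # e.g. ~/.oci/oci_api_key.pem
  region           = var.region           # e.g. "uk-london-1"
}
```

The actual values go in terraform.tfvars, which belongs in your .gitignore alongside your .env file.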

A few things worth understanding in each file before you apply:

Security lists are OCI's firewall rules. They define which ports are open to the internet and which are locked down. With Caddy terminating TLS in front of everything, your VM only needs to accept traffic on ports 80 and 443 for the web, plus 22 for SSH, ideally restricted to your own IP address. Your database should not be publicly accessible at all. The Terraform configuration makes these rules explicit and auditable.

IAM policies define what can access what within your OCI tenancy. The principle here is least privilege: your compute instance should only have the permissions it actually needs to do its job, nothing broader. When IAM is configured manually through the console it's easy to accidentally grant too much. When it's in a .tf file it's readable, reviewable, and version-controlled.

OCI Vault is where your secrets live: database passwords, API keys, third-party credentials. Your application reads them from Vault at runtime rather than having them hardcoded in environment files or committed to your repository. Ask the AI to generate both the Vault configuration in Terraform and the Python code in your FastAPI service that retrieves secrets from Vault on startup.

Once your files are ready, initialise and apply from Warp:

cd infra
terraform init
terraform plan
terraform apply

terraform plan shows you exactly what will be created before anything happens. Read it. If something looks wrong, ask the AI to explain what a specific resource block does before you apply. Once you're satisfied, terraform apply provisions everything.

Tip: Newer versions of OCI Autonomous DB support TLS, or wallet-less, connections, which are significantly easier to configure with FastAPI than the older wallet zip file approach. Once your database is provisioned, ask the AI: "Help me enable TLS connections on my OCI Autonomous Database and configure FastAPI to connect without a wallet."

From this point forward, any infrastructure change goes through a .tf file, gets committed to GitLab, and gets applied via Terraform. You can add a Terraform stage to your GitLab CI/CD pipeline so infrastructure changes are applied automatically alongside code deployments, giving you a single pipeline that handles both application and infrastructure.
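A hedged sketch of what that stage can look like is below. The stage name, image tag, and the choice to auto-apply on main are all decisions to adapt, not requirements; the `entrypoint: [""]` line is needed because the hashicorp/terraform image sets terraform itself as the entrypoint, which would otherwise break GitLab's script execution.

```yaml
terraform:
  stage: infrastructure
  image:
    name: hashicorp/terraform:latest
    entrypoint: [""]
  script:
    - cd infra
    - terraform init
    - terraform plan -out=tfplan
    - terraform apply -auto-approve tfplan
  only:
    - main
```

For this to work from CI you'll also want remote state (GitLab offers a managed Terraform state backend) so the pipeline and your laptop agree on what already exists; ask the AI to wire that up for your project.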

If apply throws errors, copy the output from Warp and paste it as a prompt. The AI will read the error, identify whether it's a credentials issue, a resource limit, a policy conflict, or a configuration mistake, and tell you exactly what to change.

Step 7: Push to GitLab and Set Up CI/CD

Create a GitLab account at gitlab.com and create a new project.

This is one place where you stay in VS Code rather than switching to Warp. VS Code has a built-in Source Control panel, the branch icon in the left sidebar, and it handles the parts of Git that are most error-prone to do manually: staging individual files, writing commit messages, and reviewing exactly what's changed before it goes anywhere. Click the files you want to stage, write your commit message in the text field, and commit directly from there without touching the terminal.

For the initial setup, you'll need the terminal once. Open VS Code's built-in terminal or use Warp for this part:

git init
git remote add origin https://gitlab.com/yourusername/your-project.git

After that, your day-to-day flow is: make changes in VS Code, run coderabbit review --plain --type uncommitted in Warp to review before committing, stage and commit through the VS Code Source Control sidebar, then push. You can push directly from the sidebar too using the sync button, or from Warp if you prefer the explicit git push. Either works, but keeping the staging and commit message writing in VS Code means you always have a visual diff in front of you when you're deciding what to include in a commit.

Now ask the AI to generate your .gitlab-ci.yml pipeline file. Give it context: "I have a FastAPI backend deployed on OCI and a Next.js frontend on Cloudflare Pages. Write a GitLab CI/CD pipeline that runs my tests on every push and deploys to production when I push to main."

Once this is in place, your deployment process is automated. You make an edit, push it, and the pipeline handles the rest.

What goes in CI/CD variables vs. OCI Vault

GitLab CI/CD has its own secrets store: the variables you set under Settings > CI/CD > Variables in your project. These are injected into the pipeline environment at runtime and are the right place for anything the pipeline itself needs to do its job: your OCI registry credentials so the build stage can push images, your SSH private key so the deploy stage can connect to the VM, and any tokens needed to authenticate with external services during the build.

OCI Vault is for secrets your running application needs after it is deployed: database passwords, third-party API keys, encryption keys, and anything else your FastAPI backend reads at startup or during request handling. These never touch the pipeline directly.

The distinction is about who needs the secret and when. If the pipeline needs it to build or deploy, it goes in GitLab CI/CD variables. If the running container needs it to serve requests, it goes in OCI Vault.

In practice the line is usually clear, but edge cases come up. A good prompt for the AI is: "Here is a list of the secrets in my application. For each one, tell me whether it belongs in GitLab CI/CD variables or OCI Vault, and why." Give it your actual list and it will reason through each one, flag anything that looks like it might be in the wrong place, and explain the security rationale behind each decision. It is a quick sanity check that saves you from accidentally exposing something that should be locked away, or over-engineering the Vault integration for something that only the pipeline ever touches.

Note: GitLab CI/CD variables marked as "masked" are redacted from pipeline logs. Variables marked as "protected" are only available on protected branches. Use both for anything sensitive. The AI can generate the exact SSH key setup and OCI authentication configuration for your deploy stage if you ask it to.

When a pipeline fails: open the failed job in GitLab, copy the log output, and paste it directly into Claude or Gemini as a prompt. Say: "This is my GitLab CI/CD pipeline log. It failed. What went wrong and how do I fix it?" Nine times out of ten, you'll have your answer in under a minute.

That's the full loop. From idea to codebase to deployed infrastructure, with AI working alongside you at every stage and a pipeline handling everything after that.

A final word: logs are your best friend.

When something breaks, and something always does, the logs tell you exactly where. Not approximately. Not probably. Exactly. This is one of the underrated benefits of building with a stack that has clear separation of concerns. When your backend, frontend, worker, proxy, and database each run in their own container with their own logs, a failure doesn't hide. It surfaces in one place, in one service, with a traceable reason. Copy those logs, paste them into Claude or Gemini, and you'll have a diagnosis in under a minute. The more deliberately you've structured your project, the more your logs reward you when things go sideways. Treat them as the source of truth, not an afterthought.

Stack Summary

| Layer | Technology | Why |
| --- | --- | --- |
| Terminal | Warp | Better AI CLI experience outside VS Code |
| Cloud / DB | Oracle Cloud (OCI) + Autonomous Database | Mature, 10TB free egress, enterprise-grade |
| Hosting / CDN | Cloudflare Pages + Workers + Wrangler | 330+ city edge network, DDoS protection, no egress fees |
| Backend | FastAPI (Python) | Fast, readable, excellent AI familiarity |
| Schema management | Liquibase | Versioned, rollback-capable, production-safe |
| Containerisation | Docker + Docker Compose | Isolated services, consistent environments, Kubernetes on-ramp |
| Frontend | Vanilla JS → React / Next.js | Start simple, graduate when ready |
| Mobile | Flutter + Fastlane | Cross-platform build, automated deployment |
| Version control / CI/CD | GitLab + .gitlab-ci.yml | Integrated pipelines, paste logs to debug |
| Infrastructure | Terraform (from day one) | Reproducible, codified infrastructure |
| Code review | CodeRabbit | AI review before commit and on every MR |

The tools are remarkable. Use them. But use them on a foundation solid enough to build on, and use them properly, with a codebase the AI can actually see.


Cover photo by Jo Lin on Unsplash
