Fillip Kosorukov

Building with Claude Code: What a Non-Developer Learned About AI-Assisted Development

I'm not a developer. I have a psychology degree and I run a real estate company and a trading firm. But over the past few months, I've built and deployed a full SaaS product using Claude Code as my primary development tool.

This isn't a tutorial. It's an honest account of what worked, what didn't, and what I think the implications are for people who have product ideas but not programming backgrounds.
The Setup

I run a Vultr VPS (Ubuntu) with a Flask backend, Nginx reverse proxy, Redis caching, and systemd services. I also have a separate stock scanning tool running on the same server with its own cron jobs and a Discord webhook integration.

All of this was built through Claude Code — a command-line tool that lets you delegate coding tasks to Claude directly from your terminal. I use Claude.ai as the planning and advisory layer, and Claude Code as the execution layer.

The split matters. Claude.ai is where I think through architecture decisions, debate approaches, and plan features. Claude Code is where the code gets written, tested, and deployed. Trying to do both in one tool leads to context problems — the planning context is different from the implementation context.
What Actually Works

Iterative problem-solving. The biggest unlock was learning to break problems into small, testable pieces. Instead of asking "build me an audit pipeline," I'd work through it step by step: "parse this API response," then "normalize these business names," then "compare against these queries," then "generate a PDF from these results." Each step was testable on its own.
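To make the decomposition concrete, here is a hypothetical sketch of that step-by-step approach in Python — each stage of the audit pipeline as its own small, independently testable function. The function names and data shapes are illustrative, not the real product code.

```python
# Hypothetical pipeline stages: parse -> normalize -> compare.
# Each function can be run and verified on its own before wiring them together.

def parse_api_response(raw: dict) -> list[dict]:
    """Step 1: extract the business entries from a raw API payload."""
    return raw.get("results", [])

def normalize_name(name: str) -> str:
    """Step 2: normalize a business name for comparison."""
    return " ".join(name.lower().replace(",", " ").replace(".", " ").split())

def compare_against_queries(businesses: list[dict], queries: list[str]) -> list[dict]:
    """Step 3: flag which query terms each business name matches."""
    report = []
    for biz in businesses:
        norm = normalize_name(biz["name"])
        hits = [q for q in queries if q in norm]
        report.append({"name": biz["name"], "matches": hits})
    return report

# Test each step in isolation, then the chain:
raw = {"results": [{"name": "Acme Plumbing, Inc."}]}
businesses = parse_api_response(raw)
report = compare_against_queries(businesses, ["plumbing"])
print(report)  # [{'name': 'Acme Plumbing, Inc.', 'matches': ['plumbing']}]
```

The point isn't the code itself — it's that each function is small enough to describe in one sentence, which is exactly the granularity the AI handles reliably.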

Error handling and debugging. When something breaks, pasting the error into Claude Code and asking it to fix the issue works surprisingly well. The AI can read stack traces, identify the problem, and propose a fix faster than I could learn to debug it myself. This is probably where I saved the most time.
Deployment and infrastructure. Setting up Nginx configurations, SSL certificates, systemd services, cron jobs — this is the kind of work that would have taken me weeks to learn from documentation. Claude Code handles it in minutes. The migration from Caddy to Nginx, for example, was a single session.

Security. We did a full security audit that identified 14 issues, including a command injection vulnerability. I wouldn't have known to look for these things. The AI caught them, explained why they were dangerous, and fixed them.
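To illustrate the class of bug involved, here is a generic example of a command injection vulnerability and its fix — not the actual code from the product. Building a shell string from user input lets an input like "hi; rm -rf ~" run arbitrary commands; passing an argument list instead treats the input as literal text.

```python
import subprocess

def run_lookup_unsafe(domain: str) -> str:
    # VULNERABLE: user input interpolated into a shell=True command string,
    # so "hi; echo INJECTED" executes a second command.
    return subprocess.run(f"echo {domain}", shell=True,
                          capture_output=True, text=True).stdout

def run_lookup_safe(domain: str) -> str:
    # FIXED: argument list with no shell — the whole input becomes a single
    # literal argument and cannot inject extra commands.
    return subprocess.run(["echo", domain],
                          capture_output=True, text=True).stdout

print(run_lookup_unsafe("hi; echo INJECTED"))  # the injection fires: hi \n INJECTED
print(run_lookup_safe("hi; echo INJECTED"))    # literal text: hi; echo INJECTED
```

This is exactly the kind of issue a non-developer would never think to check for, and exactly the kind an automated audit pass surfaces reliably.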
What Doesn't Work

Assuming the AI verified its own work. This is the biggest gotcha. Claude Code will report a task as complete without actually verifying that it works. I learned to always test the output myself — run the script, check the output, verify the deployment. Trust but verify isn't just a saying; it's a survival strategy when building with AI assistance.

Long context sessions. After extended sessions, the AI can lose track of earlier decisions or introduce inconsistencies. I deal with this by keeping a MEMORY.md file that documents key architectural decisions, and by starting fresh sessions for new features.

Complex multi-file refactors. When changes need to touch many files simultaneously, the AI sometimes creates inconsistencies between files. Breaking these into sequential single-file changes works better.

The Non-Developer Advantage

Here's something counterintuitive: not knowing how to code might actually be an advantage in some ways.

Experienced developers have strong opinions about architecture, frameworks, and best practices. Those opinions are usually right — but they can also slow down the prototyping phase. I don't have opinions about whether to use Flask or FastAPI. I describe the problem, the AI picks an approach, and we build it. If it works, great. If not, we try something else.

This isn't better than having deep technical knowledge. It's different. It optimizes for speed of iteration at the cost of architectural elegance. For an early-stage product where the priority is validating whether anyone cares about what you're building, that tradeoff makes sense.
Numbers

Some concrete metrics from the build:

Audit pipeline execution time: reduced from ~7 minutes to ~35 seconds through async optimization
API cost reduction: ~90% through query caching on repeat industries
Security issues found and fixed: 14 in one audit session
Reports sent: 117+ across Indianapolis and Albuquerque markets
Time from idea to first outreach email: approximately 3 weeks
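The caching number comes from a simple observation: audits in the same industry reuse the same query set, so there is no need to pay for API calls twice. A minimal sketch of the idea, with illustrative names (the real implementation uses Redis, per the setup above):

```python
import functools

@functools.lru_cache(maxsize=256)
def queries_for_industry(industry: str) -> tuple[str, ...]:
    # Stand-in for an expensive paid API call that generates audit queries.
    # A real version would hit the provider here; cached calls never do.
    return tuple(f"best {industry} near me for {need}"
                 for need in ("repairs", "installation", "emergencies"))

# First call pays the API cost; repeat industries are served from cache.
first = queries_for_industry("plumbing")
second = queries_for_industry("plumbing")  # cache hit, no API spend
assert first is second
```

When most audits cluster in a handful of industries, the cache hit rate climbs quickly, which is where a ~90% cost reduction becomes plausible.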

Who Should Try This

If you have domain expertise in a specific industry, understand a real problem people would pay to solve, and are comfortable working through problems iteratively rather than needing a complete mental model of the solution upfront, then AI-assisted development can get you from zero to a deployed product faster than any other path I'm aware of.

The skill that matters most isn't programming. It's the ability to clearly articulate what you want, test whether you got it, and adjust. That's a research skill, a product management skill, and a communication skill. The code is just the output.

Fillip Kosorukov is the founder of LocalMention.io, an AI visibility audit platform for local businesses. He also runs Resilient Capital LLC and Resilient Properties. He holds a summa cum laude degree in Psychology from the University of New Mexico, with published research in the Journal of Substance Use (Taylor & Francis).
