Elena Revicheva

AI-Assisted Development: How I Ship Production Code Without a CS Degree

Originally published on AIdeazz — cross-posted here with canonical link.

When I left my executive role to build AI systems, I had no computer science degree and minimal coding experience. Today, I ship production TypeScript agents that handle real business operations on Oracle Cloud. The difference? AI-assisted development isn't just autocomplete anymore — it's a fundamental shift in how non-traditional developers can build production systems.

The Reality of Executive-to-Builder Transition

Most "learn to code" stories skip the messy middle. Here's mine: I spent 15 years in corporate strategy, ended up in Panama, and decided to build AI agents. Not demos — actual production systems handling customer interactions, payment flows, and operational decisions.

My first attempt, a traditional coding bootcamp, failed spectacularly. The mental model of memorizing syntax and debugging through print statements felt like learning Latin just to hold a conversation. Then Claude and Cursor changed the equation.

The shift wasn't about AI writing code for me. It was about having a senior developer available 24/7 who never got frustrated when I asked "why does this async function hang?" for the fifteenth time. AI-assisted development let me skip the hazing ritual of Stack Overflow dismissals and jump straight to building.

Three months later, I had a multi-agent system routing between Groq and Claude, handling Telegram and WhatsApp conversations, deployed on Oracle Cloud. Not pretty code — but working code that processed actual customer requests.

Technical Stack and Constraints

Here's what I actually ship with AI assistance:

Core Infrastructure:

  • TypeScript for everything (type safety saves me from myself)
  • Oracle Cloud Infrastructure (OCI) for compute and networking
  • Supabase for auth and real-time data
  • Railway for quick deployments when OCI feels heavy

AI Orchestration:

  • Groq for speed-critical responses (2-3x faster than Claude)
  • Claude for complex reasoning and code generation
  • Custom router that picks models based on query complexity (sketched after this list)
  • Fallback chains when providers hit rate limits
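
To make that concrete, here's a minimal sketch of the routing-plus-fallback shape. The callGroq and callClaude helpers are hypothetical stand-ins for the real provider clients, and the complexity heuristic is deliberately crude:

// Sketch of complexity-based routing with a fallback chain.
// callGroq and callClaude are placeholders for the real API clients.
type Provider = (prompt: string) => Promise<string>;

declare const callGroq: Provider;
declare const callClaude: Provider;

// Crude heuristic: short, single-shot prompts go to Groq for speed;
// long or multi-step prompts go to Claude for reasoning.
function pickChain(prompt: string): Provider[] {
  const complex = prompt.length > 500 || prompt.includes("\n");
  return complex ? [callClaude, callGroq] : [callGroq, callClaude];
}

async function route(prompt: string): Promise<string> {
  let lastError: unknown;
  for (const provider of pickChain(prompt)) {
    try {
      return await provider(prompt);
    } catch (err) {
      lastError = err; // rate limit or outage: fall through to the next provider
    }
  }
  throw lastError;
}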

Agent Framework:

  • LangChain for agent orchestration (despite its abstractions)
  • Custom TypeScript wrappers for Telegram Bot API (minimal sketch below)
  • WhatsApp Business API integration (painful but necessary)
  • Redis for session management and caching
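
And since "custom TypeScript wrappers" sounds fancier than it is, here's a minimal sketch of what such a wrapper can look like: plain fetch against Telegram's documented sendMessage endpoint, with TELEGRAM_TOKEN assumed to come from the environment:

// Minimal hand-rolled Telegram Bot API wrapper (sketch).
// Assumes TELEGRAM_TOKEN is provided via the environment.
const TELEGRAM_TOKEN = process.env.TELEGRAM_TOKEN!;

async function sendMessage(chatId: number, text: string): Promise<void> {
  const res = await fetch(
    `https://api.telegram.org/bot${TELEGRAM_TOKEN}/sendMessage`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ chat_id: chatId, text }),
    }
  );
  if (!res.ok) {
    // Surface Telegram's error payload instead of failing silently
    throw new Error(`sendMessage failed: ${res.status} ${await res.text()}`);
  }
}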

The constraint that shapes everything: I'm not optimizing for elegant code. I'm optimizing for "works reliably and I can fix it at 3 AM when something breaks."

My Actual Workflow with Cursor and Claude

Forget the marketing videos. Here's how AI-assisted development actually works in my daily flow:

Morning Architecture Session:
I start with Claude (not Cursor) in a browser. Pure conversation about system design. "I need to add payment processing to the WhatsApp bot. Current flow handles text. Stripe webhook needs to update user state. What's the cleanest integration?"

Claude gives me architectural options with tradeoffs. Not code yet — just design patterns and gotchas. This replaces the senior developer I don't have.

Implementation in Cursor:
Once I have a plan, I open Cursor. Key insight: Cursor's AI works best with context. I feed it:

  • Existing file structure
  • Current error logs
  • Specific function signatures

Then I write comments describing what I need:

// Function to process Stripe webhook
// Must be idempotent (handle duplicate events)
// Update user subscription in Supabase
// Send confirmation via WhatsApp
// Log all state changes for debugging

Cursor generates the initial implementation. Here's the critical part — the first version is always wrong in subtle ways. But it's wrong in ways I can understand and fix, unlike starting from scratch.
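
For context, here's a sketch of the shape that comment spec tends to converge on after my fixes. It's not my exact production handler: the processed_events and subscriptions tables and the sendWhatsAppConfirmation helper are illustrative names.

import Stripe from "stripe";
import { createClient } from "@supabase/supabase-js";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Hypothetical helper that pushes a confirmation through the WhatsApp bot
declare function sendWhatsAppConfirmation(customerId: string): Promise<void>;

async function handleStripeWebhook(rawBody: string, signature: string) {
  // Verify the event actually came from Stripe
  const event = stripe.webhooks.constructEvent(
    rawBody,
    signature,
    process.env.STRIPE_WEBHOOK_SECRET!
  );

  // Idempotency: a unique constraint on event_id (hypothetical table)
  // turns duplicate deliveries into no-ops
  const { error: duplicate } = await supabase
    .from("processed_events")
    .insert({ event_id: event.id });
  if (duplicate) return; // already handled this event

  if (event.type === "customer.subscription.updated") {
    const sub = event.data.object as Stripe.Subscription;
    // Log the state change before mutating anything, for 3 AM debugging
    console.log(`stripe ${event.id}: subscription ${sub.id} -> ${sub.status}`);
    const { error } = await supabase
      .from("subscriptions")
      .update({ status: sub.status })
      .eq("stripe_subscription_id", sub.id);
    if (error) throw error;
    await sendWhatsAppConfirmation(sub.customer as string);
  }
}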

The Debugging Dance:
When things break (always), I copy error messages back to Claude. Not Cursor — Claude is better at explaining why something failed. "TypeError: Cannot read property 'id' of undefined" means nothing to me. Claude explains: "Your Stripe webhook isn't checking if the customer object exists before accessing the ID. Here's why that happens and three ways to fix it."
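
The fix it walks me toward is usually a guard like this (the payload shape here is illustrative):

// Sketch: guard against a missing customer object before touching .id
interface WebhookPayload {
  customer?: { id: string };
}

function extractCustomerId(payload: WebhookPayload): string {
  if (!payload.customer) {
    // Fail loudly with context instead of "Cannot read property 'id' of undefined"
    throw new Error("Webhook payload arrived without a customer object");
  }
  return payload.customer.id;
}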

Testing Reality:
I don't write unit tests. There, I said it. Instead, I have a Telegram bot that mirrors production but points to test databases. Every change gets manually tested by actually using the bot. AI helps me write integration tests that exercise the full flow end to end.
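
Here's the flavor of those integration tests, using Node's built-in test runner. The sendTestMessage helper, which drives the staging bot end to end, is a hypothetical name:

import { test } from "node:test";
import assert from "node:assert";

// Hypothetical helper: sends a message through the staging bot
// (mirrors production, points at the test database) and returns the reply.
declare function sendTestMessage(text: string): Promise<string>;

test("subscription flow replies with a payment link", async () => {
  const reply = await sendTestMessage("/subscribe");
  assert.match(reply, /checkout\.stripe\.com/);
});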

Production Failures and Fixes

Let's talk about when AI-assisted development fails. Last month, my agent system processed 10,000+ messages. Here are the real breakdowns:

Race Condition in Session Management:
Claude helped me implement Redis sessions. Worked great in testing. In production, concurrent messages from the same user corrupted state. The fix came from describing the problem to Claude with actual logs. It suggested implementing a message queue with proper locking. Took three iterations to get right.
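
For the curious, here's a simplified sketch of the per-user locking shape, using Redis SET NX via ioredis. It isn't the exact production fix (the real version queues messages), but it shows the idea:

import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL!);

// Serialize concurrent messages from the same user with a per-user lock.
// SET NX acts as the lock; polling keeps the sketch simple (a real queue
// or Redlock would be sturdier).
async function withUserLock<T>(userId: string, fn: () => Promise<T>): Promise<T> {
  const key = `lock:session:${userId}`;
  while ((await redis.set(key, "1", "EX", 10, "NX")) !== "OK") {
    await new Promise((r) => setTimeout(r, 50)); // wait for the in-flight message
  }
  try {
    return await fn();
  } finally {
    await redis.del(key);
  }
}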

Token Limit Explosions:
My router originally sent full conversation history to determine which AI model to use. Costs exploded. Claude missed this because I never mentioned cost constraints in my initial implementation. Lesson: AI assists with what you ask, not what you need.
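
The fix was boring: cap what the router sees before it classifies anything. A sketch, with the Message shape assumed:

interface Message {
  role: "user" | "assistant";
  content: string;
}

// The router only needs enough context to classify complexity,
// so cap what it sees instead of sending the whole conversation.
function trimForRouting(history: Message[], maxChars = 2000): Message[] {
  const recent: Message[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    used += history[i].content.length;
    if (used > maxChars) break; // drop the message that crosses the cap
    recent.unshift(history[i]);
  }
  return recent;
}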

Oracle Cloud Networking Mystery:
Spent two days debugging why webhooks worked locally but failed on OCI. Neither Cursor nor Claude could solve it from code alone. Solution required understanding OCI's security lists and route tables. AI couldn't help because the problem wasn't in the code.

Type Safety Betrayal:
TypeScript caught many errors, but runtime type mismatches still happened. Stripe's webhook types didn't match reality. Zod schemas generated by AI had subtle errors. Now I validate everything at runtime and log mismatches.
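
Now every external payload goes through a narrow runtime guard like this sketch (the fields are illustrative; validate only what you consume):

import { z } from "zod";

// Runtime guard for the fields actually used from a webhook payload.
// Deliberately narrow: validate what you consume, log the rest.
const SubscriptionEvent = z.object({
  id: z.string(),
  customer: z.string(),
  status: z.string(),
});

function parseSubscription(raw: unknown) {
  const result = SubscriptionEvent.safeParse(raw);
  if (!result.success) {
    // Log the mismatch instead of crashing mid-request
    console.error("Payload drifted from schema:", result.error.issues);
    return null;
  }
  return result.data;
}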

Cost Analysis and Tradeoffs

Real numbers from last month:

  • Claude API calls: ~$40 (heavy code generation phase)
  • Cursor Pro: $20
  • Groq API: ~$5 (high volume, low cost)
  • Development time saved: ~60-80 hours

The math works if you value time appropriately: roughly $65 in tooling bought back 60-80 hours of work. But hidden costs exist:

Technical Debt Accumulation:
AI-generated code tends toward verbose solutions. My codebase is probably 30% larger than needed. Refactoring with AI help is possible but requires clear direction.

Debugging Time Tax:
When AI-generated code fails mysteriously, debugging takes longer. You're debugging code you didn't write with a mental model you didn't build. I budget 2x time for debugging vs. human-written code.

Learning Curve Plateau:
After six months, I hit a ceiling. AI helps implement patterns I understand but doesn't teach underlying computer science. I can build production systems but can't optimize algorithms or understand Big O notation.

Scaling Beyond Solo Development

AI-assisted development changes when you're not alone. I recently brought in a contract developer to help scale. Interesting dynamics emerged:

Code Review Challenges:
They immediately spotted AI patterns in my code: over-commented, repetitive error handling, inconsistent style. We established new workflows: AI generates, human reviews and refactors.

Documentation Burden:
AI doesn't write documentation for humans. My code had comments for AI context but lacked architectural documentation. We now require README files written by humans, not AI.

Skill Transfer:
Teaching someone else my AI-assisted workflow felt like explaining how to ride a bicycle: the knowledge is tacit. They had to develop their own prompt patterns and context management strategies.

Future Evolution

Here's where I see AI-assisted development heading based on my production experience:

Agent-Based Development:
Instead of IDE integration, specialized agents for different tasks. Architecture agents that understand your entire system. Debugging agents with access to logs and metrics. Deployment agents that handle CI/CD.

Context Windows as Architecture:
System design will optimize for AI context windows. Microservices not for scaling but for fitting entire services in AI context. Documentation formats designed for AI consumption.

AI-First Languages:
New languages designed for AI readability over human readability. Type systems that encode business logic for AI understanding. Error messages written for AI debugging.

Production Monitoring Integration:
AI with real-time access to production metrics, logs, and user feedback. "Fix the spike in error rates" becomes a valid development command.

The tools will improve, but the fundamental shift remains: developers become orchestrators rather than typists. The skill moves from syntax memorization to system thinking and clear communication.

AI-assisted development isn't about AI replacing developers. It's about expanding who can build production systems. As an executive who became a builder, I'm proof that the barrier to entry has fundamentally changed. The question isn't whether AI can help you code — it's whether you can clearly communicate what needs to be built.

— Elena Revicheva · AIdeazz · Portfolio
