Elena Revicheva

Posted on • Originally published at aideazz.hashnode.dev

AI-Assisted Development: How I Ship Production Code Without a CS Degree

Six months ago, I was debugging a TypeScript type error at 2 AM, wondering if my non-traditional path into development would finally catch up with me. I'd just deployed a multi-agent system that routes between Groq and Claude APIs, handles WhatsApp and Telegram messages, and manages Oracle Cloud infrastructure. The system works. It makes money. And I built most of it using AI-assisted development tools.

The Executive Who Codes (Badly)

I spent 15 years in corporate strategy and operations. My coding education consisted of SQL queries for business intelligence and some VBA macros that I'm not proud of. When I started AIdeazz, I had a choice: hire developers or learn to build. I chose to build, but not through traditional bootcamps or CS fundamentals courses.

Instead, I went all-in on AI-assisted development. Not as a crutch, but as a force multiplier. I write production TypeScript daily. My code isn't elegant, but it handles real traffic, real users, and real infrastructure complexity. Here's how I actually work, without the marketing fluff.

My current stack: Next.js frontend, Node.js backend services, PostgreSQL on Oracle Cloud, Redis for caching, and a constellation of AI APIs. Everything deployed through GitHub Actions to Oracle Cloud Infrastructure. I touch every part of this stack daily, despite never taking a data structures course.

Cursor + Claude: The Actual Workflow

Forget the demos where someone types "build me a chat app" and magic happens. Real AI-assisted development is messier. Here's my actual workflow for building a recent feature: a Groq-to-Claude fallback system for our WhatsApp agents.

I start with Cursor (a VS Code fork with AI built in) and Claude 3.5 Sonnet. First attempt:

// My prompt: "Create a service that tries Groq first, falls back to Claude if Groq fails"
// Claude's response was 80 lines of beautiful abstraction that didn't handle our actual needs

The generated code looked professional. It had interfaces, error boundaries, and clean separation of concerns. It also didn't account for our rate limits, didn't handle partial responses, and assumed both APIs return identical formats. Classic AI-generated code problem: syntactically perfect, semantically useless for production.
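For concreteness, the format mismatch looks roughly like this: Groq speaks the OpenAI-style chat completions shape (`choices[0].message.content`), while Anthropic's Messages API returns a list of content blocks (`content[0].text`). A minimal normalizer sketch, with the response shapes heavily simplified:

```typescript
// Simplified shapes; real responses carry many more fields.
interface GroqResponse {
  choices: Array<{ message: { content: string } }>;
}
interface ClaudeResponse {
  content: Array<{ type: string; text: string }>;
}

// Collapse both providers to a single string: the step the first AI draft skipped.
function extractText(provider: "groq" | "claude", raw: unknown): string {
  if (provider === "groq") {
    const r = raw as GroqResponse;
    // Partial or malformed responses fall through to "" so the caller can decide
    return r.choices?.[0]?.message?.content ?? "";
  }
  const r = raw as ClaudeResponse;
  return (r.content ?? [])
    .filter((b) => b.type === "text")
    .map((b) => b.text)
    .join("");
}
```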

Second attempt, I provided context:

// Context: Groq free tier = 30 rpm, Claude = 10 rpm
// We route chat completions only, not embeddings
// Groq returns faster but fails on complex reasoning
// Need to track which API responded for billing

This time, Claude generated code that actually reflected our constraints. But it still missed edge cases: What happens when both APIs are down? How do we handle webhook timeouts from WhatsApp? How do we prevent infinite fallback loops?

The real workflow isn't "AI writes, I deploy." It's:

  1. AI generates initial structure
  2. I test against real API responses
  3. Find edge cases AI missed
  4. Feed errors back to AI with context
  5. Manually fix the parts AI keeps getting wrong
  6. Repeat until it actually works
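The final shape of the fallback service looked roughly like this. It's a simplified sketch, not our production code: the provider calls are injected so the routing logic is testable, the names (`callGroq`, `callClaude`) are stand-ins, and the attempt cap is the part that prevents infinite fallback loops:

```typescript
type Provider = "groq" | "claude";

interface CompletionResult {
  provider: Provider; // tracked so billing knows which API answered
  text: string;
}

// Hypothetical provider calls, injected for testability; real versions hit the APIs.
type ProviderCall = (prompt: string) => Promise<string>;

async function completeWithFallback(
  prompt: string,
  callGroq: ProviderCall,
  callClaude: ProviderCall,
  maxAttempts = 2 // hard cap: no infinite fallback loops
): Promise<CompletionResult> {
  const order: Array<[Provider, ProviderCall]> = [
    ["groq", callGroq],     // faster, tried first
    ["claude", callClaude], // fallback for failures and complex reasoning
  ];
  let lastError: unknown;
  for (const [provider, call] of order.slice(0, maxAttempts)) {
    try {
      const text = await call(prompt);
      return { provider, text };
    } catch (err) {
      lastError = err; // remember why, in case both providers are down
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

The "both APIs are down" case surfaces as a thrown error the caller has to handle, which is exactly the edge case the first AI draft never mentioned.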

Oracle Cloud Reality Check

Everyone talks about deploying to Vercel or AWS Lambda. Nobody talks about deploying AI workloads to Oracle Cloud Infrastructure, which is what I actually do. Why? Because Oracle gives you Ampere ARM instances that are genuinely free tier, and when you're bootstrapping, every dollar matters.

Here's what AI-assisted development looks like when deploying to OCI:

Claude doesn't know Oracle's SDK well. It hallucinates methods that don't exist. When I asked it to generate deployment scripts for OCI, it mixed AWS patterns with Oracle's API, creating code that looked plausible but failed immediately.

# What Claude generated (wrong)
oci compute instance launch --availability-domain AD-1 --image-id ocid1.image...

# What actually works
oci compute instance launch --availability-domain jYnA:PHX-AD-1 --subnet-id ocid1.subnet...

The fix? I maintain a personal knowledge base of Oracle-specific patterns. Every time I solve an OCI problem, I document it. Then I feed this documentation to Claude as context. Now it generates correct OCI deployment scripts about 70% of the time.
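What "feeding documentation to Claude as context" means in practice is mostly prompt assembly. A sketch, assuming a hypothetical `docs/oci/` folder with one markdown note per solved problem:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical layout: every solved OCI problem gets a markdown note in one folder.
// buildOciPrompt prepends all verified patterns to the task before it reaches the model.
function buildOciPrompt(task: string, kbDir: string): string {
  const notes = fs
    .readdirSync(kbDir)
    .filter((f) => f.endsWith(".md"))
    .sort() // deterministic ordering keeps prompts reproducible
    .map((f) => fs.readFileSync(path.join(kbDir, f), "utf8"));
  return [
    "You are generating OCI CLI scripts. Follow these verified patterns exactly:",
    ...notes,
    `Task: ${task}`,
  ].join("\n\n");
}
```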

This is the reality of AI-assisted development: you're building your own fine-tuning dataset through documentation and context, not waiting for the AI to magically understand your infrastructure.

Multi-Agent Systems Without a Distributed Systems Background

Our core product routes messages between WhatsApp Business API, Telegram Bot API, and various LLMs. It's a distributed system by definition. I have no formal distributed systems training. Here's how AI assistance both helps and hurts.

The helps: AI excels at generating boilerplate for API integrations. When I needed to implement WhatsApp's webhook verification, Claude generated the exact signature validation code faster than I could read Meta's documentation.
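For reference, Meta's webhook verification boils down to an HMAC-SHA256 over the raw request body with your app secret, compared against the `X-Hub-Signature-256` header. A minimal sketch of the validation (framework wiring omitted):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Meta sends X-Hub-Signature-256 as "sha256=<hex digest>" computed over the raw body.
function verifyWhatsAppSignature(
  rawBody: string,
  signatureHeader: string,
  appSecret: string
): boolean {
  const expected =
    "sha256=" + createHmac("sha256", appSecret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // Constant-time compare; lengths must match before timingSafeEqual is legal
  return a.length === b.length && timingSafeEqual(a, b);
}
```

One subtlety worth knowing: the HMAC must be computed over the raw bytes, so body-parsing middleware that re-serializes JSON will silently break verification.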

The hurts: AI doesn't understand your specific failure modes. Our WhatsApp agent was randomly dropping messages. Claude suggested adding retries, implementing exponential backoff, checking for rate limits. All reasonable. All wrong.

The actual issue? Our Redis pub/sub was failing silently when message payloads exceeded 512KB (customers sending photos). No AI caught this because it required understanding our specific infrastructure choices. I found it by adding verbose logging and watching actual traffic.

// What AI suggested
const publishMessage = async (channel: string, message: any) => {
  try {
    await redis.publish(channel, JSON.stringify(message));
  } catch (error) {
    // Retry logic here
  }
}

// What actually fixed it
const publishMessage = async (channel: string, message: any) => {
  const serialized = JSON.stringify(message);
  // Measure bytes, not characters: multi-byte UTF-8 inflates the real payload size
  if (Buffer.byteLength(serialized, 'utf8') > 500_000) { // buffer under our 512KB ceiling
    // Store in PostgreSQL, publish a lightweight reference instead
    const id = await storeOversizedMessage(message);
    await redis.publish(channel, JSON.stringify({ ref: id, oversized: true }));
  } else {
    await redis.publish(channel, serialized);
  }
}

The Tradeoffs Nobody Mentions

After six months of shipping production code with AI assistance, here are the brutal truths:

You read 10x more code than before. AI generates code fast, but you must understand every line before deploying. I spend more time reading and verifying AI code than I would spend writing simpler code myself. The tradeoff: AI attempts complex patterns I wouldn't try alone.

Technical debt accumulates differently. Traditional technical debt comes from shortcuts and poor architecture. AI-assisted technical debt comes from accepting generated code you don't fully understand. Our codebase has entire modules I can modify but couldn't rewrite from scratch.

Debugging becomes archaeological. When AI-generated code fails in production, you're debugging code you didn't write with patterns you might not recognize. Stack traces lead to abstraction layers you didn't design. It's solvable but requires a different mental model.

You must become a better architect. AI can't design systems. It can implement your designs brilliantly, but garbage architecture in means garbage implementation out. I spend 60% of my time on system design, 30% on implementation, 10% on debugging. Traditional development might be 20/60/20.

Your context window is everything. The difference between useful and useless AI assistance is context. I maintain extensive documentation not for other developers, but for AI. Our codebase has more comments explaining "why" than code explaining "how."

What Actually Works in Production

Here's my operational reality for AI-assisted development:

Cursor for daily coding. The inline editing is superior to copy-paste workflows. CMD+K to refactor, CMD+L for chat. I pay for the Pro tier because rate limits kill productivity.

Claude 3.5 Sonnet for architecture. When designing new features, I start with Claude. Not for code, but for thinking through edge cases. It's exceptional at finding failure modes I missed.

GitHub Copilot for boilerplate. Disabled most of the time, enabled for specific tasks. Writing API endpoint handlers? Copilot on. Implementing business logic? Copilot off.

Local LLMs for sensitive code. We handle customer data. Some code can't go to cloud APIs. I run Ollama locally with Codestral for anything touching PII.
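The routing decision itself can be a tiny pure function. This is a hedged sketch, not our actual classifier: real PII detection needs far more than a few regexes, but it shows where the local/cloud fork lives:

```typescript
// Illustrative patterns only; production PII detection is much stricter than this.
const PII_PATTERNS: RegExp[] = [
  /\b\d{3}-\d{2}-\d{4}\b/,       // SSN-like
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/, // email address
  /\+?\d[\d\s().-]{8,}\d/,       // phone-number-like
];

type Route = "local-ollama" | "cloud-api";

// Anything that smells like PII stays on the local model; everything else may go to cloud.
function routeForPrompt(prompt: string): Route {
  return PII_PATTERNS.some((re) => re.test(prompt)) ? "local-ollama" : "cloud-api";
}
```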

Conventional commits + AI. Every commit message is reviewed by AI for clarity. Sounds excessive? Our git log is now actually useful for debugging.

Test generation as verification. I don't trust AI to write implementation code without review, but I absolutely trust it to generate comprehensive test suites. AI-generated tests have caught more bugs in our codebase than the ones I wrote by hand.

The Path Forward

AI-assisted development isn't replacing traditional development. It's creating a new category: domain experts who can ship production code without traditional training. I'm not competing with CS graduates on algorithm efficiency. I'm competing on domain knowledge and shipping speed.

Our multi-agent system handles thousands of messages daily. It's not elegant code. A "real" developer would probably rewrite half of it. But it works, it scales (enough), and I can modify it rapidly as customer needs change. That's the promise of AI-assisted development: lowering the bar for functional, not perfect.

The future isn't AI replacing developers. It's AI enabling a new class of builder: people with domain expertise who can implement their ideas directly. The code might not win awards, but the products solve real problems.

Next time someone says AI-assisted development is just autocomplete, remember: I'm running production systems on Oracle Cloud, routing between multiple AI providers, handling real customer messages, without a computer science degree. The tools aren't magic. The opportunity they create is.

— Elena Revicheva · AIdeazz · Portfolio
