Aditya Kumar
We Use AI to Build Our AI Product — Here's What We Learned

At JetStack AI, we build AI-powered tools that help HubSpot teams migrate and manage their portal configurations. Ironically, AI is both our product and our most important development tool.

Using AI to build software sounds like a solved problem in 2026. It isn't. Here's what AI-assisted development actually looks like — the real challenges, the hard-won lessons, and what nobody tells you about AI code generation in production.

AI-Assisted Coding: The 80/20 Problem

Everyone talks about AI-assisted coding like it's magic. "Just prompt it and ship." The reality is messier.

AI is exceptional at getting you 80% of the way there, fast. Boilerplate, CRUD endpoints, test scaffolding, type definitions — tasks that used to eat hours now take minutes. We leaned into this hard, and it genuinely increased our development velocity.

But that last 20%? That's where the real software engineering work lives.

AI Code Generation Fails on Domain-Specific Logic

Our codebase deals with complex API behaviors, edge cases in third-party platforms, and business logic that doesn't exist in any training data. AI can write the function signature, but it can't know that a specific API returns a field named upperBoundEndpointBehavior instead of upperBoundEndpoint — a subtle difference that caused a real bug in production.

No amount of prompt engineering fixes this. The AI doesn't have the context, and it confidently generates code that looks correct but breaks in ways that are hard to catch in code review.
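One mitigation that worked for us: validate third-party responses at the boundary instead of trusting generated type definitions. Here's a minimal TypeScript sketch of that idea — the field names mirror the bug described above, and `parseLimitConfig` is an illustrative helper, not our actual code.

```typescript
// Assumed shape of the third-party response. An AI assistant might
// confidently generate `upperBoundEndpoint` here, which compiles fine
// but breaks at runtime — so we check the wire format explicitly.
interface LimitConfig {
  upperBoundEndpointBehavior: string;
}

function parseLimitConfig(raw: unknown): LimitConfig {
  const obj = raw as Record<string, unknown>;
  if (typeof obj?.upperBoundEndpointBehavior !== "string") {
    // Fail loudly at the API boundary rather than deep inside
    // business logic, where the wrong field name is hard to trace.
    throw new Error(
      `Unexpected API shape: expected 'upperBoundEndpointBehavior', ` +
        `got keys [${Object.keys(obj ?? {}).join(", ")}]`
    );
  }
  return { upperBoundEndpointBehavior: obj.upperBoundEndpointBehavior };
}
```

A schema library like Zod can do the same thing with less boilerplate; the point is that the check exists at all, because the compiler can't catch a field name the AI invented.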

Lesson: AI-assisted development accelerates what you already understand. It doesn't replace domain knowledge.

Maintaining Code Consistency with AI Tools

When multiple developers and AI coding assistants are generating code, patterns drift fast. One module uses early returns, another uses nested conditionals. One service throws custom errors, another returns result objects.

We spent more time than expected establishing conventions and enforcing them — writing architectural docs, linting rules, and code review guidelines that both humans and AI could follow.
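To make that concrete, here's the kind of lint configuration we mean — a hypothetical excerpt, not our real config. The rule names are standard ESLint core rules; the selection encodes the conventions mentioned above (early returns over nesting, throwing over returning error objects).

```typescript
// eslint.config.ts — illustrative flat-config excerpt
export default [
  {
    rules: {
      // Prefer early returns: forbid `else` after a returning `if`.
      "no-else-return": ["error", { allowElseIf: false }],
      // Cap nesting so generated code can't bury logic in conditionals.
      "max-depth": ["error", 3],
      // One error-handling convention: throw real Error objects.
      "no-throw-literal": "error",
      "consistent-return": "error",
    },
  },
];
```

The useful property is that these rules apply identically to human- and AI-authored code: the assistant's output goes through the same CI gate as everyone else's.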

Lesson: The productivity gain from AI code generation is real, but only if you invest in guardrails. Without them, you're just generating inconsistent code faster.

The Over-Engineering Trap in AI-Generated Code

AI loves to be helpful. Ask it to add a feature and it'll also refactor adjacent code, add error handling for impossible scenarios, and create abstractions you didn't ask for.

This sounds harmless until you're reviewing a pull request with 400 changed lines when you expected 40.

Lesson: Be specific in what you ask for. Treat AI like a junior developer who's eager to impress — appreciate the enthusiasm, but scope the work tightly.

Debugging AI-Written Code Is a Different Skill

When you write code, you have mental context about every decision. When AI writes code, you're reverse-engineering someone else's thought process — except that "someone" doesn't have a thought process.

We found that bugs in AI-generated code were harder to track down, not because the code was worse, but because no one had the full mental model of why it was written that way.

Lesson: Never merge code you don't fully understand, regardless of who or what wrote it. This is the most important rule of AI-assisted software development.

Where AI-Powered Development Actually Works

After months of building JetStack AI with AI tools, here's where AI-assisted coding genuinely delivers:

  • Codebase exploration and research — understanding unfamiliar APIs, summarizing documentation, comparing implementation approaches
  • Boilerplate and scaffolding — setting up new modules, writing type definitions, creating test stubs
  • Code review assistance — catching inconsistencies, suggesting edge cases, spotting missing error handling
  • System parity analysis — comparing two codebases and identifying gaps (this saved us weeks of manual work)
  • Contextual debugging — feeding error logs with code context and getting targeted hypotheses
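The parity-analysis item deserves a sketch, since it's the least obvious. The core of it is a set difference over symbols extracted from each codebase (in practice via a tool like ts-morph or plain grep; here the lists are hard-coded and the names are made up for illustration):

```typescript
// Toy version of "system parity analysis": report symbols present in
// the source system but missing from the target.
function parityGaps(source: string[], target: string[]): string[] {
  const have = new Set(target);
  return source.filter((sym) => !have.has(sym));
}

// Illustrative export lists — not from our real systems.
const legacyExports = ["createPipeline", "clonePipeline", "archivePipeline"];
const newExports = ["createPipeline", "clonePipeline"];

// → ["archivePipeline"]: functionality the new system still lacks
console.log(parityGaps(legacyExports, newExports));
```

The AI's contribution wasn't this trivial diff — it was extracting and normalizing the symbol lists from two differently structured codebases, which is the part that used to take weeks by hand.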

Where AI-assisted development falls short:

  • Complex multi-system integrations where API behavior isn't well-documented
  • Architectural decisions that require understanding business context
  • Anything involving subtle third-party API quirks that aren't in training data

Key Takeaway: AI Is a Development Multiplier

Using AI to build an AI product taught us that AI is a multiplier, not a replacement for software engineering skills. It multiplies good engineering practices and it multiplies bad ones. If your team has strong conventions, clear architecture, and solid code review — AI-powered development will make you faster. If you don't, it'll make you messier, faster.

We're building JetStack AI to help teams work smarter with HubSpot, and AI-assisted development is a core part of how we build it. But the engineers are still very much in charge.


What's your experience with AI-assisted coding? Are you seeing the same challenges? Drop a comment below — we'd love to compare notes.
