DEV Community

TechPulse Lab


I stopped building AI prototypes for clients and started shipping one finished system. Revenue 4x'd.

For about eighteen months I did what most dev-shop-adjacent operators do when the AI wave breaks: I took on "AI projects."

Scope was mushy on purpose. Client says "we want to use AI for customer support" — I quote a discovery phase, we do workshops, we produce a 40-page strategy deck, we build a Streamlit demo that wows the executive team, then momentum dies three weeks into "integration phase" because nobody owns the production system.

I killed that model late last year. What replaced it is producing 4x the revenue with a fraction of the post-sale drag, and I think the framing generalises enough to be worth writing down.

The old model's failure mode

Open-ended AI engagements fail for a reason that has nothing to do with the AI: the buyer doesn't know what done looks like, so neither do you.

Every discovery call uncovered more "while we're at it" requirements. Every prototype invited a new round of stakeholder feedback. Timelines compounded. Scope compounded. Margins collapsed. The prototype ran on my laptop; nobody had ops capacity to productionise it; the relationship ended politely and without revenue beyond phase one.

I ran the numbers on a year of this. Average engagement: 11 weeks elapsed, 62 hours billed, $8,400 collected, one out of six going to any form of production. The rest were essentially paid tutorials.

The new model: one system, fixed scope, shipped

I replaced the menu with a single SKU:

One AI agent. You pick from ten pre-specified types. 90-minute strategy call, up to four hours of build, 30-day support, handover docs. Fixed price.

That's it. No discovery phase. No "AI strategy" deliverable. No open-ended prototype.

The ten types are deliberately narrow:

  1. Content Creation Agent
  2. Customer Support Triage Agent
  3. Social Media Agent
  4. Research & Monitoring Agent
  5. Data Ingestion Pipeline
  6. Code Review Agent
  7. Meeting Prep Agent
  8. Email Management Agent
  9. Report Generation Agent
  10. Lead Qualification Agent

Every one of them is something I've built enough times to know the shape of the work end-to-end. Every one runs on the client's infrastructure — no hosted SaaS, no lock-in, no recurring fee to me.

That constraint is the whole trick. Let me walk through why.

Why "one system" beats "AI strategy"

1. The buyer can actually say yes

"Do you want an AI strategy?" is a question the VP needs to take to three other VPs. "Do you want a lead qualification agent that scores inbound leads against your ICP and routes them to the right AE by end of next week?" is a question the sales ops lead can answer in one meeting.

Selling to an individual operator with a specific pain is ten times faster than selling to a committee with a vague mandate.

2. Scope can't drift if there's nothing to drift into

The contract names the system type, the infrastructure target, the build hours, and the handover artefacts. When the client says "while we're at it, can we also do X?" — X is a second engagement. Not a scope expansion, a second purchase. Clients actually prefer this once they see the first one ship.

3. Production is not a phase, it's the deliverable

Because the system runs on their infra from day one, there's no "now we need to productionise the prototype" cliff. The demo IS the production system. This alone eliminated ~70% of the post-sale drag from the old model.

4. You can actually estimate

After building (say) the Lead Qualification Agent six or seven times, the build is boring. You know what the data looks like, what the integrations look like, where the surprises live. Four hours is not a guess, it's a measured number with a small standard deviation. That makes fixed-price viable.

Worked example: Lead Qualification Agent

Let me make this concrete. Here's what the actual scope looks like for one of the ten systems, because I think the narrowness is the thing people don't believe until they see it.

Deliverable: One agent that ingests inbound leads from the client's CRM or form provider, scores each against an ICP specification the client and I write together during the strategy call, routes high-scoring leads to the appropriate AE with a drafted first-touch email, and logs everything to a daily digest.

Stack (chosen during strategy call based on their existing tooling):

  • LLM: Claude Sonnet or GPT-4o-class (client's existing API account)
  • Orchestration: a small Python service or a workflow tool they already pay for (n8n, Make, Zapier) — whichever has lower cognitive load for their team
  • Data in: webhook from their form provider or CRM
  • Data out: Slack message + CRM field update + daily email digest

Strategy call (90 min) covers:

  • ICP definition (this is usually the first time they've written it down properly)
  • Routing rules (which AE gets what, fallback behaviour)
  • Edge cases (duplicate leads, non-English leads, obvious spam)
  • Observability (how do we know when the agent is getting it wrong)

Build (up to 4 hours):

  • Prompt with few-shot examples drawn from their actual closed-won and closed-lost leads
  • Scoring output structured as JSON with rationale
  • Routing logic
  • Digest job
  • Error handling + a kill switch
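
To make the four-hour number believable, here's roughly what the non-prompt half of the build looks like. This is a stripped-down sketch, not my actual code: the routing thresholds, env var name, and scoring schema are all illustrative. But the bones are the same. The LLM call hides behind a callable, bad JSON fails closed to a human queue, and the kill switch is one environment variable:

```python
import json
import os

KILL_SWITCH_ENV = "LEADQUAL_PAUSED"   # set to "1" to pause the agent

# (min_score, assignee): first match wins; thresholds are illustrative
ROUTING_RULES = [(80, "ae-enterprise"), (50, "ae-midmarket")]
FALLBACK_ASSIGNEE = "sdr-queue"       # humans review anything low or broken


def parse_score(raw: str) -> dict:
    """Parse the model's JSON scoring output; fail closed on garbage."""
    try:
        data = json.loads(raw)
        score = int(data["score"])
        rationale = str(data.get("rationale", ""))
    except (ValueError, KeyError, TypeError):
        # Unparseable output routes to a human instead of being dropped
        return {"score": 0, "rationale": "unparseable model output"}
    return {"score": max(0, min(100, score)), "rationale": rationale}


def route(score: int) -> str:
    for threshold, assignee in ROUTING_RULES:
        if score >= threshold:
            return assignee
    return FALLBACK_ASSIGNEE


def handle_lead(lead: dict, score_fn):
    """score_fn wraps the actual LLM call and returns its raw JSON text."""
    if os.environ.get(KILL_SWITCH_ENV) == "1":
        return None                   # paused: caller logs and skips
    result = parse_score(score_fn(lead))
    result["assignee"] = route(result["score"])
    return result                     # caller posts to Slack / updates CRM
```

The digest job is just a daily query over wherever you log `result`. The interesting engineering lives in the prompt and the few-shot examples, which is exactly why that's where the strategy call's output goes.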

Handover:

  • Runbook (how to adjust the ICP, how to read the digest, how to pause the agent)
  • Cost expectations with their expected volume
  • Prompt file checked into their repo (not mine)
  • 30 days of email support for tuning
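
The cost-expectations line item is usually a one-page arithmetic sheet. A sketch with placeholder numbers; the real inputs are the client's volume and their provider's current pricing:

```python
# Illustrative cost-expectations sheet for the handover doc.
# Every number below is a placeholder assumption, not real pricing:
# actual volume comes from the client, actual $/token from their provider.
leads_per_month = 600
tokens_per_lead = 3_000           # prompt + few-shot examples + scored output
usd_per_million_tokens = 5.00     # assumed blended input/output rate

monthly_tokens = leads_per_month * tokens_per_lead
monthly_cost = monthly_tokens / 1_000_000 * usd_per_million_tokens
print(f"{monthly_tokens:,} tokens/month, about ${monthly_cost:.2f}/month")
```

At any plausible lead volume the API bill rounds to lunch money, which is worth stating explicitly in the runbook because clients consistently overestimate it.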

Total elapsed time from purchase to live system: usually 8-12 working days, most of which is waiting on the client to send over the sample leads for the few-shot examples.

What this does to the P&L

Old model: 11 weeks elapsed, 62 hours billed, $8,400 collected, production-rate 1-in-6.

New model: ~2 weeks elapsed, ~6 hours of my time (90 min call + 4h build + ~30 min admin), fixed fee in the low four figures, production-rate 100% because production IS the product.

The revenue per hour isn't just higher — it's higher and the hours are predictable and the pipeline fills faster because the offer is legible. "One AI agent, fixed price, two weeks" closes itself. "AI transformation engagement" needs six calls to close, if it closes.
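
The arithmetic behind that claim, taking $1,500 as a representative "low four figures" fee (the SOW figure from the enterprise caveat below):

```python
# Effective hourly rate, old model vs new, using the numbers in this post.
# The $1,500 fee is illustrative; the hours are as stated above.
old_rate = 8400 / 62    # old model: ~ $135/hr, 1-in-6 reaching production
new_rate = 1500 / 6     # new model: $250/hr, everything reaching production
# Rate alone is under 2x; the rest of the 4x is velocity:
# ~2-week cycles instead of 11, so the same calendar holds more engagements.
```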

When this model doesn't work

I want to be honest about the edges.

  • Enterprises with procurement processes. If legal needs six weeks to approve a $1,500 SOW, this model bounces. Enterprises need enterprise framing — different SKU, different sales motion.
  • Genuinely novel problems. If the client wants something none of my ten systems cover, I don't flex the SKU. I either decline or quote a separate discovery engagement priced as discovery. Flexing the fixed SKU is how you get back to the old failure mode.
  • Clients who want to own the build process. Some teams want to pair on it. That's a coaching engagement, not this one. Different product.
  • Buyers who actually need strategy. Sometimes the honest answer is they shouldn't build an AI system yet. I send them away. This happens more than you'd think and is fine.

The meta-lesson

The interesting thing isn't the AI. It's the SKU design.

The reason consulting revenue is hard and product revenue is easy is that products have a surface area the buyer can evaluate in under a minute. Consulting usually doesn't. You can make consulting have that surface area — by picking one thing, pricing it, scoping it, and refusing to flex — but you have to be willing to say no to the shape-shifting discovery calls that used to feel like progress.

Four hours of build, fixed price, one system, shipped. That's the whole pitch.

If you want to see how I structured the actual SKU — the ten system types, the exact scope, the 30-day support terms — it's live here. Not a sales pitch, just: if you're running a similar shop and want to steal the structure, it's documented. The hundred words of "what you get / what you own afterwards / requirements" on that page were themselves six months of painful scope-fighting distilled down.

The boring SKU won. Every time I'm tempted to make it more interesting, I lose money.
