TechPulse Lab

Originally published at aiarmory.shop

The 10 AI projects I'd actually scope for $649 — and the 5 I refuse

Most "build me an AI thing" briefs I get fall into one of two buckets. About 60% of them are real, scoped, shippable in a weekend. The other 40% are open-ended research projects dressed up as engineering tasks, and the only honest answer is no, not for a flat fee, not in four hours, probably not ever in the shape you described.

The skill people pay for isn't the build. The skill is being able to look at a brief and say which bucket it's in within ten minutes.

Here's how I do it, what I'll actually scope for a fixed $649 price, and the ones I send back with a "this isn't ready yet" note.

The five questions I ask before I scope anything

Before I even look at the use case, I run the brief through five filters. If it fails any of them, the project gets re-scoped, downgraded to a strategy session, or refused.

  1. Is the success criterion falsifiable? "Make the AI write better emails" is not a criterion. "Reduce the time I spend writing prospect outreach from 2 hours/day to under 20 minutes" is. If the buyer can't tell me what "done" looks like in a sentence, the project will scope-creep forever.
  2. Is the input bounded? The agent needs a clearly defined set of inputs — a feed, a folder, an inbox, a set of API endpoints. "It should be able to find anything I might need" is not a bound; it's a research project.
  3. Is the output reviewable in under a minute? If a human can't glance at the output and tell whether it's right, the agent will silently degrade and nobody will notice. Email drafts: yes. Strategic recommendations: usually no. Code suggestions: depends on the codebase.
  4. Does the failure mode matter? A meeting-prep agent that occasionally misses a stakeholder is fine. A lead-qualification agent that misroutes a $50K opportunity is not. High-stakes failure modes need different architecture (or different humans).
  5. Is the data already in machine-readable form? If "first we'd need to extract twelve years of PDFs" is in the plan, it's not a $649 project. It's a $30K data engineering project with an AI layer on top.

If all five answers are yes, I'll quote the flat fee. If any are no, I downgrade or refuse.
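
If it helps to see the five filters as code, here's a minimal sketch in Python. The field names are my own shorthand, not a formal spec:

```python
from dataclasses import dataclass, fields

@dataclass
class Brief:
    falsifiable_success_criterion: bool  # 1. "done" fits in one sentence
    bounded_inputs: bool                 # 2. a fixed feed/folder/inbox/API list
    reviewable_output: bool              # 3. a human can check it in a minute
    tolerable_failure_mode: bool         # 4. a miss is annoying, not expensive
    machine_readable_data: bool          # 5. no twelve-year PDF excavation first

def scope_decision(brief: Brief) -> str:
    failed = [f.name for f in fields(brief) if not getattr(brief, f.name)]
    if not failed:
        return "quote the flat fee"
    return "downgrade or refuse (failed: " + ", ".join(failed) + ")"
```

One False anywhere and the quote doesn't happen; the list of failed fields is the re-scoping conversation.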

The 10 systems I'll actually build for $649

This is the menu I work from. Each one passes the five filters above when scoped properly. They're the ones I've shipped repeatedly enough to know exactly where they break.

1. Content Creation Agent

Drafts blog posts, social posts, or newsletters from a structured brief. Buyer provides the brief format (target audience, key points, tone reference, source URLs). The agent produces a draft that the buyer edits — it's not "press button, ship to readers." The win is reducing a 90-minute first draft to a 20-minute edit.
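
To make "structured brief" concrete, here's a hedged sketch of the input shape. The fields and the `call_model` stub are illustrative assumptions, not a spec; swap the stub for whatever model client you actually use:

```python
from dataclasses import dataclass

@dataclass
class ContentBrief:
    audience: str            # who the piece is for
    key_points: list[str]    # what it must cover
    tone_reference: str      # pointer to the 500-word voice sample
    source_urls: list[str]   # the only sources the draft may cite

def call_model(prompt: str) -> str:
    return "DRAFT: ..."  # placeholder, not a real model call

def draft_post(brief: ContentBrief) -> str:
    prompt = (
        f"Audience: {brief.audience}\n"
        f"Key points: {'; '.join(brief.key_points)}\n"
        f"Match the voice of this sample: {brief.tone_reference}\n"
        f"Sources: {', '.join(brief.source_urls)}\n"
        "Write a first draft for a human editor. Do not invent facts."
    )
    return call_model(prompt)
```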

Where it breaks: When the buyer doesn't have a written voice guide. We end up regenerating drafts trying to match a feeling. I now require a 500-word voice sample and three "good vs. bad" examples before kickoff.

2. Customer Support Triage Agent

Reads incoming tickets, classifies them (refund / bug / feature / billing / spam), drafts a first reply from the knowledge base, and routes the ticket to the right human queue. Critically, it does not send anything autonomously. The human approves and sends.
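
A minimal sketch of that classify-and-route loop, assuming a `call_model` helper wrapping your model client. The category-to-queue mapping is an example, not a prescription:

```python
# Example mapping only; unknown labels fail to a human, nothing auto-sends.
CATEGORIES = {"refund": "finance", "bug": "engineering",
              "feature": "product", "billing": "finance", "spam": "spam_review"}

def call_model(prompt: str) -> str:
    return "bug"  # placeholder, not a real model call

def triage(ticket: str) -> dict:
    label = call_model(
        f"Classify this ticket as one of {list(CATEGORIES)}; "
        f"reply with the label only.\n\n{ticket}"
    ).strip().lower()
    return {
        "category": label,
        "queue": CATEGORIES.get(label, "needs_human_review"),  # fail to a human
        "draft_reply": call_model(f"Draft a first reply from the FAQ for:\n{ticket}"),
        "sent": False,  # the human approves and sends, always
    }
```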

Where it breaks: When the knowledge base is "in someone's head." I require an existing FAQ or help center as input. If they don't have one, the project becomes a documentation project first.

3. Social Media Agent

Drafts platform-specific posts from a content brief, suggests a posting schedule, generates 3-5 variations per post. Outputs into a queue tool (Buffer, Typefully, raw markdown) — does not auto-post.
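
As a sketch, the variation loop is barely more than a prompt in a comprehension. The platform limits and the `call_model` stub are my own placeholder assumptions:

```python
PLATFORM_LIMITS = {"x": 280, "linkedin": 3000}  # example limits, verify per platform

def call_model(prompt: str) -> str:
    return "draft post"  # placeholder, not a real model call

def variations(brief: str, voice_guide: str, platform: str, n: int = 3) -> list[str]:
    limit = PLATFORM_LIMITS[platform]
    return [
        call_model(
            f"Variation {i + 1} of {n} for {platform}, under {limit} chars.\n"
            f"Voice guide:\n{voice_guide}\n\nBrief:\n{brief}"
        )
        for i in range(n)
    ]
# Output goes to a queue tool or a markdown file; nothing here posts anywhere.
```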

Where it breaks: Buyers expect it to be funny / punchy / on-brand without a brand voice document. Same fix as the content agent: a written voice guide and worked examples are kickoff prerequisites.

4. Research and Monitoring Agent

Tracks a fixed set of topics, competitors, RSS feeds, or search queries. Delivers a daily or weekly digest to email or Slack. The bound is the input list — the agent doesn't go discover new sources.
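
Here's roughly what the bounded version looks like, assuming feedparser for the RSS half (a real library) and a stubbed summarise step. The point is that SOURCES is a fixed list the agent never grows:

```python
import feedparser  # pip install feedparser

SOURCES = ["https://example.com/feed.xml"]  # the bound: this list IS the scope

def summarise(items: list[str]) -> str:
    return "\n".join(f"- {item}" for item in items)  # stand-in for a model call

def daily_digest() -> str:
    titles = []
    for url in SOURCES:
        feed = feedparser.parse(url)
        titles += [entry.get("title", "(untitled)") for entry in feed.entries[:5]]
    return summarise(titles)  # deliver via email or a Slack webhook of your choice
```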

Where it breaks: When the buyer wants "comprehensive coverage." That's a research firm, not an agent. The honest framing is: "this catches 80% of what matters, you'll occasionally find things it missed." Buyers who can't accept that imperfection are wrong-fit.

5. Data Ingestion Pipeline

Pulls from defined sources (RSS, APIs, scraping with permission, file drops), normalises into a structured format, drops into a database or sheet. The AI part is usually entity extraction or classification — the rest is plumbing.
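
The plumbing half is where the effort goes. A sketch of the retry-and-log-loudly fetch; the retry count and backoff are arbitrary choices, not recommendations:

```python
import logging
import time
import urllib.request

log = logging.getLogger("ingest")

def fetch(url: str, retries: int = 3) -> bytes:
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except Exception as exc:
            log.error("fetch %s failed (attempt %d/%d): %s", url, attempt, retries, exc)
            time.sleep(2 ** attempt)  # crude exponential backoff
    raise RuntimeError(f"{url} failed after {retries} attempts")  # loud, never silent
```

The entity extraction or classification step slots in after the fetch, and it's the smallest part of the file.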

Where it breaks: Source schemas changing without notice. Build retries, log failures loudly, and accept that this needs maintenance. I include "first 30 days of breakage" as part of the support window.

6. Code Review Agent

Watches PRs, posts review comments on a focused dimension (security / accessibility / test coverage / specific style guide). The narrower the dimension, the better. "General code review" agents are noise generators.
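
For example, "reliably catches missing tests on new endpoints" can start life as a heuristic this crude. The decorator check is a Flask-flavoured guess, purely illustrative:

```python
def flag_missing_tests(changed_files: dict[str, str]) -> list[str]:
    """changed_files maps file path -> added-lines text from the PR diff."""
    adds_endpoint = any(
        "@app.route(" in added or "@router." in added  # framework-specific guess
        for path, added in changed_files.items()
        if not path.startswith("tests/")
    )
    adds_tests = any(path.startswith("tests/") for path in changed_files)
    if adds_endpoint and not adds_tests:
        return ["New endpoint added with no accompanying tests."]
    return []  # say nothing when there's nothing to say
```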

Where it breaks: Trying to do everything. A PR review agent that flags fifteen things will be muted. A PR review agent that reliably catches missing tests on new endpoints is invaluable. Pick one job.

7. Meeting Prep Agent

Reads tomorrow's calendar, looks up attendees, summarises recent emails with each, drafts a one-page brief per meeting, delivered the night before. Bounded inputs (calendar, email, optionally CRM) and a reviewable output (the buyer can scan it before the meeting).
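
The assembly logic is simple once the access exists. A sketch, with `get_events` and `recent_threads` standing in for real calendar and mail API calls behind whatever OAuth scopes IT approved; the event shape is an assumption:

```python
from datetime import date, timedelta

def get_events(day: date) -> list[dict]:
    return []  # placeholder for a calendar API call

def recent_threads(attendee: str) -> str:
    return ""  # placeholder for an email search call

def briefs_for_tomorrow() -> list[str]:
    tomorrow = date.today() + timedelta(days=1)
    briefs = []
    for event in get_events(tomorrow):  # assumed shape: {"title", "attendees"}
        context = "\n".join(recent_threads(a) for a in event["attendees"])
        briefs.append(f"{event['title']}\n{context}")  # one page per meeting
    return briefs  # delivered the night before, by whatever channel you like
```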

Where it breaks: Privacy/permissions. The agent needs read access to calendar and email, which means an OAuth scope conversation. I won't ship this without written sign-off from the buyer's IT team.

8. Email Management Agent

Triages inbox into categories, drafts replies for the categories that are formulaic (scheduling, "no thanks," "let me check and get back to you"), flags urgent items at the top. Does not send autonomously.
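
The split that matters, as a sketch: draft only the formulaic categories, flag everything else for the human. The category names, urgency heuristic, and `call_model` stub are all illustrative:

```python
FORMULAIC = {"scheduling", "polite_decline", "acknowledge"}  # example categories

def call_model(prompt: str) -> str:
    return "scheduling"  # placeholder, not a real model call

def triage_email(message: str) -> dict:
    category = call_model(f"Categorise this email, label only:\n{message}").strip()
    draft = None
    if category in FORMULAIC:
        draft = call_model(f"Draft a short reply to:\n{message}")
    return {
        "category": category,
        "draft": draft,                         # None means: the human writes it
        "urgent": "urgent" in message.lower(),  # deliberately crude heuristic
    }  # nothing is sent; drafts wait for approval
```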

Where it breaks: Buyers who want it to handle nuanced replies. The 20% of email that's actually substantive is exactly the 20% an agent can't help with. The agent earns its keep on the boring 80%.

9. Report Generation Agent

Pulls from defined data sources on a schedule, runs a fixed set of analyses, generates a written report with charts and commentary. The analyses are pre-defined — the agent doesn't decide what's interesting.
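
One way to hold that line both contractually and technically is to freeze the report shape in config, so a new analysis is visibly a change to the spec. A sketch with invented analysis names:

```python
REPORT_SPEC = (  # frozen at contract time; editing this is a change request
    ("weekly_revenue", "sum of orders.total, grouped by ISO week"),
    ("churn_rate", "cancellations divided by active accounts"),
)

def run_analysis(name: str, definition: str) -> str:
    return f"## {name}\n(chart + commentary for: {definition})"  # placeholder

def generate_report() -> str:
    return "\n\n".join(run_analysis(n, d) for n, d in REPORT_SPEC)
```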

Where it breaks: Stakeholders who later ask "but what about [analysis we never specified]?" The contract has to lock the report shape. New analyses are change requests, not bugs.

10. Lead Qualification Agent

Scores inbound leads against a rubric, routes to the right rep, drafts outreach for the top tier. Requires a written rubric (this is the "voice guide" of sales) and a clear definition of what "qualified" means for the business.
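
The rubric, made literal. These criteria, weights, and the threshold are invented for illustration; the real ones come from the buyer, in writing:

```python
RUBRIC = {  # criterion -> weight, supplied by the buyer before kickoff
    "company_size_fit": 3,
    "budget_confirmed": 4,
    "timeline_under_90_days": 2,
}
QUALIFIED_AT = 6  # example threshold

def score(lead: dict[str, bool]) -> int:
    return sum(weight for name, weight in RUBRIC.items() if lead.get(name))

def route(lead: dict[str, bool]) -> str:
    if score(lead) >= QUALIFIED_AT:
        return "top tier: route to rep, draft outreach"
    return "nurture queue: no outreach drafted"
```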

Where it breaks: The rubric doesn't exist. Building it during the project doubles the scope. I now require a written rubric or downgrade to a strategy session to produce one first.

The ones I refuse

Five briefs come in regularly that I won't quote a flat fee for, and I'd rather be honest about why than take the deposit and underdeliver.

"An AI that knows our whole business." This is a knowledge-base project plus a retrieval-augmented-generation project plus a permissions-and-access-control project plus an agent. Each piece has its own complexity budget. Quoting it as one is dishonest.

"An AI that learns from our customers' interactions and gets better over time." Online learning systems with feedback loops are a serious engineering and evaluation discipline. Most "self-improving agents" pitched at this price are actually static prompts with a fine-tuning fantasy on top.

"An AI that handles end-to-end [complex business process]." End-to-end means at least three stage transitions, each with its own failure mode. Build it as three agents with human handoffs in between, and the price is three projects.

"An agent that uses our internal tool that has no API." Building the API to your internal tool is the project. The agent on top is the easy part. Quote both, or neither.

"An AI replacement for [employee role]." A role is composed of dozens of distinct tasks, each with different complexity. The honest framing is: which 3-5 tasks within this role are highest-frequency and lowest-stakes, and let's automate those. The buyer who wants "a replacement" is expressing a workforce decision dressed up as an engineering brief, and that's not a project I want to be the front end of.

How to use this list

If you're a buyer:

  • Match your need to one of the 10 above before talking to anyone.
  • Write the success criterion in one sentence. If you can't, you're not ready to scope a build yet — you're ready for a planning session.
  • If your project is on the "refuse" list, don't be discouraged — split it into pieces that aren't.

If you're a builder selling AI services:

  • Productise around the 10 above. Refuse the 5 on the refuse list. Resist the temptation to say yes to revenue you can't deliver cleanly.
  • The five filter questions are the actual product. The build itself is just execution.

Going further

If you want one of the 10 above shipped on your infrastructure for a flat fee, I do this as a productised offering: 90-minute strategy call, up to 4 hours of build work, full handover docs, 30 days of support, and the matching AI Armory prompt pack thrown in. It's Single AI System Setup on aiarmory.shop, $649, fixed.

But you really don't need me. The five filters and the 10-system menu above are the actual asset. Pick one, scope it tightly, build it yourself in a weekend, and you'll learn more about what your business actually needs from AI than any vendor will tell you.

Just don't say yes to the briefs on the refuse list. Not even your own.
