What a Military Entrepreneurship Course Taught Me About AI Startups
I went to Virginia Military Institute. The education is rigorous, the hierarchy is real, and the entrepreneurship curriculum (EE382) draws from MIT's foundational frameworks — Lean Startup, BMC, IDE theory, market analysis.
I've been building an AI agent business for the past several months. When I went back through these frameworks and applied them honestly, the results were uncomfortable.
Here's what I learned.
Innovation = Invention × Commercialization
The formula sounds simple. It's brutal in practice.
VMI's curriculum opens with: Innovation = Invention × Commercialization
If either factor is zero, your innovation score is zero. Full stop.
I had built a 13-agent orchestration system. PAX protocol for inter-agent communication. Watchdog restarts. Structured handoffs. Vault architecture. Real engineering.
My commercialization score? Zero. No sales. No customer interviews. No revenue.
Innovation score: 0.
Most builders in the AI space are in the same trap. We're inventing. We're not innovating — because invention without commercialization is just a very impressive hobby.
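The multiplicative structure is the whole point, and it's easy to see in code. A minimal sketch — the 0–10 self-ratings are illustrative, not part of the curriculum:

```python
def innovation_score(invention: float, commercialization: float) -> float:
    """Innovation = Invention x Commercialization.
    Multiplicative: a zero in either factor zeroes the product."""
    return invention * commercialization

# Illustrative 0-10 self-ratings, not official curriculum numbers.
print(innovation_score(9, 0))  # 13-agent system, zero sales -> 0
print(innovation_score(5, 5))  # modest tool with paying users -> 25
```

A sum would let strong invention compensate for absent commercialization; a product doesn't. That's the brutal part.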
SME vs IDE: Know What You're Building
The MIT frameworks distinguish two types of businesses:
SME (Small/Medium Enterprise): Local market, linear growth, cash flow as the reward. Think: your local HVAC company.
IDE (Innovation-Driven Enterprise): Global market, exponential growth, wealth creation as the goal. Think: Stripe, GitHub, any developer tool with 70%+ gross margins.
AI agent tools are pure IDE territory. Digital products. Global market. High gross margin per unit (inference costs aside). No geographic constraints.
But the IDE framework comes with a warning that most builders ignore:
IDE companies start by losing money. The cost structure only justifies itself once traction arrives.
If you're burning API costs building a cathedral before your first customer, you're in IDE territory whether you know it or not. The question is whether you'll hit the inflection point before you run out of runway or motivation.
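The "run out of runway" question is just arithmetic, and it's worth running before the cathedral gets another wing. A back-of-envelope sketch with hypothetical solo-builder numbers:

```python
def months_of_runway(cash: float, monthly_burn: float) -> float:
    """Back-of-envelope runway: months until cash hits zero at the
    current burn rate (API costs, hosting, subscriptions)."""
    if monthly_burn <= 0:
        return float("inf")  # not burning: indefinite runway
    return cash / monthly_burn

# Hypothetical numbers: $3,000 set aside, $250/mo in API + hosting costs.
print(months_of_runway(3000, 250))  # -> 12.0 months to reach traction
```

Twelve months sounds generous until you remember that customer discovery, launch, and iteration all have to fit inside it.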
The Lean Startup Violations Nobody Talks About
The Lean Startup methodology (Eric Ries's book, built on Steve Blank's customer development work) is taught at MIT, used at VMI, and ignored by most developers.
The core insight: startups search for a business model; companies execute one.
Here's the audit I ran on my own project:
Violation 1: Zero customer interviews
Blue River Technology (precision agriculture) talked to 100 farmers in 10 weeks before writing a line of code. They went to the building.
I had read hundreds of HN threads and Reddit posts. That's secondary research — useful, but not the same as primary research.
Secondary research tells you what problems exist. Primary research tells you if your solution solves them.
Big difference.
Violation 2: Built the cathedral before the MVP
DipJar's first MVP had no credit card reader. It just counted card insertions — a proof-of-concept that their core assumption (people would tap cards if prompted) was valid.
I built production-grade agent orchestration before validating that my target customer would pay $97 for it.
The Lean rule: MVP tests one hypothesis, not a product. If you could demo your concept with a Google Form and a Notion doc, you probably should have done that first.
Violation 3: The mermaid strategy
The curriculum has a great name for trying to market to hybrid customer segments: the mermaid strategy. You're marketing to someone who's neither woman nor fish.
My product had three personas: a solo SaaS founder, a power developer, and a content creator. Different channels. Different price sensitivity. Different purchase triggers. Different value propositions.
Marketing that tries to speak to all three speaks clearly to none.
Fix: Pick a beachhead. One segment. Prove that segment pays. Then expand.
Market Timing: The Growth Phase Advantage (and Trap)
The industry lifecycle framework maps markets through: Emerging → Growth → Differentiation → Mature → Decline.
AI agent tooling (specifically Claude Code multi-agent patterns) is in early Growth phase. Characteristics:
- Many new entrants, no dominant player
- Software margins available (70%+)
- Explosive growth metrics (396-upvote HN threads, viral dev.to posts)
- Low entry barriers — a GitHub repo and landing page puts you in market
The Growth phase prescription: capture market share fast. Don't perfect the product. The Differentiation stage is next — that's when you need to be a named brand, not a newcomer.
But here's the trap: the biggest competitive threat in Growth-phase AI tooling isn't CrewAI or another startup. It's the platform itself.
If Anthropic publishes an official "multi-agent starter" on their docs page, any product built around configuration files loses its core differentiation overnight.
This means your moat cannot be the product. It has to be the story — the community — the proof that your system runs a real business. Those are things Anthropic can't replicate with a docs update.
The BMC Block That Kills AI Startups
The Business Model Canvas has nine blocks. Most AI startups fail on one specific block that nobody talks about: Customer Relationships.
The BMC framework breaks customer relationships into three stages: Get → Keep → Grow.
Most AI product builders have a "Get" strategy (PH launch, HN post, dev.to article). Almost none have a "Keep" strategy.
Acquiring a new customer typically costs five to ten times more than retaining an existing one. If your entire model is Get, you're on a treadmill — running hard to stay in place.
Keep mechanisms for B2B developer tools:
- Email sequence tied to first-use milestones
- Community (Discord, GitHub Discussions) that creates social investment
- Regular updates that customers can point to as proof their purchase aged well
- Use case spotlights that validate the customer's decision to peers
Grow mechanisms:
- Upsell path (free → paid → pro → done-with-you)
- Referral incentive for engineers (GitHub stars, testimonial trade)
If your BMC doesn't have entries in Keep and Grow, you don't have a business model. You have a launch plan.
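One way to make the Get/Keep/Grow audit mechanical: treat each stage as a list of concrete mechanisms and flag the empty ones. The stage names follow the BMC framework above; the data structure and example entries are my own sketch:

```python
# Get/Keep/Grow audit. Stage names follow the BMC framework;
# the structure and example entries are illustrative, not canonical.
relationships = {
    "get":  ["PH launch", "HN post", "dev.to article"],
    "keep": [],  # the block most AI launches leave empty
    "grow": [],
}

def audit(rel: dict) -> list[str]:
    """Return the stages with no mechanisms behind them."""
    return [stage for stage, mechanisms in rel.items() if not mechanisms]

missing = audit(relationships)
if missing:
    print(f"Launch plan, not a business model. Missing: {missing}")
```

If `audit` returns anything, you have homework before you have a canvas.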
The Competitive Analysis Nobody Does
The VMI curriculum makes a distinction that changed how I think about competition:
Good competition = companies doing your market badly. These are your steals.
Bad competition = companies doing your market well. These are your benchmarks.
Most builders benchmark against the successful players. That tells you how good things are — not where customers are suffering.
The right move: find the 1-star reviews. Find the Reddit threads where users complain about CrewAI's pricing or the complexity of agent setup. Find the unmet needs that successful competitors have deprioritized.
That's your wedge.
Also: the biggest threats come from outside your industry. For AI agent tooling, that's Anthropic themselves. GitHub Copilot expanding scope. VS Code native agents.
Map the existential threats separately from the competitive threats. Treat them differently.
What the Mindset Framework Actually Means
The MIT entrepreneur mindset module has a formula:
Achievement = Talent × (Hard Work)²
But the one that stuck: "Airport test" — be genuine, not always 'on'.
There's a trap in the AI agent space where everything becomes mythology. Every product is a "system." Every tool is a "framework." Every launch is an "operation."
The curriculum calls this "always on" — projecting a persona so completely that customers can't see through to whether you solve their problem.
Engineers evaluate GitHub repos by commits per week, documentation quality, and README clarity. They don't care about the brand name. They care about whether it works.
Before you name your agent system after a Greek god, ask: does your target customer relate to that, or does it confuse them about what problem you solve?
The Framework That Ties It Together
After mapping everything, here's the diagnostic I use now:
| Question | Pass/Fail Signal |
|---|---|
| Have you talked to 10+ customers? | If no: you're guessing |
| Is your MVP testable without the full product? | If no: you overbuilt |
| Do you have ONE beachhead segment? | If no: you have a mermaid |
| What is your Keep strategy? | If blank: you have a launch plan, not a business |
| What happens if the platform ships your product? | If fatal: your moat is wrong |
The answers aren't comfortable. But they're cheaper to find out now than after launch.
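The table above runs fine as a literal checklist. This encoding is mine, not the curriculum's; the answer keys and the example answers are illustrative:

```python
# The five diagnostic questions from the table above, encoded as
# predicate/verdict pairs. The encoding is mine, not the curriculum's.
CHECKS = [
    ("customer_interviews >= 10",     "you're guessing"),
    ("mvp_testable_without_product",  "you overbuilt"),
    ("beachhead_segments == 1",       "you have a mermaid"),
    ("has_keep_strategy",             "launch plan, not a business"),
    ("survives_platform_shipping_it", "your moat is wrong"),
]

def diagnose(answers: dict) -> list[str]:
    """Return the failure verdict for every check that is falsy or missing."""
    return [verdict for key, verdict in CHECKS if not answers.get(key)]

# Example: a project failing every check, as mine did at audit time.
me = {key: False for key, _ in CHECKS}
print(diagnose(me))  # all five verdicts
```

An honest `answers` dict is the hard part; the code is just there to stop you from grading on a curve.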
One More VMI Lesson
There's something particular about military education that shapes how you approach risk.
You don't plan for the expected outcome. You plan for what happens when the expected outcome fails.
The Lean Startup equivalent: assume your current plan is wrong. Not might be wrong. Is wrong. The question is how quickly you can discover in which direction.
Build your discovery infrastructure before you build your product. Talk to customers before you write code. Watch someone use your tool before you optimize the UX.
The business plan rarely survives first contact with the customer.
Plan accordingly.
Building an AI-operated business in public. Agent-written systems, human decisions. Follow along at whoffagents.com.