On December 27, 2024, I heard Greg Isenberg break down AI mobile apps generating $50K+ MRR on his podcast. He laid out five criteria for picking a market: (1) audience actively spends money, (2) repeating problem, (3) solution involves photo/video input, (4) accuracy matters enough to pay for, (5) existing tools are weak.
I immediately thought: cigars.
$57 billion global market. 5–7% annual growth. The top competitor — Cigar Scanner, 150K users — had been pulled from the App Store. Cigar Dojo had 29K members on a desktop-first platform with zero AI. Every existing app was basically a manual database where you type in what you're smoking.
By December 28, I had a functional app with AI-powered cigar identification, a digital humidor, tasting journal, smoke session tracker, social feed, achievement badges, subscription billing, and a referral engine. Fifteen major features. Forty-eight hours. One person.
This post breaks down exactly how I did it — the architecture, the methodology, and the prompt engineering approach that made the speed possible.
## The Stack
- Mobile: React Native / Expo (iOS + Android)
- Web: Next.js 14
- Backend: Supabase (PostgreSQL + Row Level Security + Auth + Storage)
- AI — Cigar Identification: Claude Vision API
- AI — Concierge Chat: Claude Sonnet
- AI — Image Generation: Gemini / Imagen 3.0
- Payments: RevenueCat (mobile), Stripe (web)
- Development: Replit Agent + Claude for architecture/prompts
One database. One API. Three interfaces (mobile app, web app, B2B partner portal).
## The Method: AI-Prompt Sequencing
Here's the thing that made 48 hours possible. I didn't build features one by one. I didn't even write code directly for the first several hours.
Instead, I used Claude to architect the entire product — market validation, feature specs for every screen, database schema, monetization model, and go-to-market strategy. That gave me a comprehensive blueprint.
Then came the part that changed how I build everything: I had Claude design a series of complete, sequenced prompts that I could feed directly to Replit Agent.
Each prompt was a self-contained module. Not "build me a social feed." Each one included:
- Database schema (exact tables, columns, types, RLS policies)
- API routes (endpoints, request/response shapes, error handling)
- UI components (screens, state management, user flows)
- Integration points with previously built modules
Approximately 12 prompts. Ordered by dependency — authentication first, then core data models, then features that depend on those models, then features that depend on other features.
The key insight: each prompt assumed the previous ones were already implemented. So prompt #7 (badges system) could reference the database tables created in prompt #3 (humidor) and prompt #5 (smoke sessions) without re-specifying them.
I fed each prompt to Replit Agent in sequence, tested, fixed edge cases, and moved to the next one.
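The dependency ordering is the whole trick, so here's a minimal sketch of how it can be represented and sanity-checked. The module names and ids mirror the examples in the text (#3 humidor, #5 smoke sessions, #7 badges); the rest are assumptions, and the real prompts were full specs, not one-liners.

```typescript
// Each prompt module declares which earlier modules it builds on.
interface PromptModule {
  id: number;
  name: string;
  dependsOn: number[]; // ids of modules that must ship first
}

const sequence: PromptModule[] = [
  { id: 1, name: "auth", dependsOn: [] },
  { id: 2, name: "core-data-models", dependsOn: [1] },
  { id: 3, name: "humidor", dependsOn: [2] },
  { id: 4, name: "scan-identify", dependsOn: [2] },
  { id: 5, name: "smoke-sessions", dependsOn: [3] },
  { id: 7, name: "badges", dependsOn: [3, 5] }, // references #3 and #5
];

// A valid sequence never references a module that hasn't run yet.
function isValidOrder(mods: PromptModule[]): boolean {
  const done = new Set<number>();
  for (const m of mods) {
    if (!m.dependsOn.every((d) => done.has(d))) return false;
    done.add(m.id);
  }
  return true;
}
```

Running this check before feeding prompts to the agent catches a misordered sequence before you waste a build cycle on it.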
## The AI Integration Layer
### Cigar Identification (Claude Vision)
The core feature. User photographs a cigar band. The image goes to Claude Vision with a structured prompt requesting JSON output:
```json
{
  "brand": "Arturo Fuente",
  "product_line": "Don Carlos",
  "vitola": "Robusto",
  "country_of_origin": "Dominican Republic",
  "wrapper_type": "Cameroon",
  "ring_gauge": 50,
  "length_inches": 5.25,
  "strength": "Medium-Full",
  "price_range_single": "$12-18",
  "price_range_box": "$180-240",
  "confidence": 0.92
}
```
Confidence scores determine the UX path. High confidence → show results directly. Lower confidence → suggest manual verification or route to the AI concierge for a second opinion.
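The routing itself is simple branching on the parsed JSON. A sketch, with the caveat that the thresholds (0.85 and 0.60) are illustrative assumptions; the post doesn't state exact cutoffs:

```typescript
// Minimal shape of the parsed vision response for routing purposes.
interface ScanResult {
  brand: string;
  confidence: number; // 0..1, from the structured JSON output
}

type UxPath = "show_results" | "manual_verify" | "concierge_second_opinion";

// Route the user based on how confident the identification was.
function routeScan(result: ScanResult): UxPath {
  if (result.confidence >= 0.85) return "show_results";
  if (result.confidence >= 0.6) return "manual_verify";
  return "concierge_second_opinion";
}
```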
### Don Carlos AI Concierge (Claude Sonnet)
This is where it gets interesting. Don Carlos is a branded AI persona — a distinguished gentleman with deep cigar knowledge, warmth, and cultural sophistication. But the real power is context injection.
Every conversation receives:
- The user's complete humidor inventory
- Their tasting history and ratings
- Previous smoke sessions
- Their stated flavor preferences
So when a user asks "what should I smoke tonight?" Don Carlos isn't giving generic recommendations. He's looking at what's actually in your humidor, what you've rated highly, and what you haven't tried yet.
He can also identify cigars from photos when direct scanning is inconclusive, suggest drink pairings based on flavor profiles, and maintain conversation persistence across sessions.
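Mechanically, context injection just means assembling the user's real data into the system prompt before every conversation. A sketch of the pattern; the field names and prompt wording here are assumptions, not the production prompt:

```typescript
// Everything the concierge should know about this user, fetched per session.
interface UserContext {
  humidor: { brand: string; line: string; quantity: number }[];
  topRated: string[];
  flavorPreferences: string[];
}

// Prepend the user's actual inventory and history to the persona prompt,
// so recommendations are grounded in what they really own.
function buildConciergeSystemPrompt(ctx: UserContext): string {
  const inventory = ctx.humidor
    .map((c) => `- ${c.brand} ${c.line} (x${c.quantity})`)
    .join("\n");
  return [
    "You are Don Carlos, a distinguished gentleman with deep cigar knowledge.",
    "Recommend only from the user's actual humidor below.",
    `HUMIDOR:\n${inventory}`,
    `HIGHLY RATED: ${ctx.topRated.join(", ")}`,
    `FLAVOR PREFERENCES: ${ctx.flavorPreferences.join(", ")}`,
  ].join("\n\n");
}
```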
### Self-Building Image Library (Gemini/Imagen 3.0)
This one's my favorite piece of architecture. When a scan identifies a cigar that doesn't have an existing image in the database, the system generates a photorealistic product photo using Gemini.
The prompt specifies studio lighting, dark wood background, accurate band details, and elegant composition. The image gets stored in Supabase Storage and cached for all future users who scan the same cigar.
Users unknowingly build a premium image database. Cost: roughly $20–80 per 1,000 photorealistic images. Competitors would need to photograph thousands of cigars manually to replicate what the community builds for us passively.
A generation logging table tracks requests, success rates, costs, and provides a premium gating option — free users see existing images, premium users trigger generation for cigars not yet in the library.
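The get-or-generate flow with premium gating reduces to a few lines. In this sketch an in-memory Map stands in for Supabase Storage and `generateImage` is a stub standing in for the Imagen call; the gating rule matches the description above:

```typescript
// Cache of previously generated images: cigarId -> stored image URL.
const imageCache = new Map<string, string>();

// Stub for the real Imagen request (studio lighting, dark wood, band details).
function generateImage(cigarId: string): string {
  return `https://storage.example.com/cigars/${cigarId}.png`;
}

function getCigarImage(cigarId: string, isPremium: boolean): string | null {
  const cached = imageCache.get(cigarId);
  if (cached) return cached;        // everyone sees existing images
  if (!isPremium) return null;      // free users don't trigger generation
  const url = generateImage(cigarId); // premium scan builds the library
  imageCache.set(cigarId, url);     // cached for all future users
  return url;
}
```

One premium user's scan pays the generation cost once; every later user, free or paid, hits the cache.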
## The Feature That Came From a Dumb Observation
Here's a product insight that had nothing to do with AI.
Cigar smoking is a 45–90 minute ritual where people are literally just sitting there. That's an insanely long potential session time that most consumer apps would kill for.
This led to Smoke Session Mode — a companion experience with a real-time animated burning cigar that progresses as time passes, complete with ash buildup and smoke wisps.
The session timer isn't arbitrary. It uses a formula:
```
baseTime = ringGauge × lengthInches / 30
adjustedTime = baseTime × strengthMultiplier
```
So a thick, long, full-bodied cigar gets a longer estimated session than a slim mild one. Personalized to the exact cigar you scanned.
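In code, the formula is a one-liner plus a multiplier lookup. The base formula is exactly as stated above; the multiplier values are assumptions for illustration, since the post doesn't list them:

```typescript
// Assumed strength multipliers; only the ordering (mild < medium < full)
// is implied by the text.
const STRENGTH_MULTIPLIER: Record<string, number> = {
  mild: 0.85,
  medium: 1.0,
  full: 1.15,
};

// baseTime = ringGauge × lengthInches / 30, scaled by strength.
function estimateSessionTime(
  ringGauge: number,
  lengthInches: number,
  strength: "mild" | "medium" | "full"
): number {
  const baseTime = (ringGauge * lengthInches) / 30;
  return baseTime * STRENGTH_MULTIPLIER[strength];
}
```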
At 33% and 66% through the session, the app prompts you with Three Thirds flavor education — explaining how the flavor profile shifts as you smoke through the first, second, and third portions. Most cigar smokers don't know this. Now they learn it in real time, while smoking.
Ambient mode dims the UI to show just the burning cigar and timer. Haptic feedback fires at phase transitions. Session history tracks everything with ratings and notes.
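The Three Thirds prompts and haptic transitions both fall out of one function that maps elapsed session fraction to the current third. A sketch of that mapping (the names are mine, not from the app's source):

```typescript
type Third = "first" | "second" | "final";

// Which third of the smoke the user is currently in.
function currentThird(elapsedFraction: number): Third {
  if (elapsedFraction < 1 / 3) return "first";
  if (elapsedFraction < 2 / 3) return "second";
  return "final";
}

// Fire flavor education and haptics only when the third changes.
function isPhaseTransition(prev: number, next: number): boolean {
  return currentThird(prev) !== currentThird(next);
}
```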
No competitor has anything like this. And it emerged from a simple observation about session length, not from a feature spec.
## The Gamification Layer
47 badges across 10 categories. This sounds excessive until you understand the retention strategy.
The design principle: every celebration should feel earned. Confetti for logging in is cringe. Confetti for breaking a 30-day smoking streak? That hits different.
Badge categories span collection milestones (humidor size), exploration (trying cigars from different countries), social engagement (sharing, reviewing), session commitment (completing full smoke sessions), and knowledge (identifying cigars correctly on first scan).
The leaderboard runs a "Scan of the Day" algorithm that surfaces interesting scans — rare cigars, high-confidence identifications, first-time-scanned brands — rather than just ranking by volume.
## The Business Model Nobody Sees
The consumer app is a Trojan horse. The real revenue is B2B.
Consumer subscriptions ($6.99/week or $49.99/year on mobile, $9.99/month or $79.99/year on web) provide baseline revenue. But the real play is lounge partnerships.
CigarSnap drives foot traffic to lounges through its lounge finder, with check-ins and traffic analytics. Free listings prove the value. Once a lounge sees 50 check-ins a month from CigarSnap users, the conversation about a $49–149/month Preferred Partner listing sells itself.
138 Texas lounges mapped at launch. B2B revenue projections for DFW alone: $2,470/month conservative, $7,920/month aggressive.
The consumer app is top of funnel. B2B is the real money.
## What I'd Do Differently
Testing between prompts. I should have written integration tests after each prompt module instead of doing a big QA pass at the end. Some dependency issues between modules would've been caught earlier.
The web rebuild. The initial build was mobile-first through Replit Agent. The Next.js web app came later as a separate prompt sequence. I should have designed both interfaces from the start with a shared component library rather than rebuilding UI components.
Scope discipline. I almost built a CRM inside CigarSnap because a lounge owner told me about her $25K/year CRM spend. I had to physically stop myself. CigarSnap is a lead source that feeds into a CRM — it's not a CRM itself. Knowing what NOT to build is harder than building.
## The Numbers
| Metric | Value |
|---|---|
| Idea to functional MVP | ~48 hours |
| Features at launch | 15+ |
| Full ecosystem build (mobile + web + B2B portal) | Under 30 days |
| Texas lounges mapped | 138+ |
| Reddit leads scraped for validation | 3,522 at $1.08 total cost |
| Achievement badges | 47 across 10 categories |
| AI image generation cost | $20–80 per 1,000 images |
| Global cigar market | $57B+ |
## About Me
I'm Matt Cretzman. I build AI agent systems that run entire business functions — and companies that depend on them working.
CigarSnap is one of seven ventures I'm currently running. The others span legal tech (TextEvidence), AI coaching (Skill Refinery), B2B lead generation (LeadStorm AI), EdTech (HeyBaddie), meeting management (MyPRQ), and the AI marketing agency that started it all (Stormbreaker Digital).
I write about the technical side of building these at mattcretzman.com and share the less-filtered version on Substack.
If you're building with AI agents, I'd love to hear what your stack looks like. Drop a comment or find me on LinkedIn or GitHub.