Yesterday I launched The Hive publicly — a multi-agent AI system running on my Mac Studio M4 Max — with Warren B making paper trades on Alpaca and Felix RE starting to scout distressed properties. Day 1 was about getting agents live. Day 2 was about giving one of them a voice and a real company behind it.
Introducing Abode AI Incorporated
Today I formally spun up Abode AI Incorporated — an AI-first real estate solutions company. The flagship product is Liam Pryor, an AI voice agent who makes outbound cold calls, handles inbound callbacks, books follow-up appointments, and processes his own transcripts.
Liam runs on ElevenLabs Conversational AI + Twilio for the actual phone infrastructure. The personality prompt was built by stacking three philosophies on top of each other:
- Ryan Serhant's relationship-first sales — leads with curiosity, never desperation
- Alex Hormozi's value equation — every interaction is a value exchange, not a pitch
- Win Without Pitching Manifesto — never sell, just explore problems
The result is an agent that doesn't sound like a robot dialer. He sounds like a sharp acquisitions guy who genuinely wants to understand your situation before ever mentioning a number.
How Liam Makes a Call
When Felix RE identifies a distressed property lead, it packages the context and passes it to the voice engine. Here's what gets injected into the ElevenLabs dynamic_variables payload before each call:
```typescript
const dynamicVars: Record<string, string> = {
  owner_name: lead.ownerName,
  address: expandAddress(lead.address),
  city: lead.city,
  state: lead.state,
  call_type: lead.callType, // "cold" | "follow-up" | "pre-foreclosure"
  requires_recording_disclosure: "true",
  investor_name: "Abode AI",
  company_name: "Abode AI",
};

// The secret sauce — municipal distress data injected per lead
if (lead.distressContext) {
  dynamicVars.distress_signals = lead.distressContext;
}
```
That distress_signals variable is what makes this different from every other AI dialer out there.
Post-Call: Groq Classifies Every Transcript
After the call ends, the transcript comes back from ElevenLabs. I pipe it through Groq (Llama 3.3 70B, runs in ~1 second) with a structured classification prompt:
```
"outcome": "interested" | "not_interested" | "callback_scheduled" |
           "no_answer" | "voicemail" | "wrong_number" | "dnc_request"
```
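Before any downstream logic touches the classifier's answer, it's worth narrowing the raw JSON to that exact union, so a hallucinated outcome surfaces immediately instead of silently routing a lead. A minimal sketch; `parseOutcome` is my naming, not necessarily what the system calls it:

```typescript
// The seven legal outcomes, mirroring the classification schema above.
type CallOutcome =
  | "interested"
  | "not_interested"
  | "callback_scheduled"
  | "no_answer"
  | "voicemail"
  | "wrong_number"
  | "dnc_request";

const OUTCOMES: ReadonlySet<string> = new Set<CallOutcome>([
  "interested", "not_interested", "callback_scheduled",
  "no_answer", "voicemail", "wrong_number", "dnc_request",
]);

// Parse the model's JSON reply and narrow the outcome field to the union.
// Throwing on anything unrecognized makes bad model output loud, not silent.
function parseOutcome(raw: string): CallOutcome {
  const parsed = JSON.parse(raw) as { outcome?: string };
  if (!parsed.outcome || !OUTCOMES.has(parsed.outcome)) {
    throw new Error(`Unrecognized outcome: ${parsed.outcome}`);
  }
  return parsed.outcome as CallOutcome;
}
```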
If the outcome is callback_scheduled, the system resolves relative times ("call me back Tuesday around this time") into absolute ISO 8601 datetimes and automatically creates a Google Calendar event — so I have a human-readable record even if I never touch the dashboard. Then it re-queues the lead for Liam to auto-call at that exact time.
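The deterministic half of that resolution step is plain date math: find the next occurrence of the named weekday while carrying the current hour forward. The real pipeline may lean on the LLM for the parsing itself; this sketch (names are mine) shows only the "next Tuesday around this time" calculation, done in UTC to stay timezone-safe:

```typescript
// Resolve "call me back Tuesday around this time" into an absolute datetime.
// targetDay: 0 = Sunday … 6 = Saturday, matching Date.getUTCDay().
function nextWeekday(from: Date, targetDay: number): Date {
  const result = new Date(from.getTime());
  let delta = (targetDay - from.getUTCDay() + 7) % 7;
  if (delta === 0) delta = 7; // "Tuesday" said on a Tuesday means next week
  result.setUTCDate(result.getUTCDate() + delta);
  return result; // hour/minute carried over = "around this time"
}

const callEnded = new Date("2025-01-06T15:30:00Z"); // a Monday
const callback = nextWeekday(callEnded, 2); // next Tuesday, same time
console.log(callback.toISOString()); // "2025-01-07T15:30:00.000Z" — ISO 8601, calendar-ready
```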
DNC requests are handled immediately and atomically — the number hits a local JSON blocklist before the function returns.
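One way to make a JSON blocklist write atomic at the filesystem level is the classic write-to-temp-then-rename pattern, since `rename` is atomic on POSIX filesystems. A sketch of that approach; the file path and function names are mine, not necessarily how the system does it:

```typescript
import { readFileSync, writeFileSync, renameSync, existsSync } from "node:fs";

const DNC_PATH = "./dnc-blocklist.json"; // hypothetical location

// Add a number to the local DNC blocklist before the call handler returns.
// Writing to a temp file and renaming means readers never observe a
// half-written blocklist.
function addToDnc(phone: string): void {
  const list: string[] = existsSync(DNC_PATH)
    ? JSON.parse(readFileSync(DNC_PATH, "utf8"))
    : [];
  if (!list.includes(phone)) list.push(phone);
  const tmp = `${DNC_PATH}.tmp`;
  writeFileSync(tmp, JSON.stringify(list, null, 2));
  renameSync(tmp, DNC_PATH); // atomic replace on POSIX filesystems
}

function isBlocked(phone: string): boolean {
  if (!existsSync(DNC_PATH)) return false;
  const list: string[] = JSON.parse(readFileSync(DNC_PATH, "utf8"));
  return list.includes(phone);
}
```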
The Distress Signal Pipeline
The real moat here isn't the voice agent. It's the data feeding into it.
Every night at 1 AM, a scraper pipeline hits six public municipal portals across three markets:
| Market | County | Signals |
|---|---|---|
| Atlanta | Fulton County | Code violations, tax delinquency |
| Memphis | Shelby County | Code violations, tax delinquency |
| Indianapolis | Marion County | Code violations, tax delinquency |
The pipeline: run each scraper sequentially (rate-limit polite), upsert raw signals into SQLite, run an enrichment pass to normalize addresses and match signals to active pipeline leads, then write a distress-signals.md file into each lead's folder.
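The upsert step is what keeps nightly re-scrapes from duplicating signals. Here's a sketch of that dedupe logic using an in-memory map in place of SQLite so the shape is visible; the key fields and names are my assumptions about the real schema:

```typescript
// Sketch of the nightly upsert: one row per (market, address, signalType);
// a repeat sighting refreshes seenAt instead of inserting a duplicate.
interface DistressSignal {
  market: string;
  address: string;
  signalType: "code_violation" | "tax_delinquency";
  seenAt: string; // ISO date of the latest scrape that saw it
}

const store = new Map<string, DistressSignal>();

function upsertSignal(sig: DistressSignal): void {
  const key = `${sig.market}|${sig.address}|${sig.signalType}`;
  const existing = store.get(key);
  if (existing) {
    existing.seenAt = sig.seenAt; // already known: just refresh
  } else {
    store.set(key, sig);
  }
}

// Scrapers run one at a time (rate-limit polite), each feeding the upsert.
const scrapers: Array<() => DistressSignal[]> = [
  () => [{ market: "Atlanta", address: "12 Peachtree St", signalType: "code_violation", seenAt: "2025-01-06" }],
  () => [{ market: "Atlanta", address: "12 Peachtree St", signalType: "code_violation", seenAt: "2025-01-07" }],
];

for (const scrape of scrapers) {
  for (const sig of scrape()) upsertSignal(sig);
  // In the real pipeline: pause here between portals before the next scraper
}

console.log(store.size); // 1 — the second sighting updated, not duplicated
```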
Total monthly cost: $0. All public records. All open-source tools (Puppeteer, SQLite, Node.js).
When Liam calls a homeowner with an open code violation, he already knows about it. That's not a magic trick — it's just doing the research before picking up the phone.
The Debug That Wasn't a Bug
Spent a few hours today troubleshooting voice agent instability. Liam kept cutting himself off mid-sentence. I tuned turn eagerness, adjusted silence timeouts, toggled speculative turn detection — nothing worked.
Turned out it wasn't a configuration issue at all. It was ElevenLabs error 1002 — quota exceeded on the free tier. The agent was failing silently at the API layer, not at the conversation layer.
Lesson: when a voice agent behaves erratically, check API quota before touching model settings.
What's Next
Day 3:
- Launch the Abode AI landing page (built with Lovable, already prompted)
- Upgrade to ElevenLabs paid plan — unlock the quota ceiling
- First real calls to actual homeowners using live pipeline leads
- Warren B transitions from paper trading to evaluating real position sizing
The infrastructure is ready. Tomorrow we find out if homeowners pick up.
Building The Hive in public. All code is TypeScript, all inference is local (Ollama + LM Studio + Groq). Questions or thoughts — drop them in the comments.