Caspar Bannink

Posted on • Originally published at homescout.io

Building an AI email composer because copy-paste rental inquiries have a 12% response rate

The Dublin rental market has a response rate problem. The average renter sends a generic inquiry to 20+ properties and hears back from maybe 2 or 3. I measured this through user interviews and it tracks with what academic research on UK/Irish rental markets has found: response rates for cold rental inquiries sit somewhere between 12% and 20% depending on the area and season.

This is a signal quality problem wrapped in a UX problem. And it's exactly the kind of thing you can build against.

What the data actually looks like

When a listing goes live on Daft.ie (the dominant Irish rental portal), it typically receives 50-150 inquiries in the first 48 hours. There's no landlord-side inbox tooling beyond email forwarding. No status tags, no filtering, no templates for responses.

The landlord opens the inbox, skims a subset, replies to whoever seems credible and convenient, and moves on. The inquiry that lands in position #78 in their inbox could be from a perfect tenant. They'll never see it.

On the applicant side, the constraint is symmetric. Renters know supply is short and competition is high. The rational strategy is to apply wide, so they send the same message to 20 places. Landlords recognize this pattern and deprioritize anything that sounds like a blast email. The market equilibrium is low signal on both sides.

The technical problem: generic text is detectable

From a signal processing standpoint, a landlord reading 150 emails is doing a classification task. "Does this person seem like they actually want THIS property, or are they spamming everything?" Generic emails fail that classifier.

What passes the classifier:

  • Mentions something specific from the listing (the garden, the commute distance to a named workplace, the pet policy)
  • Answers the implicit screening questions (move-in date, lease length preference, employment type)
  • Has a coherent reason for wanting that specific location

Writing this for each of 20 properties manually is O(n) effort. Most people won't do it. That's the gap to build against.

What we built and how it works

HomeScout has an AI email composer that generates property-specific inquiry drafts. The flow:

  1. User finds a listing they want to apply to
  2. They click "Generate inquiry"
  3. The system reads the listing data (title, description, location, price, landlord-specified requirements if any)
  4. It generates a draft that references specifics from the listing and includes answers to the standard screening questions
  5. User reviews and edits before sending
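The flow above can be sketched as code. All names here (`Listing`, `compose_inquiry`) are illustrative assumptions, not HomeScout's actual implementation; the LLM call is injected as a callable so the orchestration itself stays testable:

```python
# Sketch of the composer flow: read listing data, attach the renter's
# saved answers, hand a structured context to the generator, return a
# draft for the user to review. Names are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Listing:
    title: str
    description: str
    location: str
    price: int
    requirements: str = ""  # landlord-specified, may be empty

def compose_inquiry(listing: Listing,
                    profile_answers: dict[str, str],
                    generate: Callable[[str], str]) -> str:
    """Build the generation request, then return a draft the user edits."""
    context = (
        f"Listing: {listing.title} — {listing.location}, "
        f"€{listing.price}/month\n"
        f"Description: {listing.description}\n"
        f"Requirements: {listing.requirements or 'none stated'}\n"
        "Applicant answers:\n"
        + "\n".join(f"- {k}: {v}" for k, v in profile_answers.items())
    )
    return generate(context)
```

In production `generate` would wrap the model call; in tests it can be a stub that echoes the context back.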

A few things we got wrong in early versions:

Too formal. First drafts sounded like legal correspondence. Real inquiry emails are conversational. People don't write "I would like to express my interest in the aforementioned property." They write "I'd love to come view this, I work nearby and the commute would be ideal."

Hallucinating specifics. Early versions made up details that weren't in the listing. "I noticed the property has a south-facing garden" when the listing said nothing about the garden's orientation. That's a problem when the landlord reads it. We added a strict constraint: only reference what's explicitly in the listing data.
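One cheap way to enforce that constraint after generation is a vocabulary check: flag any amenity term the draft mentions that never appears in the listing. This is a crude sketch with a made-up term list, not our production validator:

```python
# Illustrative guard against fabricated specifics: flag amenity terms
# in the draft that don't appear anywhere in the listing text.
# AMENITY_TERMS is a made-up sample vocabulary, not an exhaustive list.

AMENITY_TERMS = {"garden", "balcony", "parking", "south-facing",
                 "en-suite", "dishwasher", "pet-friendly"}

def flag_fabricated_terms(draft: str, listing_text: str) -> set[str]:
    """Return amenity terms the draft mentions but the listing never does."""
    draft_l, listing_l = draft.lower(), listing_text.lower()
    return {t for t in AMENITY_TERMS if t in draft_l and t not in listing_l}
```

Anything flagged can trigger a regeneration or simply be stripped before the user sees the draft.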

Missing the implicit screening questions. The questions landlords actually care about (when can you move, how long do you want, are you employed) aren't always in the listing. We built a user profile layer so those answers get included from the renter's saved preferences rather than being generated per-inquiry.

The model choice question

This is one of those tasks where the quality delta between models matters more than the cost delta. The difference between a mediocre and a good inquiry email is meaningful, and the output is user-facing in a context where the stakes are real (someone might get or not get a viewing based on this).

We use GPT-4o for this feature. We tested smaller models. The failure modes were too common: hallucinated specifics, wrong tone, missing context from the listing. 4o is measurably better on this specific task. The cost is acceptable given the feature's value.

We also considered fine-tuning on successful inquiry emails, but our dataset isn't large enough to make that worthwhile yet. It's on the roadmap once we have more throughput.

Prompt engineering notes

The prompt structure that works best for this:

  • System prompt establishes the persona (renter, not an AI assistant) and the tone (conversational, specific, not corporate)
  • Listing data is passed as structured context, not raw HTML scrape
  • User profile answers are included as a short bullet list for the model to draw from
  • Explicit negative instructions: don't fabricate specifics, don't use formal register, don't mention that this is AI-generated
  • Output format: plain text, no subject line, 3-4 short paragraphs

The negative instructions matter more than the positive ones. The default model behavior drifts toward formality and generic language. You have to actively constrain it away from those patterns.
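The structure above can be sketched as prompt-assembly code. The wording here is a stand-in, not our production prompt, and the message shape assumes a standard chat-completions-style API:

```python
# Illustrative assembly of the prompt structure described above:
# persona + negative constraints in the system prompt, structured
# listing data and profile answers in the user turn. The exact
# wording is a stand-in, not HomeScout's production prompt.

SYSTEM_PROMPT = (
    "You are writing as the renter themselves, in a conversational, "
    "specific voice. Never fabricate details that are not in the "
    "listing data, never use a formal register, and never mention "
    "that this message is AI-generated. Output plain text, no subject "
    "line, 3-4 short paragraphs."
)

def build_messages(listing_context: str,
                   profile_bullets: list[str]) -> list[dict]:
    """Assemble the chat messages for the composer call."""
    user = (
        f"Listing data:\n{listing_context}\n\n"
        "About me:\n" + "\n".join(f"- {b}" for b in profile_bullets) +
        "\n\nWrite the inquiry email."
    )
    return [{"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user}]
```

Keeping the negative instructions in the system prompt rather than the user turn makes them harder for listing content to override.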

What comes next

The honest answer is that the email composer is a partial fix. The underlying problem is that landlord inboxes have no structure, so even a good inquiry can get lost. The complete solution involves building tools on the landlord side too: structured applicant pipelines, automated first responses, a viewing scheduler. That's a different product scope.

For now, improving inquiry quality is the lever we can pull on the renter side. And it does move the needle: users who use the composer report better outcomes than those who don't.

I wrote a longer breakdown of the full response rate problem and its market context here: https://homescout.io/guide/why-landlords-never-reply-dublin-rental


Caspar Bannink. Founder of HomeScout.io. Building AI-powered rental search for Dublin.
