DEV Community

Aloysius Chan

Posted on • Originally published at insightginie.com

Understanding the OpenClaw Tour Booking Skill: Automating Property Showing Calls with AI


In the fast‑moving world of real estate, agents spend countless hours on the
phone coordinating property showings. The OpenClaw tour‑booking skill is a
purpose‑built sub‑agent that takes over this repetitive task, allowing
realtors to focus on closing deals while an AI‑driven voice assistant handles
outbound calls to listing offices. This post walks through what the skill
does, how it works under the hood, and why it can be a game‑changer for
brokerages looking to scale their showing workflow.

What the Skill Is Designed For

The tour‑booking skill lives in the skills/skills/danielfoch/tour-booking
folder of the OpenClaw repository. Its primary responsibility is to execute
the call‑execution layer for property showing bookings. When a parent workflow
— such as a lead‑nurturing pipeline or a CRM‑triggered automation — needs to
schedule a showing, it hands off the job to this skill. The skill then:

  • Builds a consistent call prompt from the listing and client data.
  • Sends an outbound call request to ElevenLabs (or runs a dry‑run for testing).
  • Normalizes the call outcome into structured status fields that downstream steps can consume.

By encapsulating these steps, the skill provides a clean, reusable interface:
give it a job ID, client name, listing address, office phone, preferred time
window, and timezone, and it returns a booking‑outcome JSON that tells you
whether the showing is confirmed, needs a callback, or failed.

Core Inputs

Each call job expects a small set of fields:

  • job_id – A unique identifier for tracking this particular showing request.
  • client_name – The name of the prospective buyer or tenant whose showing is being arranged.
  • listing.address – The full street address of the property.
  • listing.office_phone – The phone number of the listing office that will be called.
  • preferred_windows_text – A human‑readable description of the desired time window, e.g., “tomorrow between 2 pm and 5 pm”.
  • timezone – The IANA timezone string (e.g., America/New_York) used to convert the window into local times for the script.

These inputs are typically supplied as a JSON file (job.json) that the skill’s
preparation script consumes.
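A minimal job.json matching the fields above might look like the following; all values (IDs, names, numbers) are illustrative, not taken from the repository:

```json
{
  "job_id": "show-2024-0042",
  "client_name": "Jane Doe",
  "listing": {
    "address": "123 Main St, Springfield, IL 62701",
    "office_phone": "(217) 555-0123"
  },
  "preferred_windows_text": "tomorrow between 2 pm and 5 pm",
  "timezone": "America/Chicago"
}
```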

Step‑by‑Step Execution Flow

1. Build the Payload

The first script, prepare_call_payload.py, reads the job JSON and creates a
structured payload that the calling script will consume. It does the
following:

  • Normalizes the address and phone number.
  • Converts the preferred window into concrete start‑end timestamps in the listing office’s local time.
  • Formats a call script that instructs the ElevenLabs agent to:
    • State clearly that it is an AI assistant calling on behalf of the realtor.
    • Ask for available slots inside the requested window first; request alternatives if unavailable.
    • Confirm the final slot with exact date and local time before ending the call.
    • If the office cannot confirm, mark the result as pending_callback and capture any callback requirements (e.g., preferred callback time, contact person).
  • Outputs the payload as call-payload.json.

This step guarantees that every call follows the same conversational
guardrails, reducing variability and improving compliance with real‑estate
calling regulations.
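The normalization step can be sketched roughly as follows. This is an assumption-laden illustration, not the repository's actual code: the function names (`normalize_phone`, `build_payload`) and the script wording are invented for the example, and only US-style numbers are handled.

```python
import json
import re

def normalize_phone(raw: str, default_country: str = "+1") -> str:
    """Strip punctuation and coerce a US-style number into E.164 form."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 10:                          # bare 10-digit US number
        return default_country + digits
    if len(digits) == 11 and digits.startswith("1"):
        return "+" + digits
    raise ValueError(f"unrecognized phone number: {raw!r}")

def build_payload(job: dict) -> dict:
    """Assemble the structure a placement script could consume."""
    return {
        "job_id": job["job_id"],
        "to_number": normalize_phone(job["listing"]["office_phone"]),
        # One fixed script template keeps every call inside the same guardrails.
        "script": (
            f"Hi, this is an AI assistant calling on behalf of a realtor. "
            f"I'd like to book a showing of {job['listing']['address']} "
            f"for {job['client_name']}, ideally {job['preferred_windows_text']}."
        ),
    }

job = {
    "job_id": "show-0042",
    "client_name": "Jane Doe",
    "listing": {"address": "123 Main St", "office_phone": "(217) 555-0123"},
    "preferred_windows_text": "tomorrow between 2 pm and 5 pm",
}
print(json.dumps(build_payload(job), indent=2))
```

Because the prompt is templated rather than free-form, every call opens with the same disclosure and the same request structure.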

2. Place the Outbound Call

With the payload ready, the skill invokes place_outbound_call.py. The script
can operate in two modes:

  • Dry‑run (default safe mode) – No actual call is placed. Instead, the script simulates the ElevenLabs request and returns a fabricated result that mirrors what a live call would produce. This is invaluable for unit testing, CI pipelines, and demonstration environments.
  • Live mode – The script contacts the ElevenLabs API, sending the payload as the conversation script. ElevenLabs’ voice agent then places the outbound call to the listing office, interacts with the human recipient using the guardrails defined earlier, and returns a raw call‑result JSON containing:
    • Transcript or summary of the conversation.
    • Detected intent (e.g., slot offered, slot declined, request for callback).
    • Any extracted date/time proposals.
    • Call metadata such as duration, timestamp, and success/failure flags.

The output of this step is stored in call-result.json.
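The two modes can be captured in a single entry point. The sketch below fabricates a dry-run result only; the result field names (`intent`, `proposed_times`, and so on) are assumptions for illustration, and the live branch is deliberately left unimplemented because the post does not document the ElevenLabs endpoint details:

```python
import json
import time

def place_call(payload: dict, dry_run: bool = True) -> dict:
    """Return a call-result dict; in dry-run mode nothing is dialed."""
    if dry_run:
        # Fabricate a result shaped like what a live call would produce,
        # so tests and CI pipelines can exercise the full pipeline.
        return {
            "job_id": payload["job_id"],
            "intent": "slot_offered",
            "proposed_times": ["2024-06-01T14:30:00-05:00"],
            "transcript": "(dry-run) simulated conversation",
            "duration_sec": 0,
            "success": True,
            "timestamp": time.time(),
        }
    # Live mode would POST the payload to the ElevenLabs outbound-call API;
    # the endpoint and credentials come from environment variables and are
    # omitted from this sketch.
    raise NotImplementedError("live calling is not wired up in this sketch")

result = place_call({"job_id": "show-0042"})
print(json.dumps(result, indent=2))
```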

3. Parse the Outcome

The final script, parse_call_result.py, takes the raw ElevenLabs result and
converts it into a normalized booking‑outcome JSON (booking-outcome.json).
Its responsibilities include:

  • Mapping the raw intent to one of the standardized status codes: confirmed, pending_callback, declined, or failed.
  • Extracting the confirmed showing date and time (if any) and converting them to UTC for storage in the CRM.
  • Capturing callback details (preferred time, contact name, phone number) when the status is pending_callback.
  • Preserving the original transcript for audit or quality‑assurance purposes.

Because the output follows a strict schema, downstream workflows (e.g.,
updating a showing calendar, triggering notifications, or escalating to a
human agent) can consume it without needing to parse free‑form text.
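The normalization logic reduces to a lookup table plus a timezone conversion. In this sketch the raw intent names are assumptions; only the four status codes (`confirmed`, `pending_callback`, `declined`, `failed`) come from the post:

```python
from datetime import datetime, timezone

# Map raw voice-agent intents (names assumed for illustration) onto the
# skill's standardized status codes.
INTENT_TO_STATUS = {
    "slot_offered": "confirmed",
    "slot_declined": "declined",
    "callback_requested": "pending_callback",
}

def parse_result(raw: dict) -> dict:
    """Normalize a raw call result into a booking-outcome dict."""
    status = INTENT_TO_STATUS.get(raw.get("intent"), "failed")
    outcome = {
        "job_id": raw["job_id"],
        "status": status,
        "transcript": raw.get("transcript", ""),  # preserved for audit/QA
    }
    if status == "confirmed" and raw.get("proposed_times"):
        # Convert the office-local time to UTC for CRM storage.
        local = datetime.fromisoformat(raw["proposed_times"][0])
        outcome["showing_start_utc"] = local.astimezone(timezone.utc).isoformat()
    if status == "pending_callback":
        outcome["callback_details"] = raw.get("callback_details", {})
    return outcome
```

Unknown intents fall through to `failed`, so a schema change upstream degrades gracefully instead of crashing the pipeline.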

Why This Matters for Real Estate Teams

Real‑estate brokerages often face bottlenecks when agents manually call
listing offices to secure showing slots. The process is time‑consuming, prone
to miscommunication, and difficult to scale as lead volume grows. By
delegating the call execution to the OpenClaw tour‑booking skill, teams gain
several advantages:

  • Consistency : Every call follows the same script, ensuring that the AI assistant always identifies itself, respects the requested window, and seeks confirmation before hanging up.
  • Speed : The AI can place dozens of calls in parallel, far outpacing a human agent’s dialing rate.
  • 24/7 Availability : The skill can be triggered outside regular business hours, allowing early‑morning or evening showing requests to be processed without human intervention.
  • Data‑Driven Insights : Structured outcomes enable analytics on showing conversion rates, average response times, and common objections raised by listing offices.
  • Cost Efficiency : Reducing the manual call load frees agents to focus on higher‑value activities such as client consultations, negotiations, and closing.

Integration with CRM and Workflow Engines

The skill is deliberately designed to be a plug‑in component. The
booking-outcome.json file contains fields that map directly to common CRM
entities:

  • showing_id – can be linked to a showing record or appointment.
  • status – triggers workflow transitions (e.g., move lead to “Showing Scheduled” or schedule a follow‑up task).
  • showing_start_utc / showing_end_utc – populate calendar slots.
  • callback_requested and callback_details – generate a follow‑up task or reminder.

Many teams use a simple wrapper script that watches for the outcome file and
then issues a REST call to their CRM (Salesforce, HubSpot, Zoho, or a custom
system) to create or update records. Because the skill outputs plain JSON,
integration can be achieved with virtually any middleware, including Zapier,
Integromat, or Apache NiFi.
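Such a wrapper can be as small as a polling loop plus one HTTP POST. The sketch below uses only the standard library; the webhook URL is hypothetical, and a production version would add authentication and error handling:

```python
import time
import urllib.request
from pathlib import Path

CRM_WEBHOOK = "https://crm.example.com/api/showings"  # hypothetical endpoint

def push_outcome(outcome_path: Path, webhook: str = CRM_WEBHOOK) -> None:
    """POST one booking-outcome file to a CRM webhook as JSON."""
    req = urllib.request.Request(
        webhook,
        data=outcome_path.read_bytes(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("CRM responded:", resp.status)

def watch(directory: Path, poll_sec: float = 5.0) -> None:
    """Poll a directory and push each new booking-outcome file exactly once."""
    seen: set[Path] = set()
    while True:
        for path in directory.glob("booking-outcome*.json"):
            if path not in seen:
                push_outcome(path)
                seen.add(path)
        time.sleep(poll_sec)
```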

Monitoring, Logging, and Observability

Production deployments benefit from visibility into each call attempt. The
skill logs key events to stdout, which can be captured by a logging agent
(Fluentd, Logstash, or CloudWatch). Typical log entries include:

  • Job ID and timestamp when preparation starts.
  • Generated payload size (useful for detecting malformed inputs).
  • Dry‑run vs. live flag.
  • ElevenLabs API response status code and latency.
  • Parsed outcome status and any extracted showing time.
  • Error messages if any step fails.
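Emitting one JSON object per event makes these logs trivially parseable by Fluentd, Logstash, or CloudWatch. A minimal sketch (the event names and fields here are illustrative, not the skill's actual log schema):

```python
import json
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("tour-booking")

def log_event(event: str, **fields) -> str:
    """Emit one JSON log line per event and return it for inspection."""
    line = json.dumps({"event": event, **fields})
    log.info(line)
    return line

log_event("prepare_start", job_id="show-0042")
log_event("call_placed", job_id="show-0042", dry_run=True, latency_ms=412)
log_event("outcome", job_id="show-0042", status="confirmed")
```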

By forwarding these logs to a centralized system, operators can create
dashboards that show:

  • Number of calls attempted per hour.
  • Live vs. dry‑run ratio.
  • Success rate (confirmed + pending_callback) vs. failures.
  • Average call duration.
  • Distribution of callback requests.

Alerts can be configured to notify the team when the failure rate exceeds a
threshold, prompting a quick investigation of ElevenLabs quotas, network
issues, or script regressions.

Error Handling and Retry Strategies

Real‑world telephony is unpredictable. The skill incorporates several
defensive measures:

  • Validation stage – The preparation script checks that all required fields are present and that the phone number matches a valid E.164 format. Invalid jobs are rejected early with a clear error code.
  • Transient failures – If the ElevenLabs API returns a 5xx or a timeout, the placement script can be retried with exponential backoff (configurable via environment variables).
  • Call‑level retries – When a live call fails to connect (e.g., busy line, no answer), the script logs the outcome as failed but includes a retry flag; a supervising workflow can decide to re‑queue the job after a delay.
  • Dead‑letter queue – Jobs that repeatedly fail after a configurable number of attempts are moved to a separate queue for manual review, preventing endless loops.

These mechanisms ensure that temporary glitches do not corrupt the pipeline
and that problematic listings receive human attention when needed.
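The transient-failure policy above amounts to exponential backoff with jitter. A generic sketch (the exception class and parameter names are placeholders, not the skill's API):

```python
import random
import time

class TransientError(Exception):
    """Raised for 5xx responses or timeouts from the calling API."""

def with_backoff(fn, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry fn on transient errors, doubling the delay each attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts:
                raise  # caller moves the job to the dead-letter queue
            # Jitter spreads retries out so workers don't stampede the API.
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5)
            time.sleep(delay)
```

On final failure the exception propagates, which is the hook a supervising workflow uses to re-queue the job or park it in the dead-letter queue.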

Scaling Parallel Calls

Because each call job is independent, the skill scales horizontally. A common
pattern is to:

  1. Batch a list of showing requests into a queue (RabbitMQ, AWS SQS, Google Pub/Sub).
  2. Spawn multiple worker processes, each pulling a job, running the three‑step sequence, and publishing the outcome to a downstream topic.
  3. Monitor queue depth and worker utilization via standard metrics (e.g., Prometheus).
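In a single process, the same fan-out pattern can be sketched with a thread pool draining an in-memory queue; a production deployment would swap this for RabbitMQ/SQS consumers, and `run_job` here is a placeholder for the three-script sequence:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def run_job(job: dict) -> dict:
    """Placeholder for the prepare -> place -> parse sequence."""
    return {"job_id": job["job_id"], "status": "confirmed"}

def drain(jobs: list[dict], workers: int = 4) -> list[dict]:
    """Fan jobs out across a fixed pool of worker threads."""
    q: queue.Queue = queue.Queue()
    for job in jobs:
        q.put(job)
    results: list[dict] = []
    def worker():
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return  # queue drained; worker exits
            results.append(run_job(job))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(workers):
            pool.submit(worker)
    return results
```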

In practice, a modest deployment with four concurrent workers can handle
120–150 calls per hour, depending on ElevenLabs rate limits and average call
length. Adjusting the worker count or requesting higher throughput from
ElevenLabs allows the system to keep pace with peak lead influx periods such
as new listing launches or open‑house weekends.

Security and Data Privacy

The skill treats all personal data as sensitive. It employs the following
safeguards:

  • No personally identifiable information (PII) is written to disk beyond the transient JSON files; these files are stored in a temporary directory with restricted permissions.
  • API keys for ElevenLabs are expected to be injected via environment variables or a secret manager (AWS Secrets Manager, HashiCorp Vault), never hard‑coded.
  • All network calls to ElevenLabs use HTTPS with certificate validation.
  • The skill does not retain audio recordings; only textual transcripts and derived metadata are kept, minimizing storage liability.

When deploying in a regulated environment, administrators can additionally
encrypt the temporary files at rest and ensure the worker nodes run inside an
isolated VPC or private subnet.

Compliance with TCPA and Real‑Estate Regulations

Outbound voice calls in the United States fall under the Telephone Consumer
Protection Act (TCPA) and various state‑specific telemarketing rules. The
tour‑booking skill helps maintain compliance by:

  • Explicitly stating that the caller is an AI assistant working on behalf of a licensed realtor, satisfying identification requirements.
  • Respecting the requested time window and avoiding calls outside of typical business hours unless the client has explicitly authorized after‑hours contact.
  • Providing a clear opt‑out mechanism: if the recipient indicates they do not wish to receive further calls, the script marks the outcome as declined and logs the refusal for future suppression.
  • Keeping detailed logs that can be used to demonstrate compliance during an audit.

Teams should still consult their legal counsel to ensure that their specific
use case (e.g., calling numbers sourced from third‑party lists) complies with
all applicable regulations.

Future Roadmap and Community Contributions

The OpenClaw project thrives on community feedback. Planned enhancements for
the tour‑booking skill include:

  • Multi‑language support – Adding language detection and switching the call script to Spanish or French based on client preferences.
  • Advanced sentiment analysis – Using ElevenLabs’ upcoming sentiment flags to detect frustration or enthusiasm and adapt the conversation in real time.
  • Integration with calendar APIs – Directly creating Google Calendar or Outlook events when a showing is confirmed, eliminating a manual sync step.
  • Feedback loop – Allowing the recipient to rate the call experience via a short IVR survey, feeding the data back into model improvements.
  • Custom voice models – Enabling brokerages to upload their own branded voice talent for a more personalized caller experience.

Developers interested in contributing can fork the repository, propose changes
via pull requests, and discuss ideas in the project’s Discussions tab. The
maintainers welcome improvements to the scripts, additional unit tests, and
documentation enhancements.

Getting Started

To experiment with the tour‑booking skill in your own environment, follow
these steps:

  1. Clone the OpenClaw skills repository:

     git clone https://github.com/openclaw/skills.git
    
  2. Navigate to the skill directory:

     cd skills/skills/danielfoch/tour-booking
    
  3. Create a sample job JSON (e.g., job.json) with the required fields.

  4. Run the preparation script:

     python3 scripts/prepare_call_payload.py --job job.json --output /tmp/call-payload.json
    
  5. Execute a dry‑run to see the generated payload and simulated result:

     python3 scripts/place_outbound_call.py --payload /tmp/call-payload.json --output /tmp/call-result.json --dry-run
    
  6. Parse the outcome:

     python3 scripts/parse_call_result.py --input /tmp/call-result.json --output /tmp/booking-outcome.json
    
  7. Inspect booking-outcome.json to verify the status and any extracted showing details.

When you are confident with the dry‑run results, run the placement script with
the --live flag to place actual calls via ElevenLabs. Remember to configure
your ElevenLabs API key in the environment variables expected by the placement
script.

Conclusion

The OpenClaw tour‑booking skill exemplifies how purpose‑built AI agents can
streamline repetitive, communication‑heavy tasks in real estate. By
encapsulating payload generation, outbound calling via ElevenLabs, and outcome
normalization into a reusable sub‑agent, it empowers brokerages to automate
showing scheduling with confidence and consistency. Whether you are a solo
agent looking to reclaim hours each week or a large brokerage aiming to scale
operations without proportionally increasing headcount, integrating this skill
into your workflow can deliver measurable efficiency gains, better data
quality, and an improved experience for both clients and listing offices.

If you found this overview helpful, consider diving into the repository,
experimenting with the scripts, and sharing your feedback with the OpenClaw
community. The more we refine these building blocks, the closer we get to a
fully automated, intelligent real‑estate ecosystem.

The skill can be found at:
booking/SKILL.md
