DEV Community

Ken Yeung


Build a WhatsApp AI Agent in 10 Minutes with wati-cli


No dashboard. No drag-and-drop. Just your terminal, an LLM, and a few commands.


We just shipped @wati-io/wati-cli — a command-line tool that lets you build WhatsApp AI agents entirely from your terminal. Every command returns structured JSON. Every exit code is meaningful. It's built for scripts, automation pipelines, and AI agents — not humans clicking buttons.

Here's how to go from zero to a working AI support agent in 10 minutes.


What we're building

A WhatsApp AI agent that:

  • Listens for incoming customer messages via webhooks
  • Pulls the customer's profile and recent conversation history
  • Uses an LLM to generate a contextual reply
  • Sends the reply back on WhatsApp
  • Assigns complex issues to a human

No dashboard involved. No flow builder. Just code.


Prerequisites

  • Node.js 18+
  • A Wati account (wati.io)
  • An Anthropic or OpenAI API key (or any LLM provider)
  • A public URL for webhooks (use ngrok for local dev)
Install and configure the CLI:

npm install -g @wati-io/wati-cli
wati configure init

Follow the prompts to enter your Wati API base URL and auth token. Done in 30 seconds.


Step 1: Subscribe to incoming messages

wati webhooks subscribe \
  --url https://your-server.com/webhook \
  --events message,newContactMessageReceived
{
  "id": "wh_abc123",
  "url": "https://your-server.com/webhook",
  "events": ["message", "newContactMessageReceived"],
  "status": "Enabled"
}

Now every time a customer sends a WhatsApp message, Wati pushes it to your endpoint. You can subscribe to other events too — sentMessageDELIVERED, sentMessageREAD, templateMessageSent, and more.

Check your active webhooks anytime:

wati webhooks list --pretty
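Your endpoint's only job before the agent takes over is pulling the sender and text out of each event. Here's a minimal sketch; the payload field names (`eventType`, `waId`, `text`) are assumptions, so verify them against the payloads your endpoint actually receives.

```python
# NOTE: the payload field names below (eventType, waId, text) are
# assumptions about the webhook body -- check them against real events.

def parse_incoming(payload):
    """Return sender, text, and contact status for message events, else None."""
    if payload.get("eventType") not in ("message", "newContactMessageReceived"):
        return None  # ignore delivery/read events on this endpoint
    return {
        "phone": payload.get("waId"),
        "text": payload.get("text"),
        "new_contact": payload.get("eventType") == "newContactMessageReceived",
    }
```

Wire this into whatever HTTP framework serves your webhook URL and hand the result to the steps below.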

Step 2: Get customer context

When a message arrives at your webhook, your agent needs context — who is this person, and what's the conversation history?

# Who is this?
wati contacts get +1XXXXXXXXXX
{
  "name": "John Doe",
  "phone": "+1XXXXXXXXXX",
  "tags": ["premium", "ecommerce"],
  "customParams": [
    { "name": "plan", "value": "enterprise" },
    { "name": "ltv", "value": "2400" }
  ]
}
# What have we been talking about?
wati conversations messages +1XXXXXXXXXX --page-size 5
[
  { "text": "Where's my order?", "type": "received", "timestamp": "2026-03-06T14:20:00Z" },
  { "text": "Checking now — one moment!", "type": "sent", "timestamp": "2026-03-06T14:21:00Z" }
]

Two commands. Full context. Your LLM now knows it's talking to a premium customer asking about an order.


Step 3: Generate a reply with any LLM

The CLI is LLM-agnostic — use whatever model fits your needs. We'll use Anthropic's Claude Opus 4.6 here (currently the top-scoring model for agentic tasks), but you can swap in OpenAI, Gemini, or any provider.

import anthropic
import json
import subprocess

# `phone` and `incoming_message` come from the webhook payload (Step 1).

def wati(args):
    # Run the CLI and parse its JSON stdout; check=True raises on a non-zero exit code.
    result = subprocess.run(["wati"] + args, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# Get contact + history via CLI
contact = wati(["contacts", "get", phone])
messages = wati(["conversations", "messages", phone, "--page-size", "5"])

# Build prompt
system_prompt = f"""You are a support agent for an ecommerce store.
Customer: {contact['name']} (plan: {contact['customParams'][0]['value']}, LTV: ${contact['customParams'][1]['value']})
Recent conversation:
{json.dumps(messages, indent=2)}

New message: "{incoming_message}"

Reply helpfully and concisely. If the issue is complex or the customer is upset, respond with ESCALATE."""

# Generate reply with Claude Opus 4.6
client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": system_prompt}]
)
reply = response.content[0].text

Using OpenAI instead?

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": system_prompt}]
)
reply = response.choices[0].message.content

Step 4: Send the reply (or escalate)

if "ESCALATE" in reply:
    # Route to human
    subprocess.run(["wati", "contacts", "assign-teams",
        "--target", phone, "--teams", "Escalations"])
    subprocess.run(["wati", "conversations", "assign-operator",
        "--target", phone, "--email", "senior@company.com"])
    subprocess.run(["wati", "conversations", "send-text",
        "--target", phone,
        "--text", "I'm connecting you with a specialist who can help further."])
else:
    # Auto-reply
    subprocess.run(["wati", "conversations", "send-text",
        "--target", phone, "--text", reply])
    subprocess.run(["wati", "conversations", "update-status",
        "--target", phone, "--status", "solved"])

Customer gets a contextual, personalized reply on WhatsApp. Complex issues go to humans. No one opened a dashboard.


Step 5: Track delivery and engagement

Want to know when your message was delivered and read? Subscribe to delivery events:

wati webhooks subscribe \
  --url https://your-server.com/tracking \
  --events sentMessageDELIVERED,sentMessageREAD,sentMessageREPLIED

Now your agent knows if the customer saw the message, and can follow up if they didn't.
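One way to act on those events: log each outbound message, flip a flag when the READ receipt arrives, and periodically sweep for anything still unread. A sketch, with the event name matching the subscription above; the `waId` payload field is an assumption.

```python
import time

sent_log = {}  # phone -> {"sent_at": epoch seconds, "read": bool}

def record_sent(phone, now=None):
    """Call this right after `wati conversations send-text` succeeds."""
    sent_log[phone] = {"sent_at": time.time() if now is None else now, "read": False}

def handle_tracking_event(payload):
    """Feed events from the /tracking webhook here. waId is an assumed field."""
    phone = payload.get("waId")
    if payload.get("eventType") == "sentMessageREAD" and phone in sent_log:
        sent_log[phone]["read"] = True

def needs_follow_up(max_age_seconds, now=None):
    """Phones whose last message is still unread after max_age_seconds."""
    now = time.time() if now is None else now
    return [phone for phone, entry in sent_log.items()
            if not entry["read"] and now - entry["sent_at"] > max_age_seconds]
```

A cron job (or your agent loop) can call `needs_follow_up(3600)` hourly and send a nudge to anyone on the list.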


The full picture

Customer sends WhatsApp message
       │
       ▼
  Wati webhook (message event) → your server
       │
       ▼
  wati contacts get (who is this?)
  wati conversations messages (what's the history?)
       │
       ▼
  LLM generates reply with full context
       │
       ├── Simple query → wati send-text (auto-reply)
       │                  wati update-status (mark solved)
       │
       └── Complex issue → wati assign-teams (escalate)
                           wati assign-operator (route to human)
       │
       ▼
  wati webhooks (track delivery + read receipts)

Six CLI commands. One LLM call. A fully functional AI support agent with human escalation and delivery tracking.
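The whole diagram condenses into one handler. This sketch reuses the subprocess helper from Step 3; `generate_reply` stands in for whichever LLM call you chose, and `cli` is injectable so you can exercise the logic without the CLI installed.

```python
import json
import subprocess

def wati(args):
    """Run the wati CLI and parse its JSON stdout (same helper as Step 3)."""
    result = subprocess.run(["wati"] + args, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def handle_message(phone, incoming, generate_reply, cli=None):
    """Handle one incoming message end to end.

    generate_reply(contact, history, text) is any LLM call returning a string;
    cli defaults to the real CLI helper but can be stubbed in tests.
    """
    cli = cli or wati
    contact = cli(["contacts", "get", phone])
    history = cli(["conversations", "messages", phone, "--page-size", "5"])
    reply = generate_reply(contact, history, incoming)
    if "ESCALATE" in reply:
        cli(["contacts", "assign-teams", "--target", phone, "--teams", "Escalations"])
        cli(["conversations", "assign-operator", "--target", phone,
             "--email", "senior@company.com"])
        cli(["conversations", "send-text", "--target", phone,
             "--text", "I'm connecting you with a specialist who can help further."])
        return "escalated"
    cli(["conversations", "send-text", "--target", phone, "--text", reply])
    cli(["conversations", "update-status", "--target", phone, "--status", "solved"])
    return "solved"
```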


More things you can build

AI Lead Qualifier

wati webhooks subscribe --url https://... --events newContactMessageReceived
# A new, unknown contact messages you →
wati contacts add --number +55... --name "..."
# The LLM qualifies the lead from the message content →
wati contacts assign-teams --target +55... --teams "Hot Leads"
wati conversations send-text --target +55... --text "Thanks! Our sales team will be in touch."

Smart Campaign Manager

# "Send a promo to recent customers"
wati contacts list --page-size 100
wati templates list
wati templates send \
  --template-name winter_promo \
  --broadcast-name "auto-$(date +%s)" \
  --recipients '[...]'
# Track results
wati campaigns overview --date-from 2026-02-01T00:00:00Z --date-to 2026-03-01T00:00:00Z

Interactive Conversations

# Send buttons for quick replies
wati conversations send-interactive \
  --target +55... \
  --type buttons \
  --data '{"body":"How can I help?","buttons":[{"text":"Track Order"},{"text":"Returns"},{"text":"Talk to Human"}]}'
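A button tap typically comes back through the same message webhook as plain text, so routing can be a simple lookup before falling through to the LLM. The button texts below match the interactive message above; the handler names are placeholders for your own functions.

```python
# Button texts match the interactive message; handler names are placeholders.
ROUTES = {
    "Track Order": "order_status",
    "Returns": "returns_flow",
    "Talk to Human": "escalate",
}

def route_reply(text):
    """Map a button tap to a handler; anything else goes to the LLM."""
    return ROUTES.get(text, "llm_reply")
```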

Available webhook events

The CLI supports the full range of Wati webhook events:

Event                      When it fires
message                    Any incoming message
newContactMessageReceived  Message from a new contact
sessionMessageSent         Session message sent
templateMessageSent        Template message sent
sentMessageDELIVERED       Message delivered to phone
sentMessageREAD            Message read by recipient
sentMessageREPLIED         Recipient replied
whatsAppPaymentCaptured    Payment received
ctaUrlClicked              CTA link clicked

Compose these events to build sophisticated agent behaviors — follow up on unread messages, track campaign engagement, trigger flows on payments.


Why a CLI?

Because the future of SaaS isn't dashboards — it's infrastructure that AI agents consume.

Traditional chatbot platforms give you a drag-and-drop flow builder. That made sense when humans were building bots. But now AI agents are building and running the workflows. They don't need a GUI. They need structured data, deterministic commands, and meaningful exit codes.

That's what wati-cli is:

wati contacts get +1...       → { "name": "John", "tags": ["premium"] }
wati conversations messages   → [{ "text": "Where's my order?", ... }]
wati conversations send-text  → { "result": true }
wati webhooks list            → [{ "url": "...", "events": [...] }]

Clean input. Clean output. Composable with anything — Python, Node.js, n8n, LangChain, CrewAI, OpenClaw, or whatever you build agents with.


Get started

npm install -g @wati-io/wati-cli
wati configure init
wati webhooks subscribe --url https://your-server.com/hook --events message

Three commands to go from nothing to receiving WhatsApp messages programmatically.

Full docs and source: npmjs.com/package/@wati-io/wati-cli

Build something cool? We'd love to see it.


@wati-io/wati-cli v0.2.0 — now with full webhook support.
