When most developers think about integrating AI into their apps, the default move is to build a chatbot. But for my coffee shop app, BrewHub PHL, I didn't want users talking to an AI. I wanted the AI doing the heavy lifting in the background.
Customer retention is notoriously hard for local businesses. Figuring out who hasn't visited in a while, drafting a personalized message, and issuing a custom discount code usually takes hours of manual marketing work.
I decided to fully automate this using Gemini 2.5 Flash, Supabase, and a simple weekly cron job. Here is how I built a headless AI retention agent that automatically wins back lapsed customers while I sleep.
The Architecture
The pipeline runs every Monday at 10 AM and consists of four steps:
The Data Layer: A Supabase RPC finds eligible lapsed customers.
The Privacy Layer: The script strips all Personally Identifiable Information (PII) before it touches the LLM.
The Brains: The Gemini API generates hyper-personalized SMS messages and forces the output into strict JSON.
The Execution: The system generates physical POS vouchers and sends the SMS via Twilio.
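The post doesn't show the scheduler itself; assuming a plain server crontab (the working directory and script name here are placeholders, not from the original), the Monday 10 AM trigger could be as simple as:

```
# Run the retention agent every Monday at 10:00 server time
0 10 * * 1 cd /srv/brewhub && node retention-agent.js >> retention.log 2>&1
```

A Supabase Edge Function on a pg_cron schedule or a GitHub Actions cron would work equally well; nothing in the pipeline cares where the tick comes from.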
Step 1: Finding Eligible Customers
I didn't want to spam one-off visitors. To find the right targets, I wrote a Postgres RPC in Supabase called get_lapsed_customers_eligible_for_retention.
It filters the database for users who:
Have ordered at least 3 times (loyal customers).
Haven't ordered in the last 14 days.
Haven't received a marketing voucher in the last 90 days (the cooldown period).
```sql
-- The Supabase RPC handles the heavy data filtering instantly
SELECT id, full_name, phone, favorite_drink, days_since_last_visit
FROM get_lapsed_customers_eligible_for_retention(
  p_min_orders    := 3,
  p_lapsed_days   := 14,
  p_cooldown_days := 90,
  p_batch_limit   := 10
);
```
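The article only shows the call site, not the function body. A minimal sketch of what the underlying RPC might look like — the `customers`, `orders`, and `vouchers` table and column names are assumptions, since the real schema isn't shown:

```sql
-- Hypothetical sketch; table and column names are assumptions.
CREATE OR REPLACE FUNCTION get_lapsed_customers_eligible_for_retention(
  p_min_orders int, p_lapsed_days int, p_cooldown_days int, p_batch_limit int
) RETURNS TABLE (id uuid, full_name text, phone text,
                 favorite_drink text, days_since_last_visit int)
LANGUAGE sql STABLE AS $$
  SELECT c.id, c.full_name, c.phone, c.favorite_drink,
         (CURRENT_DATE - MAX(o.created_at)::date) AS days_since_last_visit
  FROM customers c
  JOIN orders o ON o.customer_id = c.id
  -- only count vouchers issued inside the cooldown window
  LEFT JOIN vouchers v ON v.customer_id = c.id
    AND v.created_at > now() - make_interval(days => p_cooldown_days)
  GROUP BY c.id
  HAVING COUNT(o.id) >= p_min_orders                                  -- loyal
     AND MAX(o.created_at) < now() - make_interval(days => p_lapsed_days) -- lapsed
     AND COUNT(v.id) = 0                                              -- not recently couponed
  LIMIT p_batch_limit;
$$;
```

Keeping the batch limit inside the function means the cron job can never accidentally blast the whole customer base in one run.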
Step 2: Privacy by Design
Sending raw customer data to an LLM is a terrible idea. Before the data leaves my server, the script maps the Supabase response to a strictly anonymous payload.
Names and phone numbers are dropped. Gemini only sees the customer_id, their favorite_drink, and days_since_last_visit.
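The mapping itself is only a few lines. A sketch of that anonymization step, assuming the field names returned by the RPC above (`toAnonymousPayload` is a hypothetical helper, not from the original code):

```javascript
// Strip PII before the rows leave the server: keep only what the
// LLM needs to personalize the message, nothing that identifies anyone.
function toAnonymousPayload(rows) {
  return rows.map(({ id, favorite_drink, days_since_last_visit }) => ({
    customer_id: id,
    favorite_drink,
    days_since_last_visit,
  }));
}

// Names and phone numbers never appear in the result:
const anonymousCustomerList = toAnonymousPayload([
  {
    id: "a1b2",
    full_name: "Jane Doe",
    phone: "+15551234567",
    favorite_drink: "Oat Latte",
    days_since_last_visit: 17,
  },
]);
```

The `customer_id` is the join key that lets the fulfillment step reattach the phone number after Gemini responds, so no PII ever round-trips through the API.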
Step 3: Prompting for Structured JSON with Gemini 2.5
This is where the magic happens. I don't just want Gemini to write a message; I need it to return an array of objects that my code can iterate over to send SMS messages.
Using the official @google/generative-ai SDK, I set responseMimeType: "application/json" in the generationConfig, which guarantees the model returns valid JSON instead of free-form prose that would break my script.
```javascript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

const model = genAI.getGenerativeModel({
  model: "gemini-2.5-flash",
  generationConfig: {
    responseMimeType: "application/json",
  },
});

const prompt = `
You are the BrewHub PHL Retention Agent. I will provide a list of anonymous customer profiles.
For each customer, write a short, friendly, and highly personalized SMS text message under 160 characters.
Acknowledge that we haven't seen them in a while, mention their favorite drink by name, and offer them a $5 voucher to come back.

Return ONLY a JSON array with this exact structure:
[
  { "customer_id": "uuid-here", "sms_message": "Hey! It's been a while..." }
]

Customer Data:
${JSON.stringify(anonymousCustomerList)}
`;

// generateContent resolves to a result object; the model output lives on .response
const result = await model.generateContent(prompt);
const aiDecisions = JSON.parse(result.response.text());
```
Because Gemini 2.5 Flash is incredibly fast, this entire batch generation takes just a few seconds.
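Even with JSON mode, it's worth guarding against a missing field or an over-length message before anything gets texted. A defensive filter like this one (my own addition, not part of the original pipeline) keeps a single malformed element from reaching Twilio:

```javascript
// Keep only well-formed decisions: both fields present and the
// SMS within the 160-character limit the prompt asked for.
function validDecisions(aiDecisions) {
  return aiDecisions.filter(
    (d) =>
      typeof d.customer_id === "string" &&
      typeof d.sms_message === "string" &&
      d.sms_message.length > 0 &&
      d.sms_message.length <= 160
  );
}

const checked = validDecisions([
  { customer_id: "uuid-1", sms_message: "Hey! It's been a while..." },
  { customer_id: "uuid-2", sms_message: "x".repeat(200) }, // too long: dropped
]);
```

Dropping a bad element and logging it is cheaper than retrying the whole batch, since each message is independent.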
Step 4: Fulfillment and SMS
Once Gemini hands back the clean JSON array of messages, my cron job loops through the results.
For each customer_id, it:
Generates a unique, secure voucher code (e.g., 5OFF-A3F9C1).
Inserts that active voucher into my Supabase vouchers table.
Appends the code to Gemini's personalized message: "Show this code to one of our baristas: 5OFF-A3F9C1".
Dispatches the final message via Twilio.
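The voucher format shown above (5OFF-A3F9C1) looks like a fixed prefix plus six hex characters. The article doesn't show the generator, but a sketch using Node's built-in crypto module might look like this:

```javascript
import crypto from "node:crypto";

// Produce a code like "5OFF-A3F9C1": three random bytes yield six
// uppercase hex characters. That's modest entropy, which is fine for
// one-time codes that are also verified against the vouchers table
// at redemption time.
function generateVoucherCode(prefix = "5OFF") {
  return `${prefix}-${crypto.randomBytes(3).toString("hex").toUpperCase()}`;
}
```

Because the code is also inserted into the Supabase `vouchers` table, the POS can mark it redeemed on first use, so a guessed or reused code buys nothing.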
The Result
By moving AI out of the chat window and into a scheduled backend worker, the system feels like magic. Customers get a highly personalized text referencing their actual favorite order, complete with a working POS discount code, and I don't have to lift a finger.
The Gemini API's strict JSON output makes it incredibly reliable for server-to-server data pipelines, proving that the real power of modern LLMs is as a background reasoning engine.
If you want to see the Supabase + Next.js architecture in action (or just want to order some coffee in Philadelphia), you can check out the live web app at brewhubphl.com.