A friend of mine runs a beauty salon. Every evening she manually texts appointment reminders to her clients. She tracks inventory in Excel. And she pays 20% commission to a booking platform — for clients she spent years building relationships with herself.
So I built Pronto: an open-source POS, CRM, booking system, inventory tracker, and omnichannel notification engine for service businesses. Self-hosted or cloud. Zero commission. One command to install.
This post is about the technical decisions that were actually hard.
## The architecture in one paragraph
Next.js 14 API routes + Supabase (PostgreSQL + Auth) + Cloudflare R2 for file storage. Notifications via Resend/SMTP, Telegram Bot API, Meta WhatsApp Cloud API, and Viber Bot API. Docker Compose for self-hosting. PWA via next-pwa for offline POS. Multitenancy via Supabase Row Level Security.
Credential model: each self-hoster (or SaaS tenant) brings their own API keys — Telegram bot token, WhatsApp Phone Number ID + Access Token, Viber token. The platform owner pays nothing to the messenger providers.
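In practice that means a per-tenant config along these lines (variable names and values are illustrative, not Pronto's exact schema):

```shell
# .env — every tenant supplies their own messenger credentials
TELEGRAM_BOT_TOKEN=123456:ABC-your-bot-token
WHATSAPP_PHONE_NUMBER_ID=1031234567890
WHATSAPP_ACCESS_TOKEN=EAAG-your-meta-access-token
VIBER_AUTH_TOKEN=4abc-your-viber-token
```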
## Problem 1: The double-booking race condition
App-level booking conflict checks have a classic race condition. Two clients simultaneously pick the same slot → both pass the check → both get confirmed.
The fix: move the conflict check into PostgreSQL itself, as a `BEFORE INSERT OR UPDATE` trigger that runs inside the same transaction as the write.
```sql
CREATE OR REPLACE FUNCTION check_booking_conflict()
RETURNS TRIGGER AS $$
BEGIN
  -- Serialize concurrent writes per business: under READ COMMITTED, two
  -- simultaneous inserts can't see each other's uncommitted rows, so the
  -- EXISTS check alone would let both through. hashtext() maps the id to
  -- the integer key the advisory lock needs; it's released at commit.
  PERFORM pg_advisory_xact_lock(hashtext(NEW.business_id::text));

  IF EXISTS (
    SELECT 1 FROM bookings
    WHERE business_id = NEW.business_id
      AND employee_id IS NOT DISTINCT FROM NEW.employee_id
      AND status NOT IN ('cancelled', 'no_show')
      AND (NEW.start_time, NEW.end_time) OVERLAPS (start_time, end_time)
      AND id != NEW.id
  ) THEN
    RAISE EXCEPTION 'Booking conflict: slot already taken';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER booking_conflict_check
  BEFORE INSERT OR UPDATE ON bookings
  FOR EACH ROW EXECUTE FUNCTION check_booking_conflict();
```
`IS NOT DISTINCT FROM` handles the NULL case: when `employee_id` is NULL (group bookings), two NULLs compare as equal, so those rows are still checked against each other instead of silently passing. `OVERLAPS` treats each period as half-open (`start <= t < end`), so back-to-back appointments don't falsely conflict. The API returns HTTP 409 on conflict, and the UI refreshes the slot grid automatically.
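Catching that exception in the API route looks roughly like this (a sketch: the helper name is mine, not Pronto's actual code; `P0001` is the SQLSTATE PostgreSQL assigns to an unqualified `RAISE EXCEPTION`):

```typescript
// Map a database error to an HTTP status. An unqualified RAISE EXCEPTION
// surfaces with SQLSTATE 'P0001' (raise_exception), so we match on that
// plus the message prefix before returning 409 Conflict.
export function statusForDbError(err: { code?: string; message?: string }): number {
  const isConflict =
    err.code === 'P0001' && (err.message ?? '').includes('Booking conflict');
  return isConflict ? 409 : 500;
}

// Usage inside a Next.js route handler (sketch):
// const { error } = await supabase.from('bookings').insert(payload)
// if (error) return NextResponse.json({ error: error.message }, { status: statusForDbError(error) })
```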
## Problem 2: Docker broke auth callbacks
The app worked perfectly locally. Inside a Docker container, Supabase auth redirected back to `http://0.0.0.0:3000/auth/callback`, because `request.url` resolves to the container's internal bind address, not the real hostname.
The fix: read `NEXT_PUBLIC_SITE_URL` explicitly instead of deriving the origin from the request.
```typescript
// /auth/callback: derive the redirect base from config, not from request.url
const siteUrl = process.env.NEXT_PUBLIC_SITE_URL || 'http://localhost:3000'

const { data, error } = await supabase.auth.exchangeCodeForSession(code)
return NextResponse.redirect(`${siteUrl}/dashboard`)
```
This made the install truly one-command — no post-install auth debugging required.
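Distilled into a helper (hypothetical name, not in the repo), the rule is simply "trust the environment, never the request":

```typescript
// Resolve the public-facing origin. Inside Docker, request-derived origins
// come back as the container's bind address (e.g. 0.0.0.0:3000), so an
// explicit env var is the only trustworthy source.
export function resolveSiteUrl(env: Record<string, string | undefined>): string {
  const raw = env.NEXT_PUBLIC_SITE_URL || 'http://localhost:3000';
  return raw.replace(/\/+$/, ''); // normalize: no trailing slash
}
```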
## Problem 3: WhatsApp's 24-hour window
Nobody documents this clearly upfront: Meta Cloud API only allows free-form text within a 24-hour window after the client initiates contact. Business-initiated messages — appointment reminders, birthday greetings, re-activation nudges — require pre-approved Message Templates (HSM) submitted through Meta Business Manager.
This matters if you're building anything that sends proactive notifications via WhatsApp. You either get your templates approved first, or your reminder system silently fails for business-initiated messages. I documented this honestly in the README as a known limitation.
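For reference, a business-initiated send must use the `template` message type. A sketch of the payload (the endpoint shape follows Meta's Cloud API docs; `appointment_reminder` is a made-up template name, and the API version pinned below is arbitrary):

```typescript
// Build a Cloud API template-message payload. Outside the 24-hour
// customer-service window, this is the only message type Meta delivers.
export function buildTemplateMessage(to: string, templateName: string, lang = 'en_US') {
  return {
    messaging_product: 'whatsapp',
    to,
    type: 'template',
    template: { name: templateName, language: { code: lang } },
  };
}

// POSTed to https://graph.facebook.com/v17.0/<PHONE_NUMBER_ID>/messages
// with header: Authorization: Bearer <ACCESS_TOKEN>
```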
## Automatic migrations on startup
Self-hosters shouldn't have to run database migrations manually. I wrote a `scripts/migrate.js` that runs all Supabase migrations in order before the app starts, using the `pg` library directly.
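The core logic is just "sort, then apply in order". A sketch of the approach (the `schema_migrations` bookkeeping table and function names are my assumptions, not the repo's actual script):

```typescript
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

// Timestamp-prefixed migration filenames sort lexicographically into
// their apply order (e.g. 20240101120000_init.sql).
export function orderMigrations(files: string[]): string[] {
  return files.filter((f) => f.endsWith('.sql')).sort();
}

// Runner sketch: `client` is a connected pg Client. Applied files are
// recorded in a bookkeeping table so restarting the container is a no-op.
export async function runMigrations(
  dir: string,
  client: { query(sql: string, params?: unknown[]): Promise<{ rows: unknown[] }> },
) {
  await client.query(
    'CREATE TABLE IF NOT EXISTS schema_migrations (name text PRIMARY KEY)',
  );
  for (const file of orderMigrations(readdirSync(dir))) {
    const { rows } = await client.query(
      'SELECT 1 FROM schema_migrations WHERE name = $1',
      [file],
    );
    if (rows.length > 0) continue; // already applied
    await client.query(readFileSync(join(dir, file), 'utf8'));
    await client.query('INSERT INTO schema_migrations (name) VALUES ($1)', [file]);
  }
}
```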
```yaml
# docker-compose.yml
services:
  migrate:
    build: .
    command: node scripts/migrate.js
    depends_on:
      db:
        condition: service_healthy
  app:
    build: .
    depends_on:
      migrate:
        condition: service_completed_successfully
```
The `service_completed_successfully` condition means the app won't start until all 18 migrations have run cleanly. `docker compose up -d` is now genuinely one command.
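One detail the snippet above assumes: the `db` service needs a healthcheck for `service_healthy` to mean anything. A typical one looks like this (illustrative; the repo's actual compose file may differ):

```yaml
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10
```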
## The full stack
- Frontend: Next.js 14 + Tailwind + shadcn/ui
- Backend: Next.js API routes + Supabase
- Database: PostgreSQL (Supabase), 18 migrations
- Auth: Supabase Auth — Email + Google OAuth
- Notifications: Resend, Telegram Bot API, Meta WhatsApp Cloud API, Viber Bot API
- Storage: Cloudflare R2
- Self-hosting: Docker Compose, multi-stage build, non-root user
- PWA: next-pwa 5.6.0, IndexedDB for offline POS
- i18n: next-intl 4.9.0 (EN active, more coming)
## What's live in v1.0
- POS — completes a sale in 3 clicks, works offline via PWA + IndexedDB
- CRM — full client history, visit patterns, birthday, tags, notes
- Booking calendar — drag & drop, week view, staff columns
- Public booking page — name + phone only, no client account required
- Inventory — stock tracking, low-stock alerts via all channels
- Notifications — all 4 channels fire automatically: booking confirmed → 24h reminder → 1h reminder → thank-you → 30-day re-activation → birthday → low-stock
- Multitenancy — each business isolated via Supabase RLS, own subdomain
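The reminder cascade above boils down to a pure "what is due now" question. A sketch of the idea (function names and offsets are illustrative of the approach, not the repo's code; birthday and 30-day re-activation run off different anchors, the birth date and the last visit):

```typescript
type Booking = { startTime: Date };

// Which booking-anchored reminders became due between `lastRun` and `now`?
// A cron/interval job calls this per booking, then fans the results out
// to all four channels (email, Telegram, WhatsApp, Viber).
export function dueReminders(b: Booking, lastRun: Date, now: Date): string[] {
  const due: string[] = [];
  const H = 3600_000; // one hour in milliseconds
  const at = (t: number, label: string) => {
    if (t > lastRun.getTime() && t <= now.getTime()) due.push(label);
  };
  at(b.startTime.getTime() - 24 * H, 'reminder_24h');
  at(b.startTime.getTime() - 1 * H, 'reminder_1h');
  at(b.startTime.getTime() + 1 * H, 'thank_you'); // after the visit
  return due;
}
```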
## Running costs at zero customers
~$20–25/month on an existing DigitalOcean server. One Starter customer ($19/mo) covers it; two Pro customers ($39/mo each) put it in the green.
## What I'm still figuring out
Payment processing as a solo founder based in Georgia (the country) is genuinely difficult. Neither LemonSqueezy nor Polar.sh supports bank payouts to Georgia. Paddle has been pending verification for two weeks. I'm currently testing Dodo Payments. If you've solved this from a non-Stripe-supported country, I'd genuinely appreciate hearing what worked.
GitHub: github.com/SGrappelli/pronto — MIT, Docker Compose, CI/CD
Live cloud version: trypronto.app
Happy to answer questions about any of the technical decisions above.