I got tired of sitting on a Rolex waitlist with zero information. No position, no ETA, no way to know if my wait was normal or if I was getting strung along. When I went looking for data, all I found was Reddit threads with hundreds of anecdotes buried in noise.
So I built unghosted.io, a structured tracker where collectors submit their wait times anonymously. 550+ reports from 62 countries in the first week.
Here's how I built it, what I learned, and what surprised me about the data.
## The Problem
Millions of people sit on authorized dealer (AD) waitlists for luxury watches from brands like Rolex, Patek Philippe, Audemars Piguet, and others. The system is deliberately opaque. There's no queue number, no ETA, no transparency. Dealers decide who gets what based on relationships, purchase history, location, and their own internal criteria.
The experience varies wildly depending on where you are in the world. A collector in Dubai might walk into an AD and get a Submariner the same day, while someone in New York waits 6 months for the same watch. But there was no way to know that because the only "data" available was scattered across forum posts and Reddit threads. Someone would post "I waited 8 months for my Submariner" and that was it. No way to aggregate, filter, or compare across regions or purchase history levels.
## The Stack
I went with a stack optimized for speed to ship and low ongoing cost:
- Next.js 16 (App Router) - ISR for page caching so brand pages revalidate hourly instead of on every request
- Supabase (Postgres + Row Level Security) - handles the entries table, subscribers, and all the data layer
- Plotly.js (plotly.js-basic-dist-min) - interactive scatter plots showing wait time vs purchase history
- Vercel - hosting with automatic deployments from GitHub
- New Relic - browser monitoring for Core Web Vitals
Total monthly cost: under $30 (Vercel Pro + Supabase free tier + New Relic free tier).
## The Data Model
The core entries table is simple:
- brand (text)
- family (text) - e.g., "Submariner", "Nautilus"
- model (text) - specific reference like "126610LN"
- wait_time (text) - bucketed: "Walk-in / same day", "1-3 months", "1-2 years", etc.
- region (text) - geographic location of the AD
- purchase_date (text)
- purchase_history (text) - "No prior purchases" through "6+ purchases / VIP"
- status (text) - pending/published/flagged
- followup_frequency (text) - how often they follow up with their AD (for long waits)
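Translated into TypeScript, the row shape looks roughly like this (the real schema lives in Postgres; the type below and its sample values are illustrative, not generated from the database):

```typescript
// Sketch of an entries row, based on the fields listed above.
type EntryStatus = "pending" | "published" | "flagged";

interface Entry {
  brand: string;             // e.g. "Rolex"
  family: string;            // e.g. "Submariner"
  model: string;             // specific reference, e.g. "126610LN"
  wait_time: string;         // bucketed, e.g. "1-3 months"
  region: string;            // geographic location of the AD
  purchase_date: string;
  purchase_history: string;  // "No prior purchases" ... "6+ purchases / VIP"
  status: EntryStatus;
  followup_frequency: string | null; // only meaningful for long waits
}

const example: Entry = {
  brand: "Rolex",
  family: "Submariner",
  model: "126610LN",
  wait_time: "1-3 months",
  region: "New York, US",
  purchase_date: "2025-03",
  purchase_history: "No prior purchases",
  status: "pending", // new submissions default to pending
  followup_frequency: null,
};
```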
Region is a critical field. Wait times differ dramatically by location. The same watch can be a walk-in purchase in one city and a 2-year wait in another. Every report captures where the AD is located so collectors can compare their local market against others worldwide.
New submissions default to "pending" for moderation. I built outlier detection that flags entries deviating 3+ tiers from the model average, plus duplicate detection that rejects identical brand/model/wait/region combos within 24 hours.
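A minimal sketch of both checks, with illustrative tier labels and function names (the production logic differs in detail):

```typescript
// Ordered wait buckets: tier distance is just index distance in this array.
const WAIT_TIERS = [
  "Walk-in / same day",
  "Under 1 month",
  "1-3 months",
  "3-6 months",
  "6-12 months",
  "1-2 years",
  "2+ years",
];

// Flag an entry whose bucket sits 3+ tiers from the model's average tier.
function isOutlier(waitTime: string, modelWaits: string[]): boolean {
  const tier = WAIT_TIERS.indexOf(waitTime);
  const known = modelWaits
    .map((w) => WAIT_TIERS.indexOf(w))
    .filter((i) => i !== -1);
  if (tier === -1 || known.length === 0) return false;
  const avg = known.reduce((a, b) => a + b, 0) / known.length;
  return Math.abs(tier - avg) >= 3;
}

// Duplicate detection: identical brand/model/wait/region combos share a key,
// and a repeat of the same key within 24 hours is rejected.
const dupKey = (e: { brand: string; model: string; wait_time: string; region: string }) =>
  [e.brand, e.model, e.wait_time, e.region].join("|").toLowerCase();
```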
## The Architecture Decisions That Mattered
ISR over force-dynamic. Early on I had force-dynamic on every page, which meant every page load hit Supabase. Switching to ISR with 1-hour revalidation on brand pages and 5-minute revalidation on the homepage cut server load dramatically while keeping data reasonably fresh.
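The switch is a one-line route segment config per page. A sketch, assuming an App Router layout (the file paths here are illustrative):

```typescript
// app/[brand]/page.tsx (illustrative path) — route segment config.
// With ISR, Next.js serves the cached page and regenerates it in the
// background at most once per revalidation window, instead of hitting
// Supabase on every request.
export const revalidate = 3600; // brand pages: hourly

// app/page.tsx would use a tighter window:
// export const revalidate = 300; // homepage: every 5 minutes
```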
Bucketed wait times instead of freeform. I initially considered letting users enter exact months. But people remember "about 6 months" not "5.7 months." Bucketed options (1-3 months, 3-6 months, etc.) give cleaner data with less friction in the submit flow.
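Buckets still need a numeric stand-in before they can go on a chart axis. A small sketch of that mapping, with assumed bucket labels and midpoints:

```typescript
// Map each wait bucket to an approximate midpoint in months so bucketed
// reports can be placed on a numeric axis. Labels and midpoints here are
// illustrative assumptions, not the site's exact values.
const BUCKET_MIDPOINTS: Record<string, number> = {
  "Walk-in / same day": 0,
  "Under 1 month": 0.5,
  "1-3 months": 2,
  "3-6 months": 4.5,
  "6-12 months": 9,
  "1-2 years": 18,
  "2+ years": 30,
};

function bucketToMonths(bucket: string): number | null {
  return BUCKET_MIDPOINTS[bucket] ?? null; // null for unknown buckets
}
```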
Plotly over Chart.js or Recharts. I needed interactive scatter plots where users could hover over individual data points and see the details. Plotly handles this natively. The trade-off is bundle size, which is why I use plotly.js-basic-dist-min instead of the full package.
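The hover behavior comes almost for free once each report carries its details in the trace's `text` array. A sketch of the trace-building step (field names are my assumptions; in the browser, `plotly.js-basic-dist-min` renders the result via `Plotly.newPlot`):

```typescript
// Build a Plotly scatter trace where hovering a point shows that report's
// model and region alongside its wait time.
interface Report {
  purchaseHistoryTier: number; // 0 = no prior purchases ... 6+ = VIP
  waitMonths: number;          // numeric midpoint of the wait bucket
  model: string;
  region: string;
}

function toScatterTrace(reports: Report[]) {
  return {
    type: "scatter" as const,
    mode: "markers" as const,
    x: reports.map((r) => r.purchaseHistoryTier),
    y: reports.map((r) => r.waitMonths),
    text: reports.map((r) => `${r.model} (${r.region})`),
    hoverinfo: "text+y" as const,
  };
}
```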
RLS for security, service role for API writes. Anonymous users can read published entries through the anon key. All writes go through an API route that uses the service role key after server-side validation. This prevents direct database manipulation while keeping the read path fast.
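The write path, sketched with assumed names (the service client type below stands in for `@supabase/supabase-js`'s `createClient(url, SERVICE_ROLE_KEY)`, which would be constructed server-side only):

```typescript
// All writes: validate on the server, then insert with the service role
// client, which bypasses RLS. The anon key is read-only in practice.
type ServiceClient = {
  from(table: string): {
    insert(row: Record<string, unknown>): Promise<{ error: { message: string } | null }>;
  };
};

const REQUIRED_FIELDS = ["brand", "family", "model", "wait_time", "region"];

export function validateSubmission(body: Record<string, unknown>): string | null {
  for (const field of REQUIRED_FIELDS) {
    const value = body[field];
    if (typeof value !== "string" || value.trim() === "") {
      return `Missing or invalid field: ${field}`;
    }
  }
  return null; // valid
}

// In the real API route, `admin` is the service role client.
export async function handleSubmit(body: Record<string, unknown>, admin: ServiceClient) {
  const error = validateSubmission(body);
  if (error) return { status: 400, error };
  const { error: dbError } = await admin
    .from("entries")
    .insert({ ...body, status: "pending" }); // moderation default
  if (dbError) return { status: 500, error: dbError.message };
  return { status: 201 };
}
```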
## What the Data Shows
Some things the data revealed that I didn't expect:
Submariners are way easier to get than people think. The median wait is under 3 months, and 25% of reports are walk-ins. The internet narrative of "impossible to get a Sub" doesn't match the actual data.
Purchase history matters more than time. The scatter plots clearly show that collectors with 2-3+ prior purchases get watches faster than first-time buyers waiting years. The system rewards relationship building over patience.
Datejusts are practically available on demand. Most reports show walk-in or under 1 month waits. ADs use Datejusts as relationship starters. They offer you one to get you in the door, then you work toward the sport models.
FP Journe is in a league of its own. With under 900 watches produced per year, the waitlist is essentially an application process. Most boutiques have stopped accepting new names for the Chronomètre Bleu entirely.
Location matters more than most people realize. Europe and Asia report different patterns than the US, especially for Tudor and Vacheron Constantin. Some markets have significantly shorter waits for models that are considered "impossible" in other regions. A collector in Singapore might get a Royal Oak in 3 months while someone in London waits over a year. The data makes these regional differences visible for the first time.
## The Distribution Strategy
Building the product was the easy part. Getting people to submit data was the hard part.
Reddit was the primary channel. I posted to r/rolex first with the angle "I built a structured version of the AD Wait Time Megathread." That post hit 18K views, 44 upvotes, and 30 comments. The key was framing it as a community tool, not a product launch. No self-promotion, just "here's a thing that solves a problem we all have."
The second post to r/Watches (3.3M members) used a multi-brand data angle and generated 10+ organic submissions overnight from collectors in the US, Canada, Europe, and Asia.
SEO was the long game. I built 65 pages targeting specific search queries: brand pillar pages (/rolex-waitlist-times, /patek-philippe-waitlist), model pages (/rolex-submariner-wait-time), and reference pages (/rolex/126610LN). Each page has JSON-LD schema (Article + FAQPage), canonical tags, and FAQ questions matching Google's "People Also Ask" phrasing.
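The FAQPage markup can be generated from the same FAQ content shown on the page. A sketch following the schema.org shape (the question text is an illustrative example, not a real page's copy):

```typescript
// Build schema.org FAQPage JSON-LD from a page's FAQ entries.
function faqJsonLd(faqs: { question: string; answer: string }[]) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  };
}

// Rendered into the page head as:
// <script type="application/ld+json">{JSON.stringify(faqJsonLd(faqs))}</script>
const jsonLd = faqJsonLd([
  {
    question: "How long is the Rolex Submariner waitlist?",
    answer: "Reported waits cluster under 3 months; see the live data.",
  },
]);
```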
Within 5 days, I was ranking page 1 for several brand-specific waitlist queries (AP at position 4, Tudor at position 5, Patek at position 6).
## Mistakes I Made
Claiming "500+ reports" in the title before I had 500 Rolex-specific reports. A reader called me out. Trust is everything in this niche. I changed it to match reality.
Not having a Content Security Policy that included Google Analytics. When I added GA4, my own CSP blocked the script. Users saw broken analytics and I missed traffic data for a day.
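The fix is allow-listing the hosts GA4 actually uses: the script loads from googletagmanager.com and hits go to google-analytics.com. A sketch of building that header value (directives beyond these are illustrative; the result would be set as the Content-Security-Policy header in next.config's headers()):

```typescript
// Assemble a CSP header value that permits GA4 alongside first-party assets.
function buildCsp(): string {
  const directives: Record<string, string[]> = {
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://www.googletagmanager.com"],
    "connect-src": ["'self'", "https://www.google-analytics.com"],
    "img-src": ["'self'", "https://www.google-analytics.com"],
  };
  return Object.entries(directives)
    .map(([name, values]) => `${name} ${values.join(" ")}`)
    .join("; ");
}
```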
Forgetting that RLS applies to new columns too. I added a followup_frequency column and updated the API route, but inserts started failing: the route was still writing with the anon key, so every insert went through RLS policies that didn't cover the new column. Switching writes to the service role key fixed it. A real user reported the bug in my Reddit thread, which was both embarrassing and fortunate.
## What's Next
- 1,000 reports by mid-May. The dataset is the moat. Everything else is secondary.
- Geographic filtering on scatter plots. Users are asking for it in comments. Being able to filter by region will let collectors compare their local market against the global average.
- Monthly "State of the Waitlist" report. Publishable data that journalists and bloggers can cite and link to.
## The Takeaway for Builders
If you're building a data product:
- Pick a niche where information asymmetry exists. Luxury watch waitlists are deliberately opaque. That opacity is the opportunity.
- Let the community own the data. I don't scrape. People voluntarily submit because the tool helps them. That makes the data defensible and the growth organic.
- Ship the simplest version that proves the concept. My MVP was a submit form, a Supabase table, and a scatter plot. Everything else came after I had 100+ reports.
- Distribution is the hard part. Reddit worked because I was a genuine member of the community solving a real problem. If I'd posted a "check out my app" link, it would have been removed.
The site is free, open, and live at unghosted.io. If you're a watch collector sitting on a waitlist, submit your data. If you're a builder, I'd love to hear your feedback on the approach.
Stack: Next.js 16, Supabase, Plotly.js, Vercel, New Relic. Solo build. GitHub.