I wanted to build the "PCPartPicker for robots" — a comparison directory where buyers could browse commercial robots from different manufacturers, filter by specs, and request quotes. Here's how I built GrabaRobot with a surprisingly simple stack that handles 500+ products with zero API costs.
Why Robots Need a Comparison Tool
If you're a facility manager looking to buy a cleaning robot, or a warehouse operator evaluating AMRs (Autonomous Mobile Robots), you face a fragmented market. There's no central place to compare models across manufacturers. You end up visiting 20+ manufacturer websites, filling out contact forms, and waiting days for pricing.
I saw the same gap in the B2B drone market, delivery robot space, and industrial automation. So I built a directory that covers 25 robot categories — from cleaning robots and delivery bots to palletizing robots and robot dogs.
The Stack
Next.js 15 (App Router)
SQLite (via better-sqlite3)
Tailwind CSS v4
Vercel (hosting)
No database server. No ORM. No API layer. Just a SQLite file that ships with the build.
Why SQLite, Not Postgres?
For a product catalog that changes weekly (not hourly), SQLite is perfect:
- Zero infrastructure — the `.db` file lives in the repo
- Blazing reads — no network round-trip, sub-millisecond queries
- Simple migrations — just run SQL scripts
- Free — no managed database costs
The tradeoff? No concurrent writes. But since all product data is updated through scripts (not user input), this is a non-issue.
Data Architecture
The schema supports 25 categories, 270 manufacturers, and 501 products:
```sql
-- Core tables
categories    (id, slug, priority, ...)
products      (id, slug, category_id, manufacturer_id, price_display, specs JSON, ...)
manufacturers (id, slug, country, city, founded, ...)
scenes        (id, slug, category_id, ...)  -- use-case contexts like "commercial", "residential"

-- i18n
translations  (entity_type, entity_id, locale, field, value)
```
The `translations` table is the interesting part. Instead of adding `name_en`, `name_zh` columns to every table, I use a single EAV (Entity-Attribute-Value) table. This means adding a new language requires zero schema changes.
```typescript
function getTranslations(entityType: string, entityId: number, locale: string) {
  const rows = db.prepare(
    "SELECT field, value FROM translations WHERE entity_type = ? AND entity_id = ? AND locale = ?"
  ).all(entityType, entityId, locale) as { field: string; value: string }[];
  return Object.fromEntries(rows.map((r) => [r.field, r.value]));
}
```
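One detail the EAV approach needs in practice is locale fallback: a partially translated product should show the default-locale value for any untranslated field. A minimal sketch of that merge step — `withFallback` is a hypothetical helper, and the field objects stand in for the output of `getTranslations()`:

```typescript
// Field maps as returned by getTranslations(): { fieldName: translatedValue }.
type Fields = Record<string, string>;

// Merge a locale's translations over the default-locale fields.
// Spread order matters: locale values win, missing keys fall back to base.
function withFallback(baseFields: Fields, localeFields: Fields): Fields {
  return { ...baseFields, ...localeFields };
}

// Example: a product translated into Chinese except for its tagline.
const en: Fields = { name: "Warehouse AMR", tagline: "Autonomous pallet mover" };
const zh: Fields = { name: "仓储机器人" };

const merged = withFallback(en, zh);
// merged.name comes from zh; merged.tagline falls back to en.
```

Because the merge is a plain object spread, adding a new locale is just new rows in `translations` — no schema change, no code change.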
SEO Architecture
This is a content-heavy B2B site, so SEO is critical. Here's what I implemented:
Dynamic Sitemap
Next.js App Router makes this trivial with app/sitemap.ts:
```typescript
export default function sitemap(): MetadataRoute.Sitemap {
  const categories = getAllCategories();
  const products = getAllProductParams();
  // ... generates 989 URLs with proper priorities
}
```
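The elided body boils down to mapping slugs into URL entries with tiered priorities. A hedged sketch of that shape — `buildEntries`, the route paths, the placeholder domain, and the priority values are all illustrative, not the site's actual ones:

```typescript
type SitemapEntry = { url: string; priority: number };

const BASE = "https://example.com"; // placeholder domain

function buildEntries(categorySlugs: string[], productSlugs: string[]): SitemapEntry[] {
  // Category hub pages get a higher priority than leaf product pages.
  const categories = categorySlugs.map((slug) => ({
    url: `${BASE}/robots/${slug}/`,
    priority: 0.8,
  }));
  const products = productSlugs.map((slug) => ({
    url: `${BASE}/products/${slug}/`,
    priority: 0.6,
  }));
  return [{ url: `${BASE}/`, priority: 1.0 }, ...categories, ...products];
}

const entries = buildEntries(["cleaning-robot"], ["x100", "x200"]);
// 1 home + 1 category + 2 products = 4 entries
```

Next.js serves the returned array as `/sitemap.xml` automatically, so the whole thing regenerates on each build as products are added.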
Structured Data
Every product page includes JSON-LD with Product schema:
- Name, description, manufacturer
- Price range
- Image
- Category breadcrumbs
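The bullets above translate into a small builder function. This is a minimal sketch — the `CatalogProduct` field names and `productJsonLd` are assumptions, though the `@type` and property names follow schema.org's `Product` and `AggregateOffer` vocabulary:

```typescript
// Assumed shape of a catalog row; real field names may differ.
interface CatalogProduct {
  name: string;
  description: string;
  manufacturer: string;
  image: string;
  priceLow: number;
  priceHigh: number;
}

function productJsonLd(p: CatalogProduct) {
  return {
    "@context": "https://schema.org",
    "@type": "Product",
    name: p.name,
    description: p.description,
    image: p.image,
    brand: { "@type": "Brand", name: p.manufacturer },
    offers: {
      "@type": "AggregateOffer",
      lowPrice: p.priceLow,
      highPrice: p.priceHigh,
      priceCurrency: "USD",
    },
  };
}

const jsonLd = productJsonLd({
  name: "X100",
  description: "Commercial cleaning robot",
  manufacturer: "Acme Robotics",
  image: "/images/x100.jpg",
  priceLow: 5000,
  priceHigh: 8000,
});
// Rendered into the page inside a <script type="application/ld+json"> tag.
```

Using `AggregateOffer` rather than `Offer` fits a directory that shows price ranges instead of a single checkout price.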
Per-Category Meta
Each category gets optimized title/description templates that include the year and pricing hooks:
```typescript
category.meta_title ?? `${category.name} — Compare Models & Prices from China (2026)`
```
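Wrapped in a helper, the fallback looks something like this — `buildCategoryMeta` and the `Category` shape are my assumptions; interpolating the year instead of hard-coding it keeps titles from going stale:

```typescript
interface Category {
  name: string;
  meta_title?: string; // optional hand-written override
}

function buildCategoryMeta(category: Category, year: number): string {
  // Prefer a hand-written title; otherwise fall back to the template.
  return (
    category.meta_title ??
    `${category.name} — Compare Models & Prices from China (${year})`
  );
}
```

The `??` operator matters here: unlike `||`, it only falls back on `null`/`undefined`, so an intentionally empty override would still be respected.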
Product Pages at Scale
With 501 products, I needed a system to generate pages efficiently. Each product page includes:
- Hero section with image, specs, and pricing
- Detailed specifications table
- Key features list
- Use case scenarios
- Related products (links to 6 similar products)
- Manufacturer link (linking to the manufacturer profile page)
- Inquiry form
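The related-products section above reduces to a simple selection rule: same category, exclude the current product, cap at six. A hedged sketch — `relatedProducts` and the row field names are assumptions, and the real site presumably does this in SQL rather than in memory:

```typescript
// Assumed subset of the products table's columns.
interface ProductRow {
  id: number;
  category_id: number;
  slug: string;
}

function relatedProducts(
  all: ProductRow[],
  current: ProductRow,
  limit = 6
): ProductRow[] {
  return all
    .filter((p) => p.category_id === current.category_id && p.id !== current.id)
    .slice(0, limit);
}
```

With SQLite in-process, the equivalent `SELECT ... WHERE category_id = ? AND id != ? LIMIT 6` is cheap enough to run per page at build time.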
The key SEO insight: every product card links to the product detail page. This sounds obvious, but I initially only had "Get Quote" buttons. Adding "View Details" links created 1,500+ internal links that help Google discover and index product pages.
Content Strategy
Beyond product pages, the site has:
- 66 blog posts — buying guides, industry analyses, brand comparisons
- 25 category landing pages with scene-based sub-pages (e.g., `/robots/cleaning-robot/commercial/`)
- 270 manufacturer profiles with product listings
- Price guides and "Best of 2026" pages for high-intent keywords
Results So Far
The site launched 2 weeks ago. Early Google Search Console data:
- 989 URLs in sitemap, 16 indexed so far
- 663 impressions in the first week of meaningful data
- Average position 11.3 (approaching page 1!)
- 34 unique search queries triggering the site
- Ranking for terms like "warehouse robot", "cleaning robot manufacturer", "robot supplier directory"
The trajectory is promising — impressions grew from 2 to 663 in 9 days, and average position improved from 46 to 11.
What I'd Do Differently
Start with fewer products — 501 is great for coverage but means 868 pages are still waiting for Google to index. Starting with 50 high-quality pages might have gotten faster initial traction.
Build backlinks from day one — domain authority matters. One external link after 2 weeks isn't enough.
Add comparison tools — static product cards are fine, but interactive comparison tables would increase time-on-page and reduce bounce rate.
Try It Out
- GrabaRobot Directory — Browse 500+ robots across 25 categories
- Manufacturer Index — 270 verified robot manufacturers
- Buying Guide — Step-by-step sourcing guide
The full source is a Next.js app with SQLite — no complex infrastructure needed.
Have you built niche B2B directories? What stack did you use? I'm curious about how others handle product data at this scale.