When this started, I was not trying to build a hackathon project.
I was trying to rent GPUs for my ML project.
What I wanted was simple:
- price per hour
- in my local currency
- visible immediately
What I got on many sites was the opposite: generic hero copy, too many sections, and pricing buried somewhere I had to hunt for. Then I had to mentally convert currency and estimate actual cost.
That friction became the core idea behind Persite.
Persite is a locale-aware and intent-aware personalization system. It tries to answer this:
> If someone arrives with clear intent, why are we still forcing everyone through the same static page?
## The problem I wanted to solve
Most websites treat a buyer in Kenya the same as a buyer in the US or Germany. Same message, same CTA, same layout priority.
I think that is a product problem, not just a translation problem.
The key point for me:
- translation alone is not enough
- intent alone is not enough
- locale plus intent is where it starts making sense
I also wanted a privacy-friendly approach. I did not want profile tracking or long-term behavior graphs. I wanted lightweight signals:
- locale
- URL params
- UTM data
- referrer
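A minimal sketch of how those lightweight signals can be derived from a request, with no stored profile. The names (`detectSignals`, `Signals`) are illustrative, not the actual Persite implementation:

```ts
// Derive personalization signals from the request URL, Accept-Language
// header, and referrer. No cookies, no long-term behavior graph.
type Signals = {
  locale: string;
  intent: string | null;
  utmSource: string | null;
  referrer: string | null;
};

function detectSignals(url: string, acceptLanguage: string, referrer: string): Signals {
  const params = new URL(url).searchParams;
  // An explicit ?locale= param wins; otherwise fall back to the
  // first Accept-Language entry, then to English.
  const locale =
    params.get("locale") ?? acceptLanguage.split(",")[0]?.trim() ?? "en";
  return {
    locale,
    intent: params.get("intent"),
    utmSource: params.get("utm_source"),
    referrer: referrer || null,
  };
}

const signals = detectSignals(
  "https://example.com/?intent=judge&locale=de-DE&utm_source=devpost",
  "en-US,en;q=0.9",
  "https://devpost.com/"
);
// signals.locale === "de-DE", signals.intent === "judge"
```

Everything here is request-scoped, so nothing needs to be persisted between visits.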
## What I built
I built two surfaces:
- A landing page that adapts full-page content based on landing intent (`judge`, `github`, `investor`, `browse`) and locale.
- A demo e-commerce store (`/demo`) that adapts hero copy, product ordering, and localized product descriptions.
Both surfaces have a draggable control panel so you can switch locale and intent quickly and see why a variant was selected.
## High-level architecture
The flow is straightforward:
- Detect signals (locale + intent)
- Choose a variant with deterministic rules
- Send selected content to Lingo API for localization
- Render localized result
- Expose decision metadata in panel
I kept the logic explicit and finite on purpose. I wanted a demo that can be explained under time pressure.
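The deterministic variant step can be sketched as a finite lookup table. The variant copy and function names below are placeholders, not the real Persite templates:

```ts
// A finite intent → variant table. Unknown or missing intents fall
// back to "browse", so every input maps to exactly one variant.
type Variant = { headline: string; ctaLabel: string };

const VARIANTS: Record<string, Variant> = {
  judge: { headline: "See the decision engine in action", ctaLabel: "Open the demo" },
  github: { headline: "Explore the architecture", ctaLabel: "Read the code" },
  investor: { headline: "Personalization that converts", ctaLabel: "View metrics" },
  browse: { headline: "Welcome", ctaLabel: "Get started" },
};

function chooseVariant(intent: string | null): { variant: Variant; reason: string } {
  const key = intent && intent in VARIANTS ? intent : "browse";
  // Return the reason alongside the variant so the control panel
  // can show why this copy was selected.
  return { variant: VARIANTS[key], reason: `intent=${key}` };
}
```

Because the table is finite and the fallback explicit, every decision is reproducible and explainable under demo pressure.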
## The code that made the project real
This is the core hero localization flow in my API route:
```ts
const step3Decision = buildStep3Decision({
  intent: intentDetection.intent,
  source: intentDetection.source,
});

const localizablePayload: LocalizablePayload = {
  headline: step3Decision.template.baseContent.headline,
  subheadline: step3Decision.template.baseContent.subheadline,
  ctaLabel: step3Decision.template.baseContent.ctaLabel,
};

const localizationResponse = await fetch(LINGO_LOCALIZE_URL, {
  method: "POST",
  headers: {
    "X-API-Key": apiKey,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    engineId,
    sourceLocale: "en",
    targetLocale: localeDetection.locale,
    data: localizablePayload,
  }),
  cache: "no-store",
});
```
That block is where static template content becomes locale-adapted output.
And this is the part that saved me when product localization became too slow:
```ts
const chunks = chunkProducts(toTranslate, CHUNK_SIZE);

for (let i = 0; i < chunks.length; i += MAX_PARALLEL_CHUNKS) {
  const group = chunks.slice(i, i + MAX_PARALLEL_CHUNKS);
  const groupResults = await Promise.all(
    group.map((chunk) => localizeChunk(chunk))
  );
  for (const result of groupResults) {
    Object.assign(translatedByKey, result);
  }
}
```
That was a practical turning point.
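The `chunkProducts` helper isn't shown in the post, so here is one plausible shape that matches how the loop above consumes it:

```ts
// Split an array into fixed-size chunks; the last chunk may be shorter.
function chunkProducts<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// chunkProducts([1, 2, 3, 4, 5], 2) → [[1, 2], [3, 4], [5]]
```

Combined with the `MAX_PARALLEL_CHUNKS` loop, this bounds both request size and concurrency at the same time.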
## Where the clean design broke
My initial, cleaner idea was bigger: ship this as a portable script that could drop into any website.
Reality check: every website has a different structure, different content ownership, and different component boundaries.
So I narrowed scope to a controlled environment where I could show the value clearly:
- deterministic variant model
- explainable decisions
- real localization behavior
- fast enough demo interactions
That scope cut is honestly what made it shippable.
## Biggest pain point and messy workaround
The biggest pain was localization latency when payloads got large.
I hit situations where requests were taking far too long. I ended up doing a mix of:
- parallel chunked localization
- selective localization (only high-impact copy)
- caching by locale + intent + content key
- fallback paths for missing translations
It is not the purest architecture, but it crossed the finish line and stayed understandable.
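The caching and fallback pieces can be sketched as a map keyed by locale + intent + content key. This is illustrative; the real implementation may differ:

```ts
// Cache translations by a composite key so the same copy is never
// localized twice for the same locale/intent combination.
const translationCache = new Map<string, string>();

function cacheKey(locale: string, intent: string, contentKey: string): string {
  return `${locale}:${intent}:${contentKey}`;
}

function getLocalized(
  locale: string,
  intent: string,
  contentKey: string,
  fallback: string
): string {
  // A cache miss falls back to the untranslated source copy, so a
  // slow or missing translation never blocks the render.
  return translationCache.get(cacheKey(locale, intent, contentKey)) ?? fallback;
}

translationCache.set(cacheKey("de-DE", "judge", "hero.headline"), "Überschrift");
```

The fallback path is what keeps the page usable even when the localization API is slow or a translation is missing.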
## Trade-offs I accepted
I made deliberate trade-offs:
- I did not localize everything through the API.
- I used hybrid content strategy:
- dynamic/high-impact copy through API
- repeated UI labels via locale dictionaries
- technical product names/spec tokens kept unchanged
- I optimized for demo clarity over maximum abstraction.
I think this was the right call for a hackathon MVP.
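The hybrid strategy above can be sketched as a small resolver: static dictionaries for repeated UI labels, a pass-through set for spec tokens, and the API reserved for everything else. Dictionary contents and names here are illustrative:

```ts
// Repeated UI labels come from static locale dictionaries, not the API.
const UI_LABELS: Record<string, Record<string, string>> = {
  "en": { addToCart: "Add to cart", price: "Price" },
  "de-DE": { addToCart: "In den Warenkorb", price: "Preis" },
};

// Technical product names / spec tokens are kept unchanged.
const SPEC_TOKENS = new Set(["RTX 4090", "DDR5", "24GB"]);

function resolveLabel(locale: string, key: string): string {
  // Fall back to English when a locale or key is missing.
  return UI_LABELS[locale]?.[key] ?? UI_LABELS["en"][key] ?? key;
}

function shouldLocalize(text: string): boolean {
  return !SPEC_TOKENS.has(text);
}
```

Only content that passes `shouldLocalize` and isn't covered by a dictionary needs an API round-trip, which is where most of the latency savings came from.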
## One extra thing I intentionally modeled
In the demo store data, I also reflected a real market situation: GPU and RAM prices being elevated due to AI-era supply pressure.
That was intentional. I wanted the demo to feel like it understands real buyer context, not just UI translation.
## How to run it
```bash
git clone https://github.com/mutaician/persite
cd persite
pnpm install
```

Create `.env`:

```bash
LINGO_API_KEY=your_lingo_api_key
LINGO_ENGINE_ID=your_lingo_engine_id
```

Run the dev server:

```bash
pnpm dev
```

Build check:

```bash
pnpm run build
```
Useful routes to test:
- http://localhost:3000/?intent=judge&locale=de-DE
- http://localhost:3000/?intent=investor&locale=sw-KE
- http://localhost:3000/demo?intent=compare&locale=fr-FR
- http://localhost:3000/demo?intent=budget&locale=pt-BR
## What I would build next
The missing piece is portability.
I want a reusable integration layer that can plug into arbitrary websites and personalize key surfaces (especially pricing and plan-selection pages) based on intent and locale, without requiring each team to rewrite their whole frontend.
That is where this can move from a strong demo to a deployable product.
## Links
- Live demo: https://persite-seven.vercel.app/
- Repository: https://github.com/mutaician/persite