We’ve all been there: staring at a dropdown with 50+ options, wishing the app would just know what we want. Usually, we solve this with a heavy backend search API. But what if you could build a lightning-fast, "psychic" recommendation engine entirely on the client side?
Here’s how to build a smart template recommender that anticipates user needs with zero latency.
The Core Logic: Context is King 👑
Most search bars look at the whole text. To make recommendations feel intentional, we focus only on the intent. We extract the first line or sentence the user types, strip out the "noise" (stop words like the, a, with), and break it into tokens.
If a user types: "Create a new React component for the login page," our engine sees: ["Create", "React", "component", "login"].
The Brain: The Bitap Algorithm 🧠
To handle typos and partial matches, we use Fuse.js. Under the hood, it utilizes the Bitap algorithm, which uses bitmasking to find matches within a specific "fuzziness" threshold. It treats text not just as strings, but as bit patterns, allowing it to be incredibly fast for client-side operations.
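To make the bitmasking idea concrete, here is a sketch of the exact-match core of Bitap (the Shift-And variant). Fuse.js's real implementation additionally tracks error states to allow fuzzy matches within the threshold; this only shows the bit-parallel scanning:

```typescript
// Exact-match Bitap (Shift-And): each character of the pattern is one bit.
function bitapFind(text: string, pattern: string): number {
  if (pattern.length === 0 || pattern.length > 31) return -1;

  // Per-character mask: bit i is set if pattern[i] === ch.
  const masks = new Map<string, number>();
  for (let i = 0; i < pattern.length; i++) {
    masks.set(pattern[i], (masks.get(pattern[i]) ?? 0) | (1 << i));
  }

  let state = 0; // bit i set ⇔ pattern[0..i] matches the text ending here
  const done = 1 << (pattern.length - 1);
  for (let j = 0; j < text.length; j++) {
    state = ((state << 1) | 1) & (masks.get(text[j]) ?? 0);
    if (state & done) return j - pattern.length + 1; // start index of match
  }
  return -1;
}

console.log(bitapFind("create react component", "react")); // → 7
```

Because the whole match state fits in a single integer, each text character costs only a couple of bitwise operations — which is why this approach stays fast in the browser.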
Ranking: Hits vs. Quality
A single "perfect match" isn't always the best result. We use a two-tiered scoring system to rank templates:
- Hit Count (Quantity): How many search tokens matched the template name?
- Average Score (Quality): How "fuzzy" were those matches?
The Scoring Formula
We calculate the relevance of a template using:
Average Score = Sum(Bitap Scores) / Hit Count
The Ranking Priority:
- Higher Hit Count always wins (matching "React" and "Component" is better than just matching "React" perfectly).
- Lower Average Score acts as the tie-breaker for quality.
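The ranking rule in action, using hypothetical scored entries (lower `avg` means a closer Bitap match):

```typescript
interface Scored { name: string; hitCount: number; avg: number }

const candidates: Scored[] = [
  { name: "React Component", hitCount: 2, avg: 0.3 }, // two fuzzy hits
  { name: "React Page",      hitCount: 1, avg: 0.0 }, // one perfect hit
  { name: "React Form",      hitCount: 2, avg: 0.1 }, // two strong hits
];

// Higher hit count wins outright; lower average score breaks ties.
candidates.sort((a, b) => (b.hitCount - a.hitCount) || (a.avg - b.avg));

console.log(candidates.map((c) => c.name));
// → ["React Form", "React Component", "React Page"]
```

Note that "React Page" loses despite its perfect score of 0: breadth of intent coverage beats a single lucky hit.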
Clean TypeScript Implementation
```typescript
import Fuse from 'fuse.js';

interface Template {
  id: string;
  name: string;
}

export const getRecommendations = (input: string, list: Template[]): Template[] => {
  // Tokenize only the first line/sentence; dropping short words is a cheap stop-word filter.
  const tokens = input
    .split(/[\n.!?;]/)[0]
    .toLowerCase()
    .split(/\s+/)
    .filter((w) => w.length > 2);

  const fuse = new Fuse(list, {
    keys: ['name'],
    threshold: 0.45, // Bitap fuzziness threshold
    includeScore: true,
  });

  // Accumulate hit counts and total Bitap scores per template across all tokens.
  const scoreMap = new Map<string, { item: Template; totalScore: number; hitCount: number }>();
  tokens.forEach((token) => {
    fuse.search(token).forEach(({ item, score }) => {
      const entry = scoreMap.get(item.id) ?? { item, totalScore: 0, hitCount: 0 };
      entry.totalScore += score ?? 1; // treat a missing score as the worst match
      entry.hitCount += 1;
      scoreMap.set(item.id, entry);
    });
  });

  return Array.from(scoreMap.values())
    .map((m) => ({ ...m, avg: m.totalScore / m.hitCount }))
    .filter((m) => m.avg < 0.4) // quality gate: drop templates whose matches were too fuzzy
    .sort((a, b) => (b.hitCount - a.hitCount) || (a.avg - b.avg)) // hits first, then quality
    .slice(0, 5)
    .map((m) => m.item);
};
```
Why This Works
- Zero Latency: No API calls means the UI updates as fast as the user types.
- Deduplication: Move the top 5 matches to a "Recommended" section and hide them from the main list to keep the UI clean.
- Privacy: No user data ever leaves the browser.
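The deduplication tip can be sketched as a small helper. `splitLists` is a hypothetical name, and it assumes `recommended` holds the top-5 items returned by `getRecommendations`:

```typescript
interface Template { id: string; name: string }

// Show recommendations in their own section and hide them from the main list.
function splitLists(all: Template[], recommended: Template[]) {
  const recommendedIds = new Set(recommended.map((t) => t.id));
  return {
    recommended,
    remaining: all.filter((t) => !recommendedIds.has(t.id)),
  };
}
```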
By combining Bitap-powered fuzzy matching with a "hit-heavy" ranking logic, you create a UX that feels less like a tool and more like an assistant.
Top comments
Nice approach. The hit-count-first ranking makes it feel more like intent prediction than just fuzzy search. Curious how you handle token importance though.