Look, I've been a Postgres fanboy for years. It's my go-to database for nearly everything I build. Rock-solid reliability, fantastic scaling, killer features, what's not to love? But there's one hill I will die on: SQL databases absolutely suck at text searching.
The Text Search Nightmare
Anyone who's ever tried to build a decent search feature using just SQL has probably wanted to throw their laptop out the window at some point. I know I have.
Last month, I was working on a website with just 500 items. The dataset was tiny and I only needed search across a couple of fields, so I implemented it with Prisma ORM. Even simple searches took 2–3 seconds, which is obviously not acceptable for that little data.
And that's not even the worst part! The search was painfully basic - no fuzzy matching, no handling of spelling mistakes, no synonyms, nothing. Type "labtop" instead of "laptop"? Zero results. Search for "t-shirt" when the database had "tshirt"? Sorry, nothing found. It was like working with a search engine from 1995.
Why SQL Databases Fall Flat on Text Search
After many late nights, several Reddit threads, and a minor existential crisis, I tried to figure out why SQL databases are so terrible at this:
They're Built for a Different World
SQL databases were designed in the 70s to handle structured data with clear relationships: think accounting records and inventory systems. They weren't built with the idea that someday we'd be throwing massive amounts of unstructured text at them and expecting instant results.
Brute Force Is Painful
When you run something like a Prisma query with contains: 'coffee', under the hood it's still using WHERE description LIKE '%coffee%'. Your database has no choice but to read EVERY SINGLE ROW and check if that string appears anywhere. It's like searching for a specific sentence in a library by reading every book cover to cover. No shortcuts.
Traditional Indexes Don't Help
The regular B-tree indexes that make your primary key lookups lightning fast? Nearly useless for text searching. They're amazing for exact matches and ranges (like finding all products between $10-$20), but they fall apart with substring searching.
I tried every indexing strategy I could find with Prisma. The improvements were minimal at best.
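If you want to see this for yourself, ask Postgres how it actually executes one of those contains queries. Here's a rough sketch using Prisma's raw query escape hatch; the "Product" table and description column are assumptions based on the model used in this post:
// Sketch: inspect the query plan behind a contains search (table/column names assumed)
const plan = await prisma.$queryRawUnsafe<Array<{ 'QUERY PLAN': string }>>(
  `EXPLAIN ANALYZE SELECT * FROM "Product" WHERE description ILIKE '%coffee%'`
);
console.log(plan); // on a %...% pattern you'll typically see a Seq Scan: every row gets read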
The Hybrid Approach That Saved My Sanity
After two weeks of fighting with Prisma and Postgres, I finally convinced the client to let me integrate a dedicated search engine; we chose Meilisearch. Setup took an afternoon, and suddenly our searches were blazing fast (25–50ms!) AND more relevant.
Tutorial: Implementing a Basic Search with Prisma vs Meilisearch
Prisma Search Implementation
Here's how I initially implemented search with Prisma:
// Basic Prisma search query: case-insensitive "contains" across two fields
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function searchProducts(query: string) {
  return prisma.product.findMany({
    where: {
      OR: [
        { name: { contains: query, mode: 'insensitive' } },
        { description: { contains: query, mode: 'insensitive' } }
      ]
    }
  });
}
This approach has several issues:
- Cannot use indexes effectively with contains
- No relevance ranking
- Extremely slow on larger datasets
- No fuzzy matching or handling of typos
Setting Up Meilisearch with Prisma
Now let's look at setting up Meilisearch as an alternative.
Run the following Docker commands to start a Meilisearch instance locally.
Reference: https://www.meilisearch.com/docs/learn/self_hosted/install_meilisearch_locally
# Fetch the latest version of Meilisearch image from DockerHub
docker pull getmeili/meilisearch:v1.13
# Launch Meilisearch in development mode with a master key
docker run -it --rm \
-p 7700:7700 \
-e MEILI_ENV='development' \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:v1.13
# Use ${pwd} instead of $(pwd) in PowerShell
- Install the meilisearch package
npm install meilisearch
- Initialize the client:
import { MeiliSearch } from 'meilisearch';
const client = new MeiliSearch({
host: 'http://localhost:7700'
});
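One note: if you start Meilisearch with a master key (which you'll want outside local development), pass it to the client as apiKey. A minimal alternative sketch; the environment variable names here are my own convention, not something Meilisearch mandates:
// Sketch for a keyed setup (env variable names are assumptions)
const client = new MeiliSearch({
  host: process.env.MEILI_HOST ?? 'http://localhost:7700',
  apiKey: process.env.MEILI_MASTER_KEY // omit when running keyless in development
});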
- Create an index in Meilisearch:
const productsIndex = client.index('products');
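Optionally, tell Meilisearch which fields to search on and which to allow filtering on before adding documents. This is a sketch: name and description mirror the fields we were searching with Prisma, while category and price are example fields I'm assuming exist on the product model:
// Optional index settings (category/price are assumed example fields)
await productsIndex.updateSettings({
  searchableAttributes: ['name', 'description'],
  filterableAttributes: ['category', 'price']
});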
- Add a helper function to sync existing products into Meilisearch:
async function syncProductsToMeilisearch() {
  // Pull every product from Postgres and push it into the Meilisearch index
  const products = await prisma.product.findMany();
  await productsIndex.addDocuments(products);
}
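Call it once to backfill the index, then keep the index in sync whenever products change. A minimal sketch of the write path; createProduct is my own helper, not part of Prisma or Meilisearch:
// One-off backfill (run from a script or an admin-only endpoint)
await syncProductsToMeilisearch();

// Keep the index in sync on writes; a background job or Prisma client extension
// would be more robust, but this shows the idea.
async function createProduct(data: { name: string; description: string }) {
  const product = await prisma.product.create({ data });
  await productsIndex.addDocuments([product]);
  return product;
}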
- Add a search route at app/api/search/route.ts:
import { NextRequest, NextResponse } from "next/server";
// productsIndex is the Meilisearch index created above; here it's assumed to be
// exported from a shared module (the import path is just an example).
import { productsIndex } from "@/lib/meilisearch";

export async function GET(request: NextRequest) {
  const searchParams = request.nextUrl.searchParams;
  const query = searchParams.get('q') || '';

  try {
    const results = await productsIndex.search(query, {
      attributesToHighlight: ['name', 'description']
    });
    return NextResponse.json(results);
  } catch (error) {
    console.error('Search error:', error);
    return NextResponse.json({ message: "Internal Server Error" }, { status: 500 });
  }
}
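From the client it's just a fetch. Meilisearch returns the matching documents in hits along with processingTimeMs, so even a typo like "labtop" still comes back with laptop results in a few milliseconds:
// Rough usage sketch from any client-side code
const res = await fetch('/api/search?q=labtop');
const { hits, processingTimeMs } = await res.json();
console.log(`${hits.length} hits in ${processingTimeMs}ms`);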
Benefits of this approach:
- Blazing fast results (typically 25–50ms)
- Typo tolerance out of the box
- Relevance ranking designed for search
- Highlights in the results
- Easy filtering and faceting (sketched below)
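Filtering, for example, is a one-liner once the fields are declared filterable (as in the settings sketch earlier); category and price are still my assumed example fields:
// Assumes 'price' and 'category' were listed in filterableAttributes
const filtered = await productsIndex.search('coffee', {
  filter: ['price < 20', 'category = beverages'],
  facets: ['category']
});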
The Takeaway
Don't repeat my mistake. If search is important to your application, recognize early that SQL databases have fundamental limitations here. They're amazing tools, just not for this specific job.
Use SQL databases with Prisma for what they excel at, and pair them with a dedicated search solution (Elasticsearch, Meilisearch, Algolia, etc.) for text searching. Your users will thank you, your servers will thank you, and your future self will definitely thank you.
Your sanity is worth the extra infrastructure complexity. Trust me on this one.
Connect with Me
If you enjoyed this post and want to stay in the loop with similar content, feel free to follow and connect with me across the web:
- Twitter: Follow me on Twitter for bite-sized tips, updates, and thoughts on tech.
- Medium: Check out my articles on Medium where I share tutorials, insights, and deep dives.
- Email: Got questions, ideas, or just want to say hi? Drop me a line at codezera3@gmail.com.
Your support means a lot, and I'd love to connect, collaborate, or just geek out over cool projects. Looking forward to hearing from you!