- 🔥 Connect: https://xam-heisenberg-company.vercel.app/
- 🔥 GitHub: https://github.com/Subham-Maity
- 🔥 Twitter: https://twitter.com/TheSubhamMaity
- 🔥 LinkedIn: https://www.linkedin.com/in/subham-xam
- 🔥 Insta: https://www.instagram.com/subham_xam
MeiliSearch - The Complete Production Setup Guide (2026)
How I Ended Up Down the Search Engine Rabbit Hole
Okay so here's the thing. I was building a personal finance tracker for myself. Nothing fancy, just something to manage my accounts, transactions, EMI schedules, investment positions... you know, the usual "I got tired of 10 Google Sheets" kind of project. The backend was NestJS + Postgres/Prisma, clean architecture, the works.
At some point I added a global search bar to the frontend. Cmd+K style. You type, results appear across accounts, investments, transactions, all at once. Simple enough to want. Absolutely not simple to build correctly.
My first thought was "just ILIKE %query% in Postgres." Worked for 2 minutes. Then I tried to search for something with a typo and got zero results. Tried to search across 3 tables at once and ended up with the most cursed SQL JOIN you've ever seen. Tried to highlight matching text and gave up.
That's when I actually sat down and looked at what people use for this properly.
Picking a Search Engine: The Honest Comparison
Here's the deal: there are more options than you'd expect, and most tutorials just pick one without explaining why. Let me actually show you the landscape:
| Engine | Open Source | Typo Tolerance | Hosting | Setup Complexity | Speed | Best For |
|---|---|---|---|---|---|---|
| Elasticsearch | Yes (AGPLv3 / SSPL) | Configurable | Self-host / Elastic Cloud | High (JVM, configs, mappings) | Fast at scale | Log analytics, massive datasets, enterprise |
| OpenSearch | Yes (Apache 2) | Configurable | Self-host / AWS | High (same as ES) | Fast at scale | AWS shops, ES alternative |
| Algolia | No (SaaS) | Built-in | Managed only | Very low | Very fast | Startups with budget, e-commerce |
| Typesense | Yes (GPL) | Built-in | Self-host / Cloud | Medium | Very fast | Algolia alternative, smaller datasets |
| MeiliSearch | Yes (MIT) | Built-in | Self-host | Low | Very fast | Developer experience, side projects to production |
| Postgres FTS | Yes | No | Same DB | Zero | Okay | Simple apps, already on Postgres |
| SQLite FTS5 | Yes | No | Same DB | Zero | Fast for small | CLI tools, local apps |
Here's my honest take after going through this:
Elasticsearch is incredibly powerful but it genuinely feels like you need a DevOps certification to operate it in production. It's a JVM app. The config surface area is enormous. For log analytics at scale? Perfect. For a product search bar on a side project? Way overkill.
Algolia is the opposite: it just works, the DX is amazing, and you'll have search running in 20 minutes. Then you look at the pricing for anything above 10k operations/month and close the tab.
Typesense is solid and worth looking at. The API is clean, performance is great. Honestly, if you're choosing between Typesense and MeiliSearch today, it's genuinely close.
MeiliSearch is what I went with. MIT licensed, Docker image is tiny (~50MB), search results appear in <50ms in most cases, typo tolerance works out of the box, and the JS SDK is properly typed. The configuration model is simple enough that I understood the whole thing in one afternoon. It also has the best NestJS integration story of the bunch.
If you're on Postgres already and your search needs are genuinely simple (one table, no typo tolerance needed), pg_trgm or full-text search is worth trying first. But the moment you want multi-index search, typo tolerance, or highlighted snippets, reach for a dedicated engine.
I'm using MeiliSearch. The rest of this post is the complete production setup.
Before You Touch Any Code: Read These Five Rules
I spent about half a day debugging issues that all traced back to violating one of these. Saving you the pain:
Rule 1: MEILISEARCH_URL is the only thing that changes between environments.
Your NestJS code never hardcodes a host. It always reads from process.env.MEILISEARCH_URL. The .env file is what changes per machine and per deployment topology; the code stays identical everywhere.
Rule 2: Never use the master key in your application code.
The master key is a one-time bootstrap tool. Use it once to generate scoped API keys, then your app only ever uses those scoped keys. The master key never appears in search.service.ts or any other NestJS code.
Rule 3: Always pin the MeiliSearch Docker image version.
Never use :latest. It silently jumps between incompatible versions, and after an upgrade the new binary won't open the old data directory. Always pin an exact tag: getmeili/meilisearch:v1.37.0.
Rule 4: MEILI_ENV=production is not optional.
Development mode accepts unauthenticated requests, which is a security hole. Production mode enforces API key auth on every request. Set it everywhere, not just on the production server.
Rule 5: Port 7700 must always be reachable from wherever NestJS runs.
MeiliSearch always runs in Docker. NestJS might run in Docker, raw on the same machine, or on a completely different server. The only requirement: port 7700 on the MeiliSearch host must be reachable from wherever NestJS is.
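Rules 1 and 2 boil down to "read everything from the environment and fail fast when it's missing." Here's that idea as a minimal sketch (the `requireEnv` helper is my name, not something from the post's codebase):

```typescript
// Fail fast at boot if a required variable is missing, instead of
// failing later with a confusing connection error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`${name} is not set in the environment`);
  return value;
}

// At boot (e.g. in main.ts), before wiring anything else:
// const meiliUrl = requireEnv('MEILISEARCH_URL');
// const backendKey = requireEnv('MEILISEARCH_BACKEND_KEY');
```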
How MEILISEARCH_URL Works Across All Deployments
This is the entire mental model. Everything else follows from it.
| Deployment scenario | Where NestJS runs | Where MeiliSearch runs | MEILISEARCH_URL in .env |
|---|---|---|---|
| Local dev β both in Docker Compose | Docker container | Docker container (same compose) | http://meilisearch:7700 |
| Local dev β NestJS raw, Meili in Docker | Directly on Windows/Mac/Linux host | Docker on same machine | http://localhost:7700 |
| Production β both in Docker Compose | Docker container | Docker container (same compose) | http://meilisearch:7700 |
| Production β NestJS on server, Meili on same server | Directly on Linux server | Same server (Docker, port exposed) | http://localhost:7700 |
| Production β NestJS on one server, Meili on separate server | Server A | Server B | http://<SERVER_B_IP>:7700 |
The rule in one sentence: If NestJS and MeiliSearch are in the same Docker Compose network, use the service name meilisearch. If NestJS is outside Docker (running raw on a host), use localhost or the remote server IP depending on where MeiliSearch is.
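The table is really just a three-way branch. As a sketch (the helper and its `Runtime` type are mine, purely to illustrate the mapping):

```typescript
// Which URL goes in .env, given where NestJS runs relative to MeiliSearch.
type Runtime = 'compose' | 'host' | 'remote';

function meiliUrlFor(nest: Runtime, meiliHostIp?: string): string {
  if (nest === 'compose') return 'http://meilisearch:7700'; // same Compose network → service name
  if (nest === 'remote' && meiliHostIp) return `http://${meiliHostIp}:7700`;
  return 'http://localhost:7700'; // NestJS raw on the same machine as the container
}

console.log(meiliUrlFor('compose'));            // http://meilisearch:7700
console.log(meiliUrlFor('host'));               // http://localhost:7700
console.log(meiliUrlFor('remote', '10.0.0.8')); // http://10.0.0.8:7700
```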
Most of us doing local dev run NestJS raw (npm run start:dev) and MeiliSearch in Docker. That's Scenario A:
Your Machine
├── NestJS → runs raw (npm run start:dev)
└── Docker Desktop
    └── meilisearch container → port 7700 exposed to host
Your .env just needs:
MEILISEARCH_URL=http://localhost:7700
With ports: "7700:7700" in the compose file, port 7700 inside the container maps to port 7700 on your machine. NestJS talks to localhost:7700 like any other local service. This works on Windows, Mac, and Linux as long as Docker Desktop (or the Docker daemon) is running.
Directory Structure
Here's what the full setup looks like before we start writing files:
project-root/
├── .env                      ← secrets for this machine, never commit
├── .env.example              ← committed template with placeholder values
├── docker-compose.yml        ← runs MeiliSearch (and optionally your app)
├── Dockerfile                ← only needed if you run NestJS in Docker too
├── scripts/
│   └── create-meili-keys.sh  ← one-time key generation (run once per environment)
└── src/
    └── search/
        ├── index.ts          ← barrel
        ├── search.module.ts  ← @Global module
        ├── search.service.ts ← MeiliSearch client wrapper
        ├── search.config.ts  ← config via ConfigService
        └── indexes/
            ├── index.ts      ← barrel
            └── [feature].index.ts ← one file per index (settings + type-safe queries)
⚠️ Important SDK Fixes: Read Before Writing Any Code (v0.40+)
Three breaking changes caught me off guard. Flagging them upfront:
1. Client casing changed. The meilisearch npm package updated its primary export from MeiliSearch to Meilisearch (lowercase s). If you hit TypeError: MeiliSearch is not a constructor, this is why. Use the lowercase variant everywhere:
// ❌ Old
import { MeiliSearch } from 'meilisearch';
const client = new MeiliSearch({ ... });

// ✅ Current
import { Meilisearch } from 'meilisearch';
const client = new Meilisearch({ ... });
2. waitForTask moved. client.waitForTask() no longer exists on the root client. Use client.tasks.waitForTask(uid). Also, tasks that fail (like "index already exists") no longer throw; they return a task object where status === 'failed'. Always check completedTask.status === 'failed' explicitly.
3. Rate limiting on search endpoints. A global search bar triggering parallel requests (hitting /search for 4 indices at once) will instantly exhaust default NestJS Throttler limits (10 req/min), giving HTTP 429. Fix: add @Throttle({ default: { limit: 120, ttl: 60000 } }) on your search controllers.
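Point 2 deserves a concrete guard. Since failed tasks now resolve normally instead of throwing, something like this keeps the check from being forgotten (names are mine; the task shape is trimmed to just the fields used):

```typescript
// Minimal slice of the task object returned by client.tasks.waitForTask()
type TaskLike = {
  status: 'enqueued' | 'processing' | 'succeeded' | 'failed' | 'canceled';
  error?: { message: string };
};

// v0.40+: a failed task does NOT throw, so check the status explicitly.
function assertTaskSucceeded(task: TaskLike): void {
  if (task.status === 'failed') {
    throw new Error(`MeiliSearch task failed: ${task.error?.message ?? 'unknown error'}`);
  }
}
```

Call it right after every waitForTask so "index already exists"-style failures surface immediately instead of passing silently.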
Step 1: .env File
Create .env in your project root. Never commit this file.
# ─── MeiliSearch ─────────────────────────────────────────────────────────────
#
# Set MEILISEARCH_URL to match YOUR deployment topology:
#
# NestJS in Docker + MeiliSearch in same Docker Compose:
# MEILISEARCH_URL=http://meilisearch:7700
#
# NestJS running raw on same machine as MeiliSearch Docker:
# MEILISEARCH_URL=http://localhost:7700
#
# NestJS on a separate server from MeiliSearch:
# MEILISEARCH_URL=http://<MEILISEARCH_SERVER_IP>:7700
#
MEILISEARCH_URL=http://localhost:7700
MEILISEARCH_MASTER_KEY=your-master-key-minimum-16-chars-change-this
MEILISEARCH_BACKEND_KEY=fill-this-in-after-step-4
MEILISEARCH_SEARCH_KEY=fill-this-in-after-step-4
Commit .env.example (safe to commit, no real values):
MEILISEARCH_URL=REPLACE_ME_see_comments_above
MEILISEARCH_MASTER_KEY=REPLACE_ME_min_16_chars
MEILISEARCH_BACKEND_KEY=REPLACE_ME_after_key_generation
MEILISEARCH_SEARCH_KEY=REPLACE_ME_after_key_generation
Generate a strong master key:
openssl rand -hex 32
Minimum 16 characters required in production mode. 32+ recommended.
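If you'd rather stay in Node than shell out to openssl, the stdlib equivalent is:

```typescript
import { randomBytes } from 'node:crypto';

// 32 random bytes → 64 hex characters, same output shape as `openssl rand -hex 32`
const masterKey = randomBytes(32).toString('hex');
console.log(masterKey);
```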
Step 2: docker-compose.yml
This file runs MeiliSearch. It does not include your NestJS app; your app runs however you normally run it (npm run start:dev, node dist/main.js, or its own Docker container). MeiliSearch is the only required service here.
services:
  meilisearch:
    image: getmeili/meilisearch:v1.37.0   # pinned version, never :latest
    restart: unless-stopped               # auto-restarts on server reboot or crash
    ports:
      - "7700:7700"                       # exposes the port to the host
    environment:
      - MEILI_ENV=production              # enforces API key auth on ALL requests
      - MEILI_MASTER_KEY=${MEILISEARCH_MASTER_KEY}
      - MEILI_DB_PATH=/meili_data/data.ms
      - MEILI_DUMP_DIR=/meili_data/dumps
      - MEILI_SNAPSHOT_DIR=/meili_data/snapshots
      - MEILI_SNAPSHOT_INTERVAL_SEC=86400 # auto-snapshot every 24h
    volumes:
      - meilidata:/meili_data             # named volume, works on Windows, Mac, Linux
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:7700/health"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s

volumes:
  meilidata:
Why each decision:
- `ports: "7700:7700"`: always exposed regardless of topology. NestJS outside Docker reaches it via localhost:7700; NestJS on a remote server via `<ip>:7700`; NestJS inside the same Docker Compose uses the service name meilisearch:7700 internally. Keeping the port exposed is harmless and allows curl/browser access for debugging.
- Named volume `meilidata`: not a host path, so it works cross-platform with no permission issues on Windows or Mac.
- `restart: unless-stopped`: MeiliSearch comes back automatically after a machine reboot or a Docker Desktop restart.
- `MEILI_SNAPSHOT_INTERVAL_SEC=86400`: automatic daily snapshots inside the volume. Data is always recoverable without manual steps.
- `start_period: 20s`: health-check grace period so Docker doesn't mark the container unhealthy while it's still booting.
If you also want to run NestJS in Docker (e.g. production server), add this optional app service block to the same file:
  # ── Optional: add this service only if NestJS runs in Docker too ──────────
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    env_file:
      - .env
    depends_on:
      meilisearch:
        condition: service_healthy   # waits for MeiliSearch to pass its health check
    networks:
      - app-network
    restart: unless-stopped

# ── Add this network block only if you added the app service above ──────────
networks:
  app-network:
    driver: bridge

# ── Also add meilisearch to the same network if you added the app service ───
# Under the meilisearch service, add:
#   networks:
#     - app-network
# And change MEILISEARCH_URL in .env to: http://meilisearch:7700
Step 3: Dockerfile (Only if NestJS Runs in Docker)
Skip this step entirely if you run NestJS raw with npm run start:dev or node dist/main.js.
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# ── Production image: no dev dependencies ───────────────────────────────────
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/main.js"]
Step 4: Generate Scoped API Keys (One-time Per Environment)
Run this once when you first set up MeiliSearch on any machine. The keys persist in the Docker volume β they survive restarts and reboots.
4a: Start MeiliSearch
docker compose up meilisearch -d
# Wait until healthy
docker compose ps
# meilisearch should show: healthy
Verify it's up from your host machine:
curl http://localhost:7700/health
# → {"status":"available"}
4b: Run the Key Generation Script
On Mac/Linux or Windows WSL/Git Bash:
Create scripts/create-meili-keys.sh:
#!/bin/bash
set -e
if [ -z "$MEILISEARCH_MASTER_KEY" ]; then
echo "ERROR: MEILISEARCH_MASTER_KEY is not set."
echo "Run: export MEILISEARCH_MASTER_KEY=your-key"
exit 1
fi
MEILI_HOST="http://localhost:7700"
echo ""
echo "=== Creating backend key (full write access, NestJS server only) ==="
curl -s -X POST "${MEILI_HOST}/keys" \
-H "Authorization: Bearer ${MEILISEARCH_MASTER_KEY}" \
-H "Content-Type: application/json" \
--data '{
"name": "backend-key",
"description": "NestJS backend: write, index, search. Never expose to frontend.",
"actions": [
"documents.add",
"documents.delete",
"documents.get",
"indexes.create",
"indexes.get",
"indexes.update",
"indexes.delete",
"settings.get",
"settings.update",
"tasks.get",
"search"
],
"indexes": ["*"],
"expiresAt": null
}'
echo ""
echo "=== Creating search-only key (safe to expose to frontend clients) ==="
curl -s -X POST "${MEILI_HOST}/keys" \
-H "Authorization: Bearer ${MEILISEARCH_MASTER_KEY}" \
-H "Content-Type: application/json" \
--data '{
"name": "search-key",
"description": "Frontend search only: read only, no write access.",
"actions": ["search"],
"indexes": ["*"],
"expiresAt": null
}'
echo ""
echo "=== Done. Copy the 'key' field values above into your .env ==="
echo " MEILISEARCH_BACKEND_KEY = key from backend-key response"
echo " MEILISEARCH_SEARCH_KEY = key from search-key response"
Then run it:
export MEILISEARCH_MASTER_KEY=your-master-key-here
bash scripts/create-meili-keys.sh
On Windows PowerShell (if you don't have WSL/Git Bash):
$masterKey = "your-master-key-here"
# Backend key
curl.exe -X POST "http://localhost:7700/keys" `
-H "Authorization: Bearer $masterKey" `
-H "Content-Type: application/json" `
--data '{\"name\":\"backend-key\",\"actions\":[\"documents.add\",\"documents.delete\",\"documents.get\",\"indexes.create\",\"indexes.get\",\"indexes.update\",\"indexes.delete\",\"settings.get\",\"settings.update\",\"tasks.get\",\"search\"],\"indexes\":[\"*\"],\"expiresAt\":null}'
# Search key
curl.exe -X POST "http://localhost:7700/keys" `
-H "Authorization: Bearer $masterKey" `
-H "Content-Type: application/json" `
--data '{\"name\":\"search-key\",\"actions\":[\"search\"],\"indexes\":[\"*\"],\"expiresAt\":null}'
Copy the key field from each response into your .env:
MEILISEARCH_BACKEND_KEY=paste-backend-key-here
MEILISEARCH_SEARCH_KEY=paste-search-key-here
Key scoping rules (important):

- `MEILISEARCH_BACKEND_KEY`: used by NestJS server code. Full write access. Never sent to the browser.
- `MEILISEARCH_SEARCH_KEY`: safe for frontend clients. Search-only, read-only.
- `MEILISEARCH_MASTER_KEY`: used only in this script. Never appears in NestJS code, never sent anywhere.
- Keys persist in the Docker volume across restarts. If you ever lose the values, don't regenerate: list the existing keys with GET /keys (authorized with the master key) and copy them again.
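If you want Rule 2 ("never use the master key in app code") enforced mechanically, a cheap boot-time guard could look like this (the function is hypothetical, not part of the setup above):

```typescript
// Refuse to boot if someone pasted the master key where a scoped key belongs.
function assertNotMasterKey(appKey: string, masterKey: string | undefined): void {
  if (masterKey && appKey === masterKey) {
    throw new Error(
      'MEILISEARCH_BACKEND_KEY must be a scoped key generated in Step 4, not the master key',
    );
  }
}

// e.g. in main.ts:
// assertNotMasterKey(process.env.MEILISEARCH_BACKEND_KEY!, process.env.MEILISEARCH_MASTER_KEY);
```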
Step 5: Install the JS SDK
npm install meilisearch
Official SDK. Fully typed. No community wrapper needed.
Step 6: NestJS Integration
Four files make up the search infrastructure. All of them live in src/search/.
src/search/search.config.ts
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';

@Injectable()
export class SearchConfig {
  constructor(private readonly configService: ConfigService) {}

  // Reads MEILISEARCH_URL from .env.
  // This is the only place the host is read; it is never hardcoded anywhere else.
  get host(): string {
    const url = this.configService.get<string>('MEILISEARCH_URL');
    if (!url) throw new Error('MEILISEARCH_URL is not set in environment');
    return url;
  }

  // Backend key: used for all server-side operations (write + search)
  get backendApiKey(): string {
    const key = this.configService.get<string>('MEILISEARCH_BACKEND_KEY');
    if (!key) throw new Error('MEILISEARCH_BACKEND_KEY is not set in environment');
    return key;
  }

  // Search key: safe to return to the frontend for direct client-side search
  get searchApiKey(): string {
    const key = this.configService.get<string>('MEILISEARCH_SEARCH_KEY');
    if (!key) throw new Error('MEILISEARCH_SEARCH_KEY is not set in environment');
    return key;
  }
}
src/search/search.service.ts
This is the wrapper around the MeiliSearch client. All other parts of your app go through this service β they never instantiate their own client.
import { Injectable, Logger, OnModuleInit } from '@nestjs/common';
import { Meilisearch, Index, SearchParams, SearchResponse, Settings } from 'meilisearch';
import { SearchConfig } from './search.config';

@Injectable()
export class SearchService implements OnModuleInit {
  private readonly logger = new Logger(SearchService.name);
  private client: Meilisearch;

  constructor(private readonly config: SearchConfig) {}

  onModuleInit() {
    // host resolves from MEILISEARCH_URL in .env;
    // works whether NestJS is in Docker, raw on Windows/Mac/Linux, or on a remote server
    this.client = new Meilisearch({
      host: this.config.host,
      apiKey: this.config.backendApiKey,
    });
    this.logger.log(`MeiliSearch client initialized → ${this.config.host}`);
  }

  // ─── Health ──────────────────────────────────────────────────────────────
  async ping(): Promise<boolean> {
    try {
      await this.client.health();
      return true;
    } catch {
      return false;
    }
  }

  // ─── Index access ────────────────────────────────────────────────────────
  getIndex<T>(indexName: string): Index<T> {
    return this.client.index<T>(indexName);
  }

  // ─── Documents ───────────────────────────────────────────────────────────
  async addDocuments<T extends Record<string, unknown>>(
    indexName: string,
    documents: T[],
    primaryKey = 'id',
  ) {
    const task = await this.client.index<T>(indexName).addDocuments(documents, { primaryKey });
    this.logger.log(`Enqueued addDocuments → ${indexName} (task ${task.taskUid})`);
    return task;
  }

  async updateDocuments<T extends Record<string, unknown>>(
    indexName: string,
    documents: T[],
    primaryKey = 'id',
  ) {
    const task = await this.client.index<T>(indexName).updateDocuments(documents, { primaryKey });
    this.logger.log(`Enqueued updateDocuments → ${indexName} (task ${task.taskUid})`);
    return task;
  }

  async deleteDocument(indexName: string, documentId: string | number) {
    const task = await this.client.index(indexName).deleteDocument(documentId);
    this.logger.log(`Enqueued deleteDocument → ${indexName}/${documentId} (task ${task.taskUid})`);
    return task;
  }

  async deleteDocuments(indexName: string, documentIds: Array<string | number>) {
    const task = await this.client.index(indexName).deleteDocuments(documentIds);
    this.logger.log(`Enqueued deleteDocuments → ${indexName} (${documentIds.length} docs, task ${task.taskUid})`);
    return task;
  }

  // ─── Search ──────────────────────────────────────────────────────────────
  async search<T>(
    indexName: string,
    query: string,
    params?: SearchParams,
  ): Promise<SearchResponse<T>> {
    return this.client.index<T>(indexName).search(query, params);
  }

  // ─── Settings ────────────────────────────────────────────────────────────
  async updateIndexSettings(indexName: string, settings: Settings) {
    const task = await this.client.index(indexName).updateSettings(settings);
    this.logger.log(`Enqueued updateSettings → ${indexName} (task ${task.taskUid})`);
    return task;
  }

  // ─── Tasks ───────────────────────────────────────────────────────────────
  async waitForTask(taskUid: number) {
    // v0.40+: waitForTask lives on client.tasks, not the root client
    return this.client.tasks.waitForTask(taskUid);
  }

  // ─── Backup ──────────────────────────────────────────────────────────────
  async createDump() {
    const task = await this.client.createDump();
    this.logger.log(`Dump creation enqueued (task ${task.taskUid})`);
    return task;
  }

  // ─── Raw client (escape hatch for SDK features not wrapped above) ────────
  getClient(): Meilisearch {
    return this.client;
  }
}
src/search/search.module.ts
import { Global, Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { SearchConfig } from './search.config';
import { SearchService } from './search.service';

@Global() // any module can inject SearchService without re-importing SearchModule
@Module({
  imports: [ConfigModule],
  providers: [SearchConfig, SearchService],
  exports: [SearchConfig, SearchService],
})
export class SearchModule {}
src/search/index.ts
export * from './search.module';
export * from './search.service';
export * from './search.config';
export * from './indexes';
Step 7: Per-Index Files (One Per Index)
Each index gets its own file. Define searchable/filterable/sortable attributes here. These settings are applied on every app startup via OnModuleInit; the operation is idempotent, so it's safe to run repeatedly.
src/search/indexes/product.index.ts
This is an example. Replace product with whatever your domain is: investments, accounts, transactions, whatever.
import { Injectable, Logger, OnModuleInit } from '@nestjs/common';
import { SearchService } from '../search.service';

export const PRODUCT_INDEX = 'products';

export interface ProductDocument {
  id: string;
  name: string;
  description: string;
  brand: string;
  category: string;
  price: number;
  inStock: boolean;
  createdAt: string;
}

@Injectable()
export class ProductIndex implements OnModuleInit {
  private readonly logger = new Logger(ProductIndex.name);

  constructor(private readonly searchService: SearchService) {}

  async onModuleInit() {
    await this.applySettings();
  }

  private async applySettings() {
    try {
      const task = await this.searchService.updateIndexSettings(PRODUCT_INDEX, {
        // Only fields users will search; controls relevance and index size
        searchableAttributes: ['name', 'description', 'brand', 'category'],
        // Fields available for filter queries
        filterableAttributes: ['category', 'brand', 'price', 'inStock'],
        // Fields available for sort queries
        sortableAttributes: ['price', 'createdAt', 'name'],
        rankingRules: [
          'words',     // more query words matched = higher rank
          'typo',      // fewer typos = higher rank
          'proximity', // query words closer together = higher rank
          'attribute', // earlier searchableAttribute = higher rank
          'sort',      // user-specified sort order
          'exactness', // exact match = higher rank
        ],
        typoTolerance: {
          enabled: true,
          minWordSizeForTypos: {
            oneTypo: 4,  // allow 1 typo for words >= 4 chars
            twoTypos: 8, // allow 2 typos for words >= 8 chars
          },
        },
        pagination: {
          maxTotalHits: 1000,
        },
      });
      await this.searchService.waitForTask(task.taskUid);
      this.logger.log(`Settings applied → ${PRODUCT_INDEX}`);
    } catch (error) {
      // Log but do not throw: the app still starts even if settings fail
      this.logger.error(`Failed to apply settings → ${PRODUCT_INDEX}`, (error as Error).stack);
    }
  }

  // ─── Type-safe query methods ─────────────────────────────────────────────
  async searchProducts(
    query: string,
    options?: {
      category?: string;
      brand?: string;
      inStock?: boolean;
      maxPrice?: number;
      page?: number;
    },
  ) {
    const filters: string[] = [];
    if (options?.category) filters.push(`category = "${options.category}"`);
    if (options?.brand) filters.push(`brand = "${options.brand}"`);
    if (options?.inStock !== undefined) filters.push(`inStock = ${options.inStock}`);
    if (options?.maxPrice !== undefined) filters.push(`price <= ${options.maxPrice}`);

    return this.searchService.search<ProductDocument>(PRODUCT_INDEX, query, {
      filter: filters.length ? filters.join(' AND ') : undefined,
      hitsPerPage: 20,
      page: options?.page ?? 1,
      attributesToHighlight: ['name', 'description'],
    });
  }

  async indexProducts(products: ProductDocument[]) {
    return this.searchService.addDocuments(PRODUCT_INDEX, products, 'id');
  }

  async removeProduct(productId: string) {
    return this.searchService.deleteDocument(PRODUCT_INDEX, productId);
  }
}
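The filter-building part of searchProducts is worth seeing in isolation, since the exact string MeiliSearch receives is what usually trips people up. Here it is extracted into a standalone function (same logic as above, for illustration only):

```typescript
// Builds the MeiliSearch filter expression from optional query options.
function buildFilter(opts: {
  category?: string;
  brand?: string;
  inStock?: boolean;
  maxPrice?: number;
}): string {
  const parts: string[] = [];
  if (opts.category) parts.push(`category = "${opts.category}"`);
  if (opts.brand) parts.push(`brand = "${opts.brand}"`);
  if (opts.inStock !== undefined) parts.push(`inStock = ${opts.inStock}`);
  if (opts.maxPrice !== undefined) parts.push(`price <= ${opts.maxPrice}`);
  return parts.join(' AND ');
}

console.log(buildFilter({ category: 'Electronics', inStock: true, maxPrice: 200 }));
// category = "Electronics" AND inStock = true AND price <= 200
```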
src/search/indexes/index.ts
export * from './product.index';
// export * from './[feature].index'; ← add each new index here
Step 8: Wire into AppModule
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { SearchModule } from './search';
import { ProductIndex } from './search/indexes';

@Module({
  imports: [
    ConfigModule.forRoot({ isGlobal: true }),
    SearchModule,
    // ... rest of your modules
  ],
  providers: [
    ProductIndex, // registers the OnModuleInit settings sync
  ],
})
export class AppModule {}
Step 9: Start Everything
Scenario A: NestJS raw on same machine, MeiliSearch in Docker
# 1. Start MeiliSearch (Docker Desktop must be running on Windows)
docker compose up meilisearch -d
# 2. Verify it's healthy
curl http://localhost:7700/health
# → {"status":"available"}
# 3. Start NestJS normally
npm run start:dev
# or
npm run start:prod
Your .env has MEILISEARCH_URL=http://localhost:7700.
Scenario B: Both NestJS and MeiliSearch in Docker Compose
docker compose up --build -d
docker compose ps # both services should show healthy/running
Your .env has MEILISEARCH_URL=http://meilisearch:7700.
Scenario C: NestJS on one server, MeiliSearch on a separate server
# On the MeiliSearch server:
docker compose up meilisearch -d
# On the NestJS server β .env has:
# MEILISEARCH_URL=http://<MEILISEARCH_SERVER_IP>:7700
npm run start:prod
Firewall note for Scenario C: port 7700 on the MeiliSearch server must allow inbound TCP connections from the NestJS server's IP. Restrict it to only that IP; do not open 7700 to the public internet.
Step 10: Adding a New Index (Checklist)
Every time you add a new searchable feature, touch these files in order:
1. Create src/search/indexes/my-feature.index.ts
import { Injectable, Logger, OnModuleInit } from '@nestjs/common';
import { SearchService } from '../search.service';

export const MY_FEATURE_INDEX = 'my-feature';

export interface MyFeatureDocument {
  id: string;
  title: string;
  body: string;
  // only fields the search actually needs
}

@Injectable()
export class MyFeatureIndex implements OnModuleInit {
  private readonly logger = new Logger(MyFeatureIndex.name);

  constructor(private readonly searchService: SearchService) {}

  async onModuleInit() {
    try {
      const task = await this.searchService.updateIndexSettings(MY_FEATURE_INDEX, {
        searchableAttributes: ['title', 'body'],
        filterableAttributes: ['status', 'authorId'],
        sortableAttributes: ['createdAt'],
      });
      await this.searchService.waitForTask(task.taskUid);
      this.logger.log(`Settings applied → ${MY_FEATURE_INDEX}`);
    } catch (error) {
      this.logger.error(`Settings failed → ${MY_FEATURE_INDEX}`, (error as Error).stack);
    }
  }

  async search(query: string) {
    return this.searchService.search<MyFeatureDocument>(MY_FEATURE_INDEX, query);
  }

  async index(documents: MyFeatureDocument[]) {
    return this.searchService.addDocuments(MY_FEATURE_INDEX, documents, 'id');
  }
}
2. Add barrel export in src/search/indexes/index.ts:
export * from './my-feature.index';
3. Register in AppModule providers:
providers: [MyFeatureIndex],
4. Export from your feature module if other modules need to inject it:
exports: [MyFeatureIndex],
The Full End-to-End Architecture (Production Pattern)
When you're building something real, you don't want MeiliSearch writes to be synchronous blocking calls in your main request path. If MeiliSearch has a hiccup, you don't want your POST /investments endpoint to fail.
Here's the pattern that handles this correctly:
1. SearchOutboxHelper: Write Events Atomically with Your Entity
Why: If MeiliSearch crashes or is temporarily unavailable, your app must still function. By using the Outbox Pattern, you write the search event into PostgreSQL (your source of truth) within the same transaction as the entity creation. A background worker then safely syncs it to MeiliSearch.
Inside your repository, wrap Prisma queries in a $transaction and call the outbox helper:
import { writeSearchOutbox } from 'src/search/helpers/search-outbox.helper';

async createInvestment(data: any, userId: string) {
  return this.prisma.$transaction(async (tx) => {
    // 1. Create the entity
    const investment = await tx.investments.create({ data });

    // 2. Queue it for search indexing (atomic: same transaction)
    await writeSearchOutbox(tx, {
      entity_type: 'investments',
      entity_id: investment.id,
      action: 'UPSERT',
      user_id: userId,
      payload: { ...investment },
    });

    return investment;
  });
}
2. BullMQ Worker: Queue Processing
The SearchQueueService listens for PostgreSQL AuditLog outbox records and enqueues them to BullMQ (search-sync queue). The SearchSyncProcessor consumes these jobs, pushes the payload to MeiliSearch via SearchService, and marks the record as search_synced = true.
If MeiliSearch is down, the jobs stay in the queue and retry. Your core app keeps working.
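To make the flow concrete, here's the sync loop reduced to an in-memory sketch. The row shape mirrors the outbox fields used above; everything else is illustrative, not the actual BullMQ processor:

```typescript
// One outbox row, as written inside the entity's transaction.
type OutboxRow = {
  id: number;
  entity_type: string;
  action: 'UPSERT' | 'DELETE';
  payload: unknown;
  search_synced: boolean;
};

// Drain unsynced rows: push each to MeiliSearch, mark synced only on success.
// A failed push leaves the row unsynced, so the next drain retries it.
async function drainOutbox(
  rows: OutboxRow[],
  pushToMeili: (row: OutboxRow) => Promise<void>, // SearchService.addDocuments / deleteDocument
): Promise<void> {
  for (const row of rows.filter((r) => !r.search_synced)) {
    try {
      await pushToMeili(row);
      row.search_synced = true;
    } catch {
      // MeiliSearch is down or rejected the task; leave the row for the next pass
    }
  }
}
```

In the real setup, BullMQ's retry/backoff plays the role of "the next pass," but the invariant is the same: a row is only marked synced after MeiliSearch accepts it.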
3. Search Controller: Tenant-Isolated API Route
We don't expose the MeiliSearch search key directly to the frontend here; we want backend-level tenant isolation (filter by user_id on every query so users can't see each other's data):
@Get()
@Throttle({ default: { limit: 120, ttl: 60000 } }) // higher limit for fast debounced typing
async search(
  @GetAdmin('id') userId: string,
  @Query('q') query: string,
  @Query('service') service: string, // e.g. 'investments'
) {
  return this.searchService.search(service, query, { filter: `user_id = "${userId}"` });
}
4. Frontend Integration: Global Search Bar
const handleSearch = useDebouncedCallback(async (query: string) => {
  // Fire requests concurrently across multiple indices
  const [investments, accounts] = await Promise.all([
    axios.get(`/xam/search?q=${query}&service=investments`),
    axios.get(`/xam/search?q=${query}&service=accounts`),
  ]);
  setResults([...investments.data.hits, ...accounts.data.hits]);
}, 300); // 300ms debounce
The @Throttle decorator on the controller is why this works without 429s. Standard NestJS throttler defaults would kill this immediately at 4 concurrent requests per keystroke.
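If you're not using the use-debounce hook, the debounce itself is a few lines of plain TypeScript:

```typescript
// Delay fn until `ms` of silence; rapid repeat calls reset the timer,
// so only the last call in a burst actually fires.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}
```

Wrap handleSearch in it the same way; 300ms is a reasonable starting point for search-as-you-type.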
Backup and Recovery
Dumps: Use for Version Migrations
Portable, human-readable export. Works across MeiliSearch versions.
# Create a dump
curl -X POST http://localhost:7700/dumps \
-H "Authorization: Bearer ${MEILISEARCH_BACKEND_KEY}"
# → returns { "taskUid": 1 }
# Poll until status = "succeeded"
curl http://localhost:7700/tasks/1 \
-H "Authorization: Bearer ${MEILISEARCH_BACKEND_KEY}"
# Dump file is at /meili_data/dumps/ inside the Docker volume
Snapshots: Use for Fast Rollbacks
Binary-exact copy. Faster to restore than dumps, but version-specific; do NOT use snapshots across MeiliSearch version upgrades.
Daily auto-snapshots are already configured in the docker-compose.yml above via MEILI_SNAPSHOT_INTERVAL_SEC=86400. Snapshots live at /meili_data/snapshots/ inside the volume.
Upgrading MeiliSearch Versions
# Step 1: Create a dump from the running instance
curl -X POST http://localhost:7700/dumps \
-H "Authorization: Bearer ${MEILISEARCH_BACKEND_KEY}"
# Step 2: Wait for task to succeed, note the dump filename
# Step 3: Update docker-compose.yml
#   getmeili/meilisearch:v1.37.0 → getmeili/meilisearch:vX.Y.Z
# Step 4: Temporarily add import command to docker-compose.yml meilisearch service:
# command: ["meilisearch", "--import-dump=/meili_data/dumps/YOUR_DUMP_FILE.dump"]
# Step 5: Start the new version
docker compose up meilisearch -d
# Step 6: Verify data is intact, then remove the --import-dump command line
# Step 7: Restart normally
docker compose up meilisearch -d --force-recreate
Verify Everything is Connected
# Health check (from host machine: always localhost:7700)
curl http://localhost:7700/health
# -> {"status":"available"}
# List API keys (uses master key)
curl http://localhost:7700/keys \
-H "Authorization: Bearer ${MEILISEARCH_MASTER_KEY}"
# Verify backend key works
curl http://localhost:7700/indexes \
-H "Authorization: Bearer ${MEILISEARCH_BACKEND_KEY}"
# Check logs
docker compose logs meilisearch
docker compose logs app # only if NestJS is in Docker
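In CI or a deploy script you usually want to block until the container actually reports healthy, rather than racing the first search request against a still-booting MeiliSearch. Here's a small retry helper; the `fetch` probe against `/health` in the comment is an assumption (Node 18+ global fetch), and the helper itself is just a generic poll loop:

```typescript
// Polls a health probe until it returns true, or gives up after `attempts`.
async function waitForHealthy(
  probe: () => Promise<boolean>,
  attempts = 10,
  delayMs = 500,
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    try {
      if (await probe()) return true;
    } catch {
      // connection refused while the container is still starting -- retry
    }
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return false;
}

// Example probe (assumes Node 18+ global fetch):
// const ok = await waitForHealthy(async () => {
//   const res = await fetch(`${process.env.MEILISEARCH_URL}/health`);
//   return res.ok;
// });
```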
Key API Reference (meilisearch JS SDK)
Quick cheat sheet for the most common operations:
import { Meilisearch } from 'meilisearch';
const client = new Meilisearch({
host: process.env.MEILISEARCH_URL, // from .env, never hardcoded
apiKey: process.env.MEILISEARCH_BACKEND_KEY,
});
const index = client.index('products');
// Add / update documents
await index.addDocuments(docs, { primaryKey: 'id' });
await index.updateDocuments(docs, { primaryKey: 'id' });
// Delete
await index.deleteDocument('doc-id-123');
await index.deleteDocuments(['id1', 'id2', 'id3']);
await index.deleteAllDocuments();
// Search
const results = await index.search('wireless headphones', {
filter: 'category = "Electronics" AND price < 200',
sort: ['price:asc'],
hitsPerPage: 20,
page: 1,
attributesToHighlight: ['name', 'description'],
});
// results.hits -> matched documents
// results.totalHits -> total count
// results.processingTimeMs -> query latency in ms
// Settings (idempotent: safe to call on every startup)
await index.updateSettings({
searchableAttributes: ['name', 'description'],
filterableAttributes: ['category', 'price'],
sortableAttributes: ['price', 'createdAt'],
});
// All write operations are async: poll the task to confirm
const task = await index.addDocuments(docs);
const result = await client.tasks.waitForTask(task.taskUid); // v0.40+ syntax
// result.status: 'succeeded' | 'failed' | 'enqueued' | 'processing'
// Health
const health = await client.health();
// health.status: 'available'
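One thing the cheat sheet glosses over: filter strings are built by concatenation, so a user-supplied value containing a double quote can break out of the expression (the `user_id = "${userId}"` filter in the controller has this shape). A small defensive helper; this assumes the MeiliSearch filter grammar accepts backslash-escaped quotes inside double-quoted values, and the function names here are my own, not SDK APIs:

```typescript
// Escapes backslashes and double quotes so a user-supplied value
// can't terminate the quoted string early in a filter expression.
function meiliFilterValue(value: string): string {
  return `"${value.replace(/\\/g, "\\\\").replace(/"/g, '\\"')}"`;
}

// Builds the per-user scope filter, optionally AND-ed with extra clauses.
function userScopedFilter(userId: string, extra?: string): string {
  const base = `user_id = ${meiliFilterValue(userId)}`;
  return extra ? `${base} AND (${extra})` : base;
}
```

Since `userId` comes from your own auth layer it's rarely hostile, but escaping costs nothing and makes the filter builder safe to reuse for genuinely user-typed values.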
Common Mistakes Reference
// ❌ Hardcoding the host in application code
const client = new Meilisearch({ host: 'http://localhost:7700' });
// ✅ Always read from environment
const client = new Meilisearch({ host: process.env.MEILISEARCH_URL });

// ❌ Using the master key in application code
const client = new Meilisearch({ apiKey: process.env.MEILISEARCH_MASTER_KEY });
// ✅ Use the scoped backend key
const client = new Meilisearch({ apiKey: process.env.MEILISEARCH_BACKEND_KEY });

// ❌ Using the :latest Docker image tag
image: getmeili/meilisearch:latest
// ✅ Always pin to a specific version
image: getmeili/meilisearch:v1.37.0

// ❌ MEILI_ENV not set -> anyone can hit the API without a key
// ✅ Always set
- MEILI_ENV=production

// ❌ Using MEILISEARCH_URL=http://meilisearch:7700 when NestJS runs outside Docker
// Docker service names only resolve inside Docker networks
// ✅ Use localhost:7700 when NestJS runs raw on the same machine as Docker

// ❌ Using MEILISEARCH_URL=http://localhost:7700 when NestJS runs inside Docker
// localhost inside a container = the container itself, not the host
// ✅ Use http://meilisearch:7700 (the service name) when both are in the same Docker Compose

// ❌ Opening port 7700 to the public internet when MeiliSearch is on a separate server
// ✅ Restrict the firewall so 7700 only accepts connections from the NestJS server's IP

// ❌ Host-path volume mounts -> permission/performance issues on Windows and Mac
volumes:
  - ./meili_data:/meili_data
// ✅ Named Docker volumes -> works everywhere
volumes:
  - meilidata:/meili_data

// ❌ Not setting searchableAttributes -> MeiliSearch indexes all fields by default
// Causes large index sizes and slower search on documents with many fields
// ✅ Always set searchableAttributes explicitly in each per-index onModuleInit

// ❌ Upgrading the Docker image version without creating a dump first
// ✅ Always dump -> upgrade -> import
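Several of the mistakes above boil down to missing or misconfigured environment variables, and those are much cheaper to catch at boot than on the first search request. A fail-fast sketch (the variable names are the ones used throughout this guide; `requireEnv` is my own helper, not an SDK function):

```typescript
// Throws at startup if any required env var is missing or empty,
// and returns a narrowed Record<string, string> for the ones present.
function requireEnv(
  env: Record<string, string | undefined>,
  keys: string[],
): Record<string, string> {
  const missing = keys.filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return Object.fromEntries(keys.map((k) => [k, env[k] as string]));
}

// Usage at startup:
// const cfg = requireEnv(process.env, ["MEILISEARCH_URL", "MEILISEARCH_BACKEND_KEY"]);
```

In NestJS you'd typically express the same idea through `ConfigModule`'s validation options instead, but the principle is identical: crash loudly at boot, not quietly at query time.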
Wrapping Up
That's the full production setup: Docker config, key management, NestJS service layer, per-index files, the outbox pattern for resilient indexing, backup/recovery, and the common mistakes that'll waste your afternoon.
The thing I actually like about MeiliSearch is that it doesn't fight you. The docker-compose.yml is 30 lines. The JS SDK is properly typed. The search results are genuinely good out of the box. For a Cmd+K style search across multiple entity types in a NestJS app, this is the stack I'd reach for every time now.
If you have questions, drop them in the comments. If you ran into a specific deployment scenario I didn't cover, let me know.