This tutorial was written by James Shisiah.
When it comes to databases, the age-old debate never ends: relational versus document.
On one side, you’ve got the classic relational database—PostgreSQL—organized, structured, and always insisting that every piece of data belongs neatly in its own row and column. Think of it as the well-disciplined librarian who won’t let you borrow a book without filling out three forms—the emphasis is on organization, structure, and order.
On the other side, you’ve got MongoDB, the document database—with flexible schema, and more like a cool café owner who says, “Just scribble your order on a napkin, I’ll figure it out.” Document databases emphasise flexibility, ease, and speed.
In this tutorial, we will take a sample Laravel blog app (with users, posts, and comments) that’s running on PostgreSQL and migrate it to MongoDB. Along the way, you’ll see how to remodel your data from tables into documents, and how this change affects querying, relationships, and performance. Additionally, we’ll explore how MongoDB Atlas can serve as both your database and a full-text search solution—eliminating the need for third-party tools like Elasticsearch to handle full-text search needs.
Let's get started.
Prerequisites
- Knowledge of Laravel, PHP, and relational databases
- Tools for running the sample app: Git and Docker (for the local Postgres + Elasticsearch setup)
- MongoDB Atlas account (free tier)
- Optional local setup without Docker: Composer, PostgreSQL, Node.js and NPM, MongoDB, and Elasticsearch. These tools are only necessary if you want to run the sample application locally without the simplicity of a single Docker command, or if you don’t have Docker installed
1. Getting started with the sample app
Before we jump into the migration flow, let’s explore the starting point—a simple Laravel blog app powered by PostgreSQL.
This app is our “before” picture—it has most of what you would expect in a classic blog: users, posts, and comments, all neatly stored in relational tables.
Step 1: Clone the repository
First, grab the source code from GitHub:
git clone git@github.com:mongodb-developer/laravel-postgresql-to-mongodb.git
cd laravel-postgresql-to-mongodb
The repo already includes a README.md with setup instructions—but let’s walk through the basics here.
Step 2: Explore the database schema
Open up the database/migrations folder. You’ll find the following:
- users: holds basic user info like name, email, and password
- posts: each post belongs to a user
- comments: each comment belongs to both a user and a post
Here’s what the relationships look like in Eloquent terms:
// App\Models\User.php
public function posts() {
    return $this->hasMany(Post::class);
}

public function comments() {
    return $this->hasMany(Comment::class);
}

// App\Models\Post.php
public function user() {
    return $this->belongsTo(User::class);
}

public function comments() {
    return $this->hasMany(Comment::class);
}

// App\Models\Comment.php
public function post() {
    return $this->belongsTo(Post::class);
}

public function user() {
    return $this->belongsTo(User::class);
}
In a relational structure, everything is connected with foreign keys and JOINs.
Step 3: Run the app locally with Docker Compose
There’s no need to install PHP, Composer, or PostgreSQL manually (unless you want to explore the manual setup). The repository ships with a ready-to-go Docker Compose YAML file.
Just run:
docker compose up -d
This will spin up containers for:
- The Laravel app (PHP + Nginx + Laravel Queue Worker).
- A PostgreSQL database.
- Elasticsearch (more on that later).
You can list the started containers by running the command:
docker compose ps --status=running --format 'table {{.Name}}\t{{.Service}}'
If everything worked well, your terminal should display the following list of container names:
NAME                     SERVICE
laravel_app              app
laravel_elasticsearch    elasticsearch
laravel_nginx            nginx
laravel_postgres         postgres
laravel_queue            laravel_queue
You can stream logs from an individual container by running the command docker compose logs -f service_name.
For example:
docker compose logs -f postgres
Once everything is running, open your browser and visit http://localhost:8080. You should see the blog home page:
2. Why MongoDB?
Migrating from Postgres to MongoDB is not about replacing it with a “better” database—it’s about choosing the right tool for the job. Let’s break it down.
Flexible schema: No more AddColumnToTable Migrations
In PostgreSQL, every schema change means running a migration—adding columns, updating types, reindexing data.
MongoDB, on the other hand, lets you store documents with different shapes and fields in the same collection.
Do you need to add a subtitle to blog posts? Just start saving it. No php artisan migrate required.
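To make this concrete, here is a minimal sketch (the subtitle field is hypothetical, using this app’s Post model): assign the new attribute and save, and MongoDB stores it alongside the existing fields.

$post = Post::first();
$post->subtitle = 'A brand-new field, saved on the fly'; // never declared in any migration
$post->save();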
Embedded documents model data more naturally
In a blog, a post often “owns” its comments. Instead of joining tables, MongoDB lets you embed comments directly inside a post document—everything related stays together:
{
    "title": "Why Even Use MongoDB?",
    "author": "James Doe",
    "comments": [
        { "user": "Alex", "text": "Great read!" },
        { "user": "Maya", "text": "Thanks for sharing!" },
        { "user": "Shisiah", "text": "I will try this!" }
    ]
}
The above flow simplifies queries and reflects how data is used in real-world applications. For datasets that grow large or evolve over time, you can apply MongoDB design patterns—such as the Bucket Pattern or Subset Pattern—to manage document size and maintain efficient access while preserving locality of related data.
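As a hedged sketch of the Subset Pattern with this app’s schema (the variables $postId and $newComment are placeholders, and the MongoDB connection configured in later sections is assumed): a raw $push with $slice keeps only the most recent comments embedded, while older ones would live in a separate collection.

use Illuminate\Support\Facades\DB;

// Append the new comment, but keep only the 10 most recent ones embedded.
DB::connection('mongodb')->getCollection('posts')->updateOne(
    ['_id' => $postId],
    ['$push' => ['comments' => ['$each' => [$newComment], '$slice' => -10]]]
);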
Built-in full-text search (Atlas Search)
When you want to integrate full-text search in your application, you may consider tools like Elasticsearch, Algolia, Meilisearch, Typesense, etc. With MongoDB Atlas, everything is covered. You get:
- Full-text search and fuzzy matching.
- Relevance scoring.
- Highlighting—surrounding the matching search terms with a tag.
- Custom analyzers.
- Real-time synchronization—any changes to your data are reflected in the search index with minimal latency.
All of it runs inside your MongoDB cluster—no need to maintain a third-party, full-text search service.
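For a taste of what these features look like in practice, here is a hedged sketch of a raw $search aggregation stage combining fuzzy matching and highlighting (the index name anticipates the one created later in this tutorial):

// One stage of an aggregation pipeline; see the Atlas Search section below
// for how such a stage is run from Laravel.
$searchStage = ['$search' => [
    'index' => 'posts_search_index',
    'text' => [
        'query' => 'larvel', // deliberate typo: fuzzy matching still finds "laravel"
        'path' => ['title', 'body'],
        'fuzzy' => ['maxEdits' => 1],
    ],
    'highlight' => ['path' => 'body'], // returns highlighted snippets alongside each hit
]];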
Built-in vector search
Vector Search focuses on semantic search, letting you query data by the meaning captured in vectors. To point out the difference: Atlas Search is best for handling text-based searches, while Atlas Vector Search is suited for more advanced, context-aware searches, such as AI and recommendation systems.
Simpler infrastructure
Running PostgreSQL + a third-party, full-text search means juggling two databases, two sets of credentials, and two maintenance lifecycles.
With MongoDB Atlas, you get a single managed service that handles:
- Storage.
- Backup.
- Indexing.
- Search.
Fewer moving parts, fewer integration bugs, full-text search, and more time to focus on building features.
3. Adding full-text search with Postgres + Elasticsearch
Why Postgres alone may not be ideal for rich search
While PostgreSQL has a tsvector type and can handle basic full-text search, it quickly becomes limiting when you need features like:
- Relevance scoring.
- Fuzzy matching (handling typos).
- Phrase and proximity search.
- Highlighting matched terms.
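For reference, here is what basic Postgres full-text search looks like from Eloquent (a hedged sketch using whereRaw; column names follow this app’s schema). It works, but the features listed above take considerably more effort:

// Match posts whose title or body contains the (stemmed, English) search term.
$results = Post::whereRaw(
    "to_tsvector('english', title || ' ' || body) @@ plainto_tsquery('english', ?)",
    [$term]
)->get();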
That’s where Elasticsearch shines—it’s built for search-first use cases.
Adding Elasticsearch via Docker Compose
In this tutorial’s repo, Elasticsearch is already included in the docker-compose.yml. That means you can spin up the entire environment—Laravel, PostgreSQL, and Elasticsearch—with a single command, which you already executed earlier:
docker compose up -d
Note: While we’ve combined all containers into a single docker-compose.yml for convenience in this tutorial, this setup isn’t ideal for production. In a real deployment, each service—the app, database, search engine, and queue—should run independently. This separation improves scalability, fault isolation, security, and makes it easier to update or restart one component without affecting the others.
Integrating Laravel Scout with Elasticsearch
We’re using Laravel Scout with the Elastic driver to keep the implementation simple and elegant. Since Elasticsearch is not supported out of the box by Laravel Scout, we added the Explorer package that works on top of Laravel Scout and integrates with Elasticsearch.
All you need to do is make the models for full-text search searchable (the Explored interface also expects a field-type mapping for the index):

use JeroenG\Explorer\Application\Explored;
use Laravel\Scout\Searchable;

class Post extends Model implements Explored
{
    use Searchable;

    public function mappableAs(): array
    {
        return ['title' => 'text', 'body' => 'text'];
    }
}

use JeroenG\Explorer\Application\Explored;
use Laravel\Scout\Searchable;

class User extends Model implements Explored
{
    use Searchable;

    public function mappableAs(): array
    {
        return ['name' => 'text'];
    }
}
Once you have set up your models as searchable, you must first run the following commands to create the indexes in Elasticsearch. This is done only once or whenever you set up a new Elasticsearch instance:
docker compose exec app php artisan scout:queue-import "App\Models\Post"
docker compose exec app php artisan scout:queue-import "App\Models\User"
Going forward, Scout will automatically synchronize your models to Elasticsearch whenever they’re created, updated, or deleted.
Searching posts by keyword
The PostController@index method already supports both regular listing and keyword search:
public function index(Request $request)
{
    if ($request->filled('q')) {
        $term = $request->input('q');
        $posts = Post::search($term)->latest()->paginate(10)->withQueryString();
        $posts->getCollection()->load('user')->loadCount('comments');
    } else {
        $query = Post::with('user')->withCount('comments');
        $posts = $query->latest()->paginate(10)->withQueryString();
    }

    if ($request->wantsJson()) {
        return response()->json($posts);
    }

    return view('posts.index', compact('posts'));
}
You can try it out in your terminal:
curl -H "Accept: application/json" "http://localhost:8080?q=laravel"
Or use the search field in your browser:
Next, we will learn how MongoDB Atlas can provide all this full-text capability natively, without the need for Elasticsearch or extra infrastructure.
4. Setting up MongoDB Atlas
Before we migrate our Laravel app to MongoDB, let’s set up a free cluster on MongoDB Atlas—MongoDB’s fully managed cloud service.
Step 1: Create a free cluster
Go to MongoDB Atlas and sign up (or log in if you already have an account).
Click “Build a Database.”
Choose the Free tier option; it’s perfect for testing and tutorials.
Name your cluster and pick your preferred cloud provider and region (choose one close to you for better performance):
Click “Create Deployment.”
MongoDB Atlas will take a few minutes to provision your cluster.
Step 2: Configure network access and database user
Once the cluster is ready:
Under Security → Network Access, click “Add IP Address.”
You can add your current IP or allow access from anywhere (0.0.0.0/0) for development purposes.
Note: If you see the following error, confirm that you have added your current IP address properly:
local.ERROR: No suitable servers found (`serverSelectionTryOnce` set)
Under Security → Database Access, click “Add New Database User.”
Choose a username and password you’ll use in your Laravel .env file.
Under Database User Privileges, assign the “Read and write to any database” role for simplicity in this tutorial:
Step 3: Obtain the connection string
From your cluster dashboard, click “Connect.”
Choose “Connect your application.”
Copy the provided connection string:
A connection string looks like this:
mongodb+srv://<username>:<password>@cluster0.xxxxxx.mongodb.net/?retryWrites=true&w=majority
Keep this string handy—we’ll paste it into the Laravel .env file once we switch the database driver to MongoDB.
Next step: We’ll install the MongoDB Laravel driver (mongodb/laravel-mongodb) and connect our app to this new cluster.
5. Adding MongoDB support to Laravel
Now that your MongoDB Atlas cluster is ready, let’s connect Laravel to it using the official MongoDB Laravel driver. The mongodb branch of the sample project repository already has these changes. You can check out the branch with the command:
git checkout mongodb
Let’s walk through the changes to gain a better understanding.
Step 1: Install the MongoDB Laravel package
The mongodb/laravel-mongodb package requires the MongoDB PHP extension to be installed before it can be used. We have included a line in the Dockerfile.app to install the extension. If you are testing this application without Docker, first install and enable the extension for your operating system. Then, you can proceed to install the Laravel package.
Run the following command in your project root to install the Laravel package:
composer require mongodb/laravel-mongodb
This package extends Laravel’s database layer and Eloquent ORM to work seamlessly with MongoDB.
Step 2: Update environment variables
Open your .env file and add a new MongoDB connection string (replace with your Atlas credentials):
DB_CONNECTION=mongodb
DB_HOST=cluster0.xxxxxx.mongodb.net
DB_PORT=27017
DB_DATABASE=laravel_blog
DB_USERNAME=<your_username>
DB_PASSWORD=<your_password>
If you’re using the full connection URI copied in the previous step from MongoDB Atlas, you can instead define it like this; remember to set your password:
DB_CONNECTION=mongodb
DB_URL="mongodb+srv://<username>:<password>@cluster0.xxxxxx.mongodb.net/laravel_blog?retryWrites=true&w=majority"
Step 3: Update config/database.php
Open config/database.php and add a new MongoDB connection entry inside the connections array:
'mongodb' => [
    'driver' => 'mongodb',
    'dsn' => env('DB_URL'),
    'database' => env('DB_DATABASE', 'laravel_blog'),
],
You can leave your existing PostgreSQL configuration intact—we’ll use it for comparison during the migration steps.
Step 4: Switch Eloquent models to MongoDB models
The mongodb/laravel-mongodb package provides its own base model that replaces the default Illuminate\Database\Eloquent\Model.
In each model you want to store in MongoDB, update the use statement:
use MongoDB\Laravel\Eloquent\Model;
// instead of Illuminate\Database\Eloquent\Model
For example:
namespace App\Models;

use MongoDB\Laravel\Eloquent\Model;

class Post extends Model
{
    protected $connection = 'mongodb';
    protected $collection = 'posts';
}
We have also dropped JeroenG\Explorer\Application\Explored, which we used for Elasticsearch, since we will be working with MongoDB Atlas Search.
For the User model:
use MongoDB\Laravel\Auth\User as Authenticatable;
// instead of Illuminate\Foundation\Auth\User as Authenticatable
That’s it—your models are now MongoDB-ready!
Step 5: Updating Laravel configuration files
You will also need to make changes to certain configuration files for your application to work with MongoDB fully. You can refer to the MongoDB docs for details on updating configurations for your cache store, session driver, and queue connections. If your application uses stateless auth tokens, you can find more information in the user authentication docs on integrating Laravel Passport or Sanctum.
Cache and locks
Add a store configuration by setting mongodb as the default cache driver and adding the following snippet to the stores array in config/cache.php:
'stores' => [
    'mongodb' => [
        'driver' => 'mongodb',
        'connection' => 'mongodb',
        'collection' => 'cache',
        'lock_connection' => 'mongodb',
        'lock_collection' => 'cache_locks',
        'lock_lottery' => [2, 100],
        'lock_timeout' => 86400,
    ],
],
HTTP sessions
Since we already have a connection to MongoDB configured in the config/database.php file, you can also set it as the session driver and connection. Specify the following in the .env file:
SESSION_DRIVER=mongodb
# Optional, this is the default value
SESSION_CONNECTION=mongodb
Then, in the config/session.php file:
<?php

return [
    'driver' => env('SESSION_DRIVER', 'mongodb'),
    'connection' => env('SESSION_CONNECTION', 'mongodb'),
    'table' => env('SESSION_TABLE', 'sessions'),
];
Laravel queues
Update the queue driver and connection to use MongoDB. In the .env file:
QUEUE_CONNECTION=database
DB_QUEUE_CONNECTION=mongodb
Update the config/queue.php file, connections array:
'connections' => [
    'database' => [
        'driver' => 'mongodb',
        'connection' => env('DB_QUEUE_CONNECTION', 'mongodb'),
        'table' => 'jobs',
        'queue' => 'default',
    ],
],
Set the failed jobs to be stored in your MongoDB connection:
'failed' => [
    'driver' => env('QUEUE_FAILED_DRIVER', 'mongodb'),
    'database' => env('DB_CONNECTION', 'mongodb'),
    'table' => 'failed_jobs',
],
Set Job batching to use your MongoDB connection:
'batching' => [
    'driver' => 'mongodb',
    'database' => env('DB_CONNECTION', 'mongodb'),
    'table' => 'job_batches',
],
6. Designing a document model
Moving from PostgreSQL to MongoDB is not simply changing the database driver—it’s also about rethinking how data is structured. MongoDB stores information as documents, not rows, so relationships that once required joins can often be represented more naturally.
The original relational schema
In our PostgreSQL setup, we had three tables:
- users → stores user info (id, name, email)
- posts → each post references a user via user_id
- comments → each comment references a post and the user who commented (post_id, user_id)
Typical query flow:
- Fetch posts with their author (JOIN users)
- Fetch comments per post (JOIN comments)
It’s normalized, but it requires multiple joins—and as the dataset grows, those joins add up.
Rethinking it for MongoDB
In MongoDB, we can utilize embedded documents to keep related data together.
Instead of splitting everything across multiple collections, we group data by how it’s accessed most often.
Here’s our new structure:
Users—referenced
Each user still lives in a users collection. We reference users from other collections when needed (e.g., posts, comments).
{
    "_id": ObjectId("..."),
    "name": "Jane Doe",
    "email": "jane@example.com"
}
Posts—referenced to users
Each post belongs to a user but remains in its own collection. We store the user_id reference so we can still query posts by the author. We are also including the name of the user (owner_name) in the posts collection. This allows us to easily show a list of posts and their creators with one query. Since we have the user_id, we can always retrieve more details about the user, depending on the app's needs.
{
    "_id": ObjectId("..."),
    "user_id": ObjectId("..."),
    "title": "Migrating from Postgres to MongoDB",
    "content": "The age-old debate never ends: relational vs. document...",
    "owner_name": "...",
    "created_at": "...",
    "updated_at": "..."
}
Comments—embedded in posts, reference commenter
Comments belong tightly to posts, so we’ll embed them directly inside the post document. However, each comment still references the user who made it—keeping the design flexible.
{
    "_id": ObjectId("..."),
    "user_id": ObjectId("..."),
    "title": "Migrating from Postgres to MongoDB",
    "content": "The age-old debate never ends: relational vs. document...",
    "owner_name": "...",
    "comments": [
        {
            "_id": ObjectId("..."),
            "user_id": ObjectId("..."),
            "text": "Awesome tutorial!",
            "created_at": "..."
        },
        {
            "_id": ObjectId("..."),
            "user_id": ObjectId("..."),
            "text": "I can finally get Atlas Search working.",
            "created_at": "..."
        }
    ]
}
This approach works well for our blog scenario because:
- Posts are often fetched together with their comments.
- Comments rarely exist without a post.
- Users can still be looked up or joined when more details are needed.
This same thought process applies to your own applications—focus on how your data is accessed, then decide what to embed for quick retrieval and what to reference to keep documents lean and relationships flexible.
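To see what this buys us in code, here is a hedged sketch using the raw MongoDB PHP Library through the Laravel connection (collection and field names follow the model above): embedded comments come back with the post for free, while author details can still be joined on demand with $lookup.

use Illuminate\Support\Facades\DB;

// Embedded: one query returns the post together with its comments array.
$post = DB::connection('mongodb')->getCollection('posts')
    ->findOne(['title' => 'Migrating from Postgres to MongoDB']);
$comments = $post['comments'] ?? [];

// Referenced: join author details only when they are actually needed.
$withAuthors = DB::connection('mongodb')->getCollection('posts')->aggregate([
    ['$lookup' => [
        'from' => 'users',
        'localField' => 'user_id',
        'foreignField' => '_id',
        'as' => 'author',
    ]],
]);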
7. Migrating data with MongoDB Relational Migrator
Now that we’ve designed our new document model, it’s time to copy the actual data from PostgreSQL to MongoDB Atlas—without manually exporting and transforming everything.
We’ll use the MongoDB Relational Migrator, a graphical tool that helps analyze relational schemas, visualize relationships, and generate an equivalent MongoDB structure automatically.
Step 1: Analyze the PostgreSQL schema
Download and install MongoDB Relational Migrator for your host operating system.
Launch it and create a new migration project.
Connect it to your PostgreSQL database (the same one running inside Docker or locally) using port 5433, which we mapped to the host machine in Docker Compose.
The tool will automatically inspect your schema—showing your tables (users, posts, comments) and their relationships.
You’ll get a visual map of how your current data is organized—a perfect starting point for the transformation.
The tool lists all tables in your database, including those Laravel-specific tables. For the sake of this tutorial, our focus is on users, posts, and comments. As you can see, we have not selected the comments table as it will be embedded inside the posts collection.
Step 2: Generate MongoDB schema suggestions
Click “Next.”
In the window, click on the settings icon at the top left:
Then select Single inherited primary key, which tells Relational Migrator to inherit the primary keys from Postgres:
Relational Migrator will propose a new structure based on your relationships. You can then:
- Embed comments inside posts (since they’re tightly coupled).
- Keep users as a separate collection, referenced by user_id.
This aligns with the data model we designed earlier—comments reside within posts, while users are reusable across multiple posts. For your specific application requirements, you may change the mappings of a collection by selecting the collection from the list under MongoDB on the left pane:
Then add/remove mappings for the selected collection using the right side pane. For example, in the embedded comments, we have unselected id and post_id as they are no longer needed. We are also adding the owner’s name directly to the post by referencing the users table. This is known as the Extended Reference Pattern. The Relational Migrator tool will pick the name and add it to the post collection, making it easier to list posts and the creator’s name in one query. Since we also have the user_id on the post, if we want more information about the user, we can easily query based on the user_id.
Click the “Add” button next to “Mappings from relational tables” to configure. Then select Embedded documents from the options, and choose your source table and the columns you want to embed. You can also rename the columns you embed; we have renamed the user’s name column to owner_name.
From the Main Diagram window, click “MDB” in the bottom-left corner to only show the MongoDB mappings:
You can now preview the transformed structure before running the actual migration:
Step 3: Run the migration to MongoDB Atlas
Once you’re satisfied with the mapping, connect Relational Migrator to your MongoDB Atlas cluster (use the same connection string from earlier).
Start the migration—go to the Data Migration tab and create a migration job. The tool will automatically copy and transform your data from PostgreSQL to MongoDB. The duration of the migration will depend on the size of your data. For our test blog, with only a few rows, it takes less than five seconds.
When it’s done, open your MongoDB Atlas dashboard, browse collections, and check the new collections (users, posts) in the laravel_blog database. For the posts collection, you should see something like this:
And just like that, your data is now in MongoDB Atlas, ready to be used in your Laravel app. To find out more use cases for the MongoDB Relational Migrator and how you can use it in your real-world project, please read the docs. You can also watch this Relational Migrator 101 YouTube video for a better understanding.
To check the status of your database using the Laravel command php artisan db:show, you need to add the clusterMonitor role to the user you created.
- Go to your MongoDB Atlas dashboard.
- Navigate to Database Access.
- Edit the user your app uses.
- Under Specific privileges, assign the clusterMonitor role to your user and save. This allows the serverStatus check without full admin rights.
Now, you can run:
php artisan db:show
And you should see something like this:
8. Updating application code
Let’s now adjust the Laravel code to work with the new document model.
Adjusting queries for MongoDB
Unlike PostgreSQL, MongoDB prefers modeling related data by embedding or referencing IDs for fast, local reads—although you can still perform join-like operations with the $lookup aggregation stage when needed.
Since we embedded comments inside each post, we no longer need to query the comments collection separately. Hence, we should modify the Post model and drop the comments relationship. Also, remove the definition from the User model. Drop the Comment model and all its references in the code.
The MongoDB Laravel package handles searching on a collection’s search index, so we can also drop Laravel Scout and its configs from the project completely:
composer remove laravel/scout
Here is how the updated Post model looks:
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Relations\BelongsTo;
use Illuminate\Support\Collection;
use MongoDB\Laravel\Eloquent\Model;

class Post extends Model
{
    use HasFactory;

    public const SEARCH_INDEX = 'posts_search_index';

    protected $fillable = ['title', 'body', 'user_id', 'comments', 'owner_name'];

    public static function findByMixedId($id): Post|null
    {
        $post = static::where('_id', $id)->first();

        if (!$post && is_numeric($id)) {
            $post = static::where('_id', (int) $id)->first();
        }

        return $post;
    }

    public function user(): BelongsTo
    {
        return $this->belongsTo(User::class);
    }

    // Helper method to get comments count
    public function getCommentsCountAttribute(): int
    {
        return count($this->comments ?? []);
    }

    // Helper method to get comments as a collection
    public function getCommentsAttribute($value): Collection
    {
        return collect($value ?? []);
    }
}
We have also added a few helper functions—for example, getting the comments attributes—and removed the comments relation. The findByMixedId method was added to handle flexible ID lookups in MongoDB, where the _id field can be either a string (e.g., ObjectId) or an integer.
For our use case, we migrated the data from a Postgres table that was using incremental integers as primary keys. It first searches by the ID as provided, and if not found and the ID is numeric, it retries with the ID cast to an integer, ensuring compatibility with mixed ID types. MongoDB automatically assigns records created after the migration with the ObjectId string as the primary key. If you are starting a new Laravel project with MongoDB, then the default Laravel find method should work with the autogenerated ObjectId string.
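For illustration, both lookups below resolve through the same helper (the ObjectId string is a made-up example):

$legacy = Post::findByMixedId(42); // migrated row that kept its integer primary key
$recent = Post::findByMixedId('665f1e2a9c3b4d5e6f7a8b9c'); // post created after the migration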
Creating a post
In creating a post, we are adding owner_name to maintain easy querying of the posts list:
$post = Post::create([
    'title' => $validated['title'],
    'body' => $validated['body'],
    'user_id' => auth()->id(),
    'owner_name' => auth()->user()->name, // Include owner's name for extended reference pattern and searchability
    'comments' => [], // Initialize comments as an empty array
]);
MongoDB automatically assigns an _id, and we can later append comments directly to this document.
Adding a comment (embedded)
Since comments are embedded, you’ll push a new object to the comments array:
$post = Post::findByMixedId($postId);

$post->push('comments', [
    'user_id' => 2,
    'body' => 'This looks awesome!',
    'created_at' => now(),
    'updated_at' => now(),
]);
Fetching posts with comments
Fetching a post automatically includes its comments:
$posts = Post::with('user')->latest()->get();
Or filter by keyword (MongoDB full-text search will come next):
$results = Post::where('title', 'like', '%MongoDB%')->get();
Each returned post now contains a comments array ready for display—no joins or separate queries needed to get comments.
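Rendering them in a Blade view, for example, needs nothing more than a loop over the embedded array (hypothetical markup):

@foreach ($post->comments as $comment)
    <p>{{ $comment['body'] }}</p>
@endforeach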
Updating and deleting embedded comments
We use the comment index in the array to update a comment:
// Get the comments array
$comments = $post->comments->toArray();
// Update the comment body
$comments[$commentIndex]['body'] = $newBody;
$comments[$commentIndex]['updated_at'] = now();
// Save the updated comments array back to the post
$post->comments = $comments;
$post->save();
To remove a comment, we find the comment by index and then pull it from the array:
$comment = $comments[$commentIndex];
// Remove comment from the embedded array
$post->pull('comments', $comment);
At this point, your Laravel app reads directly from MongoDB, and stores and updates posts with their embedded comments in a single document.
9. Enabling MongoDB Atlas Search
MongoDB Atlas comes with built-in full-text search, powered by Apache Lucene.
And the best part? The MongoDB Laravel package provides the search() method both as a query builder method and as an Eloquent model method. Once your model extends MongoDB\Laravel\Eloquent\Model, you can use search() to run Atlas Search queries on documents in your collections.
Step 1: Create Atlas Search indexes for the searchable collections
Before you can perform searches, you must first create search indexes for your collections. MongoDB Atlas provides two options for creating indexes:
- For new collections, you can call the create() method on the schema facade and pass the searchIndex() helper method with index creation details. This is done in the Laravel migration files as described in the Schema Builder Laravel MongoDB docs.
- For existing collections, access a collection, then call the createSearchIndex() method from the MongoDB PHP Library, as shown in the following code:
$collection = DB::connection('mongodb')->getCollection('posts');

$collection->createSearchIndex(
    ['mappings' => ['dynamic' => true]],
    ['name' => 'posts_search_index']
);
We have used this second method to create indexes from the existing collections that we migrated from Postgres. To automate the index creation the Laravel way, we added a command in app/Console/Commands/CreateSearchIndexesCommand.php, which you run with:

php artisan app:create-search-indexes
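If you are curious what such a command can look like, here is a minimal sketch (the repository’s actual implementation may differ); it simply wraps the createSearchIndex() call shown above:

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class CreateSearchIndexesCommand extends Command
{
    protected $signature = 'app:create-search-indexes';

    protected $description = 'Create Atlas Search indexes for searchable collections';

    public function handle(): void
    {
        // Dynamic mappings index every field; fine for this blog's small documents.
        DB::connection('mongodb')->getCollection('posts')->createSearchIndex(
            ['mappings' => ['dynamic' => true]],
            ['name' => 'posts_search_index']
        );

        $this->info('Search indexes created.');
    }
}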
You can view the created search indexes from your MongoDB Atlas dashboard.
Database → Search & Vector Search:
Once you have created the search indexes by following either of the two options described, any new records created will automatically be added to the search indexes. This simplifies the application as there is no need to manage Laravel Scout synchronization jobs.
Step 2: Modifying the Laravel controller code to use Atlas search()
Below is the index method of the PostController class. The index method now integrates MongoDB Atlas Search for full‑text queries. When a q parameter is provided, it builds a Search::text operator targeting title and body, caches the total hit count for pagination, and fetches a page via an aggregation pipeline using $search, sorted by created_at with skip/limit. Raw documents are hydrated into Post models and wrapped in a LengthAwarePaginator, using the configured Post::SEARCH_INDEX. When no query is present, it falls back to the standard Eloquent pagination. JSON and Blade responses continue to work as before:
public function index(Request $request)
{
    // Assumed imports: Illuminate\Pagination\LengthAwarePaginator,
    // Illuminate\Support\Facades\Cache, MongoDB\Builder\Search.
    if ($request->filled('q')) {
        $term = $request->input('q');
        $perPage = $request->input('perPage', 10);
        $page = $request->input('page', 1);
        $skip = ($page - 1) * $perPage;

        // Get total count for the search term.
        // We cache it so that we don't have to count every time for the same term.
        $keyTerm = preg_replace('/\s+/', '_', strtolower($term));
        $total = Cache::remember("search_total_{$keyTerm}_posts_search_index", 3600, function () use ($term) {
            return Post::search(
                operator: Search::text(
                    path: ['title', 'body'],
                    query: $term
                ),
                index: Post::SEARCH_INDEX
            )->count();
        });

        // Get paginated results using an aggregation pipeline.
        $rawResults = Post::aggregate()
            ->search(Search::text(path: ['title', 'body'], query: $term), index: Post::SEARCH_INDEX)
            ->sort(created_at: -1)
            ->skip($skip)
            ->limit($perPage)
            ->get();

        // Hydrate results into models.
        $results = Post::hydrate($rawResults->toArray());

        // Create a Laravel paginator.
        $posts = new LengthAwarePaginator(
            $results,
            $total,
            $perPage,
            $page,
            [
                'path' => $request->url(),
                'query' => $request->query(),
            ]
        );
    } else {
        $posts = Post::query()
            ->latest()
            ->paginate(10)
            ->withQueryString();
    }

    if ($request->wantsJson()) {
        return response()->json($posts);
    }

    return view('posts.index', compact('posts'));
}
MongoDB aggregation pipelines are a server‑side data processing framework where documents flow through an ordered series of stages (e.g., $match, $project, $sort, $group, $lookup, $unwind, $search). Each stage transforms the data and passes it to the next, enabling complex queries and analytics in one pass.
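As a small illustration of stages chained together (a hedged sketch against this app’s posts collection): the pipeline below counts comments per commenter across all posts.

use Illuminate\Support\Facades\DB;

// Unwind the embedded comments, group them by commenter, sort by count.
$topCommenters = DB::connection('mongodb')->getCollection('posts')->aggregate([
    ['$unwind' => '$comments'],
    ['$group' => ['_id' => '$comments.user_id', 'total' => ['$sum' => 1]]],
    ['$sort' => ['total' => -1]],
]);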
Step 3: Experience a simplified architecture
Before migration:
Laravel → PostgreSQL + Elasticsearch (with Laravel Scout)
After migration:
Laravel → MongoDB Atlas (storage + search)
No separate search container or driver needed—just one database doing it all.
10. Testing the application
Now that your Laravel app is fully connected to MongoDB Atlas—with storage, relationships, and full-text search—it’s time to test everything end to end.
Step 1: Run the app
If you are using Docker, start the containers in the mongodb branch:
docker compose up -d --build
The --build flag ensures that Docker images are rebuilt, which is necessary when switching between branches or when dependencies have changed.
Make sure your .env points to MongoDB Atlas:
DB_CONNECTION=mongodb
Once the app is up, navigate to http://localhost:8080 in your browser to test it.
Step 2: Verify basic CRUD
Try creating, reading, updating, and deleting posts and comments:
- Create a post: Ensure it’s stored in your Atlas posts collection.
- Add comments: Verify they appear as embedded documents inside the post.
- Update: Confirm the changes reflect in the same MongoDB document (not a separate table).
- Delete: Ensure removing a post also removes its embedded comments.
You can verify the actual data in your MongoDB Atlas Dashboard → Browse Collections view to confirm the new structure.
Step 3: Test full-text search
Search for a keyword you know appears in a post title or body:
You should see the matching posts returned—powered by MongoDB Atlas Search. If the results don’t appear, confirm that you have created the search indexes for the posts as described.
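You can also hit the JSON endpoint from the terminal, exactly as in the Elasticsearch setup (swap in any keyword you know exists in your posts):

curl -H "Accept: application/json" "http://localhost:8080?q=mongodb"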
Step 4: Sanity check relationships
Verify that each post still references its author via user_id by registering and creating a post.
Confirm that embedded comments retain the user_id of the commenter by commenting on several posts and observing how the app behaves.
Try adding new users and posts—the relationships should still behave as before, just without SQL joins.
11. Local setup (optional)
If you prefer running MongoDB locally instead of using Atlas Cloud:
Step 1: Install MongoDB Atlas Local Edition
Download and install the latest installation file for your operating system from the official MongoDB Atlas downloads page.
Step 2: Update your .env
DB_CONNECTION=mongodb
DB_HOST=localhost # Or 'mongodb' if you are using docker-compose.
DB_PORT=27017
DB_DATABASE=laravel_blog
DB_PASSWORD=secret
Step 3: Run the app
php artisan serve --port=8080
Your Laravel app will now use your local MongoDB installation instead of Atlas.
Step 4: Using Docker to run a local MongoDB Atlas instance
If you would prefer to use Docker with a full local setup, a docker-compose-mongo-local.yml file is included, which pulls the MongoDB Atlas Local Docker image. You can start up the application and MongoDB Atlas local containers with the command:
docker compose -f docker-compose-mongo-local.yml up --build -d
Then, update the MongoDB connection URL in the .env file accordingly:
DB_URL=mongodb://laravel:secret@mongodb:27017/laravel_blog?authSource=laravel_blog
Once the containers are up, you should be able to access the application at http://localhost:8080.
The MongoDB Atlas Local image used here comes with Atlas Search, so you should be able to perform full-text searches on your app. If search results don’t show up, you can rerun the search index creation command:
php artisan app:create-search-indexes
Conclusion
Migration summary: We moved from PostgreSQL + Elasticsearch to MongoDB Atlas, simplifying both storage and search.
Key benefits:
- Flexible document schema—no rigid migrations
- Native full-text search via Atlas Search
- Fewer services to maintain (no separate Elasticsearch)
Next steps: Explore MongoDB Relational Migrator, Atlas Search, MongoDB Design Patterns, and Laravel MongoDB integration docs for further reading and deeper optimization.