Every Laravel application sits on top of a database full of insights. How many users signed up this month? Which products are underperforming? What's the average order value by region? The data is right there, but getting to it usually means one of two things: writing raw SQL or building custom dashboards that take days to ship.
I built LaraGrep to close that gap — a Laravel package that lets you ask questions about your database in plain English and get real answers back.
The Problem Every Developer Knows
You're in a meeting. The client asks: "How many active subscriptions do we have that are expiring next month?" You know the data exists. You know which tables to join. But to answer that question right now, you'd need to open a terminal, write a query, run it, maybe fix a typo, run it again.
Now multiply that by every stakeholder who needs "just a quick number" from the database. Product managers, sales teams, founders — they all have questions, and the developer becomes the bottleneck.
Some teams build internal dashboards. That works for recurring questions, but not for the ad-hoc ones that come out of nowhere. And maintaining those dashboards is a cost that compounds over time.
What LaraGrep Actually Does
LaraGrep connects to your Laravel application, reads your database schema, and uses AI to translate natural language questions into safe, parameterized SQL queries. But the key difference from other text-to-SQL tools is that it doesn't just generate one query and hope for the best.
It uses an agent loop. The AI executes a query, sees the results, thinks about what it learned, and decides whether to run another query or provide the final answer. This means it can handle questions that require multiple steps — joining data across tables, filtering based on previous results, self-correcting when something unexpected comes back.
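Conceptually, that loop is small. Here is a minimal sketch in plain PHP — the `$llm` and `$db` interfaces, method names, and step shape are illustrative assumptions, not LaraGrep's actual internals:

```php
<?php
// Agent-loop sketch: ask the model for a step, execute it, feed results back,
// repeat until it produces a final answer or hits the iteration cap.
function answerQuestion(object $llm, object $db, string $question, int $maxIterations = 5): string
{
    $context = [];

    for ($i = 0; $i < $maxIterations; $i++) {
        // The model sees the question plus every query/result pair so far.
        $step = $llm->nextStep($question, $context);

        if ($step['type'] === 'final_answer') {
            return $step['text'];
        }

        // Otherwise it proposed another SELECT; run it and record the results.
        $rows = $db->select($step['sql'], $step['bindings']);
        $context[] = ['sql' => $step['sql'], 'rows' => $rows];
    }

    return 'Iteration limit reached without a final answer.';
}
```

The iteration cap matters: without it, a confused model could loop forever, burning API credits on every pass.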
Ask "What's our top-selling product category in Q4 compared to Q3?" and the AI might run three or four queries behind the scenes: first to understand the schema, then to pull Q3 numbers, then Q4, then compare. You just see the final answer.
A Data Analyst That Knows Your Schema
Think of LaraGrep as a junior data analyst who has memorized your entire database structure. You describe your tables, columns, relationships, and even provide hints about what JSON columns contain. The AI uses all of that context to write accurate queries.
Table::make('orders')
    ->description('Customer orders with payment status.')
    ->columns([
        Column::id(),
        Column::bigInteger('user_id')->unsigned()->description('FK to users.id'),
        Column::decimal('total', 10, 2)->description('Order total in USD'),
        Column::enum('status', ['pending', 'paid', 'cancelled']),
        Column::json('metadata')
            ->template(['shipping_method' => 'express', 'coupon_code' => 'SAVE20']),
        Column::timestamp('created_at'),
    ])
    ->relationships([
        Relationship::belongsTo('users', 'user_id'),
    ]);
The more context you provide, the better the answers get. Column descriptions, enum values, JSON templates — all of it helps the AI understand not just the structure, but the meaning behind your data.
For large databases, you don't even need to define everything manually. Set the schema mode to auto and LaraGrep reads directly from information_schema, pulling table and column comments as descriptions.
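In config terms, that might look something like the fragment below. The key names here are illustrative, not the exact shipped options — check the published config file for the real ones:

```php
<?php
// config/laragrep.php — illustrative fragment, not the package's exact keys.
return [
    // 'manual' uses your Table/Column definitions; 'auto' introspects
    // information_schema and reuses table/column comments as descriptions.
    'schema_mode' => 'auto',

    // Optionally restrict which tables the AI is allowed to see in auto mode.
    'tables' => ['users', 'orders', 'subscriptions'],
];
```

One practical tip: if you go the auto route, it pays to write real comments on your tables and columns at the database level, since those become the descriptions the AI reasons with.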
Real Use Cases
For client projects: Imagine delivering an admin panel where the client can type "Show me all users who placed more than 3 orders last month but haven't logged in this week" and get an instant answer. No custom report page. No Jira ticket. No waiting.
For your own SaaS: You're debugging a billing issue at 11pm. Instead of writing JOIN queries across five tables, you ask: "Which users on the Pro plan have a failed payment in the last 48 hours and an active subscription?" LaraGrep figures out the joins and filters for you.
For understanding your numbers: Early-stage founders are often technical enough to have a database but don't always have time to query it properly. "What's my MRR?" "What's the churn rate this quarter?" "How does signup-to-first-purchase time compare between organic and paid users?" These questions shouldn't require 20 minutes of SQL each time.
For generating reports: LaraGrep has a query export mode where the AI consolidates all its reasoning into a single optimized SELECT query and hands it back to you — no LIMIT, no memory constraints. You take that query and run it however you want: cursor() for streaming, chunk() for batch processing, pipe it into Laravel Excel. The AI does the thinking, you control the execution.
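Consuming that exported query can be framework-free. The sketch below uses plain PDO with an in-memory SQLite database; in a Laravel app you would more likely pass the same SQL and bindings to `DB::cursor()` for identical streaming behavior. The `$sql` string stands in for whatever the export mode hands back:

```php
<?php
// Stand-in for a LaraGrep-exported query (hypothetical example output).
$sql = 'SELECT id, email FROM users WHERE id >= ?';

// Seed a throwaway database so the example runs end to end.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE users (id INTEGER, email TEXT)');
$pdo->exec("INSERT INTO users VALUES (1, 'a@example.com'), (2, 'b@example.com')");

// Prepared statement with bound parameters, mirroring how LaraGrep
// keeps values out of the raw SQL string.
$stmt = $pdo->prepare($sql);
$stmt->execute([1]);

// Stream rows one at a time instead of loading the full result set.
$count = 0;
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $count++;
}
```

Because there is no LIMIT on the exported query, streaming (rather than `fetchAll()` or `get()`) is what keeps memory flat on large result sets.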
For recurring reports: The recipe system auto-saves every successful query chain. Found a question you'll ask every Monday? Replay the recipe with fresh data — the AI adjusts date parameters automatically. Wire it to a scheduled job and you have automated reporting without building a single dashboard.
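Wiring a recipe to the scheduler could look roughly like this. The `laragrep:replay` command name and recipe identifier are hypothetical, shown only to illustrate the shape:

```php
<?php
// In app/Console/Kernel.php (Laravel 10) or routes/console.php (Laravel 11+).
// The artisan command name and recipe slug below are hypothetical.
use Illuminate\Console\Scheduling\Schedule;

protected function schedule(Schedule $schedule): void
{
    $schedule->command('laragrep:replay weekly-orders-report')
        ->mondays()
        ->at('08:00');
}
```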
Keep It Local with Ollama
Not every project can send database schemas to external APIs. Healthcare, finance, government contracts — there are legitimate reasons to keep everything on-premises.
LaraGrep works with Ollama out of the box. Point it to your local instance, pick a model, and every query stays on your machine. No data leaves your network. The setup is four environment variables:
LARAGREP_PROVIDER=openai
LARAGREP_API_KEY=ollama
LARAGREP_BASE_URL=http://localhost:11434/v1/chat/completions
LARAGREP_MODEL=qwen3-coder:30b
The trade-off is that local models are less capable than GPT-4o or Claude for complex multi-step reasoning. But for straightforward queries — counts, aggregations, filtered lookups — they work well enough. And for sensitive environments, "good enough locally" beats "perfect in the cloud" every time.
Security by Design
Letting AI write SQL sounds dangerous, and it should — if done carelessly. LaraGrep takes a strict approach:
- Only SELECT queries are allowed; any mutation attempt is rejected before it reaches the database.
- All values use parameterized bindings, so the AI never interpolates user input into raw SQL.
- Every table reference is validated against the schema you defined; the AI can't query tables you haven't exposed.
- A configurable execution timeout kills slow queries before they can lock your database.
- The agent loop is capped at a maximum number of iterations to prevent runaway API costs.
This doesn't make it zero-risk. You're still exposing read access to data, so protecting the endpoint with authentication middleware is essential. But the attack surface is significantly smaller than giving someone a database GUI.
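Gating the endpoint is standard Laravel routing. The controller class name below is hypothetical; the point is the middleware stack:

```php
<?php
// routes/api.php — protect the query endpoint (illustrative; the
// controller class name is a placeholder, not LaraGrep's actual class).
use Illuminate\Support\Facades\Route;

Route::post('/laragrep', LaraGrepController::class)
    ->middleware(['auth:sanctum', 'can:query-database']);
```

Pairing authentication (`auth:sanctum`) with an authorization gate (`can:query-database`) means you decide not just who is logged in, but who is allowed to ask the database anything at all.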
Getting Started
If you have a Laravel 10+ application and an API key (or Ollama), you can be running in under five minutes:
composer require behindsolution/laragrep
php artisan vendor:publish --tag=laragrep-config
php artisan migrate
Define your tables in the config, set your API key in .env, and send a POST request:
curl -X POST http://localhost/laragrep \
-H "Content-Type: application/json" \
-d '{"question": "How many users registered this week?"}'
That's it. The response comes back with a natural language summary and, optionally, the full debug trace showing every query the AI ran.
What This Means for Your Projects
The shift here isn't really about AI writing SQL. It's about making database knowledge accessible to anyone who can describe what they need in words. The developer still controls the schema, the security, the infrastructure. But the barrier to getting answers drops from "know SQL and the schema" to "know what you want to know."
For agencies delivering client projects, this is a feature that adds immediate value without weeks of dashboard development. For SaaS builders, it's a way to stay close to your data without context-switching into a SQL client every time. For teams with non-technical stakeholders, it's one less reason to interrupt the developer.
The data was always there. Now it's easier to reach.
LaraGrep is open source and available on GitHub:
github.com/behindSolution/laragrep
Stars, issues, and pull requests are welcome. If you're using it in a project, I'd love to hear about your experience.