Andrea Debernardi
I am building a Notebook Environment for SQL Inside a Database Client

This post is also available on tabularis.dev.

You know the drill. Write a query, get a table. Need to build on that result? Copy-paste into the next query. Need a chart? Export CSV, open a spreadsheet. Want to document the analysis? Paste SQL into a doc and pray nothing drifts.

I got tired of this loop, so I'm building Notebooks into Tabularis — a cell-based SQL analysis environment that lives inside the database client. No Jupyter, no Python runtime, no context switching. Just SQL + markdown cells, inline charts, and a few features that make multi-query analysis way less painful.

It's still in development, but the core works. Here's what it looks like and how it's shaping up.


How It Works

A notebook is a sequence of cells — SQL or markdown. SQL cells run against your database and show results inline with the same data grid from the query editor (sorting, filtering, resizable panels). Markdown cells are for documentation between queries.

Tabularis notebook with SQL cell, data grid results, and inline pie chart


Cell References via CTEs

This is the part I'm most excited about.

Any SQL cell can reference another cell's query with {{cell_N}}. At execution time, the reference is resolved into a CTE:

-- Cell 1: Base query
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id

-- Cell 3: References Cell 1
SELECT * FROM {{cell_1}} WHERE total > 1000

Becomes:

WITH cell_1 AS (
  SELECT customer_id, SUM(amount) AS total
  FROM orders
  GROUP BY customer_id
)
SELECT * FROM cell_1 WHERE total > 1000

No temp tables, no copy-paste. Change the base query, re-run downstream cells, everything stays in sync. You can chain across multiple cells and every intermediate result stays visible.
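A minimal sketch of what that resolution step could look like, assuming a function and data shape I made up for illustration (the real Tabularis implementation may differ, and handling chained references would need recursion or a dependency graph):

```python
import re

def resolve_references(sql: str, cells: dict[int, str]) -> str:
    """Resolve {{cell_N}} placeholders into CTEs (hypothetical sketch).

    `cells` maps cell numbers to their raw SQL. Each referenced cell
    becomes a named CTE, and each placeholder becomes a plain table name.
    """
    referenced = sorted({int(n) for n in re.findall(r"\{\{cell_(\d+)\}\}", sql)})
    if not referenced:
        return sql  # no references, run as-is
    ctes = ",\n".join(f"cell_{n} AS (\n{cells[n]}\n)" for n in referenced)
    body = re.sub(r"\{\{cell_(\d+)\}\}", r"cell_\1", sql)
    return f"WITH {ctes}\n{body}"
```

Running it on the example above produces exactly the `WITH cell_1 AS (...)` query shown.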

Two SQL cells with cell reference — Cell 11 filters results from Cell 10 using CTE syntax


Inline Charts

Any result with 2+ columns and at least one row can be charted — bar, line, or pie — directly in the cell. Pick a label column and value columns, done. Config is saved with the cell.

Not meant to replace BI tools. It's for when you're exploring and want a quick visual check before writing the next query.

SQL cell with bar chart and label column selector dropdown open showing chart configuration

Pie chart and line chart in separate notebook cells showing chart type variety


Parameters

Define once, use everywhere:

@start_date = '2024-01-01'
@end_date   = '2024-12-31'
@min_amount = 500

Every SQL cell that uses @start_date gets the value substituted before execution. Change the value, re-run, and all queries pick it up. Great for monthly reports, cohort comparisons, anything where the logic stays the same but the inputs change.
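The substitution itself can be sketched in a few lines; the function name and dict shape here are assumptions, not the actual implementation (which might instead bind values as driver-level prepared-statement parameters):

```python
import re

def substitute_params(sql: str, params: dict[str, str]) -> str:
    """Substitute @name parameters before execution (hypothetical sketch).

    Values are spliced in verbatim, so string parameters should already
    carry their quotes (e.g. "'2024-01-01'"). Longest names are matched
    first so @start_date is never clobbered by a shorter @start.
    """
    for name in sorted(params, key=len, reverse=True):
        sql = re.sub(rf"@{re.escape(name)}\b", params[name], sql)
    return sql
```

Textual splicing keeps the sketch short; a real client would want driver-side binding to avoid injection through parameter values.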

Notebook parameters panel with productCategory and orderStatus variables defined


Parallel Execution

Not every cell depends on the previous one. Mark independent cells with the lightning bolt icon and they run concurrently during "Run All" instead of waiting in sequence. For notebooks with heavy queries against different tables, this makes a real difference.
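One way to schedule that, sketched with names and a cell tuple shape I invented for this example: sequential cells run one at a time, while a consecutive run of lightning-marked cells is batched into a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def run_cells_with_parallel(cells):
    """Run cells in order, batching adjacent parallel-marked cells
    (hypothetical sketch). Each cell is a (name, parallel_flag, fn) tuple.
    """
    results = []
    i = 0
    while i < len(cells):
        name, parallel, fn = cells[i]
        if not parallel:
            results.append((name, fn()))  # sequential cell: run and wait
            i += 1
            continue
        # Collect the run of consecutive parallel-marked cells...
        batch = []
        while i < len(cells) and cells[i][1]:
            batch.append(cells[i])
            i += 1
        # ...and execute the whole batch concurrently.
        with ThreadPoolExecutor() as pool:
            futures = [(n, pool.submit(f)) for n, _, f in batch]
            results.extend((n, fut.result()) for n, fut in futures)
    return results
```

Results still come back in notebook order, which keeps "every intermediate result stays visible" true even when cells finish out of order.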

Two SQL cells with parallel execution lightning bolt icons enabled for concurrent running


Run All + Stop on Error

Ctrl+Shift+Enter runs every SQL cell top to bottom. Stop on Error controls whether it halts at the first failure or keeps going. After execution, a summary card shows succeeded/failed/skipped counts — click a failed cell to jump straight to it.
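The succeeded/failed/skipped tally follows directly from that rule; here's a sketch under assumed names (the real summary card presumably tracks more than counts):

```python
def run_all_with_summary(cells, stop_on_error=True):
    """Run cells top to bottom, tallying a summary (hypothetical sketch).

    With stop_on_error, every cell after the first failure is skipped;
    otherwise failures are counted and execution continues.
    """
    summary = {"succeeded": 0, "failed": 0, "skipped": 0}
    halted = False
    for fn in cells:
        if halted:
            summary["skipped"] += 1
            continue
        try:
            fn()
            summary["succeeded"] += 1
        except Exception:
            summary["failed"] += 1
            if stop_on_error:
                halted = True
    return summary
```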


Multi-Database in One Notebook

Each SQL cell can target a different database connection. Pull from production PostgreSQL in one cell, compare with your analytics SQLite in the next. Works across MySQL, MariaDB, PostgreSQL, and SQLite.

SQL cell with database selector dropdown showing multiple MySQL, PostgreSQL, and SQLite connections


Execution History

Every cell keeps its last 10 runs — timestamp, duration, row count. You can restore any previous query version. Useful when you've been iterating and need to go back.
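A last-10 ring buffer like this is a natural fit for `collections.deque` with `maxlen`; the class and field names below are illustrative, not Tabularis's actual data model:

```python
from collections import deque
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Run:
    query: str
    timestamp: datetime
    duration_ms: float
    row_count: int

class CellHistory:
    """Keep a cell's last 10 runs, newest first (hypothetical sketch)."""

    def __init__(self, max_runs: int = 10):
        self.runs: deque[Run] = deque(maxlen=max_runs)

    def record(self, run: Run) -> None:
        self.runs.appendleft(run)  # the oldest run falls off past 10

    def restore(self, index: int) -> str:
        """Return the query text of a previous run for restoring."""
        return self.runs[index].query
```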

Execution history panel showing timestamp, duration, and row count for previous query runs


AI Assist

Each SQL cell has AI and Explain buttons — describe what you want, get SQL back, or break down an existing query. There's also an auto-naming feature: click the sparkles icon and AI generates a cell name based on the content. Named cells show up in a notebook outline for navigation.

SQL cell with AI and Explain buttons, execution history, and collapsed cells overview

Notebook outline panel with AI-generated cell names and markdown headings as table of contents


Organization

  • Collapse cells to show just headers
  • Drag and drop to reorder
  • Cell names (manual or AI-generated) for identity
  • Markdown cells as section dividers

Notebook with collapsed and expanded SQL and markdown cells showing organization


Import / Export

  • .tabularis-notebook — JSON with cells, parameters, charts. No result data. Share it, import it, connect to a different DB, run it.
  • HTML export — self-contained document with rendered markdown, syntax highlighting, embedded result tables. Dark-themed.
  • Individual results export as CSV or JSON.

What's Not Done Yet

Being honest about rough edges:

  • Large notebooks (30+ cells) need better virtualization
  • Circular reference detection is missing — needs a dependency graph
  • Chart customization is minimal (no axis labels, no color palettes)
  • Keyboard navigation between cells is partially implemented
  • Notebook-level undo/redo doesn't exist yet (cell-level works via Monaco)
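For the missing circular-reference detection, one standard approach is a DFS over the dependency graph implied by the `{{cell_N}}` placeholders. This is a sketch of that technique, not Tabularis code:

```python
import re

def find_cycle(cells: dict[int, str]) -> "list[int] | None":
    """Detect circular {{cell_N}} references via DFS (sketch of the
    dependency-graph approach; returns one cycle path, or None)."""
    deps = {n: [int(m) for m in re.findall(r"\{\{cell_(\d+)\}\}", sql)]
            for n, sql in cells.items()}
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {n: WHITE for n in deps}
    stack = []

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, []):
            if color.get(m) == GRAY:               # back edge: cycle found
                return stack[stack.index(m):] + [m]
            if color.get(m, BLACK) == WHITE:       # unknown refs are skipped
                cycle = dfs(m)
                if cycle:
                    return cycle
        color[n] = BLACK
        stack.pop()
        return None

    for n in deps:
        if color[n] == WHITE:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None
```

The same graph would also give the correct re-run order for downstream cells via a topological sort.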

Why Build This?

Database clients haven't really evolved beyond "connect, query, see table." Analysis tooling moved forward — Jupyter, Observable, dbt — but the DB client stayed behind.

Notebooks in Tabularis bet that the database client is the right place for exploratory SQL analysis. You already have the connection, the schema, autocomplete, query history. Adding cells, charts, references, and parameters on top of that means the whole workflow, from first query to shareable report, happens without switching tools.

It's not a Jupyter replacement. No Python, no R. It's purpose-built for SQL, and for the kind of work most people actually do with their database every day — ad-hoc exploration, report building, data validation, performance investigation — that focus is a feature.

Landing soon. If you want to try it, check out Tabularis.
