XLTable: Bringing the OLAP Experience Back to Excel on Modern Data Warehouses
The problem: data has grown, but users haven’t
Business users have always worked with data.
For many years, Excel was the primary tool for analysis, and it worked well as long as datasets were relatively small.
Over time, data volumes have grown dramatically. Today, meaningful analytical work often requires knowledge of SQL, Python, and modern data warehouses. These technologies are far beyond the skill set of most business users.
As a result, companies are forced to involve data engineers, analysts, or BI specialists even for simple analytical questions. This increases cost, slows down decision-making, and creates a gap between business questions and answers.
Why Excel is still alive
Despite countless predictions, Excel has not disappeared.
The reason is simple: Excel is intuitive. Pivot tables allow users to explore data freely — choosing metrics, slicing by dimensions, drilling down, and rearranging reports on the fly.
No BI tool has fully replicated this combination of flexibility, speed, and familiarity for a broad audience.
Why BI tools did not replace Excel
Modern BI tools such as Power BI, Looker Studio, and DataLens are powerful and well-designed. However, they have not become a true replacement for Excel in everyday analytical work.
In practice:
- reports are often predefined,
- ad-hoc analysis is limited,
- semantic models require maintenance by specialists,
- users lose the feeling of direct interaction with data.
When business users want to explore data independently, they still turn to Excel.
OLAP was the right idea
In the late 1990s, Microsoft took an important step by bringing OLAP technology to a mass audience with SQL Server OLAP Services (later Analysis Services).
The idea was simple:
- data processing happens on a server,
- heavy computations are performed centrally,
- users work with data through Excel as a client.
This allowed business users to work with large datasets using familiar Excel pivot tables, while all complexity remained on the server side. OLAP dramatically improved accessibility of analytics.
Why classic OLAP stopped evolving
Traditional OLAP systems rely on pre-calculated cubes and aggregations. This worked well when storage was slow and compute was expensive, but it fits poorly with modern data architectures: cubes must be processed ahead of time, they duplicate the source data, and they grow stale between loads.
At the same time, columnar databases and cloud data warehouses became dominant:
- ClickHouse
- BigQuery
- Snowflake
- Redshift
These systems are extremely fast and scalable, but working with them still requires SQL, Python, or BI tools.
As a result:
- OLAP provides the right user experience but is technologically outdated,
- modern data warehouses are powerful but inaccessible to most users.
The idea behind XLTable
XLTable was created to bridge this gap.
Our goal is simple:
Give users the same experience they had with OLAP in Excel, but on top of modern columnar data warehouses.
Users should be able to:
- open Excel,
- connect to data,
- build pivot tables,
- work with measures and dimensions,
without learning SQL or Python.
A short overview of OLAP principles
OLAP is based on a multidimensional data model.
Conceptually, data is represented as a multidimensional cube:
- Dimensions define axes (time, products, customers, regions)
- Measures are stored in cube cells (revenue, quantity, average price)
When a user asks a question like “revenue by year”, they select:
- one measure (revenue),
- one dimension (year).
Excel pivot tables act as a client for OLAP systems, allowing users to arrange dimensions in rows and columns, apply filters, sorting, and drill-down operations.
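To make the cube model concrete, here is a minimal Python sketch (purely illustrative, not XLTable code): a tiny fact table whose rows carry dimension values and a measure, and the "revenue by year" question answered by grouping on one dimension, exactly as a pivot table does.

```python
from collections import defaultdict

# A tiny fact table: each row carries dimension values and a measure.
facts = [
    {"year": 2023, "region": "EU", "revenue": 100.0},
    {"year": 2023, "region": "US", "revenue": 150.0},
    {"year": 2024, "region": "EU", "revenue": 120.0},
    {"year": 2024, "region": "US", "revenue": 180.0},
]

def pivot(rows, dimension, measure):
    """Answer a question like 'revenue by year': group by one
    dimension and sum one measure, as a pivot table would."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[dimension]] += row[measure]
    return dict(totals)

revenue_by_year = pivot(facts, "year", "revenue")
# {2023: 250.0, 2024: 300.0}
```

Swapping `"year"` for `"region"` answers a different question from the same facts, which is the essence of the multidimensional model: one set of measures, many axes to slice it by.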
OLAP vs columnar databases
Classic OLAP systems:
- load data into cubes,
- pre-calculate aggregations,
- store results for fast access.
Columnar databases work differently:
- data is stored in raw, column-oriented form,
- aggregations are calculated on the fly,
- performance is achieved through compression, parallelism, and query optimization.
Modern columnar systems are designed to compute analytical queries efficiently without pre-aggregation.
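The contrast can be sketched in a few lines of Python (a toy illustration, not either system's actual code): a classic cube computes every total once at load time and then serves lookups, while a columnar engine scans the raw columns when the query arrives.

```python
# Raw data stored column-wise, the way a columnar database lays it out.
years = [2023, 2023, 2024, 2024]
revenue = [100.0, 150.0, 120.0, 180.0]

# Classic OLAP: aggregate ONCE at cube-processing time, then serve lookups.
cube = {}
for y, r in zip(years, revenue):
    cube[y] = cube.get(y, 0.0) + r

def cube_lookup(year):
    return cube[year]  # fast, but stale until the cube is reprocessed

# Columnar warehouse: aggregate AT QUERY TIME over the raw columns.
def query(year):
    return sum(r for y, r in zip(years, revenue) if y == year)

assert cube_lookup(2024) == query(2024) == 300.0
```

Both paths return the same answer; the difference is when the work happens and whether the result can ever be out of date.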
Shifting computation to the database
XLTable follows a simple principle:
All heavy computation should happen inside the database.
Instead of pre-calculating cubes, XLTable delegates aggregations and grouping to the underlying columnar database, using it exactly for what it was designed for.
XLTable acts as a semantic and protocol layer, not as a compute engine.
Open OLAP standards: XMLA and MDX
OLAP is an open technology.
Besides Microsoft, OLAP servers have been implemented by multiple vendors such as Oracle and Hyperion (Essbase). The ecosystem relies on two key standards:
- XMLA (XML for Analysis) — a protocol used by clients (such as Excel) to communicate with OLAP servers
- MDX — a SQL-like query language for multidimensional data
When Excel works with an OLAP server, it sends XMLA requests containing embedded MDX queries.
Microsoft published the XMLA specification, making it possible to implement compatible OLAP servers.
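For illustration, here is roughly what such a request looks like, built as a string in Python. This is a hand-written sketch of the SOAP shape described in the XMLA specification, trimmed for readability; real requests from Excel carry many more properties.

```python
MDX = """SELECT [Measures].[Revenue] ON COLUMNS,
       [Date].[Year].MEMBERS ON ROWS
FROM [Sales]"""

# Skeleton of an XMLA Execute call: a SOAP envelope whose Command
# element carries the MDX statement. Element names follow the XMLA
# spec; namespaces and properties are trimmed for readability.
def xmla_execute(mdx: str, catalog: str) -> str:
    return f"""<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command>
        <Statement>{mdx}</Statement>
      </Command>
      <Properties>
        <PropertyList>
          <Catalog>{catalog}</Catalog>
        </PropertyList>
      </Properties>
    </Execute>
  </soap:Body>
</soap:Envelope>"""

request = xmla_execute(MDX, "Sales")
```

A server that can answer this envelope, and the matching Discover requests for metadata, looks like an OLAP server to Excel, regardless of what runs behind it.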
What XLTable actually is
XLTable is a custom OLAP-compatible server that:
- accepts XMLA requests from Excel over HTTP,
- translates MDX queries into SQL,
- executes them on modern data warehouses,
- returns results back to Excel in XMLA format.
From Excel’s perspective, XLTable behaves like a classic OLAP server.
From the database’s perspective, XLTable is a client that generates optimized SQL.
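The translation step can be sketched as follows. This is a deliberately naive illustration for a single query pattern, not XLTable's actual translator (which handles full MDX), and the model mapping shown is hypothetical: one measure on columns and one dimension level on rows becomes a plain GROUP BY.

```python
import re

# Hypothetical mapping from semantic-model names to warehouse columns.
MODEL = {
    "measures": {"Revenue": "SUM(sales.revenue)"},
    "levels":   {"Date.Year": "sales.year"},
    "table":    "sales",
}

def mdx_to_sql(mdx: str) -> str:
    """Translate the single pattern
    SELECT [Measures].[M] ON COLUMNS, [D].[L].MEMBERS ON ROWS FROM [Cube]
    into an aggregate query. Real MDX is far richer than this."""
    m = re.search(r"\[Measures\]\.\[(\w+)\] ON COLUMNS", mdx)
    d = re.search(r"\[(\w+)\]\.\[(\w+)\]\.MEMBERS ON ROWS", mdx)
    measure = MODEL["measures"][m.group(1)]
    level = MODEL["levels"][f"{d.group(1)}.{d.group(2)}"]
    return (f"SELECT {level}, {measure} "
            f"FROM {MODEL['table']} GROUP BY {level}")

sql = mdx_to_sql(
    "SELECT [Measures].[Revenue] ON COLUMNS, "
    "[Date].[Year].MEMBERS ON ROWS FROM [Sales]"
)
# SELECT sales.year, SUM(sales.revenue) FROM sales GROUP BY sales.year
```

The key point is that the aggregation itself never leaves the warehouse: the server's job is string-to-string translation plus metadata, which is why it needs so little compute of its own.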
Defining the semantic model
Any OLAP system requires a semantic model:
- measures,
- dimensions,
- and mappings to source tables.
In Microsoft OLAP, this model is defined using graphical tools in Visual Studio.
XLTable takes a different approach. The semantic model is defined using SQL-based configuration scripts.
This approach is:
- simpler and more transparent,
- easier to version and maintain,
- explicit in terms of generated SQL,
- important for performance and cost control, especially in systems like BigQuery.
Once the model is defined, users simply connect to it from Excel.
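XLTable's concrete configuration syntax is left for the next article. As a hypothetical sketch of what any such model must express, here is the same information as plain data: measures as aggregate SQL expressions, dimensions as source columns, from which queries like a dimension's member list can be generated. All names below are invented for illustration.

```python
# Hypothetical semantic model (XLTable's real syntax may differ):
# each measure is an aggregate expression, each dimension a column.
SEMANTIC_MODEL = {
    "cube": "Sales",
    "table": "warehouse.sales",
    "measures": {
        "Revenue": "SUM(revenue)",
        "Quantity": "SUM(quantity)",
    },
    "dimensions": {
        "Year": "year",
        "Region": "region",
    },
}

def members_sql(model: dict, dimension: str) -> str:
    """SQL that lists a dimension's members, e.g. for Excel's
    filter dropdowns: every distinct value of the mapped column."""
    col = model["dimensions"][dimension]
    return f"SELECT DISTINCT {col} FROM {model['table']} ORDER BY {col}"

# members_sql(SEMANTIC_MODEL, "Region")
# -> SELECT DISTINCT region FROM warehouse.sales ORDER BY region
```

Because every measure is an explicit SQL expression, the cost of any pivot operation is predictable from the model itself, which matters on pay-per-query systems like BigQuery.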
Key features of XLTable
- Fully proprietary, in-house development
- All data processing happens inside the database (e.g. Snowflake)
- Support for multiple measure groups, dimensions, and hierarchies in a single cube
- Flexible caching strategies
- LDAP / Active Directory integration
- Fine-grained access control:
  - by measures,
  - by dimensions,
  - by dimension members
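Member-level access control can be sketched as a rewrite step applied before a query reaches the warehouse (a hypothetical illustration; XLTable's actual mechanism is not shown here): the server appends a filter derived from the user's grants.

```python
# Hypothetical per-user grants: which dimension members each user may see.
GRANTS = {
    "alice": {"region": {"EU"}},          # alice sees only EU rows
    "bob":   {"region": {"EU", "US"}},
}

def apply_row_security(sql: str, user: str) -> str:
    """Append a WHERE clause restricting each secured dimension
    to the members the user is allowed to see."""
    clauses = []
    for dim, members in GRANTS.get(user, {}).items():
        allowed = ", ".join(f"'{m}'" for m in sorted(members))
        clauses.append(f"{dim} IN ({allowed})")
    if not clauses:
        return sql
    return f"{sql} WHERE " + " AND ".join(clauses)

secured = apply_row_security("SELECT region, revenue FROM sales", "alice")
# SELECT region, revenue FROM sales WHERE region IN ('EU')
```

Enforcing the restriction in the generated SQL means the warehouse never returns forbidden rows at all, so Excel-side filtering cannot be bypassed.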
Performance and scalability
- XLTable does not perform aggregation or grouping itself
- It receives MDX from Excel, translates it to SQL, and executes it in the database
- This process requires minimal server resources
- Query performance therefore depends on the sizing and configuration of the database itself, which is exactly how columnar warehouses are meant to be used
- Results can be cached at user or server level
There are no artificial limits on the number of dimensions or measure groups.
A typical production configuration includes:
- ~30 source tables
- total data volume around 2 TB
- individual tables with up to 2 billion rows
What’s next
In the next article, we will walk through:
- installation,
- configuration,
- and the first connection from Excel to XLTable.
