yebor974 for Filament Mastery

Originally published at filamentmastery.com

Filament slow on large table? Optimize with Postgres partitions

When I started working on a client project, I ran into a familiar challenge: a Filament Resource managing millions of sensor measurements.

The table contained several years of historical data. Most users usually focused on the current month, but occasionally they needed to look back at older measurements for special cases.

At first, it looked like the resource was slow simply because of the table's sheer size, but profiling revealed a deeper issue. It became clear that Filament itself was not the bottleneck; the database was.

Identifying the problem

Filtering by date or sensor type took several seconds, even for just the current month. Sorting and pagination felt sluggish. It was a classic case of “everything works, but not fast enough for production.”

The goal was simple: allow fast access to current data while keeping historical queries possible, without rewriting the panel.

The example below has been simplified for clarity. In the real project, additional filters and performance optimizations were applied. The default date filters target the current month, but queries on older data remain fully supported. The project was built with FilamentPHP and PostgreSQL.

Partitioning the table by month

To solve the problem, I used PostgreSQL table partitioning, splitting the data by month. Here’s a simplified example:

-- Parent table
CREATE TABLE sensor_measurements (
    id BIGSERIAL NOT NULL,
    sensor_id BIGINT NOT NULL REFERENCES sensors(id) ON DELETE CASCADE,
    value NUMERIC(10,2) NOT NULL,
    measured_at TIMESTAMP(0) NOT NULL,
    data JSONB,
    created_at TIMESTAMP(0) DEFAULT now(),
    updated_at TIMESTAMP(0) DEFAULT now(),
    PRIMARY KEY (id, measured_at)
) PARTITION BY RANGE (measured_at);

-- Monthly partitions
CREATE TABLE IF NOT EXISTS sensor_measurements_2026_01 PARTITION OF sensor_measurements
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');

CREATE TABLE IF NOT EXISTS sensor_measurements_2026_02 PARTITION OF sensor_measurements
    FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');


Note that the partition key must be part of the primary key, which is why the primary key above is (id, measured_at).
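Since Laravel's schema builder has no native support for PARTITION BY, the parent table has to be created with raw SQL inside a migration. Here is a minimal sketch of what that migration could look like (the migration itself is an assumption for illustration, not code from the project):

// database/migrations/xxxx_create_sensor_measurements_table.php (hypothetical)
use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\DB;

return new class extends Migration
{
    public function up(): void
    {
        // The schema builder cannot express PARTITION BY, so use a raw statement.
        DB::statement(<<<'SQL'
            CREATE TABLE sensor_measurements (
                id BIGSERIAL NOT NULL,
                sensor_id BIGINT NOT NULL REFERENCES sensors(id) ON DELETE CASCADE,
                value NUMERIC(10,2) NOT NULL,
                measured_at TIMESTAMP(0) NOT NULL,
                data JSONB,
                created_at TIMESTAMP(0) DEFAULT now(),
                updated_at TIMESTAMP(0) DEFAULT now(),
                PRIMARY KEY (id, measured_at)
            ) PARTITION BY RANGE (measured_at)
        SQL);
    }

    public function down(): void
    {
        DB::statement('DROP TABLE IF EXISTS sensor_measurements');
    }
};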

Partitioning ensures queries for the current month only scan a small, relevant subset, keeping performance high. Historical queries still work efficiently by targeting older partitions.

I created a scheduled job to automatically create the upcoming year/month partitions (a simplified sketch is shown below). More information about PostgreSQL partitioning is available in the official PostgreSQL documentation.
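As a rough idea of what such a job can look like, here is a minimal sketch of an Artisan command that creates the partition for the coming month. The command name and structure are assumptions; the original job is not shown in this article:

// app/Console/Commands/CreateNextMonthPartition.php (hypothetical name)
namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class CreateNextMonthPartition extends Command
{
    protected $signature = 'sensor-measurements:create-partition';
    protected $description = 'Create the sensor_measurements partition for next month';

    public function handle(): int
    {
        $start = now()->addMonth()->startOfMonth();
        $end = $start->copy()->addMonth();

        // e.g. sensor_measurements_2026_03
        $partition = 'sensor_measurements_' . $start->format('Y_m');

        DB::statement(sprintf(
            'CREATE TABLE IF NOT EXISTS %s PARTITION OF sensor_measurements FOR VALUES FROM (%s) TO (%s)',
            $partition,
            DB::getPdo()->quote($start->toDateString()),
            DB::getPdo()->quote($end->toDateString()),
        ));

        $this->info("Partition {$partition} is ready.");

        return self::SUCCESS;
    }
}

Scheduled monthly ahead of time (for example with $schedule->command('sensor-measurements:create-partition')->monthlyOn(25, '02:00'); in the scheduler), this keeps the next partition ready before any data arrives for it.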

Adding indexes and optimizing queries

In this setup, indexes are created on each partition rather than on the parent table (since PostgreSQL 11 you can also create them on the parent, and they are propagated to every partition). For example:

-- GIN index for JSONB filters on the January partition
CREATE INDEX idx_sensor_measurements_2026_01_data_gin
    ON sensor_measurements_2026_01
    USING GIN (data);

-- B-tree index for sensor and date-range filters on the same partition
CREATE INDEX idx_sensor_measurements_2026_01_sensor_id_measured_at
    ON sensor_measurements_2026_01 (sensor_id, measured_at);


This ensures that even with filters, the database can quickly find the relevant rows.
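For reference, the kind of query that benefits from these indexes looks like the sketch below: the date range lets PostgreSQL prune to a single partition, and the JSONB containment check can be served by the per-partition GIN index. The model name and payload key are assumptions made for the example:

use App\Models\SensorMeasurement; // hypothetical Eloquent model for the partitioned table

$sensorId = 42; // example sensor

// Current-month readings for one sensor whose JSONB payload contains a flag.
// On PostgreSQL, whereJsonContains() compiles to a `@>` containment check.
$measurements = SensorMeasurement::query()
    ->where('sensor_id', $sensorId)
    ->whereBetween('measured_at', [now()->startOfMonth(), now()->endOfMonth()])
    ->whereJsonContains('data', ['status' => 'alert'])
    ->orderByDesc('measured_at')
    ->paginate(50);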

Adjusting the Filament Resource

Finally, I adapted the Filament Resource with default filters for the current month:

use Carbon\Carbon;
use Filament\Forms\Components\DatePicker;
use Filament\Resources\Resource;
use Filament\Tables\Columns\TextColumn;
use Filament\Tables\Filters\Filter;
use Filament\Tables\Table;
use Illuminate\Database\Eloquent\Builder;

class SensorMeasurementResource extends Resource
{
    // ...

    public static function table(Table $table): Table
    {
        $now = Carbon::now();

        return $table
            ->columns([
                TextColumn::make('sensor_id')->sortable(),
                TextColumn::make('value'),
                TextColumn::make('measured_at')->dateTime(),
            ])
            ->filters([
                Filter::make('date')
                    ->form([
                        DatePicker::make('from')
                            ->default($now->copy()->startOfMonth()),
                        DatePicker::make('to')
                            ->default($now->copy()->endOfMonth()),
                    ])
                    ->query(fn(Builder $query, array $data) => $query
                        ->when($data['from'], fn($q) => $q->where('measured_at', '>=', $data['from']))
                        ->when($data['to'], fn($q) => $q->where('measured_at', '<=', $data['to']))
                    ),
            ]);
    }
}


I didn't include the TableSchema file in this example for simplicity. One limitation is that the filter cannot be made truly mandatory: users can still remove it from the filter indicators, even though the goal is to always have a date range so the database only scans the relevant partitions. A possible workaround is sketched below.
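One workaround I would consider (not something from the original project, and it assumes the filter's query callback still runs with empty values once the indicator is removed) is to fall back to the current month inside the callback itself, so the query always stays bounded:

use Filament\Forms\Components\DatePicker;
use Filament\Tables\Filters\Filter;
use Illuminate\Database\Eloquent\Builder;

Filter::make('date')
    ->form([
        DatePicker::make('from')->default(now()->startOfMonth()),
        DatePicker::make('to')->default(now()->endOfMonth()),
    ])
    // Fall back to the current month when a value is cleared, so the query
    // always carries a date range and partition pruning keeps working.
    ->query(fn (Builder $query, array $data): Builder => $query
        ->where('measured_at', '>=', $data['from'] ?? now()->startOfMonth())
        ->where('measured_at', '<=', $data['to'] ?? now()->endOfMonth())
    );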

Results

The impact was immediate:

  • Queries for the current month now return in under 300ms
  • Historical queries remain fast enough for occasional analysis
  • Users can explore both current and past data without frustration

Lessons Learned

  1. Filament can handle large datasets, as long as the underlying database is optimized.
  2. Partitioning by time periods is ideal for sensor or historical measurement data.
  3. Proper indexes + pagination make huge tables usable in production.
  4. Profiling queries first saves hours of wasted debugging in the admin panel.
  5. Small tweaks in Filament (filters, lazy-loading) can dramatically improve UX.

If you enjoyed this kind of real-world experience and case study, don’t forget to like and share it!

📬 Join the community on filamentmastery.com
