DEV Community


Handling PostgreSQL slow counting on Elixir

Edipo Vinicius da Silva ・ 3 min read

Some time ago I found an unexpected issue when working on a simple JSON API project that should provide paginated resources.

Most JSON API-friendly libraries perform a record count in order to provide pagination links in each response, especially the last-page link. But under some conditions a simple count query can be very slow:


Specifically, in this case, this query was taking over 6 seconds when performed on a table with over 2.7M records.
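For reference, the offending query is just a plain count; a minimal reproduction looks like this (the `users` table name is a placeholder, not the table from the original project):

```sql
-- A full count forces PostgreSQL to visit every live row version
SELECT COUNT(*) FROM users;
```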

Understanding the issue

This count query is time-consuming due to PostgreSQL's MVCC implementation. MVCC (Multiversion Concurrency Control) is a method commonly used to provide data consistency when multiple processes are accessing the same data. This is usually done by taking snapshots of the data, so when concurrent transactions are working on the same data, each one will read a "version" of a given database object based on the transaction time.

In order to count the table rows, PostgreSQL needs to ensure that they exist. But given the multiple snapshots/versions of each record, PostgreSQL is unable to summarize the whole table directly. Instead, PostgreSQL reads each row, performing a sequential scan.
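You can confirm this with EXPLAIN: on a large table, the plan for a bare count contains a Seq Scan node (again, `users` is a placeholder table name):

```sql
-- On a large table the plan will show "Seq Scan on users"
EXPLAIN SELECT COUNT(*) FROM users;
```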

How to fix this?

There are different approaches to fix this issue, including a trigger-based mechanism. In my case, it was acceptable to use an estimated count of the rows, and luckily, PostgreSQL provides exactly that via pg_class reltuples:

SELECT reltuples::BIGINT AS estimate FROM pg_class WHERE relname=<table_name>

Wait... what is reltuples?

[It is the] Number of rows in the table. This is only an estimate used by the planner. - PostgreSQL: Documentation: pg_class

The accuracy of this estimate depends on how frequently the planner updates pg_class reltuples:

It is updated by VACUUM, ANALYZE, and a few DDL commands such as CREATE INDEX. - PostgreSQL: Documentation: pg_class

You are probably not executing CREATE INDEX that often, but don't worry: ANALYZE also updates this value, and VACUUM and ANALYZE are automatically executed by the autovacuum daemon, which is enabled by default.
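If you want to check how fresh the estimate is, you can look at the table's autovacuum statistics, or refresh the statistics yourself (the `users` table name is a placeholder):

```sql
-- When were this table's statistics last refreshed?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'users';

-- Refresh reltuples on demand
ANALYZE users;
```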

Notice this solution is based on the estimated total record count, so it won't work for cases where you need to count filtered query results.

OK, let's write some Elixir code!

In our implementation, we were using scrivener_ecto, which is a very nice package that helps you add pagination to your Ecto queries.

Recently we added the total_entries option to scrivener_ecto so any custom logic can be used to calculate the total_entries and fix similar situations.

This is the function that handles the pagination based on a given conn and page parameters. We pass the query to Scrivener.paginate with a config struct containing the total_entries option:

@spec handle_pagination(conn :: Plug.Conn.t(), page :: number | atom, page_size :: number | atom, query :: module) :: Scrivener.Page.t()
defp handle_pagination(conn, page, page_size, query) do
  config = %Scrivener.Config{
    module: Repo,
    page_number: page,
    page_size: page_size,
    options: [total_entries: total_entries(query)]
  }

  query |> Scrivener.paginate(config)
end

Notice the call to total_entries on the previous function. We get the estimated value by executing SQL code to retrieve reltuples directly:

# Retrieves the estimated record count
@spec total_entries(query :: module) :: number
defp total_entries(query) do
  table_name = table_name(query)

  {:ok, result} =
    Ecto.Adapters.SQL.query(
      Repo,
      "SELECT reltuples::BIGINT AS estimate FROM pg_class WHERE relname = $1",
      [table_name]
    )

  result.rows |> List.first() |> List.first()
end

We still have a missing piece in this puzzle: we don't know the table name to use when querying pg_class's reltuples. To fix this we read __schema__(:source) from the query's model:

# Determines the table name based on a given query
@spec table_name(query :: module) :: String.t()
defp table_name(query) do
  module = module_from_query(query)
  module.__schema__(:source)
end

# Retrieves the model module from a given query
@spec module_from_query(module :: module) :: module
defp module_from_query(%{from: {_table, module}}), do: module
defp module_from_query(module), do: module
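Putting the pieces together, a controller action could use the helper like this (a hypothetical sketch: the module, schema, and parameter names are assumptions, not part of the original project):

```elixir
defmodule MyAppWeb.UserController do
  use MyAppWeb, :controller

  alias MyApp.{Repo, User}

  def index(conn, params) do
    page = Map.get(params, "page", 1)
    page_size = Map.get(params, "page_size", 20)

    # Paginate with an estimated total instead of a slow COUNT(*)
    page_result = handle_pagination(conn, page, page_size, User)

    render(conn, "index.json", users: page_result.entries, page: page_result)
  end

  # handle_pagination/4, total_entries/1, table_name/1 and
  # module_from_query/1 defined as shown earlier in this article
end
```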


In certain cases, when relatively accurate values are acceptable, we can take advantage of PostgreSQL's estimated count capabilities. This, plus the very open Elixir community that allowed us to add a new option to an existing package, made for an almost effortless transition from an unacceptably slow query to a very efficient endpoint: it now takes less than 0.5 ms on my computer with half a million records.

Suggestions and comments are more than welcome!
