Franck Pachot for YugabyteDB

Server-side cache for YugabyteDB sequences to work around the Npgsql `DISCARD SEQUENCES`

Npgsql, the .NET data provider for PostgreSQL used by Entity Framework, issues a DISCARD ALL or DISCARD SEQUENCES command when a connection is released to the pool, unless No Reset On Close=true is set, in which case no discard is called. Discarding the changes made by the connection, such as temporary tables or parameter settings, is a good idea. However, the sequence cache is a feature explicitly set when creating a sequence to optimize it, and starting with a cold cache each time a connection is grabbed from the pool is not desirable.
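For reference, here is what a connection string with this option looks like (host, port, database, and credentials are placeholders):

Host=yb-tserver.example.com;Port=5433;Database=yugabyte;Username=yugabyte;Password=secret;No Reset On Close=true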

The consequence of discarding the sequence state is that you cannot scale your application with a sequence cache because:

  • PostgreSQL caches the sequence at the connection level, and discarding the sequence state loses all its benefits.
  • You don't benefit from the sequence cache but keep all its drawbacks, as a gap is created each time the cache is discarded (see the sketch below).
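Here is a minimal sketch of the gap problem (s_demo is a throwaway name, and the exact values assume a fresh sequence):

create sequence s_demo cache 100;
select nextval('s_demo');  -- returns 1 and caches values 1..100 in this connection
discard sequences;         -- throws away the cached values 2..100
select nextval('s_demo');  -- returns 101: a gap of 99 values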

Fortunately, YugabyteDB can cache the sequence numbers on the tablet server rather than on the PostgreSQL backend.

ysql_sequence_cache_method=connection

Here is an example with YugabyteDB using connection-side caching, the same as what PostgreSQL does, and the default for YugabyteDB:

yugabyted start --tserver_flags=ysql_sequence_cache_method=connection
yugabyted connect ysql <<<'create sequence s cache 100;'


I have created a sequence with CACHE 100, also the default, as set by ysql_sequence_cache_minval=100.
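You can check the cache size from the catalog (pg_sequences is the standard PostgreSQL view, available in YSQL as well):

select schemaname, sequencename, cache_size
from pg_sequences
where sequencename = 's';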

I run ten concurrent sessions reading the next value of the sequence from prepared statements to stress-test the sequence:

/home/yugabyte/postgres/bin/ysql_bench -r -M prepared -n -c 10 -T 15 -h $(hostname) -f /dev/stdin <<SQL
select nextval('s');
SQL


Here is the result: 35k values generated from the sequence per second.

latency average = 0.284 ms
tps = 35255.525557 (including connections establishing)
tps = 35295.129672 (excluding connections establishing)
statement latencies in milliseconds:
         0.279  select nextval('s');

To show the problem, I run the same workload with a DISCARD SEQUENCES after each call:

/home/yugabyte/postgres/bin/ysql_bench -r -M prepared -n -c 10 -T 15 -h $(hostname) -f /dev/stdin <<SQL
select nextval('s');
discard sequences;
SQL


The throughput is ten times lower:

latency average = 3.311 ms
tps = 3020.134687 (including connections establishing)
tps = 3023.566441 (excluding connections establishing)
statement latencies in milliseconds:
         2.689  select nextval('s');
         0.571  discard sequences;

Here, each nextval() call had to update the sequence value stored in the database. This involves a remote write with related network latency, which can be bad, especially in a multi-region deployment.
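To make this concrete, here is a sketch of the worst case (s_nocache is a throwaway name; the comments assume the YugabyteDB behavior described above):

create sequence s_nocache cache 1;
select nextval('s_nocache');  -- must persist the new sequence value: a remote write
select nextval('s_nocache');  -- another remote write, and so on for every call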

ysql_sequence_cache_method=server

YugabyteDB offers the possibility of caching the sequence on the tablet server. The DISCARD statements, which concern only connections, do not affect this.

I restart my server with this setting:

yugabyted stop
yugabyted start --tserver_flags=ysql_sequence_cache_method=server

With my example, the throughput has doubled:

latency average = 1.616 ms
tps = 6188.567466 (including connections establishing)
tps = 6195.493912 (excluding connections establishing)
statement latencies in milliseconds:
         0.842  select nextval('s');
         0.750  discard sequences;

This shows that the optimization works. I'm running on a single-node server, where the sequence is on the same server. You will see a more significant improvement with a multi-zone or multi-region deployment because fetching the sequence value for each nextval() call is not scalable. That's why SQL sequences have a cache: to allocate multiple values at once and avoid a round-trip for each call.
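If the round-trips still dominate, increasing the cache amortizes them further (1000 is an arbitrary value to tune for your workload):

alter sequence s cache 1000;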

You may wonder if there's an overhead to caching on the tablet server rather than the connection. Of course, because this cache is shared by multiple connections, it must be synchronized with mutexes, and you may see some spinlock contention with such an intensive stress test.

With the DISCARD statement removed, switching from a connection-side cache to a server-side cache drops the transaction rate from 35k to 11k per second:

latency average = 0.870 ms
tps = 11489.416992 (including connections establishing)
tps = 11502.260194 (excluding connections establishing)
statement latencies in milliseconds:
         0.857  select nextval('s');

The reason connection-side caching is the default is that it is the fastest when connections are used correctly.
There are two reasons why you might prefer setting the server-side cache:

  1. The application or framework discards the connection-side cache, as we have reproduced here to simulate what Npgsql does.
  2. The connection pool is too dynamic and opens too many new connections at once, which start with a cold cache and compete to update the sequence table (sketched below with a new connection per transaction).
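To simulate the second case, a hedged sketch: the -C option, inherited from pgbench, opens a new connection for each transaction, so every nextval() starts with a cold connection-side cache:

/home/yugabyte/postgres/bin/ysql_bench -r -n -c 10 -T 15 -C -h $(hostname) -f /dev/stdin <<SQL
select nextval('s');
SQL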

In such cases, if you cannot fix it from the application side, you can enable server-side caching for the sequences. If the shared cache becomes a bottleneck, it can still scale by adding more tablet servers to connect to.

About fixing it from the application side, I've opened the following issue:

DISCARD SEQUENCES is useless and not scalable #5705

On connection reset, DISCARD SEQUENCES is called. Why discard the sequence cache? This action eliminates all the benefits of having a sequence cache (the same as CACHE 1) while introducing numerous inconveniences (like creating many gaps). The sequence cache is designed to maintain generated next values across transactions within a physical session, to be scalable. Without a cache, or when the cache is cold, like after a DISCARD SEQUENCES, each nextval() call must update the physical sequence, which becomes a bottleneck when multiple sessions are calling it. The workaround is No Reset On Close=true but this disables all resets. Resetting temporary tables is desirable because, without it, data can be shared between unrelated sessions.



Calling DISCARD ALL is not a good idea for your application. You may want to reset only the session state that should not leak between pooled connections:

CLOSE ALL;
SET SESSION AUTHORIZATION DEFAULT;
RESET ALL;
UNLISTEN *;
SELECT pg_advisory_unlock_all();

If you want each connection to prepare its own statements, you can also add DEALLOCATE ALL. However, DISCARD PLANS and DISCARD SEQUENCES are undesirable as they flush caches that exist explicitly for performance and scalability reasons.

Note that PgBouncer also does a DISCARD ALL by default but you can change it with server_reset_query.
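For example, a pgbouncer.ini sketch (the statement list mirrors the one above; adapt it to your needs):

[pgbouncer]
; reset session state without flushing the plan and sequence caches
server_reset_query = CLOSE ALL; SET SESSION AUTHORIZATION DEFAULT; RESET ALL; UNLISTEN *; SELECT pg_advisory_unlock_all();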
