PostgreSQL 18 (currently in beta) introduces a UUID version 7 generation function that features a time-ordered prefix, reducing the scattering seen in random UUID version 4. I was surprised by the enthusiasm for this. For time-incrementing identifiers, I prefer cached sequences. I think the primary advantage of UUID is the ability to generate it from the application, before hitting the database. I discussed this in a previous post: UUID in PostgreSQL.
There's one case where generating a UUID version 7 in the database is beneficial: bulk loads. UUIDv7 shares the same uuid datatype as UUIDv4, so both can be present in the same column.
For OLTP, UUIDv4 minimizes hotspots in the B-Tree index during concurrent inserts, and is often generated by the application. However, for bulk loads, UUIDv7 improves cache efficiency by updating fewer blocks. When data comes from another table and necessitates a new identifier, a built-in function is useful.
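For example, when copying rows from an existing table into a new one, the identifier can be generated inline in the INSERT ... SELECT. A minimal sketch, with hypothetical staging_orders and orders tables:
-- hypothetical backfill: each copied row gets a new time-ordered identifier
insert into orders (id, value)
select uuidv7(), value
from staging_orders;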
Bulk insert throughput
For this demo, I ran a bulk ingest job, inserting 10 million rows, and compared identifiers generated by uuidv7() and uuidv4(). The purpose is simply to illustrate the difference between the two.
Here is the full script:
-- reset (you are in a lab)
\! pkill -f "postgres: .* COPY"
drop table if exists demo;
drop table if exists progress;
-- create table and insert 10M rows in background
create table demo (
id uuid default uuidv7() primary key,
-- id uuid default uuidv4() primary key,
value text
);
\! psql -c "copy demo(value) from program 'base64 -w 100 /dev/urandom | head -10000000'" &
-- monitor size and progress
create table progress as
select now() ts,*,pg_indexes_size('demo')/8192 index_pages, pg_table_size('demo') tablesize , pg_indexes_size('demo') indexsize, pg_current_wal_lsn()
from pg_stat_progress_copy;
create index on progress(ts asc);
-- record last progress snapshot
insert into progress
select now(),*,pg_indexes_size('demo')/8192 index_pages, pg_table_size('demo') tablesize , pg_indexes_size('demo') indexsize, pg_current_wal_lsn()
from pg_stat_progress_copy
returning *
\;
-- display rate over last snapshots
select
pg_size_pretty((bytes_processed - lag(bytes_processed) over w) / extract(epoch from (ts - lag(ts) over w))) || '/s' as "COPY bytes/s",
(tuples_processed - lag(tuples_processed) over w) / extract(epoch from (ts - lag(ts) over w))::int as "rows/s",
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn , lag(pg_current_wal_lsn) over w) / extract(epoch from (ts - lag(ts) over w))) || '/s' as "WAL bytes/s",
round(pg_wal_lsn_diff(pg_current_wal_lsn , lag(pg_current_wal_lsn) over w) / (bytes_processed - lag(bytes_processed) over w),2) || ' x' as "WAL bytes/COPY bytes",
pg_size_pretty(indexsize) "index",
pg_size_pretty(tablesize) "table",
current_setting('shared_buffers') shared_buffers,
ts
from progress
window w as (order by ts asc)
order by ts asc -- limit 30
-- refresh every 10 seconds
\watch 10
The script first resets the lab environment to avoid leftovers from previous runs. It then creates a demo table and populates it via COPY with pseudo-random values, run in the background. The foreground session queries pg_stat_progress_copy every ten seconds and calculates some throughput statistics (the \; keeps the snapshot INSERT and the reporting SELECT in the same query buffer, so \watch re-executes both). Each snapshot records:
- Insert rate (rows per second)
- COPY throughput (input bytes per second)
- Index size and table size over time
- WAL (Write-Ahead Log) usage
UUIDv7
Here is the result with id uuid default uuidv7() primary key:
COPY bytes/s | rows/s | WAL bytes/s | WAL bytes/COPY bytes | index | table | shared_buffers | ts
--------------+--------+-------------+----------------------+---------+---------+----------------+-------------------------------
| | | | 8232 kB | 38 MB | 128MB | 2025-08-11 15:44:56.858214+00
9030 kB/s | 91601 | 22 MB/s | 2.46 x | 36 MB | 169 MB | 128MB | 2025-08-11 15:45:06.858344+00
8979 kB/s | 91036 | 22 MB/s | 2.50 x | 63 MB | 298 MB | 128MB | 2025-08-11 15:45:16.858316+00
819 kB/s | 8314 | 2046 kB/s | 2.50 x | 65 MB | 310 MB | 128MB | 2025-08-11 15:45:26.858289+00
826 kB/s | 8318 | 2048 kB/s | 2.48 x | 68 MB | 321 MB | 128MB | 2025-08-11 15:45:36.858318+00
7693 kB/s | 77984 | 19 MB/s | 2.50 x | 91 MB | 432 MB | 128MB | 2025-08-11 15:45:46.858495+00
7296 kB/s | 74032 | 18 MB/s | 2.50 x | 114 MB | 537 MB | 128MB | 2025-08-11 15:45:56.858338+00
5184 kB/s | 52531 | 13 MB/s | 2.51 x | 130 MB | 612 MB | 128MB | 2025-08-11 15:46:06.858545+00
832 kB/s | 8453 | 2017 kB/s | 2.42 x | 132 MB | 624 MB | 128MB | 2025-08-11 15:46:16.858854+00
1389 kB/s | 14074 | 3528 kB/s | 2.54 x | 136 MB | 644 MB | 128MB | 2025-08-11 15:46:26.858837+00
7642 kB/s | 77440 | 19 MB/s | 2.49 x | 160 MB | 754 MB | 128MB | 2025-08-11 15:46:36.858441+00
7776 kB/s | 78848 | 19 MB/s | 2.50 x | 183 MB | 866 MB | 128MB | 2025-08-11 15:46:46.858377+00
4826 kB/s | 48928 | 12 MB/s | 2.50 x | 198 MB | 935 MB | 128MB | 2025-08-11 15:46:56.858401+00
1350 kB/s | 13675 | 3366 kB/s | 2.49 x | 202 MB | 955 MB | 128MB | 2025-08-11 15:47:06.858323+00
1216 kB/s | 12372 | 3047 kB/s | 2.51 x | 206 MB | 972 MB | 128MB | 2025-08-11 15:47:16.858334+00
7731 kB/s | 78382 | 19 MB/s | 2.50 x | 229 MB | 1084 MB | 128MB | 2025-08-11 15:47:26.859178+00
6996 kB/s | 70923 | 17 MB/s | 2.50 x | 251 MB | 1185 MB | 128MB | 2025-08-11 15:47:36.858273+00
4928 kB/s | 49925 | 12 MB/s | 2.55 x | 266 MB | 1255 MB | 128MB | 2025-08-11 15:47:46.858335+00
934 kB/s | 9504 | 2166 kB/s | 2.32 x | 269 MB | 1269 MB | 128MB | 2025-08-11 15:47:56.858356+00
2259 kB/s | 22880 | 5806 kB/s | 2.57 x | 276 MB | 1301 MB | 128MB | 2025-08-11 15:48:06.858839+00
7744 kB/s | 78496 | 19 MB/s | 2.45 x | 299 MB | 1413 MB | 128MB | 2025-08-11 15:48:16.858306+00
(21 rows)
The UUIDv7 ingest run shows consistently high throughput, with brief dips likely due to vacuum, background I/O or checkpoints, followed by quick recovery. Index growth is smooth and compact, while WAL overhead stays stable at 2.50 times the input data. The sequential, time-sortable nature of UUIDv7 enables fast and predictable bulk load performance, completing the 10M row job in just over 3 minutes and maintaining tight disk usage.
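As a quick sanity check (not part of the benchmark), the timestamp encoded in the UUIDv7 prefix can be read back with uuid_extract_timestamp(), available since PostgreSQL 17, to confirm that key order follows insertion time:
-- keys sorted by value should show non-decreasing generation timestamps
select id, uuid_extract_timestamp(id) as generated_at
from demo
order by id
limit 5;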
UUIDv4
Here is the result with id uuid default uuidv4() primary key:
COPY bytes/s | rows/s | WAL bytes/s | WAL bytes/COPY bytes | index | table | shared_buffers | ts
--------------+--------+-------------+----------------------+---------+---------+----------------+-------------------------------
8698 kB/s | 88199 | 24 MB/s | 2.82 x | 37 MB | 142 MB | 128MB | 2025-08-11 15:37:08.184794+00
7802 kB/s | 79130 | 20 MB/s | 2.66 x | 71 MB | 254 MB | 128MB | 2025-08-11 15:37:18.184817+00
5920 kB/s | 59959 | 15 MB/s | 2.58 x | 90 MB | 339 MB | 128MB | 2025-08-11 15:37:28.184804+00
1248 kB/s | 12664 | 3364 kB/s | 2.70 x | 96 MB | 357 MB | 128MB | 2025-08-11 15:37:38.184869+00
877 kB/s | 8924 | 2391 kB/s | 2.73 x | 101 MB | 370 MB | 128MB | 2025-08-11 15:37:48.184803+00
1882 kB/s | 19083 | 12 MB/s | 6.60 x | 112 MB | 397 MB | 128MB | 2025-08-11 15:37:58.184795+00
4384 kB/s | 44420 | 12 MB/s | 2.71 x | 131 MB | 460 MB | 128MB | 2025-08-11 15:38:08.184808+00
3821 kB/s | 38720 | 10006 kB/s | 2.62 x | 142 MB | 515 MB | 128MB | 2025-08-11 15:38:18.184814+00
2778 kB/s | 28160 | 7144 kB/s | 2.57 x | 149 MB | 555 MB | 128MB | 2025-08-11 15:38:28.184822+00
1971 kB/s | 19982 | 5152 kB/s | 2.61 x | 155 MB | 583 MB | 128MB | 2025-08-11 15:38:38.184814+00
1427 kB/s | 14513 | 3750 kB/s | 2.63 x | 159 MB | 604 MB | 128MB | 2025-08-11 15:38:48.184858+00
1152 kB/s | 11658 | 3069 kB/s | 2.66 x | 164 MB | 621 MB | 128MB | 2025-08-11 15:38:58.184933+00
1312 kB/s | 13333 | 15 MB/s | 11.83 x | 169 MB | 639 MB | 128MB | 2025-08-11 15:39:08.184786+00
3373 kB/s | 34144 | 9108 kB/s | 2.70 x | 184 MB | 688 MB | 128MB | 2025-08-11 15:39:18.184791+00
2221 kB/s | 22528 | 5985 kB/s | 2.69 x | 196 MB | 720 MB | 128MB | 2025-08-11 15:39:28.18487+00
3885 kB/s | 39424 | 10 MB/s | 2.72 x | 217 MB | 776 MB | 128MB | 2025-08-11 15:39:38.184842+00
3232 kB/s | 32736 | 8685 kB/s | 2.69 x | 233 MB | 822 MB | 128MB | 2025-08-11 15:39:48.18481+00
832 kB/s | 8448 | 2233 kB/s | 2.68 x | 237 MB | 834 MB | 128MB | 2025-08-11 15:39:58.184806+00
800 kB/s | 8096 | 2136 kB/s | 2.67 x | 241 MB | 846 MB | 128MB | 2025-08-11 15:40:08.184789+00
1107 kB/s | 11264 | 2962 kB/s | 2.68 x | 246 MB | 862 MB | 128MB | 2025-08-11 15:40:18.184786+00
768 kB/s | 7744 | 15 MB/s | 19.64 x | 249 MB | 873 MB | 128MB | 2025-08-11 15:40:28.184804+00
2912 kB/s | 29568 | 11 MB/s | 3.99 x | 260 MB | 915 MB | 128MB | 2025-08-11 15:40:38.184825+00
3475 kB/s | 35200 | 9083 kB/s | 2.61 x | 271 MB | 965 MB | 128MB | 2025-08-11 15:40:48.184843+00
1491 kB/s | 15136 | 3886 kB/s | 2.61 x | 276 MB | 986 MB | 128MB | 2025-08-11 15:40:58.184784+00
1427 kB/s | 14432 | 3682 kB/s | 2.58 x | 280 MB | 1007 MB | 128MB | 2025-08-11 15:41:08.184791+00
2426 kB/s | 24640 | 6333 kB/s | 2.61 x | 286 MB | 1042 MB | 128MB | 2025-08-11 15:41:18.184804+00
557 kB/s | 5632 | 1394 kB/s | 2.50 x | 288 MB | 1050 MB | 128MB | 2025-08-11 15:41:28.184862+00
998 kB/s | 10122 | 2596 kB/s | 2.60 x | 290 MB | 1064 MB | 128MB | 2025-08-11 15:41:38.188483+00
845 kB/s | 8533 | 16 MB/s | 19.09 x | 293 MB | 1076 MB | 128MB | 2025-08-11 15:41:48.184857+00
1939 kB/s | 19704 | 12 MB/s | 6.21 x | 298 MB | 1104 MB | 128MB | 2025-08-11 15:41:58.184859+00
3878 kB/s | 39320 | 10137 kB/s | 2.61 x | 309 MB | 1160 MB | 128MB | 2025-08-11 15:42:08.184844+00
2554 kB/s | 25853 | 6740 kB/s | 2.64 x | 318 MB | 1197 MB | 128MB | 2025-08-11 15:42:18.18483+00
1485 kB/s | 15088 | 3834 kB/s | 2.58 x | 323 MB | 1218 MB | 128MB | 2025-08-11 15:42:28.185772+00
1319 kB/s | 13376 | 3469 kB/s | 2.63 x | 328 MB | 1237 MB | 128MB | 2025-08-11 15:42:38.184778+00
730 kB/s | 7392 | 1922 kB/s | 2.63 x | 331 MB | 1248 MB | 128MB | 2025-08-11 15:42:48.184816+00
973 kB/s | 9856 | 2584 kB/s | 2.66 x | 335 MB | 1262 MB | 128MB | 2025-08-11 15:42:58.184801+00
896 kB/s | 9044 | 12 MB/s | 14.24 x | 338 MB | 1275 MB | 128MB | 2025-08-11 15:43:08.184924+00
1011 kB/s | 10279 | 15 MB/s | 15.55 x | 343 MB | 1289 MB | 128MB | 2025-08-11 15:43:18.184842+00
3373 kB/s | 34179 | 10 MB/s | 3.11 x | 358 MB | 1338 MB | 128MB | 2025-08-11 15:43:28.184817+00
2566 kB/s | 26002 | 6871 kB/s | 2.68 x | 370 MB | 1375 MB | 128MB | 2025-08-11 15:43:38.18481+00
1216 kB/s | 12365 | 3279 kB/s | 2.70 x | 376 MB | 1392 MB | 128MB | 2025-08-11 15:43:48.184849+00
1114 kB/s | 11264 | 2993 kB/s | 2.69 x | 381 MB | 1408 MB | 128MB | 2025-08-11 15:43:58.184826+00
659 kB/s | 6688 | 1780 kB/s | 2.70 x | 385 MB | 1418 MB | 128MB | 2025-08-11 15:44:08.184888+00
With UUIDv4, initial throughput is strong but quickly becomes uneven, with frequent dips and slow intervals. The index grows larger than with UUIDv7 because the random key distribution causes more page splits and looser packing. WAL overhead shows spikes, peaking at 19 times the input data during heavy index maintenance; increased buffer cache eviction also forces more full-page images into the WAL. Overall, the job takes over twice as long to complete compared to UUIDv7, demonstrating how random UUIDs can increase the response time of a bulk load.
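The difference in index packing can be measured with the pgstattuple extension; a minimal check on the primary-key index (demo_pkey is the name PostgreSQL generates for this table), to be run after each load:
-- compare avg_leaf_density between the two runs: random keys cause
-- more page splits, hence lower density and a larger index
create extension if not exists pgstattuple;
select index_size, avg_leaf_density, leaf_fragmentation
from pgstatindex('demo_pkey');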
Comparison
UUIDv7 is faster than UUIDv4 for bulk inserts, though both runs show intermittent slowdowns caused by background activity such as vacuum and checkpoints:
When comparing UUIDv7 and UUIDv4 on a single-session bulk load, the advantages of sequential UUIDs become immediately clear. With UUIDv7, the bulk load finishes in just over 3 minutes, delivering consistently high throughput and efficient index growth. WAL overhead remains low and predictable due to compact indexes and fewer page splits. In contrast, the UUIDv4 run takes more than twice as long—over 7 minutes—to complete the same workload. Throughput for UUIDv4 is erratic, frequently impacted by slow intervals caused by random key distribution in the index. This leads to higher index fragmentation, more frequent and expensive index maintenance, and periodic spikes in WAL usage. The end result: with UUIDv4, you sacrifice ingest performance and disk efficiency in exchange for randomized key distribution, whereas UUIDv7 offers rapid, stable loading and more compact indexes.
However, before you rush to standardize on UUIDv7, there's one critical caveat for high-concurrency workloads: the last B+Tree page becomes a hotspot, because every new value is higher than the preceding ones. This is fine with a single session, like this bulk load, but the concurrent inserts typical of OLTP benefit from being more distributed. Random UUIDv4 distributes inserts evenly across the index. A better compromise is a UUID with a low-cardinality prefix, such as a modulo of the session identifier, that spreads insertions over a few points instead of one, followed by a sequential part so that not every index block is touched. The uuidv7() function takes an additional parameter that can be used to get multiple insertion points in the B+Tree:
postgres=# select uuidv7( interval '1 year' ) from generate_series(1,5);
uuidv7
--------------------------------------
019ff213-dc27-79c2-8021-6206fc583f7b
019ff213-dc27-7a34-8e3d-85f74a2319ad
019ff213-dc27-7a3e-9321-38fa1cb90577
019ff213-dc27-7a46-9037-01e7b7042728
019ff213-dc27-7a4d-8770-1964895a789b
(5 rows)
postgres=# select uuidv7( interval '2 year' ) from generate_series(1,5);
uuidv7
--------------------------------------
01a749c5-0dd5-799d-ac15-2f0a3483ec43
01a749c5-0dd5-7a0b-a9f0-da3a86723785
01a749c5-0dd5-7a15-b789-99008641f559
01a749c5-0dd5-7a1d-b63e-276796f516e2
01a749c5-0dd5-7a24-aed1-841df99b5269
(5 rows)
postgres=# select uuidv7( (pg_backend_pid()%8) * interval '1 year' ) from generate_series(1,5);
uuidv7
--------------------------------------
01993a07-de13-7108-8fed-6236c1ac302a
01993a07-de13-717c-9abe-e8e3bb8e7ca2
01993a07-de13-7186-a812-034d7bd36b7e
01993a07-de13-718f-aee7-459428c043ed
01993a07-de13-7196-847e-174143ece29d
(5 rows)
I used the shift parameter to partition the UUID values into eight insertion points, rather than one corresponding to the current time, reducing the contention of concurrent inserts without bringing all index pages into the shared buffer cache. By using the connection's process ID, I increase the locality of rows inserted by the same session at the same time (like all the order items of an order) while still distributing rows from different sessions.
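To apply this idea declaratively, the shifted expression can also be used as the column default. A sketch of a variant of the demo table with eight insertion points, one per pg_backend_pid() modulo class:
create table demo (
  -- shift the time prefix by 0..7 years depending on the backend process ID,
  -- giving each group of sessions its own insertion point in the index
  id uuid default uuidv7((pg_backend_pid() % 8) * interval '1 year') primary key,
  value text
);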