<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tantor Labs</title>
    <description>The latest articles on DEV Community by Tantor Labs (@tantorlabs).</description>
    <link>https://dev.to/tantorlabs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3602889%2F78fc8a99-1e48-46b9-a0f2-daa0308558d6.png</url>
      <title>DEV Community: Tantor Labs</title>
      <link>https://dev.to/tantorlabs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tantorlabs"/>
    <language>en</language>
    <item>
      <title>How to load test PostgreSQL database and not miss anything</title>
      <dc:creator>Tantor Labs</dc:creator>
      <pubDate>Tue, 11 Nov 2025 06:39:33 +0000</pubDate>
      <link>https://dev.to/tantorlabs/how-to-load-test-postgresql-database-and-not-miss-anything-9jk</link>
      <guid>https://dev.to/tantorlabs/how-to-load-test-postgresql-database-and-not-miss-anything-9jk</guid>
      <description>&lt;p&gt;When load testing Tantor Postgres or other PostgreSQL-based databases with the standard tool pgbench, specialists often obtain non-representative results and have to repeat tests because details of the environment (such as DBMS configuration, server characteristics, and PostgreSQL versions) were not recorded. In this article we review the author's pg_perfbench tool, which is designed to address this issue. It makes scenarios repeatable, prevents the loss of important data, and streamlines result comparison by registering all parameters in a single template. It also automatically launches pgbench with TPC-B load generation, collects all metadata on the testing environment, and generates a structured report.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation of PostgreSQL-based DB load testing and system information collection
&lt;/h2&gt;

&lt;p&gt;During development and support, it is frequently necessary to run a quick load test in order to evaluate hypotheses or the overall efficiency of Tantor Postgres or other PostgreSQL-based databases. Most often, specialists use the standard tool pgbench for TPC-B load generation and evaluate the results. However, when describing the results of such tests, important details about the environment are often overlooked: there is no clear record of the configuration used, including PostgreSQL parameters, server hardware characteristics, client version, and network settings. This can lead to ambiguous or incomplete conclusions about performance, and you may need to spend time re-running tests.&lt;/p&gt;

&lt;p&gt;To eliminate such routine tasks and minimize the risk of 'forgotten' details, I developed the &lt;a href="https://github.com/TantorLabs/pg_perfbench" rel="noopener noreferrer"&gt;&lt;strong&gt;pg_perfbench&lt;/strong&gt;&lt;/a&gt; tool, which starts pgbench and collects data on the database environment, then generates a full report based on a predefined template. This helps to replay testing scenarios, document all necessary database environment details, and quickly compare test results if needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Motivation for development
&lt;/h2&gt;

&lt;p&gt;Let's review a typical performance measurement scenario for Tantor Postgres or other PostgreSQL-based databases, during which we need not only to obtain transactions per second but also to create a detailed description of the performed test. In real practice, database administrators and developers face the following tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Checking the behavior of the database under increased load (number of connections, data size, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Comparing the performance of different versions of PostgreSQL or different configurations (for example, different parameters like shared_buffers, work_mem, enabled extensions, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Recording test results for follow-up analysis and replay.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, without a full description of the environment, including OS versions, parameters of the network infrastructure, server characteristics (CPU, RAM, disk subsystem), and the PostgreSQL configuration itself, follow-up analysis and replay of experiments become quite challenging. Any seemingly insignificant change, whether the use of different data directories (for example, one with checksums enabled and another without), or other differences in hardware and network configuration, can significantly affect the results. Then, when trying to compare the new indicators with the previous ones, it becomes clear that the tests are actually &lt;strong&gt;incomparable&lt;/strong&gt;, since they were run under different conditions.&lt;/p&gt;

&lt;p&gt;The idea of supplementing standard load testing tools with automatic report generation stems from the need to record both performance indicators and details of the environment. For simple &lt;strong&gt;pgbench&lt;/strong&gt; runs, solutions with custom scripts are well suited, but they often require modification and manual data collection to get a complete picture. In practice, this increases the time spent and the risk of missing some important detail in the final description.&lt;/p&gt;

&lt;h3&gt;
  
  
  Standard load testing cycle with pgbench
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;pgbench&lt;/strong&gt; generates a load based on the TPC-B model and allows you to quickly obtain the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;tps (transactions per second)&lt;/strong&gt; - the number of transactions executed per second.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Latency average&lt;/strong&gt; - average transaction latency, measured in milliseconds (ms).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Number of transactions actually processed&lt;/strong&gt; - the total number of successfully processed transactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Number of failed transactions&lt;/strong&gt; - the number of transactions that ended with an error.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Initial connection time&lt;/strong&gt; - the time spent on establishing the initial connection with the database, measured in milliseconds (ms) or seconds (s) depending on the measurement context.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
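
&lt;p&gt;For reference, the output of a single &lt;strong&gt;pgbench&lt;/strong&gt; run typically looks like the excerpt below; all numbers are illustrative, and the exact set of lines varies between PostgreSQL versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pgbench (15.4)
transaction type: &amp;lt;builtin: TPC-B (sort of)&amp;gt;
scaling factor: 4000
number of clients: 10
number of threads: 10
duration: 600 s
number of transactions actually processed: 1234567
number of failed transactions: 0 (0.000%)
latency average = 4.858 ms
initial connection time = 21.335 ms
tps = 2058.123456 (without initial connection time)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;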

&lt;p&gt;To demonstrate, let's take a classic scenario: a cyclic launch of &lt;strong&gt;pgbench&lt;/strong&gt; from a simple Bash script, where the number of clients (connections to the database) varies between iterations. Script example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/lib/postgresql/15/bin/pgbench -i --scale=4000 --foreign-keys -h `hostname` -p 5432 -U postgres test

#!/bin/bash
clients="1 10 20 50 100"
t=600
dir=/var/lib/postgresql/test_result
mkdir -p $dir
for c in $clients; do
echo "pgbench_${c}_${t}.txt"
echo "start test: "`date +"%Y.%m.%d_%H:%M:%S"` &amp;gt;&amp;gt; "${dir}/pgbench_${c}.txt"
/usr/lib/postgresql/15/bin/pgbench -h `hostname` -p 5432 test -c $c -j $c -T $t &amp;gt;&amp;gt; "${dir}/pgbench_${c}.txt"
echo "stop test: "`date +"%Y.%m.%d_%H:%M:%S"` &amp;gt;&amp;gt; "${dir}/pgbench_${c}.txt"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Initializes the pgbench test database with a scale factor of 4000 (--scale=4000) and foreign keys (--foreign-keys).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Launches load cycles for a varying number of clients (1, 10, 20, 50, 100), each lasting 600 seconds (-T 600).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logs the results into separate files, wrapping each iteration with start and stop timestamps.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Typically, a chart is generated at the end, showing the dependency of indicators (e.g., tps or average latency) on the number of clients:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd80nk22xylf861ignt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd80nk22xylf861ignt9.png" width="799" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also create an additional chart to show the &lt;strong&gt;Latency average&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Although &lt;strong&gt;pgbench&lt;/strong&gt; data already provides a basic performance estimate, a full analysis also needs to consider the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server parameters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CPU (model, number of cores, Hyper-Threading support);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;RAM (capacity, frequency, type);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;disk space (storage type, capacity, RAID configuration, etc.);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;network parameter settings (TCP/IP stack, MTU, etc.);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;system drivers (e.g. for SSD or network cards).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Database parameters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;exact version of PostgreSQL (server and client, if necessary — patches);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;enabled extensions (e.g. pg_repack, pg_stat_statements);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;database configuration (postgresql.conf) and key parameters (shared_buffers, work_mem, wal_buffers, etc.);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pg_config results (build options, paths to directories, etc.);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;enabled replications (physical or logical).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
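
&lt;p&gt;Most of these database details can be captured manually with a few commands; a minimal sketch (the connection options and database name are placeholders for your own environment):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# exact server version
psql -U postgres -d test -c "SELECT version();"

# key configuration parameters
psql -U postgres -d test -c "SELECT name, setting FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'wal_buffers');"

# installed extensions
psql -U postgres -d test -c "SELECT name, installed_version FROM pg_available_extensions
    WHERE installed_version IS NOT NULL;"

# active physical replication connections
psql -U postgres -d test -c "SELECT * FROM pg_stat_replication;"

# build options and directory paths
pg_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;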

&lt;p&gt;When replaying the same test, we need to ensure the consistency of the above-listed environmental parameters, since discrepancies in the listed aspects can affect the final numbers (for example, if the CPU in one environment differs from the CPU in another, direct comparison of &lt;strong&gt;tps&lt;/strong&gt; results becomes meaningless). Therefore, in serious research, it is necessary to confirm that the test was replayed with the same environmental parameters.&lt;/p&gt;

&lt;h3&gt;
  
  
  The need for ready-made solutions and automation
&lt;/h3&gt;

&lt;p&gt;To conduct load testing and collect the listed system information, you can use a set of open-source utilities, but practically all of them require manual integration and additional scripts. In other words, the administrator has to separately collect information about the configuration of Tantor Postgres or another PostgreSQL-based DB, network settings, operating system kernel version, etc., and then assemble a unified report, often in an arbitrary format, which makes it challenging to compare this report with colleagues' data.&lt;/p&gt;

&lt;p&gt;In order for all necessary data to be assembled and documented automatically, add-ons to standard tools are required. This is the issue &lt;a href="https://github.com/TantorLabs/pg_perfbench" rel="noopener noreferrer"&gt;pg_perfbench&lt;/a&gt; is designed to solve, as it allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;running standard or custom pgbench scripts;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;simultaneously collecting metadata about hardware, OS, and PostgreSQL configuration;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;automatically generating output report per specified template.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a result, when conducting stress tests, you can be sure that no crucial detail related to the environment will be missed. This report simplifies comparison of results, saves time on configuring repeated runs, and ensures a high level of reproducibility of experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation of DB performance testing
&lt;/h2&gt;

&lt;p&gt;The overview of &lt;a href="https://github.com/TantorLabs/pg_perfbench" rel="noopener noreferrer"&gt;pg_perfbench&lt;/a&gt; operation during load testing:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo7gnw3ard0xvqfiyv98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo7gnw3ard0xvqfiyv98.png" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The process of load testing and performance analysis of PostgreSQL is significantly simplified if in the end we receive a full &lt;strong&gt;structured report containing&lt;/strong&gt; not only numerical indicators (tps, Latency average, etc.), but also detailed information about the environment. Such a report helps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To compare&lt;/strong&gt; the results of different runs and configuration versions without losing context.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To store&lt;/strong&gt; all essential information about the server, PostgreSQL parameters, and network settings in one place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To replay&lt;/strong&gt; test scenarios, with a ready-to-use template containing all the crucial data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are several core sections in the generated report:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System properties&lt;/strong&gt; - describes the characteristics of the hardware part (processor, memory, disks, network settings).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Common database info&lt;/strong&gt; - contains information about the PostgreSQL version, enabled extensions, and key database parameters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Common benchmark info&lt;/strong&gt; - specifies pgbench run settings (number of clients, testing time, used commands).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tests result info&lt;/strong&gt; - summarizes the results of executed tests and displays performance indicators (TPS, latency, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Collapsed report sections:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7k3moqxn6n46ysbusw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7k3moqxn6n46ysbusw1.png" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The report is generated in formats that make test results easy to read and visualize: JSON for storing data (including measurement results and configuration), and HTML for visualization, which allows adding links, styles, charts, etc. In our implementation, the HTML report embeds a nested JSON object with the collected data, which is used to render the report. Thus, the final process may look like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Running test scenarios and collecting information.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic report generation&lt;/strong&gt;: a JSON file with a detailed structure and sections is created, and an HTML version of the report is generated based on the same template.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Analysis of results&lt;/strong&gt;: required values are taken from JSON to build charts, comparison tables, etc., and the HTML report visualizes the testing results in a convenient interactive form.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzes7jwrgjbvyenb7dc9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzes7jwrgjbvyenb7dc9u.png" width="749" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each section has parameters for structuring the interactive HTML report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqs1w5tp2rgb84xmf3rmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqs1w5tp2rgb84xmf3rmx.png" width="800" height="871"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The section with a description of the server environment&lt;/strong&gt; may include a table with information on the processor, memory, and disks, as well as the output of some system commands. The expanded section contains collapsed elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ttwm1voyi0j87qthka0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ttwm1voyi0j87qthka0.png" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The section with a description of the DB configuration&lt;/strong&gt; may contain postgresql.conf parameters, a list of extensions, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2d3gl27a0f3ffzq09yl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2d3gl27a0f3ffzq09yl.png" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Collecting data on the DB environment
&lt;/h3&gt;

&lt;p&gt;To collect server information, commands are usually executed in the terminal, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -h   # &amp;lt;---- disk space

cat /etc/fstab # &amp;lt;---- description of file systems

cat /etc/os-release # &amp;lt;---- description of the Linux distribution

uname -r -m # &amp;lt;---- kernel version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
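
&lt;p&gt;A minimal sketch of gathering these commands into a single environment snapshot file (the output file name is an arbitrary example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Collect a basic environment snapshot into one timestamped file
out="env_snapshot_$(date +%Y%m%d_%H%M%S).txt"
{
  echo "== disk space ==";   df -h
  echo "== file systems =="; cat /etc/fstab 2&amp;gt;/dev/null
  echo "== distribution =="; cat /etc/os-release 2&amp;gt;/dev/null
  echo "== kernel ==";       uname -r -m
} &amp;gt; "$out"
echo "snapshot written to $out"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;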



&lt;p&gt;To collect information about the DB environment, you can use shell commands or SQL commands that are stored in separate files. The description of the report &lt;em&gt;item&lt;/em&gt; indicates a file with these commands, and their results are included in the &lt;strong&gt;data&lt;/strong&gt; field:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43vkpb79j6at9irloqsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43vkpb79j6at9irloqsh.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's look at the report section with data on the server environment filled:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nmwcbz3g93k1rpnj2wp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nmwcbz3g93k1rpnj2wp.png" width="800" height="857"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each item of the report structure has a description for interactive layout in HTML, which includes the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;header - the specified &lt;strong&gt;name&lt;/strong&gt; of the section;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;description - a &lt;strong&gt;description&lt;/strong&gt; or a brief note;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;state - the state of the report item (collapsed or expanded);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;item_type - type of the returned script result (table or text);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;shell_command_file or sql_command_file - name of the shell or SQL script;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;data - the field &lt;strong&gt;where the result&lt;/strong&gt; of the script execution &lt;strong&gt;is recorded&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
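
&lt;p&gt;Putting these parameters together, a single report item might look like the fragment below; the header, description, and file name are illustrative, not taken from the actual template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "header": "CPU info",
    "description": "Processor details of the database server",
    "state": "collapsed",
    "item_type": "table",
    "shell_command_file": "cpu_info.sh",
    "data": ""
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;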

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wpnz4xl968j30otxm5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wpnz4xl968j30otxm5x.png" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Information retrieval from the database is structured similarly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbluvfq1w9z0r431cud4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbluvfq1w9z0r431cud4.png" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2illek46yywbk2ixzaip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2illek46yywbk2ixzaip.png" width="800" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration of the database load testing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;pgbench&lt;/strong&gt; is configured via &lt;a href="https://github.com/TantorLabs/pg_perfbench" rel="noopener noreferrer"&gt;&lt;strong&gt;pg_perfbench&lt;/strong&gt;&lt;/a&gt; user parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m pg_perfbench --mode=benchmark\
--log-level=debug\
--connection-type=ssh\
--ssh-port=22\
--ssh-key=/key/p_key\
--ssh-host=10.100.100.100\
--remote-pg-host=127.0.0.1\
--remote-pg-port=5432\
--pg-host=127.0.0.1\
--pg-port=5439\
--pg-user=postgres\
--pg-password=pswd\
--pg-database=tdb\
--pg-data-path=/var/lib/postgresql/16/main\
--pg-bin-path=/usr/lib/postgresql/16/bin\
--benchmark-type=default\
--pgbench-clients=1,10,20,50,100\
--pgbench-path=/usr/bin/pgbench\
--psql-path=/usr/bin/psql\
--init-command="ARG_PGBENCH_PATH -i --scale=10 --foreign-keys -p ARG_PG_PORT -h ARG_PG_HOST -U postgres ARG_PG_DATABASE"\
--workload-command="ARG_PGBENCH_PATH -p ARG_PG_PORT -h ARG_PG_HOST -U ARG_PG_USER ARG_PG_DATABASE -c ARG_PGBENCH_CLIENTS -j 10 -T 10 --no-vacuum"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All settings are recorded in the report for a full description of the tool startup. The number of iterations is also set by the user: you can specify either --pgbench-clients (number of clients) or --pgbench-time (duration) for each test run.&lt;/p&gt;

&lt;p&gt;Configuration of &lt;strong&gt;pg_perfbench&lt;/strong&gt; user parameters is described in the &lt;a href="https://github.com/TantorLabs/pg_perfbench" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pgbench launch iterations&lt;/strong&gt; will be displayed in the benchmark description section:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr9p6yrqvv9im8rl0v0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr9p6yrqvv9im8rl0v0p.png" width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since standard pgbench load queries are used, initialization and loading scripts will be displayed as &lt;strong&gt;pgbench&lt;/strong&gt; commands:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ck45v1jy29wr81ra6qv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ck45v1jy29wr81ra6qv.png" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Result of database load testing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;pgbench&lt;/strong&gt; data is conveniently structured as tables, where each row is one iteration and the columns correspond to the following key metrics:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4ekfk73b8pmkgf3ncf2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4ekfk73b8pmkgf3ncf2.png" width="753" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, on the chart, you can compare iterations by TPS (vertical axis - TPS, horizontal axis - number of clients):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figer9l8bvguie8buthcu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figer9l8bvguie8buthcu.png" width="749" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ultimately, automating the collection of all required data and converting it into a convenient form (JSON + HTML, charts, tables) significantly simplifies not only analysis but also collaboration on the project. You can reproduce tests faster, compare results, and identify bottlenecks in PostgreSQL performance without wasting time investigating what exactly was tested and under what conditions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Comparison of reports
&lt;/h3&gt;

&lt;p&gt;When testing and analyzing PostgreSQL performance, we often need to compare several load test results. This is applicable in the following scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;comparison of the behavior of the same database on different servers;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;checking how changes in postgresql.conf affect the final throughput (TPS) and latency.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/TantorLabs/pg_perfbench" rel="noopener noreferrer"&gt;&lt;strong&gt;pg_perfbench&lt;/strong&gt;&lt;/a&gt; provides a mechanism for comparing ready reports in --mode=join mode. The the core purpose of its work is to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Make sure that we are actually comparing &lt;em&gt;corresponding&lt;/em&gt; environments (identical PostgreSQL versions, similar system parameters, the same set of extensions, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After the key elements are verified to match, form a single "consolidated" report that shows the differences in settings and benchmark results.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8kqbh1xhmpz9mr2g2wq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8kqbh1xhmpz9mr2g2wq.png" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;JSON&lt;/strong&gt; file with keys used to compare data from different reports may have the following structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
      "description": &amp;lt;Description of the current task&amp;gt;,
       "items":
              [
               "sections.system.reports.sysctl_vm.data",
               "sections.system.reports.total_ram.data",
               "sections.system.reports.cpu_info.data",
               "sections.system.reports.etc_fstab.data",
               "sections.db.reports.version_major.data",
               "sections.db.reports.pg_available_extensions.data",
               "sections.db.reports.pg_config.data",
               "sections.benchmark.reports.options.data",
               "sections.benchmark.reports.custom_tables.data",
               "sections.benchmark.reports.custom_workload.data"
               ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "items" field contains an array of object fields representing the structure of the report. Equality of values will be checked based on these values. If it turns out that some of the compared mandatory items differ (for example, CPU on servers differ, or different versions of PostgreSQL are used), the utility will interrupt the join and display an error message. This protects against incorrect comparisons when the results simply do not make sense to compare directly.&lt;/p&gt;

&lt;p&gt;Example log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-03-28 16:31:02,285 INFO  root : 218 - Loaded 2 report(s): benchmark-ssh-custom-config.json, benchmark-ssh-default-config.json
2025-03-28 16:31:02,286 ERROR root : 190 - Comparison failed: Unlisted mismatch in 'cpu_info'
 reference report - benchmark-ssh-custom-config
 comparable report - benchmark-ssh-default-config
2025-03-28 16:31:02,286 ERROR root : 222 - Merge of reports failed.
2025-03-28 16:31:02,286 ERROR root : 317 - Emergency program termination. No report has been generated.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Comparison of DB performance with different settings in postgresql.conf
&lt;/h3&gt;

&lt;p&gt;One of the most common tasks is to &lt;strong&gt;test the effect&lt;/strong&gt; of changing PostgreSQL settings. Let's review a simple scenario where we have two configuration files:&lt;/p&gt;

&lt;p&gt;postgresql_1.conf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.....
shared_buffers = 166MB
work_mem = 10000kB
maintenance_work_mem = 20MB
effective_cache_size = 243MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;postgresql_2.conf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.....
shared_buffers = 90MB
work_mem = 5000kB
maintenance_work_mem = 10MB
effective_cache_size = 150MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The goal is to understand how throughput changes when the configuration parameters change. Run &lt;em&gt;pg_perfbench&lt;/em&gt; with each of these two configurations, as described above (each measurement run generates a separate report). Read more about configuring user parameters in the &lt;a href="https://github.com/TantorLabs/pg_perfbench/README.md" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To join the two reports, specify a JSON file with a list of "items" that does not include pg_settings (pg_perfbench/join_tasks/task_compare_dbs_on_single_host.json).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "description": "Comparison of database performance across different configurations in the same environment using the same PostgreSQL version",
    "items": [
        "sections.system.reports.sysctl_vm.data",
        "sections.system.reports.sysctl_net_ipv4_tcp.data",
        "sections.system.reports.sysctl_net_ipv4_udp.data",
        "sections.system.reports.total_ram.data",
        "sections.system.reports.cpu_info.data",
        "sections.system.reports.etc_fstab.data",
        "sections.db.reports.version_major.data",
        "sections.db.reports.pg_available_extensions.data",
        "sections.db.reports.pg_config.data",
        "sections.benchmark.reports.options.data",
        "sections.benchmark.reports.custom_tables.data",
        "sections.benchmark.reports.custom_workload.data"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now when running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m pg_perfbench --mode=join\
--report-name=join-diff-conf-reports\
--join-task=task_compare_dbs_on_single_host.json\
--input-dir=/pg_perfbench/report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;the utility will check that the specified parameters match: the same CPU, the same amount of RAM (at least as documented in the reports), the same major version of PostgreSQL, a matching set of extensions, and so on. If the parameters match, the utility generates a consolidated report; if any discrepancies are found in the listed items, it outputs an error message. The process log will contain the list of loaded reports and the result of the join:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m pg_perfbench --mode=join\
--report-name=join-diff-conf-reports\
--join-task=task_compare_dbs_on_single_host.json\
--input-dir=/pg_perfbench/report
2025-03-28 16:53:31,476       INFO                                root :   55 - Logging level: info
.....
#-----------------------------------
2025-03-28 16:53:31,476       INFO                                root :  211 - Compare items 'task_compare_dbs_on_single_host.json' loaded successfully:
sections.system.reports.sysctl_vm.data
sections.system.reports.sysctl_net_ipv4_tcp.data
sections.system.reports.sysctl_net_ipv4_udp.data
sections.system.reports.total_ram.data
sections.system.reports.cpu_info.data
sections.system.reports.etc_fstab.data
sections.db.reports.version_major.data
sections.db.reports.pg_available_extensions.data
sections.db.reports.pg_config.data
sections.benchmark.reports.options.data
sections.benchmark.reports.custom_tables.data
sections.benchmark.reports.custom_workload.data
.....
2025-03-28 16:53:31,481       INFO                                root :   99 - The report is saved in the 'report' folder
2025-03-28 16:53:31,481       INFO                                root :  322 - Benchmark report saved successfully.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;join&lt;/strong&gt; report will contain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  List of the compared reports and join parameters:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftx74h7j5jne75dyxyh3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftx74h7j5jne75dyxyh3.png" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  System data that matches at least for the items listed in the join list. If an item is not included in the join list and its data field differs between the compared reports, the tab for that item in the final join report is highlighted and marked "Diff". The tab itself shows the results for that item from all compared reports.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqmf2jmvhj8d9vf9he9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqmf2jmvhj8d9vf9he9m.png" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuetd3crnysnb35ew6pj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuetd3crnysnb35ew6pj9.png" width="764" height="718"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;DB&lt;/strong&gt; data that matches the join parameters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi49wnrxh83hglbbpp1vz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi49wnrxh83hglbbpp1vz.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Configuration of the &lt;strong&gt;pgbench&lt;/strong&gt; load. The differences in user parameters appear in the postgresql.conf sections of the compared database reports.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9ezuzlefpq97abkmp40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9ezuzlefpq97abkmp40.png" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  A &lt;strong&gt;results&lt;/strong&gt; section with the performance difference across the benchmark reports.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3au1tjrd4vkl66dx7ta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3au1tjrd4vkl66dx7ta.png" width="800" height="734"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the chart, you can quickly assess the difference in &lt;strong&gt;tps&lt;/strong&gt; between the benchmarks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyifxkfjdksan7qdyl3ka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyifxkfjdksan7qdyl3ka.png" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using the &lt;a href="https://github.com/TantorLabs/pg_perfbench" rel="noopener noreferrer"&gt;pg_perfbench&lt;/a&gt; utility makes it possible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To avoid missing environment info&lt;/strong&gt;: all important hardware parameters (CPU, RAM, disks, network settings) and configurations of Tantor Postgres or other PostgreSQL-based DB (version, extensions, main and secondary parameters, key utilities) are automatically collected during testing, which eliminates the risk of working with incomplete data during analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To reduce the time spent on preparing and replaying tests&lt;/strong&gt;: you no longer need to manually describe every detail of the test environment, and then repeat all manual steps. Load parameters (number of clients, test time, pgbench commands) and key system parameters are automatically recorded in one report.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To simplify analysis and comparison of results&lt;/strong&gt;: information is collected in a structured format (JSON and/or HTML) that is easy to navigate. Visual representation (charts, tables) allows you to quickly assess the dynamics of indicators (tps), and the built-in join functionality makes it easier to compare results of several reports.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To ensure reproducibility of experiments&lt;/strong&gt;: if necessary, you can go back to previous measurements and re-run them under the same conditions without having to manually restore all parameters - all important settings and environment information are already included in the report.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;To standardize the approach to load testing&lt;/strong&gt;: pg_perfbench provides a unified interface to connect to different types of environments (local machine, SSH, Docker) and integrate with pgbench. This increases flexibility and provides centralized management of the testing process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thus, pg_perfbench makes PostgreSQL load testing transparent, reliable, and reproducible, significantly simplifying the analysis of results, rerunning of scenarios, and comparing them.&lt;/p&gt;

&lt;p&gt;You can find the reviewed project at &lt;a href="https://github.com/TantorLabs/pg_perfbench" rel="noopener noreferrer"&gt;https://github.com/TantorLabs/pg_perfbench&lt;/a&gt;&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>testing</category>
    </item>
    <item>
      <title>Redundant statistics slow down your Postgres? Try sampling in pg_stat_statements</title>
      <dc:creator>Tantor Labs</dc:creator>
      <pubDate>Tue, 11 Nov 2025 06:33:34 +0000</pubDate>
      <link>https://dev.to/tantorlabs/redundant-statistics-slow-down-your-postgres-try-sampling-in-pgstatstatements-44em</link>
      <guid>https://dev.to/tantorlabs/redundant-statistics-slow-down-your-postgres-try-sampling-in-pgstatstatements-44em</guid>
      <description>&lt;p&gt;pg_stat_statements is the standard PostgreSQL extension used to track query statistics: number of executions, total and average execution time, number of returned rows, and other metrics. This information allows to analyze query behavior over time, identify problem areas, and make informed optimization decisions. However, in systems with high contention, pg_stat_statements itself can become a bottleneck and cause performance drops. In this article, we will analyze in which scenarios the extension becomes a source of problems, how sampling is structured, and in which cases its application can reduce overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Issue
&lt;/h2&gt;

&lt;p&gt;Let's briefly recall how &lt;code&gt;pg_stat_statements&lt;/code&gt; is structured in order to understand when and why the extension may slow down performance. This will help us understand which mechanisms within the extension can become a bottleneck under high load.&lt;br&gt;
The key data structure in &lt;code&gt;pg_stat_statements&lt;/code&gt; is a hash table. Each bucket in it contains execution metrics for a specific query. The key for this table is formed based on four parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;queryid&lt;/code&gt; — unique identifier of the normalized query;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;User &lt;code&gt;OID&lt;/code&gt;;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database &lt;code&gt;OID&lt;/code&gt;;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a &lt;code&gt;toplevel&lt;/code&gt; flag indicating whether the query is top-level, meaning it's not nested within any internal function or subquery.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The hash table finds or creates the corresponding entry using this key and updates the collected query metrics.&lt;/p&gt;
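&lt;p&gt;A minimal sketch of this keying scheme (an illustrative Python model; the real extension maintains the hash table in shared memory in C):&lt;/p&gt;

```python
from collections import defaultdict

# Key mirrors pg_stat_statements: (queryid, user OID, database OID, toplevel)
stats = defaultdict(lambda: {"calls": 0, "total_time_ms": 0.0})

def record(queryid, user_oid, db_oid, toplevel, elapsed_ms):
    """Find or create the entry for this key and update its metrics."""
    entry = stats[(queryid, user_oid, db_oid, toplevel)]
    entry["calls"] += 1
    entry["total_time_ms"] += elapsed_ms

# Two executions of the same normalized query by the same user and database
record(0x5EED, 10, 16384, True, 1.2)
record(0x5EED, 10, 16384, True, 0.8)
print(stats[(0x5EED, 10, 16384, True)]["calls"])  # 2
```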

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5k92tus6yivd13touiz0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5k92tus6yivd13touiz0.png" width="621" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Upon receiving a request, &lt;code&gt;pg_stat_statements&lt;/code&gt; executes the following sequence of operations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Search for the bucket: a shared LWLock is acquired on the hash table, which is then searched by key;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Normalization (optional): if there is no suitable bucket, the query is pre-normalized: literals are replaced with placeholders like $1, $2, etc.;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating a new bucket: the lock level of LWLock is raised to &lt;code&gt;exclusive&lt;/code&gt; and a new bucket is created;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Recording query information in the bucket: to update the query metrics in the bucket, the SpinLock of that bucket is acquired. Then the locks on SpinLock and LWLock are released. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These locking operations with a large number of unique queries or high contention in &lt;code&gt;pg_stat_statements&lt;/code&gt; become a bottleneck. Let's review this using a scenario where all SQL queries are unique from the perspective of &lt;code&gt;pg_stat_statements&lt;/code&gt;. A machine with 48 CPUs can reproduce such a load. To ensure that the queries are unique, we will create 1000 similar tables with different names:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;init_script.sql&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DO $$

DECLARE

    i INT;

BEGIN

    FOR i IN 1..1000 LOOP

        EXECUTE format('CREATE TABLE table_%s (id INT PRIMARY KEY, value TEXT);', i);

        EXECUTE format('INSERT INTO table_%s (id, value) VALUES (1, ''test'');', i);

    END LOOP;

END;

$$;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, using the built-in random number generator in pgbench (&lt;strong&gt;pgbench_script.sql&lt;/strong&gt;), we will create queries on these tables so that each of them is different and falls into a new bucket of the &lt;code&gt;pg_stat_statements&lt;/code&gt; hash table.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;pgbench_script.sql&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\set table1_id random(1, 1000)

\set table2_id random(1, 1000)

\set table3_id random(1, 1000)

SELECT t1.value AS value1, t2.value AS value2, t3.value AS value3

FROM table_:table1_id t1

JOIN table_:table2_id t2 ON t1.id = t2.id

JOIN table_:table3_id t3 ON t2.id = t3.id

WHERE t1.id = 1 AND t2.id = 1 AND t3.id = 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see visually what causes the performance drop, let's query pg_stat_activity once a second while the benchmark is running. The result of each query will be written to the /tmp/waits file:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;waits.sql&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\o /tmp/waits

select 'OUT', COALESCE(wait_event, 'None') wait_event, COALESCE(wait_event_type, 'No wait') wait_event_type from  pg_stat_activity where state = 'active';

\watch 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the benchmark is completed, we group all types of delays and count how many times each of them occurred:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /tmp/waits | grep OUT | awk '{print $2 "|" $3}' FS="|" | sort | uniq -c | sort -n -r -k1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
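&lt;p&gt;The same grouping can be sketched in Python (a simplified stand-in for the shell pipeline above, fed with hypothetical sample lines):&lt;/p&gt;

```python
from collections import Counter

# Lines in the format produced by waits.sql: OUT|wait_event|wait_event_type
lines = [
    "OUT|pg_stat_statements|LWLock",
    "OUT|pg_stat_statements|LWLock",
    "OUT|ClientRead|Client",
]

counts = Counter(
    tuple(line.split("|")[1:3]) for line in lines if line.startswith("OUT")
)
for (event, event_type), n in counts.most_common():
    print(n, event, event_type)
```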



&lt;p&gt;Then we run the benchmark, compare the system performance with the pg_stat_statements extension enabled and disabled, and display the reasons for the performance drop. To do this, we will use the standard pgbench utility:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Number of clients (-c): 48 - according to the number of CPUs;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Threads (-j): 20 - limits contention at the OS level to avoid overloading the CPU and context switches;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Duration (-T): 120 seconds;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Script: run pgbench_script.sql (-f pgbench_script.sql);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Metrics: total number of delays and final TPS.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pgbench -c48 -j20 -T120 -f pgbench_script.sql --progress 10 | grep "tps = " 2&amp;gt;&amp;gt;$RESULTS &amp;gt;&amp;gt;$RESULTS &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's combine all these actions into one script:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RESULTS="/tmp/results"

rm -rfv $RESULTS

nohup pgbench -c48 -j20 -T120 -f pgbench_script.sql --progress 10 | grep "tps = " 2&amp;gt;&amp;gt;$RESULTS &amp;gt;&amp;gt;$RESULTS &amp;amp;

timeout 125 psql -f waits.sql

echo " count | wait_event | wait_event_type" &amp;gt;&amp;gt;$RESULTS

echo "--------------------------------------" &amp;gt;&amp;gt;$RESULTS

cat /tmp/waits | grep OUT | awk '{print $2 "|" $3}' FS="|" | sort | uniq -c | sort -n -r -k1 &amp;gt;&amp;gt;$RESULTS

cat $RESULTS

rm -rfv /tmp/waits
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We will get the following results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# With pg_stat_statements off

tps = 237437.104223 (without initial connection time)

 count | wait_event | wait_event_type

--------------------------------------

   2922  None            No wait

    918  ClientRead    Client

# With pg_stat_statements on

tps = 32112.129029 (without initial connection time)

 count |     wait_event     | wait_event_type

--------------------------------------

   4703  pg_stat_statements      LWLock

    884  None                    No wait

    143  ClientRead              Client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
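&lt;p&gt;A quick sanity check of the scale of the drop, using the tps figures reported above:&lt;/p&gt;

```python
tps_off = 237437.104223  # pg_stat_statements disabled
tps_on = 32112.129029    # pg_stat_statements enabled

slowdown = tps_off / tps_on
print(f"{slowdown:.1f}x drop in TPS")  # 7.4x drop in TPS
```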



&lt;p&gt;As we can see, with a large number of unique queries, enabled &lt;code&gt;pg_stat_statements&lt;/code&gt; can significantly reduce performance, here by roughly a sevenfold drop in TPS. All of this is caused by &lt;strong&gt;frequent 'exclusive' LWLock acquisition&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now let's review another scenario: a large number of similar queries. Here we will need a more powerful machine with 192 CPUs. For the test, we will again use a script that periodically checks pg_stat_activity, but this time we will create the load using the same query executed through pgbench -M prepared -S with 192 clients:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pgbench -c192 -j20 -T120 -M prepared -S --progress 10 | grep "tps = " 2&amp;gt;&amp;gt;$RESULTS &amp;gt;&amp;gt;$RESULTS &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we run this benchmark...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RESULTS="/tmp/results"

rm -rfv $RESULTS

pgbench -i -s500

nohup pgbench -c192 -j20 -T120 -M prepared -S --progress 10 | grep "tps = " 2&amp;gt;&amp;gt;$RESULTS &amp;gt;&amp;gt;$RESULTS &amp;amp;

timeout 125 psql -f waits.sql

echo " count | wait_event | wait_event_type" &amp;gt;&amp;gt;$RESULTS

echo "--------------------------------------" &amp;gt;&amp;gt;$RESULTS

cat /tmp/waits | grep OUT | awk '{print $2}' FS="|" | sort | uniq -c | sort -n -r -k1 &amp;gt;&amp;gt;$RESULTS

cat $RESULTS

rm -rfv /tmp/waits
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;..we get the following results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Results with pg_stat_statements off

tps = 1015425.438193 (without initial connection time)

 count | wait_event | wait_event_type

--------------------------------------

  13201  None             No wait

   3482  ClientRead       Client

# Results with pg_stat_statements on

tps = 484338.163894 (without initial connection time)

 count | wait_event | wait_event_type

--------------------------------------

  11214  SpinDelay        Timeout

   9481  None             No wait

    930  ClientRead       Client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we try to reproduce this on a machine with 48 CPUs (i.e., with 48 clients)...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pgbench -c48 -j20 -T120 -M prepared -S --progress 10 | grep "tps = " 2&amp;gt;&amp;gt;$RESULTS &amp;gt;&amp;gt;$RESULTS &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...the performance difference stays well within the bounds of statistical error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Results with pg_stat_statements off

tps = 625335.965464 (without initial connection time)

 count | wait_event | wait_event_type

--------------------------------------

    979  ClientRead         Client

    927  None               No wait

# Results with pg_stat_statements on

tps = 611708.477697 (without initial connection time)

 count | wait_event | wait_event_type

--------------------------------------

   1000  ClientRead         Client

    978  None               No wait
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicates that the impact of &lt;code&gt;pg_stat_statements&lt;/code&gt; when working with duplicate queries becomes noticeable only at very high levels of parallelism. The main reason is the contention for the same entry in the hash table, which is accompanied by &lt;strong&gt;frequent SpinLock acquisitions when updating query metrics in the hash table bucket&lt;/strong&gt;. When many threads simultaneously execute the same query, they try to update the same structure — increment call counters, execution time, and other metrics. This leads to severe contention for SpinLock, which under high load causes delays and reduces TPS.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is sampling?
&lt;/h3&gt;

&lt;p&gt;Query sampling is a method of uniform filtering in which only a portion of all queries is included in the sample. In the context of &lt;code&gt;pg_stat_statements&lt;/code&gt;, this means that metric information is recorded not for every executed query, but only for some of them, with equal probability. A similar approach is used elsewhere in PostgreSQL: log_transaction_sample_rate and log_statement_sample_rate reduce log volume, and sampling also appears in &lt;code&gt;auto_explain.sample_rate&lt;/code&gt; and &lt;code&gt;pg_store_plans.sample_rate&lt;/code&gt;. In Tantor Postgres 17.5, a corresponding setting was added to pg_stat_statements: the GUC parameter &lt;strong&gt;&lt;code&gt;pg_stat_statements.sample_rate&lt;/code&gt;&lt;/strong&gt;. Its value (from 0 to 1) determines what fraction of queries will be tracked in pg_stat_statements. A query will be sampled if the following condition is met:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Sampling&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;is_query_sampled = pgss_sample_rate != 0.0 &amp;amp;&amp;amp;

                    (pgss_sample_rate == 1.0 ||

                    pg_prng_double(&amp;amp;pg_global_prng_state) &amp;lt;= pgss_sample_rate);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
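&lt;p&gt;The same predicate can be sketched in Python (illustrative only; the extension implements it in C using PostgreSQL's PRNG):&lt;/p&gt;

```python
import operator
import random

def is_query_sampled(sample_rate, rng):
    # Mirrors the C condition: never sample at 0.0, always sample at 1.0,
    # otherwise sample with probability sample_rate (operator.le is
    # "less than or equal").
    return sample_rate != 0.0 and (
        sample_rate == 1.0 or operator.le(rng.random(), sample_rate)
    )

rng = random.Random(42)  # fixed seed so the run is reproducible
sampled = sum(is_query_sampled(0.25, rng) for _ in range(100_000))
print(f"sampled {sampled} of 100000 queries")  # close to 25%
```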



&lt;p&gt;Since the number of queries can be very large, this inequality filters in only the specified fraction of them. The sampling method has one significant drawback: not all queries are tracked in &lt;code&gt;pg_stat_statements&lt;/code&gt;. This impacts the completeness of the collected information, especially when debugging or analyzing rare but problematic queries. On the other hand, since only a portion of the queries is tracked, the load on locks is reduced and, consequently, overall system performance improves.&lt;/p&gt;

&lt;h3&gt;
  
  
  Limitations of the sampling
&lt;/h3&gt;

&lt;p&gt;If queries are sampled at the stage of adding a new bucket to the hash table, thereby relieving the LWLock, there is a risk of losing a valuable query that would offer crucial information if tracked in &lt;code&gt;pg_stat_statements&lt;/code&gt;. Moreover, even if sampling did take care of the performance problem, there is still the issue of security and correctness of query storage. The point is that adding a bucket to the hash table can occur both before and after query execution, but the structure required for normalizing the query is available only &lt;strong&gt;before the query execution&lt;/strong&gt;, namely at the parsing stage, when a structure is created to store the literals for normalization. If at this stage &lt;code&gt;pg_stat_statements&lt;/code&gt; decides not to save the query due to sampling, but then (after execution) still tries to add it to the hash table, the query will be recorded in its original (non-normalized) form. This may lead to the leakage of sensitive information in pg_stat_statements (for example, passwords or personal data in the literals of a SQL query). Therefore, sampling during query parsing is unacceptable: it can violate security requirements.&lt;br&gt;
Nevertheless, both the PostgreSQL community and the developers at Tantor Labs are trying to solve the problem of a large number of unique queries in a different way — by merging similar queries under one queryid. This reduces the number of unique entries in the hash table and, accordingly, decreases the frequency of its locking. The community has already merged the following queries into one QueryId:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In &lt;a href="https://github.com/postgres/postgres/commit/dc68515968e80d75f8106d0df05da346be537628" rel="noopener noreferrer"&gt;SET commands&lt;/a&gt;;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identical &lt;a href="https://github.com/postgres/postgres/commit/9fbd53dea5d513a78ca04834101ca1aa73b63e59" rel="noopener noreferrer"&gt;queries with different lengths of IN&lt;/a&gt;(...);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.tantorlabs.ru/tdb/ru/17_5/se/differences.html#:~:text=%D0%98%D0%BD%D1%82%D0%B5%D0%BB%D0%BB%D0%B5%D0%BA%D1%82%D1%83%D0%B0%D0%BB%D1%8C%D0%BD%D0%B0%D1%8F%20%D0%B0%D0%B3%D1%80%D0%B5%D0%B3%D0%B0%D1%86%D0%B8%D1%8F%20%D0%B4%D0%B0%D0%BD%D0%BD%D1%8B%D1%85%3A%20%D1%81%D0%B8%D1%81%D1%82%D0%B5%D0%BC%D0%B0%20%D1%82%D0%B5%D0%BF%D0%B5%D1%80%D1%8C%20%D0%BC%D0%B0%D1%81%D0%BA%D0%B8%D1%80%D1%83%D0%B5%D1%82%20%D0%B8%D0%BC%D0%B5%D0%BD%D0%B0%20%D0%B2%D1%80%D0%B5%D0%BC%D0%B5%D0%BD%D0%BD%D1%8B%D1%85%20%D1%82%D0%B0%D0%B1%D0%BB%D0%B8%D1%86%20%D0%B8%20%D0%BC%D0%B0%D1%81%D1%81%D0%B8%D0%B2%D0%BE%D0%B2%2C%20%D1%87%D1%82%D0%BE%20%D0%BF%D0%BE%D0%B7%D0%B2%D0%BE%D0%BB%D1%8F%D0%B5%D1%82%20%D0%B1%D0%BE%D0%BB%D0%B5%D0%B5%20%D1%82%D0%BE%D1%87%D0%BD%D0%BE%20%D0%B0%D0%B3%D1%80%D0%B5%D0%B3%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D1%82%D1%8C%20%D1%81%D1%82%D0%B0%D1%82%D0%B8%D1%81%D1%82%D0%B8%D0%BA%D1%83%20%D0%BF%D0%BE%20%D0%BE%D0%B4%D0%BD%D0%BE%D1%82%D0%B8%D0%BF%D0%BD%D1%8B%D0%BC%20%D0%B7%D0%B0%D0%BF%D1%80%D0%BE%D1%81%D0%B0%D0%BC%20%D1%81%20%D1%80%D0%B0%D0%B7%D0%BD%D1%8B%D0%BC%D0%B8%20%D0%BF%D0%B0%D1%80%D0%B0%D0%BC%D0%B5%D1%82%D1%80%D0%B0%D0%BC%D0%B8" rel="noopener noreferrer"&gt;Temporary tables&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In scenarios where the load on SpinLock becomes the bottleneck, for example, when the same entry in the hash table is updated frequently, sampling can be quite effective. Since a SpinLock protects only a single bucket, acquiring it less often (by skipping some queries) reduces contention between threads and thus improves overall performance.&lt;/p&gt;
&lt;h3&gt;
  
  
  Sampling results
&lt;/h3&gt;

&lt;p&gt;Let's revisit the scenario above with the powerful 192-CPU machine. The same &lt;code&gt;waits.sql&lt;/code&gt; script is used for the test, with &lt;code&gt;pg_stat_statements&lt;/code&gt; enabled. Now let's run a benchmark to assess the impact of &lt;code&gt;pg_stat_statements.sample_rate&lt;/code&gt; on performance and the nature of waits. We run a loop over five &lt;code&gt;sample_rate&lt;/code&gt; values: &lt;code&gt;1, 0.75, 0.5, 0.25&lt;/code&gt;, and &lt;code&gt;0&lt;/code&gt;. For each value, we run a load test using &lt;code&gt;pgbench&lt;/code&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="http://benchmark.sh" rel="noopener noreferrer"&gt;&lt;strong&gt;benchmark.sh&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONNECTIONS=192

RESULTS="/tmp/results"

pgbench -i -s500

rm -rfv $RESULTS

for i in 1 .75 .5 .25 0

do

  psql -c "alter system set pg_stat_statements.sample_rate = ${i};" 2&amp;gt;/dev/null &amp;gt;/dev/null

  psql -c "select pg_reload_conf();" 2&amp;gt;/dev/null &amp;gt;/dev/null

  psql -c "show pg_stat_statements.sample_rate;" 2&amp;gt;/dev/null &amp;gt;/dev/null

  echo -e "\nsample_rate = $i" &amp;gt;&amp;gt;$RESULTS

  nohup pgbench -c $CONNECTIONS -j20 -T120 -S -Mprepared --progress 10 | grep "tps = " 2&amp;gt;&amp;gt;/tmp/results &amp;gt;&amp;gt;$RESULTS &amp;amp;

  timeout 125 psql -f /tmp/waits.sql

  echo " count | wait_event | wait_event_type" &amp;gt;&amp;gt;$RESULTS

  echo "--------------------------------------" &amp;gt;&amp;gt;$RESULTS

  cat /tmp/waits | grep OUT | awk '{print $2 "|" $3}' FS="|" | sort | uniq -c | sort -n -r -k1 &amp;gt;&amp;gt;$RESULTS

  rm -rfv /tmp/waits

done

cat $RESULTS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
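&lt;p&gt;The counting pipeline at the end of the loop is easy to sanity-check on toy data; the sample lines below are invented, while the real input comes from the &lt;code&gt;waits.sql&lt;/code&gt; output in /tmp/waits:&lt;/p&gt;

```shell
# Toy run of the aggregation from benchmark.sh: keep the wait_event and
# wait_event_type fields of "OUT|..." lines, then count and rank duplicates.
printf 'OUT|SpinDelay|Timeout\nOUT|SpinDelay|Timeout\nOUT|ClientRead|Client\n' \
  | grep OUT | awk '{print $2 "|" $3}' FS="|" | sort | uniq -c | sort -n -r -k1
# The most frequent wait ("2 SpinDelay|Timeout") comes out on top.
```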



&lt;p&gt;After running the benchmark with various &lt;code&gt;pg_stat_statements.sample_rate&lt;/code&gt; values, we got the results in the table below. It shows how performance and the nature of waits change when we change the fraction of queries hitting the hash table:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbv2tjklwdu8170s8kjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbv2tjklwdu8170s8kjj.png" alt=" " width="745" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At sample_rate = 1.0, when metrics are collected for all queries, TPS is the lowest and there is a huge number of waits on SpinLock. As the sample_rate decreases to 0.75 and below, TPS rises sharply, and SpinDelay decreases by a factor of 2.3. At sample_rate = 0.25 and below, SpinDelay disappears entirely.&lt;br&gt;
Thus, sampling effectively reduces the overhead of &lt;code&gt;pg_stat_statements&lt;/code&gt; and significantly improves performance in scenarios with high contention for the SpinLock.&lt;/p&gt;
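&lt;p&gt;The wait picture can also be inspected interactively. This is a simplified sketch of the kind of sampling &lt;code&gt;waits.sql&lt;/code&gt; performs, using only the standard &lt;code&gt;pg_stat_activity&lt;/code&gt; view:&lt;/p&gt;

```sql
-- One snapshot of current waits, grouped like the benchmark report
SELECT count(*), wait_event, wait_event_type
  FROM pg_stat_activity
 WHERE wait_event IS NOT NULL
 GROUP BY wait_event, wait_event_type
 ORDER BY count(*) DESC;
```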

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In &lt;code&gt;pg_stat_statements&lt;/code&gt;, an &lt;code&gt;LWLock&lt;/code&gt; is taken when adding a new entry to the hash table, and with a large number of unique queries it can become a bottleneck. The PostgreSQL community is trying to solve this problem by reducing the number of new entries (namely, by merging similar queries under one &lt;code&gt;queryid&lt;/code&gt;), and in Tantor Postgres 17.5, corresponding configuration parameters were added to mask arrays and temporary tables: &lt;code&gt;pg_stat_statements.mask_const_arrays&lt;/code&gt; and &lt;code&gt;pg_stat_statements.mask_temp_tables&lt;/code&gt;. This helps cluster similar queries more precisely.&lt;/p&gt;
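&lt;p&gt;Assuming these parameters follow the usual on/off GUC convention (the parameter names are from the article; the values shown are illustrative), enabling the masking looks like this in &lt;code&gt;postgresql.conf&lt;/code&gt;:&lt;/p&gt;

```plaintext
# postgresql.conf (Tantor Postgres 17.5)
pg_stat_statements.mask_const_arrays = on
pg_stat_statements.mask_temp_tables = on
```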

&lt;p&gt;In turn, the SpinLock is used to protect individual buckets and becomes a source of contention when counters for identical queries are frequently updated, especially on machines with a large number of CPUs. To solve this problem, the &lt;code&gt;pg_stat_statements.sample_rate&lt;/code&gt; parameter was added in Tantor Postgres SE 17.5: it reduces the load by sampling queries, thereby eliminating the problem described in this article.&lt;/p&gt;

&lt;p&gt;The developers at Tantor Labs have proposed merging the query sampling mechanism for &lt;code&gt;pg_stat_statements&lt;/code&gt; into the main branch of PostgreSQL. There's already an &lt;a href="https://www.postgresql.org/message-id/flat/fe99e0ca-e564-480e-b865-5f0cee30bc60%40tantorlabs.com" rel="noopener noreferrer"&gt;active discussion&lt;/a&gt; happening over at pgsql-hackers — if you've got any thoughts or feedback, feel free to jump in!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>OAuth 2.0 authorization in PostgreSQL using Keycloak as an example</title>
      <dc:creator>Tantor Labs</dc:creator>
      <pubDate>Mon, 10 Nov 2025 10:42:46 +0000</pubDate>
      <link>https://dev.to/tantorlabs/oauth-20-authorization-in-postgresql-using-keycloak-as-an-example-3pf3</link>
      <guid>https://dev.to/tantorlabs/oauth-20-authorization-in-postgresql-using-keycloak-as-an-example-3pf3</guid>
      <description>&lt;p&gt;Hello! Today we will talk about authorization support via OAuth 2.0 Device Authorization Flow implemented in Tantor Postgres 17.5.0 DBMS. This is a modern and secure access method that allows applications to request access to PostgreSQL on behalf of the user through an external identification and access control provider, such as Keycloak, which is especially convenient for cloud environments and microservice architectures (the feature will also be available in PostgreSQL 18). In this article, we'll take a step-by-step look at configuring OAuth authorization in PostgreSQL using Keycloak: configure Keycloak, prepare PostgreSQL, write an OAuth token validator in PostgreSQL, and verify successful authorization via psql using Device Flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Support for authorization via &lt;a href="https://datatracker.ietf.org/doc/html/rfc8628" rel="noopener noreferrer"&gt;OAuth 2.0 Device Authorization Flow&lt;/a&gt; is a modern and secure way of providing access, introduced in PostgreSQL 18. This authorization method allows applications to request access to PostgreSQL on behalf of the user through an external identification and access control provider, such as &lt;a href="https://www.keycloak.org/" rel="noopener noreferrer"&gt;Keycloak&lt;/a&gt;, which is especially convenient for cloud environments and microservice architectures. In Tantor Postgres, this functionality has been available since version 17.5.0, released in June 2025.&lt;/p&gt;

&lt;p&gt;Unlike password authentication (password, md5, SCRAM), OAuth allows you to centralize account management and security policies in a single identity and access control provider. Device Authorization Flow is ideal for scenarios where the client side is limited or absent (for example, terminal applications, automated services). The user confirms access on a separate device via a browser or a mobile application. This makes the authorization procedure more secure, since it is impossible to intercept the password on the client side.&lt;/p&gt;

&lt;p&gt;In this article, we will take a step-by-step look at configuring OAuth authorization in PostgreSQL using Keycloak: configure Keycloak, prepare PostgreSQL, write an OAuth token validator in PostgreSQL, and verify successful authorization via psql using Device Flow. In other words, let's go through the following scheme for configuring an OAuth connection in PostgreSQL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7l4f8rdv0j2r1a5r0xw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7l4f8rdv0j2r1a5r0xw.png" alt=" " width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Keycloak setup by a security engineer
&lt;/h2&gt;

&lt;p&gt;Keycloak is an open-source identity and access management system: it handles user authentication, controls access to applications and data, and provides single sign-on (SSO). Keycloak simplifies access setup, password recovery, profile editing, and sending one-time passwords, freeing developers from building additional login forms. With Keycloak, these processes can be integrated with just a few mouse clicks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keycloak launch
&lt;/h3&gt;

&lt;p&gt;Let's launch Keycloak from a Docker image. We will also open port 8080 and create an initial admin user with the username admin and password admin. With the --name option, we name the container keycloak.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name keycloak -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:26.2.1 start-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, enter &lt;code&gt;http://localhost:8080&lt;/code&gt; in the browser; the Keycloak admin panel will open and ask for a username and password. Enter admin / admin.&lt;/p&gt;

&lt;p&gt;The security engineer sets up user access rights in Keycloak in the following order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating a Realm&lt;/li&gt;
&lt;li&gt;Creating Users&lt;/li&gt;
&lt;li&gt;Creating a Client scope&lt;/li&gt;
&lt;li&gt;Creating Clients&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's take a closer look at each stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Realm
&lt;/h3&gt;

&lt;p&gt;Realm is a security configuration area that includes user accounts, roles, groups, and authorization settings. To create it, click Manage realms in the upper-left corner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2aoa7wfa06hkhhyzmhh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2aoa7wfa06hkhhyzmhh.png" alt=" " width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see the &lt;code&gt;Manage realms&lt;/code&gt; page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53c3tzzvhkieatl5v9p9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53c3tzzvhkieatl5v9p9.png" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click on &lt;code&gt;Create realm&lt;/code&gt;, and in the dialog box that appears, enter "postgres-realm" in the &lt;code&gt;Realm name&lt;/code&gt; field.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd1km0d83cllnt0celnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd1km0d83cllnt0celnr.png" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking on &lt;code&gt;Create&lt;/code&gt;, our realm will be created and will become the current one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypi69s3zjbqn0f0jqxcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypi69s3zjbqn0f0jqxcb.png" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Users
&lt;/h3&gt;

&lt;p&gt;Users are entities that can log in to the system. They can have associated attributes such as email, username, address, phone number, and birthday. To open the User creation window, click on the Users tab in the left panel:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab3clvpcyi5ic6e5pbag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab3clvpcyi5ic6e5pbag.png" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click &lt;code&gt;Create new user&lt;/code&gt;; a window will appear for entering the new user's data. In the &lt;code&gt;Username&lt;/code&gt; field, enter the name "alice" and fill in the &lt;code&gt;Email&lt;/code&gt;, &lt;code&gt;First name&lt;/code&gt;, and &lt;code&gt;Last name&lt;/code&gt; fields.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd75ixpzchutf4msifi3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd75ixpzchutf4msifi3p.png" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;code&gt;Create&lt;/code&gt; button; the new user's page will appear. The DB administrator will need the user ID shown here to map the Keycloak user to the PostgreSQL user in the &lt;code&gt;pg_ident.conf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1ui98oi8u7ilt3br4gz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1ui98oi8u7ilt3br4gz.png" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Switch to the &lt;code&gt;Credentials&lt;/code&gt; tab and click on &lt;code&gt;Set password&lt;/code&gt; to set the password:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgtb7g19rh0v0fa5n5ru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgtb7g19rh0v0fa5n5ru.png" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the password "alice". Set the Temporary switch (temporary password) to the Off position; otherwise, at the first login, the system will require the user to set a new password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0njv6dcn8njs2uenrhms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0njv6dcn8njs2uenrhms.png" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;Save&lt;/code&gt;, then in the dialog box that appears click &lt;code&gt;Save password&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9mejketds7uyfqxq3mk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9mejketds7uyfqxq3mk.png" width="788" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The password is set:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz7zrnl5ej0kbechxrsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz7zrnl5ej0kbechxrsy.png" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Client scopes
&lt;/h3&gt;

&lt;p&gt;The Client scope is a way to limit the access rights that are declared in tokens. It allows the client to request only the roles they need, which makes tokens more secure and manageable.&lt;/p&gt;

&lt;p&gt;Switch to the &lt;code&gt;Client scopes&lt;/code&gt; tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foee11203h4gx5qc70b4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foee11203h4gx5qc70b4x.png" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;code&gt;Create client scope&lt;/code&gt; button, a window for creating a scope will open. Enter the "postgres" value in the &lt;code&gt;Name&lt;/code&gt; field, select the &lt;code&gt;Default type&lt;/code&gt;, activate &lt;code&gt;Include in token scope&lt;/code&gt;, and save by clicking &lt;code&gt;Save&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzkczj0jwyq5pka1twfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzkczj0jwyq5pka1twfk.png" width="800" height="765"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Client
&lt;/h3&gt;

&lt;p&gt;Clients are applications and services that can request user authorization. To create a client, go to the &lt;code&gt;Clients&lt;/code&gt; tab and click &lt;code&gt;Create client&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny28j9d0iabeeirz3kov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny28j9d0iabeeirz3kov.png" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;General settings&lt;/code&gt; window that appears, enter "postgres-client" in the &lt;code&gt;Client ID&lt;/code&gt; field, then click &lt;code&gt;Next&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym6mxoazb2b3twmidyix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym6mxoazb2b3twmidyix.png" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;Capability config&lt;/code&gt; window that appears:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Turn on &lt;code&gt;Client authentication&lt;/code&gt; (On position);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disable &lt;code&gt;Standard flow&lt;/code&gt;, because we do not use Authorization Code Flow;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable the &lt;code&gt;OAuth 2.0 Device Authorization Grant&lt;/code&gt;;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;code&gt;Next&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazmukmolcj4hf39cncn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazmukmolcj4hf39cncn6.png" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We don't change anything in the &lt;code&gt;Login settings&lt;/code&gt; window; just click &lt;code&gt;Save&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkees28nn5l54pokn4qac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkees28nn5l54pokn4qac.png" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The client we've created opens:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhndznzwbk61wx2ov7lod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhndznzwbk61wx2ov7lod.png" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, go to the &lt;code&gt;Client scopes&lt;/code&gt; tab and check that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;"postgres" is present (in the picture at the very bottom) and the &lt;code&gt;Default&lt;/code&gt; type is set for it;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;"basic" also has the &lt;code&gt;Default&lt;/code&gt; type.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rest of the scopes are not important for our example, you can leave everything as it is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkxpralp1bdp8x3dnonz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkxpralp1bdp8x3dnonz.png" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Credentials&lt;/code&gt; tab contains the &lt;code&gt;Client Secret&lt;/code&gt;, which we will need to enter in the terminal when logging into PostgreSQL (see below in the "Authorization via psql" section)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn59ft28zczmkelih2ol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn59ft28zczmkelih2ol.png" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring PostgreSQL by a database administrator
&lt;/h2&gt;

&lt;p&gt;Using OAuth requires the appropriate PostgreSQL configuration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a user in PostgreSQL;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the parameters in the &lt;code&gt;postgresql.conf&lt;/code&gt; file;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the parameters in the &lt;code&gt;pg_ident.conf&lt;/code&gt; file if users are mapped between Keycloak and PostgreSQL through it; if the mapping is done in a validator written by the developer, this file does not need to be configured;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the parameters in the &lt;code&gt;pg_hba.conf&lt;/code&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating Roles
&lt;/h3&gt;

&lt;p&gt;A role is an entity that can own objects and have certain rights in the database. A role can represent a user, a group, or both, depending on the use case.&lt;/p&gt;

&lt;p&gt;Create a role and give it the right to connect to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE ROLE alice;
ALTER ROLE alice WITH LOGIN;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configuring the postgresql.conf file
&lt;/h3&gt;

&lt;p&gt;In the &lt;code&gt;oauth_validator_libraries&lt;/code&gt; parameter we will set the name of the validator that will verify the token (see the section "Writing a validator by the developer").&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oauth_validator_libraries = 'oauth_validator'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If only one validation library is configured, it will be used by default for all OAuth connections; otherwise, every oauth entry in the HBA file must explicitly select a validator from this list. If the parameter is set to an empty string (the default), OAuth connections are rejected.&lt;/p&gt;

&lt;h3&gt;
  
  
  User mapping between Keycloak and PostgreSQL
&lt;/h3&gt;

&lt;p&gt;There are two ways to map users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;via the &lt;code&gt;pg_ident.conf&lt;/code&gt; file (configured by the database administrator);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;via a validator (the developer adds the mapping to the validator code).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Mapping users via the pg_ident.conf file
&lt;/h4&gt;

&lt;p&gt;Configure the mapping between Keycloak user IDs and database roles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# MAPNAME    SYSTEM-USERNAME                           PG-USERNAME
oauthmap    "0fc72b6f-6221-4ed8-a916-069e7a081d14"     "alice"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The mapping name is given in the first column, the user ID from Keycloak (see the "Creating Users" section) in the second, and the name of the PostgreSQL role in the third.&lt;/p&gt;

&lt;h4&gt;
  
  
  Mapping users through a validator
&lt;/h4&gt;

&lt;p&gt;It is described below in the "Writing a validator by the developer" section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring the pg_hba.conf file
&lt;/h3&gt;

&lt;p&gt;Setting up the client's login to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TYPE  DATABASE        USER            ADDRESS                 METHOD local   all             all             oauth issuer="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" scope="openid postgres" map="oauthmap"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the fourth field, set oauth followed by its parameters. In the &lt;code&gt;issuer&lt;/code&gt; parameter, set the URL of the discovery service &lt;a href="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" rel="noopener noreferrer"&gt;http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration&lt;/a&gt;. In the &lt;code&gt;scope&lt;/code&gt; parameter, set the scopes that will be requested from Keycloak for the client.&lt;/p&gt;

&lt;p&gt;Next, you need to set the algorithm for mapping users between Keycloak and PostgreSQL (see "User mapping between Keycloak and PostgreSQL").&lt;/p&gt;

&lt;h4&gt;
  
  
  Mapping users via pg_ident.conf
&lt;/h4&gt;

&lt;p&gt;We add the &lt;code&gt;map&lt;/code&gt; parameter, set to the map name from the &lt;code&gt;pg_ident.conf&lt;/code&gt; file; in our case this is "oauthmap":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TYPE  DATABASE        USER            ADDRESS                 METHOD local   all             all             oauth
issuer="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" scope="openid postgres" map="oauthmap"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Mapping users through a validator
&lt;/h4&gt;

&lt;p&gt;In this case, the &lt;code&gt;delegate_ident_mapping=1&lt;/code&gt; parameter should be set instead of the map parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TYPE  DATABASE        USER            ADDRESS                 METHOD local   all             all             oauth
issuer="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" scope="openid postgres" delegate_ident_mapping=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;delegate_ident_mapping&lt;/code&gt; parameter has a higher priority than &lt;code&gt;map&lt;/code&gt;, so if the &lt;code&gt;map&lt;/code&gt; parameter is also specified with &lt;code&gt;delegate_ident_mapping=1&lt;/code&gt;, it will be ignored and user mapping will go through the validator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing a validator by the developer
&lt;/h2&gt;

&lt;p&gt;OAuth authentication modules implement their functionality by defining a set of callbacks. The server will call them as needed to process the authorization request from the user.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation of the token validator
&lt;/h3&gt;

&lt;p&gt;When mapping users between Keycloak and PostgreSQL, the implementation via &lt;code&gt;pg_ident.conf&lt;/code&gt; differs from mapping via the validator only in the implementation of the get_user function. In the example below, the mapping is implemented using &lt;code&gt;pg_ident.conf&lt;/code&gt;. Token verification consists of checking that the required scope specified in &lt;code&gt;pg_hba.conf&lt;/code&gt; is present in the &lt;code&gt;scope&lt;/code&gt; field of the token received from the server. Upon successful verification, the user ID from the token's &lt;code&gt;sub&lt;/code&gt; field is assigned to &lt;code&gt;res-&amp;gt;authn_id&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The main verification logic is implemented in the &lt;code&gt;validate_token&lt;/code&gt; function. The general scheme of its work:&lt;/p&gt;

&lt;p&gt;1. Token content analysis. The raw token string is parsed to extract its payload. If the token is malformed or the payload cannot be extracted, verification fails.&lt;/p&gt;

&lt;p&gt;2. Extracting the &lt;code&gt;sub&lt;/code&gt; and &lt;code&gt;scope&lt;/code&gt; fields from the JWT payload. The payload must include both fields:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sub&lt;/code&gt; (Subject) -- the user ID&lt;/p&gt;

&lt;p&gt;&lt;code&gt;scope&lt;/code&gt; -- a space-separated list of permissions (scopes) granted by the token&lt;/p&gt;

&lt;p&gt;If either field is missing, verification is considered failed.&lt;/p&gt;

&lt;p&gt;3. Assigning the identifier. The &lt;code&gt;sub&lt;/code&gt; value is assigned to the &lt;code&gt;res-&amp;gt;authn_id&lt;/code&gt; field, which PostgreSQL uses to identify the user.&lt;/p&gt;

&lt;p&gt;4. Comparing permissions (scopes). The permissions granted by the token are compared with those required by the matching entry in &lt;code&gt;pg_hba.conf&lt;/code&gt; (the &lt;code&gt;oauth_scope&lt;/code&gt; option). If all required permissions are present in the token, verification succeeds.&lt;/p&gt;

&lt;p&gt;5. Setting the authorization result. The &lt;code&gt;res-&amp;gt;authorized&lt;/code&gt; flag is set to true if the permissions match, otherwise false.&lt;/p&gt;

&lt;p&gt;6. Identity mapping. The &lt;code&gt;sub&lt;/code&gt; value is then matched (outside of this module) against the entries in &lt;code&gt;pg_ident.conf&lt;/code&gt; to determine the actual database role the user can use.&lt;/p&gt;
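
&lt;p&gt;For reference, a decoded payload of a token that passes these checks might look like this (abridged; the values match the Keycloak example used later in this article):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "iss": "http://192.168.0.156:8080/realms/postgres-realm",
  "sub": "0fc72b6f-6221-4ed8-a916-069e7a081d14",
  "scope": "openid profile postgres",
  "preferred_username": "alice"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;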

&lt;p&gt;Let's start writing our training validator (its sources are also available on &lt;a href="https://github.com/TantorLabs/oauth_validator" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;). First, create an oauth_validator folder and, inside it, the file &lt;strong&gt;oauth_validator.c&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;string.h&amp;gt;

#include "postgres.h"

#include "token_utils.h"

#include "fmgr.h"
#include "libpq/oauth.h"
#include "miscadmin.h"
#include "nodes/pg_list.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;

/*
 * Declarations of internal module functions.
 */
static void validator_startup(ValidatorModuleState *state);
static void validator_shutdown(ValidatorModuleState *state);
static bool validate_token(const ValidatorModuleState *state,
                           const char *token,
                           const char *role,
                           ValidatorModuleResult *result);

/*
 * Structure with pointers to OAuth token validator callback functions.
 * PostgreSQL calls these functions during certain phases of the module's lifecycle.
 */
static const OAuthValidatorCallbacks validator_callbacks = {
    PG_OAUTH_VALIDATOR_MAGIC, /* Magic number for API version check */

    .startup_cb = validator_startup,   /* Validator initialization function */
    .shutdown_cb = validator_shutdown, /* Validator shutdown function */
    .validate_cb = validate_token      /* Token validation function */
};

/*
 * Entry point for the OAuth validator module.
 * PostgreSQL calls this function when loading the module.
 */
const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
    return &amp;amp;validator_callbacks;
}

/*
 * Validator initialization function.
 * Called once when the module is loaded.
 */
static void
validator_startup(ValidatorModuleState *state)
{
    /*
     * Check if the server version matches the one the module was built with.
     * (Real production modules shouldn't do this, as it breaks upgrade compatibility.)
     */
    if (state-&amp;gt;sversion != PG_VERSION_NUM)
        elog(ERROR, "oauth_validator: server version mismatch: sversion=%d", state-&amp;gt;sversion);
}

/*
 * Validator shutdown function.
 * Called when the module is unloaded or the server shuts down.
 */
static void
validator_shutdown(ValidatorModuleState *state)
{
    /* Nothing to do for now, but resource cleanup could be added here if necessary. */
}

/*
 * Main OAuth token validation function.
 *
 * Parameters:
 * - state: validator module state (may contain configuration etc.);
 * - token: string containing the token to validate;
 * - role: PostgreSQL role the client is trying to connect as;
 * - res: structure to store the validation result.
 *
 * Returns true if the token is valid, false otherwise.
 */
static bool
validate_token(const ValidatorModuleState *state,
               const char *token, const char *role,
               ValidatorModuleResult *res)
{
    char *sub = NULL;               /* Value of the "sub" field from the token (user identifier) */
    char *scope = NULL;             /* Value of the "scope" field from the token (allowed scopes) */
    const char *token_payload = NULL; /* Token payload as JSON string */
    List *granted_scopes = NIL;     /* List of scopes granted by the token */
    List *required_scopes = NIL;    /* List of required scopes from HBA configuration */
    bool matched = false;           /* Flag indicating whether required scopes are satisfied */

    /* Initialize result */
    res-&amp;gt;authn_id = NULL;     /* Authentication ID (sub) */
    res-&amp;gt;authorized = false;  /* Authorization flag */

    /* Extract payload from the token */
    token_payload = parse_token_payload(token);
    if (token_payload == NULL)
    {
        elog(LOG, "Invalid token: missing payload: %s", token);
        return false;
    }

    /* Extract 'sub' and 'scope' fields from the payload */
    extract_sub_scope_fields(token_payload, &amp;amp;sub, &amp;amp;scope);
    if (!sub || !scope)
    {
        elog(LOG, "Invalid token: missing sub and/or scope fields: %s", token);
        return false;
    }

    /* Set authentication ID (sub) in the result */
    res-&amp;gt;authn_id = pstrdup(sub);

    /* Split the token's scope field into a list */
    granted_scopes = split_scopes(scope);

    /* Split the required scopes from HBA file into a list */
    required_scopes = split_scopes(MyProcPort-&amp;gt;hba-&amp;gt;oauth_scope);

    if (!granted_scopes || !required_scopes)
        return false;

    /* Check if the granted scopes satisfy the required scopes */
    matched = check_scopes(granted_scopes, required_scopes);

    /* Set authorization result flag */
    res-&amp;gt;authorized = matched;

    return true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;token_utils.h/.c&lt;/strong&gt; -- utility functions&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#ifndef TOKEN_UTILS_H
#define TOKEN_UTILS_H

#include &amp;lt;stdbool.h&amp;gt;

#include "common/jsonapi.h"
#include "nodes/pg_list.h"

const char* parse_token_payload(const char *token);
void extract_sub_scope_fields(const char *json, char **sub_field, char **scope_field);
const char *decode_base64(const char *b64);
char *base64url_to_base64(const char *b64url);
List *split_scopes(const char *raw);
bool check_scopes(List *granted, List *required);

#endif
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include "postgres.h"

#include "token_utils.h"

#include "common/base64.h"
#include "mb/pg_wchar.h"

#define SUB_FIELD   0   /* Index for 'sub' field */
#define SCOPE_FIELD 1   /* Index for 'scope' field */

/*
 * JSON object field handler.
 * Marks that the currently processed field is 'sub' or 'scope'
 * to store its value at the next processing stage.
 */
static JsonParseErrorType
token_field_start(void *state, char *fname, bool isnull)
{
    char **fields = (char **) state;

    if (strcmp(fname, "sub") == 0)
        fields[SUB_FIELD] = (char *) 1;  /* Mark that we are processing 'sub' field */
    else if (strcmp(fname, "scope") == 0)
        fields[SCOPE_FIELD] = (char *) 1; /* Mark that we are processing 'scope' field */

    return JSON_SUCCESS;
}

/*
 * JSON scalar value handler.
 * Stores the value of 'sub' or 'scope' if it was marked earlier.
 */
static JsonParseErrorType
token_scalar(void *state, char *token, JsonTokenType tokentype)
{
    char **fields = (char **) state;

    if (fields[SUB_FIELD] == (char *) 1)
        fields[SUB_FIELD] = pstrdup(token);  /* Save the value of 'sub' */
    else if (fields[SCOPE_FIELD] == (char *) 1)
        fields[SCOPE_FIELD] = pstrdup(token); /* Save the value of 'scope' */

    return JSON_SUCCESS;
}

/*
 * Extracts 'sub' and 'scope' fields from a JSON string.
 *
 * Parameters:
 *  - json: JSON string
 *  - sub_field: returns the value of 'sub' field
 *  - scope_field: returns the value of 'scope' field
 */
void
extract_sub_scope_fields(const char *json, char **sub_field, char **scope_field)
{
    JsonLexContext lex;
    JsonSemAction sem;

    char **fields = palloc0(sizeof(char *) * 2); /* Allocate memory for 2 strings ('sub', 'scope') */

    *sub_field = NULL;
    *scope_field = NULL;

    /* Create a lexical context for JSON parsing */
    makeJsonLexContextCstringLen(&amp;amp;lex, json, strlen(json), GetDatabaseEncoding(), true);

    /* Set up JSON parser handlers */
    memset(&amp;amp;sem, 0, sizeof(sem));
    sem.semstate = (void *) fields;
    sem.object_field_start = token_field_start;
    sem.scalar = token_scalar;

    /* Start JSON parsing */
    pg_parse_json(&amp;amp;lex, &amp;amp;sem);

    /* Return the found values */
    *sub_field = fields[SUB_FIELD];
    *scope_field = fields[SCOPE_FIELD];
}

/*
 * Extracts the payload from a JWT token.
 * Returns the decoded payload string in JSON format.
 */
const char*
parse_token_payload(const char *token)
{
    char *dot1 = NULL;
    char *dot2 = NULL;
    int payload_len = 0;
    char *payload_b64url = NULL;
    char *b64 = NULL;

    if(!token)
        return NULL;

    /* Find the first and second dots in JWT (separators for header.payload.signature) */
    dot1 = strchr(token, '.');
    dot2 = dot1 ? strchr(dot1 + 1, '.') : NULL;

    if (!dot1 || !dot2)
    {
        elog(LOG, "Invalid token format, two dots required: %s", token);
        return NULL;
    }

    /* Extract the encoded payload between the dots */
    payload_len = dot2 - (dot1 + 1);
    payload_b64url = pnstrdup(dot1 + 1, payload_len);

    /* Convert base64url to regular base64 */
    b64 = base64url_to_base64(payload_b64url);

    /* Decode base64 to JSON string */
    return decode_base64(b64);
}

/*
 * Converts a base64url string to base64 format.
 * Replaces '-' with '+', '_' with '/' and adds padding '=' if necessary.
 */
char *
base64url_to_base64(const char *b64url)
{
    int len = strlen(b64url);
    int pad = (4 - (len % 4)) % 4; /* Determine the number of '=' padding characters */
    char *b64 = palloc(len + pad + 1);

    for (int i = 0; i &amp;lt; len; i++)
    {
        if (b64url[i] == '-')
            b64[i] = '+';
        else if (b64url[i] == '_')
            b64[i] = '/';
        else
            b64[i] = b64url[i];
    }

    /* Add padding '=' */
    for (int i = 0; i &amp;lt; pad; i++)
        b64[len + i] = '=';

    b64[len + pad] = '\0';
    return b64;
}

/*
 * Decodes a base64 string into a regular string.
 * Returns the decoded string or NULL in case of error.
 */
const char *
decode_base64(const char *b64)
{
    int encoded_len = strlen(b64);
    int max_decoded_len = pg_b64_dec_len(encoded_len); /* Calculate required buffer length */
    char *decoded = palloc(max_decoded_len + 1);
    int decoded_len = pg_b64_decode(b64, encoded_len, decoded, max_decoded_len);

    if (decoded_len &amp;lt;= 0)
    {
        elog(LOG, "Invalid token format: base64 decoding error");
        return NULL;
    }

    decoded[decoded_len] = '\0';
    return decoded;
}

/*
 * Splits a space-separated string (e.g., scope list from token) into a List of strings.
 */
List *
split_scopes(const char *raw)
{
    List *result = NIL;
    char *str = pstrdup(raw);  /* Make a copy of the string because strtok modifies it */
    char *tok = strtok(str, " ");
    while (tok)
    {
        result = lappend(result, pstrdup(tok));
        tok = strtok(NULL, " ");
    }
    return result;
}

/*
 * String comparison function for list sorting.
 */
static int
list_string_cmp(const ListCell *a, const ListCell *b)
{
    const char *sa = (const char *) lfirst(a);
    const char *sb = (const char *) lfirst(b);
    return strcmp(sa, sb);
}

/*
 * Checks whether all required scopes are present in the granted scopes.
 * Lists are sorted beforehand for easier comparison.
 *
 * Returns true if all required scopes are found in granted scopes.
 */
bool
check_scopes(List *granted, List *required)
{
    ListCell *gcell;
    ListCell *rcell;

    /* Sort both lists to simplify comparison */
    list_sort(granted, list_string_cmp);
    list_sort(required, list_string_cmp);

    gcell = list_head(granted);
    rcell = list_head(required);

    while (rcell != NULL &amp;amp;&amp;amp; gcell != NULL)
    {
        char *r = (char *) lfirst(rcell);
        char *g = (char *) lfirst(gcell);
        int cmp = strcmp(r, g);

        if (cmp == 0)
        {
            /* Match found --- move to the next required element */
            rcell = lnext(required, rcell);
            gcell = lnext(granted, gcell);
        }
        else if (cmp &amp;gt; 0)
        {
            /* granted is behind --- move to the next granted element */
            gcell = lnext(granted, gcell);
        }
        else
        {
            /* required element not found in granted --- return false */
            return false;
        }
    }

    /* Not all required scopes were found --- verification fails */
    if (rcell != NULL)
        return false;

    return true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Makefile&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# contrib/oauth_validator/Makefile

PGFILEDESC = "oauth_validator - OAuth validator"
MODULE_big = oauth_validator

OBJS =\
    $(WIN32RES)\
    oauth_validator.o\
    token_utils.o

PG_CPPFLAGS += -I$(top_srcdir)/src/common
PG_CPPFLAGS += -I$(libpq_srcdir)

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Callbacks
&lt;/h3&gt;

&lt;h4&gt;
  
  
  startup_cb callback
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;startup_cb&lt;/code&gt; callback is executed immediately after the module is loaded. It can be used to configure the local state and perform additional initialization if required. If the validator has a state, it can use the &lt;code&gt;state-&amp;gt;private_data&lt;/code&gt; field to store it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);

ValidatorStartupCB startup_cb;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  validate_cb callback
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;validate_cb&lt;/code&gt; callback is executed when the user tries to pass authorization using OAuth. Any state set in previous calls will be available in &lt;code&gt;state-&amp;gt;private_data&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
                                    const char *token, const char *role,
                                    ValidatorModuleResult *result);

ValidatorValidateCB validate_cb;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;token&lt;/code&gt; argument contains the bearer token to be verified. PostgreSQL has ensured that the token is syntactically well-formed, but no other checks have been performed. The &lt;code&gt;role&lt;/code&gt; parameter contains the role the user requested to log in as. The callback must set the output fields in the result structure, which is defined as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef struct ValidatorModuleResult
{
   bool authorized;
   char *authn_id;
} ValidatorModuleResult;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The connection will be established only if the validator sets &lt;code&gt;result-&amp;gt;authorized&lt;/code&gt; to true. To report the authenticated user identity determined from the token, memory must be allocated with &lt;code&gt;palloc&lt;/code&gt; and a pointer to it assigned to the &lt;code&gt;result-&amp;gt;authn_id&lt;/code&gt; field. Alternatively, &lt;code&gt;result-&amp;gt;authn_id&lt;/code&gt; can be set to &lt;code&gt;NULL&lt;/code&gt; if the token is valid but the associated user identity cannot be determined.&lt;/p&gt;

&lt;p&gt;If token verification fails (because of an invalid token format, missing user rights, or another error), the validator must return false; any fields set in the &lt;code&gt;result&lt;/code&gt; argument are then ignored and the connection is aborted. On successful verification, the validator must return true.&lt;/p&gt;

&lt;p&gt;The behavior after returning from &lt;code&gt;validate_cb&lt;/code&gt; depends on the HBA configuration. Normally, the user name in &lt;code&gt;result-&amp;gt;authn_id&lt;/code&gt; must exactly match the role the user is logging in as (this behavior can be changed with a user name map). But when authenticating under an HBA rule with &lt;code&gt;delegate_ident_mapping&lt;/code&gt; enabled, PostgreSQL performs no checks on the &lt;code&gt;result-&amp;gt;authn_id&lt;/code&gt; value at all; in that case, the validator itself must ensure that the token grants sufficient privileges for the user to log in under the specified role.&lt;/p&gt;

&lt;h4&gt;
  
  
  shutdown_cb callback
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;shutdown_cb&lt;/code&gt; callback is executed when the server process associated with the connection shuts down. If the validator holds any allocated state, this callback should release it to avoid resource leaks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);

ValidatorShutdownCB shutdown_cb;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Authorization process
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Logging
&lt;/h3&gt;

&lt;p&gt;Let's start by configuring logging. To see what requests PostgreSQL sends and what responses it receives, connection logging can be enabled in &lt;code&gt;postgresql.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;log_connections = on

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example logs are shown below; the prefixes at the beginning of the lines mean the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the prefix "&amp;gt;" means a Keycloak request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the prefix "&amp;lt;" means the Keycloak response.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  discovery
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[libcurl] *   Trying 192.168.0.156:8080...
[libcurl] * Connected to 192.168.0.156 (192.168.0.156) port 8080 (#0)
[libcurl] &amp;gt; GET /realms/postgres-realm/.well-known/openid-configuration HTTP/1.1
[libcurl] &amp;gt; Host: 192.168.0.156:8080
[libcurl] &amp;gt;
[libcurl] &amp;lt; HTTP/1.1 200 OK
[libcurl] &amp;lt; content-length: 6638
[libcurl] &amp;lt; Cache-Control: no-cache, must-revalidate, no-transform, no-store
[libcurl] &amp;lt; Content-Type: application/json;charset=UTF-8
[libcurl] &amp;lt; Referrer-Policy: no-referrer
[libcurl] &amp;lt; Strict-Transport-Security: max-age=31536000; includeSubDomains
[libcurl] &amp;lt; X-Content-Type-Options: nosniff
[libcurl] &amp;lt; X-Frame-Options: SAMEORIGIN
[libcurl] &amp;lt;
[libcurl] &amp;lt; {"issuer":"http://192.168.0.156:8080/realms/postgres-realm","authorization_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/auth","token_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/token","introspection_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/token/introspect","userinfo_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/userinfo","end_session_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/logout","frontchannel_logout_session_supported":true,"frontchannel_logout_supported":true,"jwks_uri":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/certs","check_session_iframe":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/login-status-iframe.html","grant\_types\_supported":["authorization\_code","client\_credentials","implicit","password","refresh\_token","urn:ietf:params:oauth:grant-type:device\_code","urn:ietf:params:oauth:grant-type:token-exchange","urn:ietf:params:oauth:grant-type:uma-ticket","urn:openid:params:grant-type:ciba"],"acr\_values\_supported":["0","1"],"response\_types\_supported":["code","none","id\_token","token","id\_token token","code id_token","code token","code id_token 
token"],"subject_types_supported":["public","pairwise"],"prompt_values_supported":["none","login","consent"],"id_token_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"id_token_encryption_alg_values_supported":["ECDH-ES+A256KW","ECDH-ES+A192KW","ECDH-ES+A128KW","RSA-OAEP","RSA-OAEP-256","RSA1_5","ECDH-ES"],"id_token_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"userinfo_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512","none"],"userinfo_encryption_alg_values_supported":["ECDH-ES+A256KW","ECDH-ES+A192KW","ECDH-ES+A128KW","RSA-OAEP","RSA-OAEP-256","RSA1_5","ECDH-ES"],"userinfo_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"request_object_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512","none"],"request_object_encryption_alg_values_supported":["ECDH-ES+A256KW","ECDH-ES+A192KW","ECDH-ES+A128KW","RSA-OAEP","RSA-OAEP-256","RSA1_5","ECDH-ES"],"request_object_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"response_modes_supported":["query","fragment","form_post","query.jwt","fragment.jwt","form_post.jwt","jwt"],"registration_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/clients-registrations/openid-connect","token_endpoint_auth_methods_supported":["private_key_jwt","client_secret_basic","client_secret_post","tls_client_auth","client_secret_jwt"],"token_endpoint_auth_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"introspection_endpoint_auth_methods_supported":["private_key_jwt","client_secret_basic","client_secret_post","
tls_client_auth","client_secret_jwt"],"introspection_endpoint_auth_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"authorization_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"authorization_encryption_alg_values_supported":["ECDH-ES+A256KW","ECDH-ES+A192KW","ECDH-ES+A128KW","RSA-OAEP","RSA-OAEP-256","RSA1_5","ECDH-ES"],"authorization_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"claims_supported":["aud","sub","iss","auth_time","name","given_name","family_name","preferred_username","email","acr"],"claim_types_supported":["normal"],"claims_parameter_supported":true,"scopes_supported":["openid","offline_access","organization","service_account","postgres","address","phone","acr","profile","microprofile-jwt","web-origins","roles","basic","email"],"request_parameter_supported":true,"request_uri_parameter_supported":true,"require_request_uri_registration":true,"code_challenge_methods_supported":["plain","S256"],"tls_client_certificate_bound_access_tokens":true,"revocation_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/revoke","revocation_endpoint_auth_methods_supported":["private_key_jwt","client_secret_basic","client_secret_post","tls_client_auth","client_secret_jwt"],"revocation_endpoint_auth_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"backchannel_logout_supported":true,"backchannel_logout_session_supported":true,"device_authorization_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/auth/device","backchannel_token_delivery_modes_supported":["poll","ping"],"backchannel_authentication_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/ext/
ciba/auth","backchannel_authentication_request_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","ES256","RS256","ES512","PS256","PS512","RS512"],"require_pushed_authorization_requests":false,"pushed_authorization_request_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/ext/par/request","mtls_endpoint_aliases":{"token_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/token","revocation_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/revoke","introspection_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/token/introspect","device_authorization_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/auth/device","registration_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/clients-registrations/openid-connect","userinfo_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/userinfo","pushed_authorization_request_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/ext/par/request","backchannel_authentication_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/ext/ciba/auth"},"authorization_response_iss_parameter_supported":true}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  auth device
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[libcurl] * Connection #0 to host 192.168.0.156 left intact
[libcurl] * Found bundle for host: 0x6124961fd400 [serially]
[libcurl] * Can not multiplex, even if we wanted to
[libcurl] * Re-using existing connection #0 with host 192.168.0.156
[libcurl] * Server auth using Basic with user 'postgres-client'
[libcurl] &amp;gt; POST /realms/postgres-realm/protocol/openid-connect/auth/device HTTP/1.1
[libcurl] &amp;gt; Host: 192.168.0.156:8080
[libcurl] &amp;gt; Authorization: Basic cG9zdGdyZXMtY2xpZW50OmZTY1hYcDFUcFNQY3BVaEZLcWk0eDk4alZ5NTR1Y1RC
[libcurl] &amp;gt; Content-Length: 47
[libcurl] &amp;gt; Content-Type: application/x-www-form-urlencoded
[libcurl] &amp;gt;
[libcurl] &amp;gt; scope=openid+postgres&amp;amp;client_id=postgres-client
[libcurl] &amp;lt; HTTP/1.1 200 OK
[libcurl] &amp;lt; Cache-Control: no-store, must-revalidate, max-age=0
[libcurl] &amp;lt; content-length: 296
[libcurl] &amp;lt; Content-Type: application/json
[libcurl] &amp;lt; Referrer-Policy: no-referrer
[libcurl] &amp;lt; Strict-Transport-Security: max-age=31536000; includeSubDomains
[libcurl] &amp;lt; X-Content-Type-Options: nosniff
[libcurl] &amp;lt; X-Frame-Options: SAMEORIGIN
[libcurl] &amp;lt;
[libcurl] &amp;lt; {"device_code":"tgElqUogevqjEkZy6-Z1i209pgoKW9_CT0t9wwNjafY","user_code":"WXAI-ZNVY","verification_uri":"http://192.168.0.156:8080/realms/postgres-realm/device","verification_uri_complete":"http://192.168.0.156:8080/realms/postgres-realm/device?user\_code=WXAI-ZNVY","expires\_in":600,"interval":5}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  token
&lt;/h4&gt;

&lt;p&gt;The client polls the token endpoint while waiting for the user to approve the authorization request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[libcurl] * Connection #0 to host 192.168.0.156 left intact
[libcurl] * Found bundle for host: 0x6124961fd400 [serially]
[libcurl] * Can not multiplex, even if we wanted to
[libcurl] * Re-using existing connection #0 with host 192.168.0.156
[libcurl] * Server auth using Basic with user 'postgres-client'
[libcurl] &amp;gt; POST /realms/postgres-realm/protocol/openid-connect/token HTTP/1.1
[libcurl] &amp;gt; Host: 192.168.0.156:8080
[libcurl] &amp;gt; Authorization: Basic cG9zdGdyZXMtY2xpZW50OmZTY1hYcDFUcFNQY3BVaEZLcWk0eDk4alZ5NTR1Y1RC
[libcurl] &amp;gt; Content-Length: 147
[libcurl] &amp;gt; Content-Type: application/x-www-form-urlencoded
[libcurl] &amp;gt;
[libcurl] &amp;gt; device_code=tgElqUogevqjEkZy6-Z1i209pgoKW9_CT0t9wwNjafY&amp;amp;grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code&amp;amp;client_id=postgres-client
[libcurl] &amp;lt; HTTP/1.1 400 Bad Request
[libcurl] &amp;lt; Cache-Control: no-store
[libcurl] &amp;lt; Pragma: no-cache
[libcurl] &amp;lt; content-length: 98
[libcurl] &amp;lt; Content-Type: application/json
[libcurl] &amp;lt; Referrer-Policy: no-referrer
[libcurl] &amp;lt; Strict-Transport-Security: max-age=31536000; includeSubDomains
[libcurl] &amp;lt; X-Content-Type-Options: nosniff
[libcurl] &amp;lt; X-Frame-Options: SAMEORIGIN
[libcurl] &amp;lt;
[libcurl] &amp;lt; {"error":"authorization_pending","error_description":"The authorization request is still pending"}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the user approves the request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[libcurl] * Connection #0 to host 192.168.0.156 left intact
[libcurl] * Found bundle for host: 0x6124961fd400 [serially]
[libcurl] * Can not multiplex, even if we wanted to
[libcurl] * Re-using existing connection #0 with host 192.168.0.156
[libcurl] * Server auth using Basic with user 'postgres-client'
[libcurl] &amp;gt; POST /realms/postgres-realm/protocol/openid-connect/token HTTP/1.1
[libcurl] &amp;gt; Host: 192.168.0.156:8080
[libcurl] &amp;gt; Authorization: Basic cG9zdGdyZXMtY2xpZW50OmZTY1hYcDFUcFNQY3BVaEZLcWk0eDk4alZ5NTR1Y1RC
[libcurl] &amp;gt; Content-Length: 147
[libcurl] &amp;gt; Content-Type: application/x-www-form-urlencoded
[libcurl] &amp;gt;
[libcurl] &amp;gt; device_code=tgElqUogevqjEkZy6-Z1i209pgoKW9_CT0t9wwNjafY&amp;amp;grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code&amp;amp;client_id=postgres-client
[libcurl] &amp;lt; HTTP/1.1 200 OK
[libcurl] &amp;lt; Cache-Control: no-store
[libcurl] &amp;lt; Pragma: no-cache
[libcurl] &amp;lt; content-length: 3307
[libcurl] &amp;lt; Content-Type: application/json
[libcurl] &amp;lt; Referrer-Policy: no-referrer
[libcurl] &amp;lt; Strict-Transport-Security: max-age=31536000; includeSubDomains
[libcurl] &amp;lt; X-Content-Type-Options: nosniff
[libcurl] &amp;lt; X-Frame-Options: SAMEORIGIN
[libcurl] &amp;lt;
[libcurl] &amp;lt; {"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIwb1RQSV85LXVIQWJ5Q0t3M2luTm5MTW5hU21hc05nWS1OaDVkTzJ4X0lNIn0.eyJleHAiOjE3NDYwMzEyNTMsImlhdCI6MTc0NjAzMDk1MywiYXV0aF90aW1lIjoxNzQ2MDMwOTUyLCJqdGkiOiJvbnJ0ZGc6MWU3MmRlNGUtNTRhYi00OTZjLWIxYWYtY2FhYmRiYzJlYzExIiwiaXNzIjoiaHR0cDovLzE5Mi4xNjguMC4xNTY6ODA4MC9yZWFsbXMvcG9zdGdyZXMtcmVhbG0iLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiMGZjNzJiNmYtNjIyMS00ZWQ4LWE5MTYtMDY5ZTdhMDgxZDE0IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoicG9zdGdyZXMtY2xpZW50Iiwic2lkIjoiNjRlMDUzMTMtM2UyMi00MmNjLWE0YmItOTAxODU2YTFhYjMzIiwiYWxsb3dlZC1vcmlnaW5zIjpbIi8qIl0sInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsImRlZmF1bHQtcm9sZXMtcG9zdGdyZXMtcmVhbG0iLCJ1bWFfYXV0aG9yaXphdGlvbiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoib3BlbmlkIHByb2ZpbGUgcG9zdGdyZXMiLCJuYW1lIjoiYWxpY2UgcG9zdGdyZXMiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhbGljZSIsImdpdmVuX25hbWUiOiJhbGljZSIsImZhbWlseV9uYW1lIjoicG9zdGdyZXMifQ.RXMszI-snIdrXyyTw74U8QXQeDG3zpfV4OvxYuJQvsb86eauXkKHAH35GfEm3XvQbtmpdSdfs1S4i11d69dUjpVTgPpzx6G7IXCXj2NTowzZuyuvdnLxPi1aXdxXqOKNSLSj5PXhGIaZhWsn2sR8dAJ0jjWTUO_lh8qJuJYaDcFulWn_flHVGQYzMZ5PTneRadg8h_1dWp4HSr6yC74NmF94dnOBmytivM4a__Wcq6TkZ3KLn_gafqnn72HpWY0WRwyZdQuzc5o8mE3UUAoKukxMnwDG7Yhxif2YFb_a5aCloMbL9aDghbMypahl3MiJHHx3j50FavSRm0FJa3zK9w","expires_in":300,"refresh_expires_in":1800,"refresh_token":"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJjOGNiNjY3Ni00OTAxLTRmNjItOTI0OS1kMzY2MWI5Mjg3OTIifQ.eyJleHAiOjE3NDYwMzI3NTMsImlhdCI6MTc0NjAzMDk1MywianRpIjoiZTJkNzkzODUtNjBhZS00MTIwLWIwODAtNjVmYWU4ZmNhYzIzIiwiaXNzIjoiaHR0cDovLzE5Mi4xNjguMC4xNTY6ODA4MC9yZWFsbXMvcG9zdGdyZXMtcmVhbG0iLCJhdWQiOiJodHRwOi8vMTkyLjE2OC4wLjE1Njo4MDgwL3JlYWxtcy9wb3N0Z3Jlcy1yZWFsbSIsInN1YiI6IjBmYzcyYjZmLTYyMjEtNGVkOC1hOTE2LTA2OWU3YTA4MWQxNCIsInR5cCI6IlJlZnJlc2giLCJhenAiOiJwb3N0Z3Jlcy1jbGllbnQiLCJzaWQiOiI2NGUwNTMxMy0zZTIyLTQyY2MtYTRiYi05MDE4NTZhMWFiMzMiLCJzY29wZSI6Im9wZW5pZCBwcm9maWxlIH
dlYi1vcmlnaW5zIHBvc3RncmVzIHJvbGVzIGJhc2ljIn0.43pRSq4PBO7ZY86jt8dIL7xZJylntY_CZXllcRfwfh41IRCOft6iqIWdJQp7TJv_JIDI-_-QeOSf1EC_wzABNg","token_type":"Bearer","id_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIwb1RQSV85LXVIQWJ5Q0t3M2luTm5MTW5hU21hc05nWS1OaDVkTzJ4X0lNIn0.eyJleHAiOjE3NDYwMzEyNTMsImlhdCI6MTc0NjAzMDk1MywiYXV0aF90aW1lIjoxNzQ2MDMwOTUyLCJqdGkiOiIzNWM3ZDAyZC05OGFjLTQzMTgtOTg3NC0zYzY0Mjg5NjFhMjgiLCJpc3MiOiJodHRwOi8vMTkyLjE2OC4wLjE1Njo4MDgwL3JlYWxtcy9wb3N0Z3Jlcy1yZWFsbSIsImF1ZCI6InBvc3RncmVzLWNsaWVudCIsInN1YiI6IjBmYzcyYjZmLTYyMjEtNGVkOC1hOTE2LTA2OWU3YTA4MWQxNCIsInR5cCI6IklEIiwiYXpwIjoicG9zdGdyZXMtY2xpZW50Iiwic2lkIjoiNjRlMDUzMTMtM2UyMi00MmNjLWE0YmItOTAxODU2YTFhYjMzIiwiYXRfaGFzaCI6ImphOXRKZ1E0VkVPTTNBZGc0VWJBVGciLCJuYW1lIjoiYWxpY2UgcG9zdGdyZXMiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhbGljZSIsImdpdmVuX25hbWUiOiJhbGljZSIsImZhbWlseV9uYW1lIjoicG9zdGdyZXMifQ.dH5hM21-ygBiCXoA9NVOou-L3esUVJUFFUmt_1fU0jc9al4Lk7EUN-cqHicHZPD48HhkIWMJK7WZjxnZLDlkG7ORGdDKPGccMU4-sGsRuVu-GDuNFo_5kHAh_NJZsLBXz9UkpNBkd8ROxK3-fbmyYdwsuwNeg6KNhSOj0FxEnxLc0-HrjEE92P7hzq0PD29oY2jhRKcqpbtknMwxFkkMBi8xPgdpuyTmLtJD3-xxYuwMKP7WUGzwzVAFqfrhFm5O5dJxeld5fTFE4Kyl9fR24JcjtfxBeIHVJqLiQkl9Et_KNGiFoXDG4Xwcc7eIUaBnauhY5_froYvKS8NbQxCOUg","not-before-policy":0,"session_state":"64e05313-3e22-42cc-a4bb-901856a1ab33","scope":"openid profile postgres"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Authorization via psql
&lt;/h3&gt;

&lt;p&gt;For testing purposes, we will allow authorization over the unsecured http protocol (when using https, you will need to configure certificate chains):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export PGOAUTHDEBUG="UNSAFE"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's launch psql and set the connection details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql "user=alice dbname=postgres oauth_issuer=http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration oauth_client_id=postgres-client oauth_client_secret=YYi8LqfzHRMnqUUlptpWVC2k7eWNrqjX"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will see a URL and a code that we need to enter after opening this URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Visit http://192.168.0.156:8080/realms/postgres-realm/device and enter the code: LJDW-RURX

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We enter it in the browser at &lt;a href="http://192.168.0.156:8080/realms/postgres-realm/device" rel="noopener noreferrer"&gt;http://192.168.0.156:8080/realms/postgres-realm/device&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu1iqyvlvznqoslaxct4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu1iqyvlvznqoslaxct4.png" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the code and click Submit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy5i8nq7yvz7zh9wyp8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy5i8nq7yvz7zh9wyp8n.png" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we see the user login window:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpjnzu8q7201zcpxj803.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpjnzu8q7201zcpxj803.png" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the username and password (see "Creating users") and click Sign In.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbrq4yhhf9kt48fdu31f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbrq4yhhf9kt48fdu31f.png" width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, Keycloak asks whether we are ready to share this information with PostgreSQL. Click Yes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc3rwaz7l6rf2oqoet6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc3rwaz7l6rf2oqoet6u.png" width="800" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Successfully logged in...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaz6p3u3z8ctqdo6g0cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaz6p3u3z8ctqdo6g0cd.png" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;... and we can enter commands in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql (18devel) Type "help" for help.
postgres=&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a result, we successfully logged in to PostgreSQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The OAuth 2.0 Device Authorization Flow, introduced in PostgreSQL 18 and Tantor Postgres 17.5.0, enables an SSO (single sign-on) mechanism. Centralized access control via providers such as Keycloak raises the level of protection by eliminating the risk of password interception and simplifies administration in distributed environments. The article provides a step-by-step guide, from setting up Keycloak and configuring PostgreSQL to implementing the token validator and successfully authorizing via psql.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffonu2a3ss4bs9iftjgqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffonu2a3ss4bs9iftjgqv.png" width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nr64mfrrkuqt5xlav84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nr64mfrrkuqt5xlav84.png" width="800" height="567"&gt;&lt;/a&gt;You will see the &lt;code&gt;Manage realms&lt;/code&gt; page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53c3tzzvhkieatl5v9p9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53c3tzzvhkieatl5v9p9.png" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click on &lt;code&gt;Create realm&lt;/code&gt;, and in the dialog box that appears, enter "postgres-realm" in the &lt;code&gt;Realm name&lt;/code&gt; field.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd1km0d83cllnt0celnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd1km0d83cllnt0celnr.png" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After clicking on &lt;code&gt;Create&lt;/code&gt;, our realm will be created and will become the current one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypi69s3zjbqn0f0jqxcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypi69s3zjbqn0f0jqxcb.png" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Users
&lt;/h3&gt;

&lt;p&gt;Users are entities that can log in to the system. They can have associated attributes such as email, username, address, phone number, and birthday. To open the User creation window, click on the Users tab in the left panel:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab3clvpcyi5ic6e5pbag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab3clvpcyi5ic6e5pbag.png" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click &lt;code&gt;Create new user&lt;/code&gt;; a window for entering the new user's data will appear. In the &lt;code&gt;Username&lt;/code&gt; field, enter the name "alice" and fill in the &lt;code&gt;Email&lt;/code&gt;, &lt;code&gt;First name&lt;/code&gt;, and &lt;code&gt;Last name&lt;/code&gt; fields.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd75ixpzchutf4msifi3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd75ixpzchutf4msifi3p.png" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;code&gt;Create&lt;/code&gt; button, and a new user window will appear. The database administrator will need this user ID to map the Keycloak user to the PostgreSQL role in the &lt;code&gt;pg_ident.conf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1ui98oi8u7ilt3br4gz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1ui98oi8u7ilt3br4gz.png" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Switch to the &lt;code&gt;Credentials&lt;/code&gt; tab and click on &lt;code&gt;Set password&lt;/code&gt; to set the password:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgtb7g19rh0v0fa5n5ru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgtb7g19rh0v0fa5n5ru.png" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the password "alice". The &lt;code&gt;Temporary&lt;/code&gt; switch (temporary password) is set to the off position; otherwise, at the first login, the system will require the user to set a new password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0njv6dcn8njs2uenrhms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0njv6dcn8njs2uenrhms.png" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;Save&lt;/code&gt;, then in the dialog box that appears click &lt;code&gt;Save password&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9mejketds7uyfqxq3mk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9mejketds7uyfqxq3mk.png" width="788" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The password is set:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz7zrnl5ej0kbechxrsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz7zrnl5ej0kbechxrsy.png" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Client scopes
&lt;/h3&gt;

&lt;p&gt;The Client scope is a way to limit the access rights that are declared in tokens. It allows the client to request only the permissions it needs, which makes tokens more secure and manageable.&lt;/p&gt;

&lt;p&gt;Switch to the &lt;code&gt;Client scopes&lt;/code&gt; tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foee11203h4gx5qc70b4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foee11203h4gx5qc70b4x.png" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;code&gt;Create client scope&lt;/code&gt; button; a window for creating a scope will open. Enter "postgres" in the &lt;code&gt;Name&lt;/code&gt; field, select the &lt;code&gt;Default&lt;/code&gt; type, activate &lt;code&gt;Include in token scope&lt;/code&gt;, and save by clicking &lt;code&gt;Save&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzkczj0jwyq5pka1twfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzkczj0jwyq5pka1twfk.png" width="800" height="765"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Client
&lt;/h3&gt;

&lt;p&gt;Clients are applications and services that can request user authorization. To create a client, go to the &lt;code&gt;Clients&lt;/code&gt; tab and click &lt;code&gt;Create client&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny28j9d0iabeeirz3kov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny28j9d0iabeeirz3kov.png" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;General settings&lt;/code&gt; window that appears, enter "postgres-client" in the &lt;code&gt;Client ID&lt;/code&gt; field, then click &lt;code&gt;Next&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym6mxoazb2b3twmidyix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym6mxoazb2b3twmidyix.png" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;Capability config&lt;/code&gt; window that appears:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Turn on &lt;code&gt;Client authentication&lt;/code&gt; (On position);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disable &lt;code&gt;Standard flow&lt;/code&gt;, since we are not using the Authorization Code Flow;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable the &lt;code&gt;OAuth 2.0 Device Authorization Grant&lt;/code&gt;;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;code&gt;Next&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazmukmolcj4hf39cncn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazmukmolcj4hf39cncn6.png" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We don't change anything in the &lt;code&gt;Login settings&lt;/code&gt; window, just click Save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkees28nn5l54pokn4qac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkees28nn5l54pokn4qac.png" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The client we've created opens:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhndznzwbk61wx2ov7lod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhndznzwbk61wx2ov7lod.png" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, go to the &lt;code&gt;Client scopes&lt;/code&gt; tab and check that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;"postgres" is present (in the picture at the very bottom) and the &lt;code&gt;Default&lt;/code&gt; type is set for it;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;"basic" also has the &lt;code&gt;Default&lt;/code&gt; type.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The remaining scopes are not important for our example; you can leave everything as it is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkxpralp1bdp8x3dnonz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkxpralp1bdp8x3dnonz.png" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Credentials&lt;/code&gt; tab contains the &lt;code&gt;Client Secret&lt;/code&gt;, which we will need to enter in the terminal when logging into PostgreSQL (see the "Authorization via psql" section).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn59ft28zczmkelih2ol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn59ft28zczmkelih2ol.png" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring PostgreSQL by a database administrator
&lt;/h2&gt;

&lt;p&gt;For OAuth to work, PostgreSQL must be configured appropriately:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a user (role) in PostgreSQL;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the parameters in the &lt;code&gt;postgresql.conf&lt;/code&gt; file;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the parameters in the &lt;code&gt;pg_ident.conf&lt;/code&gt; file if users are mapped through it between Keycloak and PostgreSQL. If the mapping is performed in a validator written by the developer, this file does not need to be configured;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the parameters in the &lt;code&gt;pg_hba.conf&lt;/code&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating Roles
&lt;/h3&gt;

&lt;p&gt;A role is an entity that can own objects and have certain rights in the database. A role can represent a user, a group, or both, depending on the use case.&lt;/p&gt;

&lt;p&gt;Create a role and give it the right to connect to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE ROLE alice;
ALTER ROLE alice WITH LOGIN;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
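&lt;p&gt;The two statements above can also be written as a single statement, since the LOGIN attribute can be granted at creation time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE ROLE alice WITH LOGIN;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;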



&lt;h3&gt;
  
  
  Configuring the postgresql.conf file
&lt;/h3&gt;

&lt;p&gt;In the &lt;code&gt;oauth_validator_libraries&lt;/code&gt; parameter we will set the name of the validator that will verify the token (see the "Writing a validator by the developer" section).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oauth_validator_libraries = 'oauth_validator'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If only one validator library is specified, it is used by default for all OAuth connections; otherwise, every oauth entry in &lt;code&gt;pg_hba.conf&lt;/code&gt; must explicitly select a validator from this list. If the parameter is set to an empty string (the default), OAuth connections are rejected.&lt;/p&gt;
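<p>For example, with several libraries configured, each oauth entry in &lt;code&gt;pg_hba.conf&lt;/code&gt; would select one explicitly (a sketch only; check the exact option names against the PostgreSQL documentation):<br>
</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# postgresql.conf
oauth_validator_libraries = 'oauth_validator, extra_validator'

# pg_hba.conf: the validator option picks one of the configured libraries
local   all   all   oauth   issuer="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" scope="openid postgres" validator="oauth_validator" map="oauthmap"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;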

&lt;h3&gt;
  
  
  User mapping between Keycloak and PostgreSQL
&lt;/h3&gt;

&lt;p&gt;There are two ways to map users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;via the &lt;code&gt;pg_ident.conf&lt;/code&gt; file - this is configured by the database administrator;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;using a validator -- the developer adds this mapping to the validator.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Mapping users via the pg_ident.conf file
&lt;/h4&gt;

&lt;p&gt;Configure the mapping between Keycloak user IDs and database roles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# MAPNAME    SYSTEM-USERNAME                           PG-USERNAME
oauthmap    "0fc72b6f-6221-4ed8-a916-069e7a081d14"     "alice"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The name of the mapping is indicated in the first column, the user ID from Keycloak (see "Creating Users") in the second, and the name of the PostgreSQL role in the third.&lt;/p&gt;

&lt;h4&gt;
  
  
  Mapping users through a validator
&lt;/h4&gt;

&lt;p&gt;It is described below in the "Writing a validator by the developer" section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring the pg_hba.conf file
&lt;/h3&gt;

&lt;p&gt;Setting up the client's login to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TYPE  DATABASE        USER            ADDRESS                 METHOD local   all             all             oauth issuer="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" scope="openid postgres" map="oauthmap"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the METHOD field, set oauth followed by its parameters. In the &lt;code&gt;issuer&lt;/code&gt; parameter, set the URL of the discovery service: &lt;a href="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" rel="noopener noreferrer"&gt;http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration&lt;/a&gt;. In the &lt;code&gt;scope&lt;/code&gt; parameter, set the scopes that will be requested from Keycloak for the client.&lt;/p&gt;

&lt;p&gt;Next, you need to set the algorithm for mapping users between Keycloak and PostgreSQL (see "User mapping between Keycloak and PostgreSQL").&lt;/p&gt;

&lt;h4&gt;
  
  
  Mapping users via pg_ident.conf
&lt;/h4&gt;

&lt;p&gt;We add the &lt;code&gt;map&lt;/code&gt; parameter, setting it to the map name from the &lt;code&gt;pg_ident.conf&lt;/code&gt; file; in our case it is "oauthmap":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TYPE  DATABASE        USER            ADDRESS                 METHOD local   all             all             oauth
issuer="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" scope="openid postgres" map="oauthmap"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Mapping users through a validator
&lt;/h4&gt;

&lt;p&gt;In this case, the &lt;code&gt;delegate_ident_mapping=1&lt;/code&gt; parameter should be set instead of the map parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TYPE  DATABASE        USER            ADDRESS                 METHOD local   all             all             oauth
issuer="http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration" scope="openid postgres" delegate_ident_mapping=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;delegate_ident_mapping&lt;/code&gt; parameter has a higher priority than &lt;code&gt;map&lt;/code&gt;, so if the &lt;code&gt;map&lt;/code&gt; parameter is also specified with &lt;code&gt;delegate_ident_mapping=1&lt;/code&gt;, it will be ignored and user mapping will go through the validator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing a validator by the developer
&lt;/h2&gt;

&lt;p&gt;OAuth authentication modules implement their functionality by defining a set of callbacks. The server will call them as needed to process the authorization request from the user.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation of the token validator
&lt;/h3&gt;

&lt;p&gt;When mapping users between Keycloak and PostgreSQL, the implementation via &lt;code&gt;pg_ident.conf&lt;/code&gt; differs from mapping via the validator only in the implementation of the get_user function. In the example below, the mapping is implemented via &lt;code&gt;pg_ident.conf&lt;/code&gt;. Token verification consists of checking that the required scope specified in &lt;code&gt;pg_hba.conf&lt;/code&gt; is present in the &lt;code&gt;scope&lt;/code&gt; field of the token received from the server. Upon successful verification, the user ID from the token's &lt;code&gt;sub&lt;/code&gt; field is assigned to &lt;code&gt;res-&amp;gt;authn_id&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The main verification logic is implemented in the &lt;code&gt;validate_token&lt;/code&gt; function. The general scheme of its work:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Token content analysis.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The source string of the token is analyzed to extract its contents (payload). If the token has an incorrect format or the contents cannot be extracted, the verification fails.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt; Extracting the &lt;code&gt;sub&lt;/code&gt; and &lt;code&gt;scope&lt;/code&gt; fields from the JWT token. The token content must include both fields:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sub&lt;/code&gt; (Subject) -- the user ID;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;scope&lt;/code&gt; -- a space-separated list of permissions (scopes) granted by the token.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If any of these fields are missing, the verification is considered failed.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Assigning the identifier. The &lt;code&gt;sub&lt;/code&gt; value is assigned to the &lt;code&gt;res-&amp;gt;authn_id&lt;/code&gt; field, which PostgreSQL uses to identify the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Comparison of permissions (scopes). The permissions granted by the token are compared with those required by the corresponding entry in &lt;code&gt;pg_hba.conf&lt;/code&gt; (from &lt;code&gt;oauth_scope&lt;/code&gt;). If all required permissions are present in the token, the verification is considered successful.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Setting the authorization result. The &lt;code&gt;res-&amp;gt;authorized&lt;/code&gt; flag is set to true if the permissions match, otherwise false.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identification mapping. The &lt;code&gt;sub&lt;/code&gt; value is then mapped (outside of this module) to the entries in &lt;code&gt;pg_ident.conf&lt;/code&gt; to determine the actual role in the database that the user can use.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's start writing our training validator (its sources are also available on &lt;a href="https://github.com/TantorLabs/oauth_validator" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;). First, create the oauth_validator folder, and in it the &lt;strong&gt;oauth_validator.c&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;string.h&amp;gt;

#include "postgres.h"

#include "token_utils.h"

#include "fmgr.h"
#include "libpq/oauth.h"
#include "miscadmin.h"
#include "nodes/pg_list.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;

/*
 * Declarations of internal module functions.
 */
static void validator_startup(ValidatorModuleState *state);
static void validator_shutdown(ValidatorModuleState *state);
static bool validate_token(const ValidatorModuleState *state,
                           const char *token,
                           const char *role,
                           ValidatorModuleResult *result);

/*
 * Structure with pointers to OAuth token validator callback functions.
 * PostgreSQL calls these functions during certain phases of the module's lifecycle.
 */
static const OAuthValidatorCallbacks validator_callbacks = {
    PG_OAUTH_VALIDATOR_MAGIC, /* Magic number for API version check */

    .startup_cb = validator_startup,   /* Validator initialization function */
    .shutdown_cb = validator_shutdown, /* Validator shutdown function */
    .validate_cb = validate_token      /* Token validation function */
};

/*
 * Entry point for the OAuth validator module.
 * PostgreSQL calls this function when loading the module.
 */
const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
    return &amp;amp;validator_callbacks;
}

/*
 * Validator initialization function.
 * Called once when the module is loaded.
 */
static void
validator_startup(ValidatorModuleState *state)
{
    /*
     * Check if the server version matches the one the module was built with.
     * (Real production modules shouldn't do this, as it breaks upgrade compatibility.)
     */
    if (state-&amp;gt;sversion != PG_VERSION_NUM)
        elog(ERROR, "oauth_validator: server version mismatch: sversion=%d", state-&amp;gt;sversion);
}

/*
 * Validator shutdown function.
 * Called when the module is unloaded or the server shuts down.
 */
static void
validator_shutdown(ValidatorModuleState *state)
{
    /* Nothing to do for now, but resource cleanup could be added here if necessary. */
}

/*
 * Main OAuth token validation function.
 *
 * Parameters:
 * - state: validator module state (may contain configuration etc.);
 * - token: string containing the token to validate;
 * - role: PostgreSQL role the client is trying to connect as;
 * - res: structure to store the validation result.
 *
 * Returns true if the token is valid, false otherwise.
 */
static bool
validate_token(const ValidatorModuleState *state,
               const char *token, const char *role,
               ValidatorModuleResult *res)
{
    char *sub = NULL;               /* Value of the "sub" field from the token (user identifier) */
    char *scope = NULL;             /* Value of the "scope" field from the token (allowed scopes) */
    const char *token_payload = NULL; /* Token payload as JSON string */
    List *granted_scopes = NIL;     /* List of scopes granted by the token */
    List *required_scopes = NIL;    /* List of required scopes from HBA configuration */
    bool matched = false;           /* Flag indicating whether required scopes are satisfied */

    /* Initialize result */
    res-&amp;gt;authn_id = NULL;     /* Authentication ID (sub) */
    res-&amp;gt;authorized = false;  /* Authorization flag */

    /* Extract payload from the token */
    token_payload = parse_token_payload(token);
    if (token_payload == NULL)
    {
        elog(LOG, "Invalid token: missing payload: %s", token);
        return false;
    }

    /* Extract 'sub' and 'scope' fields from the payload */
    extract_sub_scope_fields(token_payload, &amp;amp;sub, &amp;amp;scope);
    if (!sub || !scope)
    {
        elog(LOG, "Invalid token: missing sub and/or scope fields: %s", token);
        return false;
    }

    /* Set authentication ID (sub) in the result */
    res-&amp;gt;authn_id = pstrdup(sub);

    /* Split the token's scope field into a list */
    granted_scopes = split_scopes(scope);

    /* Split the required scopes from HBA file into a list */
    required_scopes = split_scopes(MyProcPort-&amp;gt;hba-&amp;gt;oauth_scope);

    if (!granted_scopes || !required_scopes)
        return false;

    /* Check if the granted scopes satisfy the required scopes */
    matched = check_scopes(granted_scopes, required_scopes);

    /* Set authorization result flag */
    res-&amp;gt;authorized = matched;

    return true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;token_utils.h/.c&lt;/strong&gt; -- utility functions&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#ifndef TOKEN_UTILS_H
#define TOKEN_UTILS_H

#include &amp;lt;stdbool.h&amp;gt;

#include "common/jsonapi.h"
#include "nodes/pg_list.h"

const char* parse_token_payload(const char *token);
void extract_sub_scope_fields(const char *json, char **sub_field, char **scope_field);
const char *decode_base64(const char *b64);
char *base64url_to_base64(const char *b64url);
List *split_scopes(const char *raw);
bool check_scopes(List *granted, List *required);

#endif
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include "postgres.h"

#include "token_utils.h"

#include "common/base64.h"
#include "mb/pg_wchar.h"

#define SUB_FIELD   0   /* Index for 'sub' field */
#define SCOPE_FIELD 1   /* Index for 'scope' field */

/*
 * JSON object field handler.
 * Marks that the currently processed field is 'sub' or 'scope'
 * to store its value at the next processing stage.
 */
static JsonParseErrorType
token_field_start(void *state, char *fname, bool isnull)
{
    char **fields = (char **) state;

    if (strcmp(fname, "sub") == 0)
        fields[SUB_FIELD] = (char *) 1;  /* Mark that we are processing 'sub' field */
    else if (strcmp(fname, "scope") == 0)
        fields[SCOPE_FIELD] = (char *) 1; /* Mark that we are processing 'scope' field */

    return JSON_SUCCESS;
}

/*
 * JSON scalar value handler.
 * Stores the value of 'sub' or 'scope' if it was marked earlier.
 */
static JsonParseErrorType
token_scalar(void *state, char *token, JsonTokenType tokentype)
{
    char **fields = (char **) state;

    if (fields[SUB_FIELD] == (char *) 1)
        fields[SUB_FIELD] = pstrdup(token);  /* Save the value of 'sub' */
    else if (fields[SCOPE_FIELD] == (char *) 1)
        fields[SCOPE_FIELD] = pstrdup(token); /* Save the value of 'scope' */

    return JSON_SUCCESS;
}

/*
 * Extracts 'sub' and 'scope' fields from a JSON string.
 *
 * Parameters:
 *  - json: JSON string
 *  - sub_field: returns the value of 'sub' field
 *  - scope_field: returns the value of 'scope' field
 */
void
extract_sub_scope_fields(const char *json, char **sub_field, char **scope_field)
{
    JsonLexContext lex;
    JsonSemAction sem;

    char **fields = palloc0(sizeof(char *) * 2); /* Allocate memory for 2 strings ('sub', 'scope') */

    *sub_field = NULL;
    *scope_field = NULL;

    /* Create a lexical context for JSON parsing */
    makeJsonLexContextCstringLen(&amp;amp;lex, json, strlen(json), GetDatabaseEncoding(), true);

    /* Set up JSON parser handlers */
    memset(&amp;amp;sem, 0, sizeof(sem));
    sem.semstate = (void *) fields;
    sem.object_field_start = token_field_start;
    sem.scalar = token_scalar;

    /* Start JSON parsing; on failure, leave both output fields NULL */
    if (pg_parse_json(&amp;amp;lex, &amp;amp;sem) != JSON_SUCCESS)
    {
        elog(LOG, "Invalid token payload: JSON parsing failed");
        return;
    }

    /* Return the found values */
    *sub_field = fields[SUB_FIELD];
    *scope_field = fields[SCOPE_FIELD];
}

/*
 * Extracts the payload from a JWT token.
 * Returns the decoded payload string in JSON format.
 */
const char*
parse_token_payload(const char *token)
{
    char *dot1 = NULL;
    char *dot2 = NULL;
    int payload_len = 0;
    char *payload_b64url = NULL;
    char *b64 = NULL;

    if(!token)
        return NULL;

    /* Find the first and second dots in JWT (separators for header.payload.signature) */
    dot1 = strchr(token, '.');
    dot2 = dot1 ? strchr(dot1 + 1, '.') : NULL;

    if (!dot1 || !dot2)
    {
        elog(LOG, "Invalid token format, two dots required: %s", token);
        return NULL;
    }

    /* Extract the encoded payload between the dots */
    payload_len = dot2 - (dot1 + 1);
    payload_b64url = pnstrdup(dot1 + 1, payload_len);

    /* Convert base64url to regular base64 */
    b64 = base64url_to_base64(payload_b64url);

    /* Decode base64 to JSON string */
    return decode_base64(b64);
}

/*
 * Converts a base64url string to base64 format.
 * Replaces '-' with '+', '_' with '/' and adds padding '=' if necessary.
 */
char *
base64url_to_base64(const char *b64url)
{
    int len = strlen(b64url);
    int pad = (4 - (len % 4)) % 4; /* Determine the number of '=' padding characters */
    char *b64 = palloc(len + pad + 1);

    for (int i = 0; i &amp;lt; len; i++)
    {
        if (b64url[i] == '-')
            b64[i] = '+';
        else if (b64url[i] == '_')
            b64[i] = '/';
        else
            b64[i] = b64url[i];
    }

    /* Add padding '=' */
    for (int i = 0; i &amp;lt; pad; i++)
        b64[len + i] = '=';

    b64[len + pad] = '\0';
    return b64;
}

/*
 * Decodes a base64 string into a regular string.
 * Returns the decoded string or NULL in case of error.
 */
const char *
decode_base64(const char *b64)
{
    int encoded_len = strlen(b64);
    int max_decoded_len = pg_b64_dec_len(encoded_len); /* Calculate required buffer length */
    char *decoded = palloc(max_decoded_len + 1);
    int decoded_len = pg_b64_decode(b64, encoded_len, decoded, max_decoded_len);

    if (decoded_len &amp;lt;= 0)
    {
        elog(LOG, "Invalid token format: base64 decoding error");
        return NULL;
    }

    decoded[decoded_len] = '\0';
    return decoded;
}

/*
 * Splits a space-separated string (e.g., scope list from token) into a List of strings.
 */
List *
split_scopes(const char *raw)
{
    List *result = NIL;
    char *str = pstrdup(raw);  /* Make a copy of the string because strtok modifies it */
    char *tok = strtok(str, " ");
    while (tok)
    {
        result = lappend(result, pstrdup(tok));
        tok = strtok(NULL, " ");
    }
    return result;
}

/*
 * String comparison function for list sorting.
 */
static int
list_string_cmp(const ListCell *a, const ListCell *b)
{
    const char *sa = (const char *) lfirst(a);
    const char *sb = (const char *) lfirst(b);
    return strcmp(sa, sb);
}

/*
 * Checks whether all required scopes are present in the granted scopes.
 * Lists are sorted beforehand for easier comparison.
 *
 * Returns true if all required scopes are found in granted scopes.
 */
bool
check_scopes(List *granted, List *required)
{
    ListCell *gcell;
    ListCell *rcell;

    /* Sort both lists to simplify comparison */
    list_sort(granted, list_string_cmp);
    list_sort(required, list_string_cmp);

    gcell = list_head(granted);
    rcell = list_head(required);

    while (rcell != NULL &amp;amp;&amp;amp; gcell != NULL)
    {
        char *r = (char *) lfirst(rcell);
        char *g = (char *) lfirst(gcell);
        int cmp = strcmp(r, g);

        if (cmp == 0)
        {
            /* Match found --- move to the next required element */
            rcell = lnext(required, rcell);
            gcell = lnext(granted, gcell);
        }
        else if (cmp &amp;gt; 0)
        {
            /* granted is behind --- move to the next granted element */
            gcell = lnext(granted, gcell);
        }
        else
        {
            /* required element not found in granted --- return false */
            return false;
        }
    }

    /* If not all required elements were found --- error */
    if (rcell != NULL)
        return false;

    return true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Makefile&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# contrib/oauth_validator/Makefile

PGFILEDESC = "oauth_validator - OAuth validator"
MODULE_big = oauth_validator

OBJS =\
    $(WIN32RES)\
    oauth_validator.o\
    token_utils.o

PG_CPPFLAGS += -I$(top_srcdir)/src/common
PG_CPPFLAGS += -I$(libpq_srcdir)

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
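&lt;p&gt;With the Makefile in place, the module can be built through PGXS and loaded by the server. A sketch, assuming a PostgreSQL 18 installation (the &lt;code&gt;pg_config&lt;/code&gt; path is illustrative):&lt;/p&gt;

```
# build and install against the target server
make PG_CONFIG=/usr/local/pgsql/bin/pg_config install

# postgresql.conf: tell the server which validator library to load
oauth_validator_libraries = 'oauth_validator'
```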



&lt;h3&gt;
  
  
  Callbacks
&lt;/h3&gt;

&lt;h4&gt;
  
  
  startup_cb callback
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;startup_cb&lt;/code&gt; callback is executed immediately after the module is loaded. It can be used to configure the local state and perform additional initialization if required. If the validator has a state, it can use the &lt;code&gt;state-&amp;gt;private_data&lt;/code&gt; field to store it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);

ValidatorStartupCB startup_cb;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  validate_cb callback
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;validate_cb&lt;/code&gt; callback is executed when the user attempts to authenticate using OAuth. Any state set in previous calls will be available in &lt;code&gt;state-&amp;gt;private_data&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
                                    const char *token, const char *role,
                                    ValidatorModuleResult *result);

ValidatorValidateCB validate_cb;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;token&lt;/code&gt; argument contains the bearer token to verify. PostgreSQL has already checked that the token is syntactically well-formed, but it performs no other checks. The &lt;code&gt;role&lt;/code&gt; parameter contains the role the user requested to log in as. The callback should set the output parameters in the result structure, which is defined as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef struct ValidatorModuleResult
{
   bool authorized;
   char *authn_id;
} ValidatorModuleResult;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The connection will be established only if the validator sets &lt;code&gt;result-&amp;gt;authorized&lt;/code&gt; to true. To report the authenticated user name (i.e. the identity established from the token), memory must be allocated with &lt;code&gt;palloc&lt;/code&gt; and a pointer to it assigned to the &lt;code&gt;result-&amp;gt;authn_id&lt;/code&gt; field. Alternatively, &lt;code&gt;result-&amp;gt;authn_id&lt;/code&gt; can be set to &lt;code&gt;NULL&lt;/code&gt; if the token is valid but the associated user ID cannot be determined.&lt;/p&gt;

&lt;p&gt;If token verification fails (because of an incorrect token format, missing user permissions, or another error), the validator must return false; any values in the result argument are then ignored and the connection is rejected. On successful verification of the token, the validator must return true.&lt;/p&gt;

&lt;p&gt;The behavior after returning from &lt;code&gt;validate_cb&lt;/code&gt; depends on the specific HBA setting. Usually, the user name in &lt;code&gt;result-&amp;gt;authn_id&lt;/code&gt; must exactly match the role the user is logging in as (this behavior can be changed with a user name map). But when authenticating under an HBA rule with &lt;code&gt;delegate_ident_mapping&lt;/code&gt; enabled, PostgreSQL performs no checks on the &lt;code&gt;result-&amp;gt;authn_id&lt;/code&gt; value at all; in this case, the validator itself must ensure that the token carries sufficient privileges for the user to log in under the specified role.&lt;/p&gt;
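&lt;p&gt;For reference, a sketch of matching &lt;code&gt;pg_hba.conf&lt;/code&gt; and &lt;code&gt;pg_ident.conf&lt;/code&gt; entries (the issuer, scope, map name, and &lt;code&gt;sub&lt;/code&gt; value are illustrative):&lt;/p&gt;

```
# pg_hba.conf: require the "openid postgres" scopes and map identities via "oauthmap"
host all all 0.0.0.0/0 oauth issuer="http://192.168.0.156:8080/realms/postgres-realm" scope="openid postgres" map=oauthmap

# pg_ident.conf: map the token's sub claim to the database role alice
oauthmap  0fc72b6f-6221-4ed8-a916-069e7a081d14  alice
```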

&lt;h4&gt;
  
  
  shutdown_cb callback
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;shutdown_cb&lt;/code&gt; callback is executed when the server process associated with the connection shuts down. If the validator has any allocated state, this callback should release it to avoid resource leaks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);

ValidatorShutdownCB shutdown_cb;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Authorization process
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Logging
&lt;/h3&gt;

&lt;p&gt;Let's start by configuring logging. To trace the authorization flow (the requests sent and the responses received), connection logging can be enabled in &lt;code&gt;postgresql.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;log_connections = on

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example logs are shown below; the prefixes at the beginning of the lines mean the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the prefix "&amp;gt;" marks a request to Keycloak;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the prefix "&amp;lt;" marks a response from Keycloak.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  discovery
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[libcurl] *   Trying 192.168.0.156:8080...
[libcurl] * Connected to 192.168.0.156 (192.168.0.156) port 8080 (#0)
[libcurl] &amp;gt; GET /realms/postgres-realm/.well-known/openid-configuration HTTP/1.1
[libcurl] &amp;gt; Host: 192.168.0.156:8080
[libcurl] &amp;gt;
[libcurl] &amp;lt; HTTP/1.1 200 OK
[libcurl] &amp;lt; content-length: 6638
[libcurl] &amp;lt; Cache-Control: no-cache, must-revalidate, no-transform, no-store
[libcurl] &amp;lt; Content-Type: application/json;charset=UTF-8
[libcurl] &amp;lt; Referrer-Policy: no-referrer
[libcurl] &amp;lt; Strict-Transport-Security: max-age=31536000; includeSubDomains
[libcurl] &amp;lt; X-Content-Type-Options: nosniff
[libcurl] &amp;lt; X-Frame-Options: SAMEORIGIN
[libcurl] &amp;lt;
[libcurl] &amp;lt; {"issuer":"http://192.168.0.156:8080/realms/postgres-realm","authorization_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/auth","token_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/token","introspection_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/token/introspect","userinfo_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/userinfo","end_session_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/logout","frontchannel_logout_session_supported":true,"frontchannel_logout_supported":true,"jwks_uri":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/certs","check_session_iframe":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/login-status-iframe.html","grant\_types\_supported":["authorization\_code","client\_credentials","implicit","password","refresh\_token","urn:ietf:params:oauth:grant-type:device\_code","urn:ietf:params:oauth:grant-type:token-exchange","urn:ietf:params:oauth:grant-type:uma-ticket","urn:openid:params:grant-type:ciba"],"acr\_values\_supported":["0","1"],"response\_types\_supported":["code","none","id\_token","token","id\_token token","code id_token","code token","code id_token 
token"],"subject_types_supported":["public","pairwise"],"prompt_values_supported":["none","login","consent"],"id_token_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"id_token_encryption_alg_values_supported":["ECDH-ES+A256KW","ECDH-ES+A192KW","ECDH-ES+A128KW","RSA-OAEP","RSA-OAEP-256","RSA1_5","ECDH-ES"],"id_token_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"userinfo_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512","none"],"userinfo_encryption_alg_values_supported":["ECDH-ES+A256KW","ECDH-ES+A192KW","ECDH-ES+A128KW","RSA-OAEP","RSA-OAEP-256","RSA1_5","ECDH-ES"],"userinfo_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"request_object_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512","none"],"request_object_encryption_alg_values_supported":["ECDH-ES+A256KW","ECDH-ES+A192KW","ECDH-ES+A128KW","RSA-OAEP","RSA-OAEP-256","RSA1_5","ECDH-ES"],"request_object_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"response_modes_supported":["query","fragment","form_post","query.jwt","fragment.jwt","form_post.jwt","jwt"],"registration_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/clients-registrations/openid-connect","token_endpoint_auth_methods_supported":["private_key_jwt","client_secret_basic","client_secret_post","tls_client_auth","client_secret_jwt"],"token_endpoint_auth_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"introspection_endpoint_auth_methods_supported":["private_key_jwt","client_secret_basic","client_secret_post","
tls_client_auth","client_secret_jwt"],"introspection_endpoint_auth_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"authorization_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"authorization_encryption_alg_values_supported":["ECDH-ES+A256KW","ECDH-ES+A192KW","ECDH-ES+A128KW","RSA-OAEP","RSA-OAEP-256","RSA1_5","ECDH-ES"],"authorization_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"claims_supported":["aud","sub","iss","auth_time","name","given_name","family_name","preferred_username","email","acr"],"claim_types_supported":["normal"],"claims_parameter_supported":true,"scopes_supported":["openid","offline_access","organization","service_account","postgres","address","phone","acr","profile","microprofile-jwt","web-origins","roles","basic","email"],"request_parameter_supported":true,"request_uri_parameter_supported":true,"require_request_uri_registration":true,"code_challenge_methods_supported":["plain","S256"],"tls_client_certificate_bound_access_tokens":true,"revocation_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/revoke","revocation_endpoint_auth_methods_supported":["private_key_jwt","client_secret_basic","client_secret_post","tls_client_auth","client_secret_jwt"],"revocation_endpoint_auth_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"backchannel_logout_supported":true,"backchannel_logout_session_supported":true,"device_authorization_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/auth/device","backchannel_token_delivery_modes_supported":["poll","ping"],"backchannel_authentication_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/ext/
ciba/auth","backchannel_authentication_request_signing_alg_values_supported":["PS384","RS384","EdDSA","ES384","ES256","RS256","ES512","PS256","PS512","RS512"],"require_pushed_authorization_requests":false,"pushed_authorization_request_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/ext/par/request","mtls_endpoint_aliases":{"token_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/token","revocation_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/revoke","introspection_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/token/introspect","device_authorization_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/auth/device","registration_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/clients-registrations/openid-connect","userinfo_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/userinfo","pushed_authorization_request_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/ext/par/request","backchannel_authentication_endpoint":"http://192.168.0.156:8080/realms/postgres-realm/protocol/openid-connect/ext/ciba/auth"},"authorization_response_iss_parameter_supported":true}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  auth device
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[libcurl] * Connection #0 to host 192.168.0.156 left intact
[libcurl] * Found bundle for host: 0x6124961fd400 [serially]
[libcurl] * Can not multiplex, even if we wanted to
[libcurl] * Re-using existing connection #0 with host 192.168.0.156
[libcurl] * Server auth using Basic with user 'postgres-client'
[libcurl] &amp;gt; POST /realms/postgres-realm/protocol/openid-connect/auth/device HTTP/1.1
[libcurl] &amp;gt; Host: 192.168.0.156:8080
[libcurl] &amp;gt; Authorization: Basic cG9zdGdyZXMtY2xpZW50OmZTY1hYcDFUcFNQY3BVaEZLcWk0eDk4alZ5NTR1Y1RC
[libcurl] &amp;gt; Content-Length: 47
[libcurl] &amp;gt; Content-Type: application/x-www-form-urlencoded
[libcurl] &amp;gt;
[libcurl] &amp;gt; scope=openid+postgres&amp;amp;client_id=postgres-client
[libcurl] &amp;lt; HTTP/1.1 200 OK
[libcurl] &amp;lt; Cache-Control: no-store, must-revalidate, max-age=0
[libcurl] &amp;lt; content-length: 296
[libcurl] &amp;lt; Content-Type: application/json
[libcurl] &amp;lt; Referrer-Policy: no-referrer
[libcurl] &amp;lt; Strict-Transport-Security: max-age=31536000; includeSubDomains
[libcurl] &amp;lt; X-Content-Type-Options: nosniff
[libcurl] &amp;lt; X-Frame-Options: SAMEORIGIN
[libcurl] &amp;lt;
[libcurl] &amp;lt; {"device_code":"tgElqUogevqjEkZy6-Z1i209pgoKW9_CT0t9wwNjafY","user_code":"WXAI-ZNVY","verification_uri":"http://192.168.0.156:8080/realms/postgres-realm/device","verification_uri_complete":"http://192.168.0.156:8080/realms/postgres-realm/device?user\_code=WXAI-ZNVY","expires\_in":600,"interval":5}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  token
&lt;/h4&gt;

&lt;p&gt;The client is awaiting approval of the user's authorization request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[libcurl] * Connection #0 to host 192.168.0.156 left intact
[libcurl] * Found bundle for host: 0x6124961fd400 [serially]
[libcurl] * Can not multiplex, even if we wanted to
[libcurl] * Re-using existing connection #0 with host 192.168.0.156
[libcurl] * Server auth using Basic with user 'postgres-client'
[libcurl] &amp;gt; POST /realms/postgres-realm/protocol/openid-connect/token HTTP/1.1
[libcurl] &amp;gt; Host: 192.168.0.156:8080
[libcurl] &amp;gt; Authorization: Basic cG9zdGdyZXMtY2xpZW50OmZTY1hYcDFUcFNQY3BVaEZLcWk0eDk4alZ5NTR1Y1RC
[libcurl] &amp;gt; Content-Length: 147
[libcurl] &amp;gt; Content-Type: application/x-www-form-urlencoded
[libcurl] &amp;gt;
[libcurl] &amp;gt; device_code=tgElqUogevqjEkZy6-Z1i209pgoKW9_CT0t9wwNjafY&amp;amp;grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code&amp;amp;client_id=postgres-client
[libcurl] &amp;lt; HTTP/1.1 400 Bad Request
[libcurl] &amp;lt; Cache-Control: no-store
[libcurl] &amp;lt; Pragma: no-cache
[libcurl] &amp;lt; content-length: 98
[libcurl] &amp;lt; Content-Type: application/json
[libcurl] &amp;lt; Referrer-Policy: no-referrer
[libcurl] &amp;lt; Strict-Transport-Security: max-age=31536000; includeSubDomains
[libcurl] &amp;lt; X-Content-Type-Options: nosniff
[libcurl] &amp;lt; X-Frame-Options: SAMEORIGIN
[libcurl] &amp;lt;
[libcurl] &amp;lt; {"error":"authorization_pending","error_description":"The authorization request is still pending"}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The user has approved the request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[libcurl] * Connection #0 to host 192.168.0.156 left intact
[libcurl] * Found bundle for host: 0x6124961fd400 [serially]
[libcurl] * Can not multiplex, even if we wanted to
[libcurl] * Re-using existing connection #0 with host 192.168.0.156
[libcurl] * Server auth using Basic with user 'postgres-client'
[libcurl] &amp;gt; POST /realms/postgres-realm/protocol/openid-connect/token HTTP/1.1
[libcurl] &amp;gt; Host: 192.168.0.156:8080
[libcurl] &amp;gt; Authorization: Basic cG9zdGdyZXMtY2xpZW50OmZTY1hYcDFUcFNQY3BVaEZLcWk0eDk4alZ5NTR1Y1RC
[libcurl] &amp;gt; Content-Length: 147
[libcurl] &amp;gt; Content-Type: application/x-www-form-urlencoded
[libcurl] &amp;gt;
[libcurl] &amp;gt; device_code=tgElqUogevqjEkZy6-Z1i209pgoKW9_CT0t9wwNjafY&amp;amp;grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code&amp;amp;client_id=postgres-client
[libcurl] &amp;lt; HTTP/1.1 200 OK
[libcurl] &amp;lt; Cache-Control: no-store
[libcurl] &amp;lt; Pragma: no-cache
[libcurl] &amp;lt; content-length: 3307
[libcurl] &amp;lt; Content-Type: application/json
[libcurl] &amp;lt; Referrer-Policy: no-referrer
[libcurl] &amp;lt; Strict-Transport-Security: max-age=31536000; includeSubDomains
[libcurl] &amp;lt; X-Content-Type-Options: nosniff
[libcurl] &amp;lt; X-Frame-Options: SAMEORIGIN
[libcurl] &amp;lt;
[libcurl] &amp;lt; {"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIwb1RQSV85LXVIQWJ5Q0t3M2luTm5MTW5hU21hc05nWS1OaDVkTzJ4X0lNIn0.eyJleHAiOjE3NDYwMzEyNTMsImlhdCI6MTc0NjAzMDk1MywiYXV0aF90aW1lIjoxNzQ2MDMwOTUyLCJqdGkiOiJvbnJ0ZGc6MWU3MmRlNGUtNTRhYi00OTZjLWIxYWYtY2FhYmRiYzJlYzExIiwiaXNzIjoiaHR0cDovLzE5Mi4xNjguMC4xNTY6ODA4MC9yZWFsbXMvcG9zdGdyZXMtcmVhbG0iLCJhdWQiOiJhY2NvdW50Iiwic3ViIjoiMGZjNzJiNmYtNjIyMS00ZWQ4LWE5MTYtMDY5ZTdhMDgxZDE0IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoicG9zdGdyZXMtY2xpZW50Iiwic2lkIjoiNjRlMDUzMTMtM2UyMi00MmNjLWE0YmItOTAxODU2YTFhYjMzIiwiYWxsb3dlZC1vcmlnaW5zIjpbIi8qIl0sInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsImRlZmF1bHQtcm9sZXMtcG9zdGdyZXMtcmVhbG0iLCJ1bWFfYXV0aG9yaXphdGlvbiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoib3BlbmlkIHByb2ZpbGUgcG9zdGdyZXMiLCJuYW1lIjoiYWxpY2UgcG9zdGdyZXMiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhbGljZSIsImdpdmVuX25hbWUiOiJhbGljZSIsImZhbWlseV9uYW1lIjoicG9zdGdyZXMifQ.RXMszI-snIdrXyyTw74U8QXQeDG3zpfV4OvxYuJQvsb86eauXkKHAH35GfEm3XvQbtmpdSdfs1S4i11d69dUjpVTgPpzx6G7IXCXj2NTowzZuyuvdnLxPi1aXdxXqOKNSLSj5PXhGIaZhWsn2sR8dAJ0jjWTUO_lh8qJuJYaDcFulWn_flHVGQYzMZ5PTneRadg8h_1dWp4HSr6yC74NmF94dnOBmytivM4a__Wcq6TkZ3KLn_gafqnn72HpWY0WRwyZdQuzc5o8mE3UUAoKukxMnwDG7Yhxif2YFb_a5aCloMbL9aDghbMypahl3MiJHHx3j50FavSRm0FJa3zK9w","expires_in":300,"refresh_expires_in":1800,"refresh_token":"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJjOGNiNjY3Ni00OTAxLTRmNjItOTI0OS1kMzY2MWI5Mjg3OTIifQ.eyJleHAiOjE3NDYwMzI3NTMsImlhdCI6MTc0NjAzMDk1MywianRpIjoiZTJkNzkzODUtNjBhZS00MTIwLWIwODAtNjVmYWU4ZmNhYzIzIiwiaXNzIjoiaHR0cDovLzE5Mi4xNjguMC4xNTY6ODA4MC9yZWFsbXMvcG9zdGdyZXMtcmVhbG0iLCJhdWQiOiJodHRwOi8vMTkyLjE2OC4wLjE1Njo4MDgwL3JlYWxtcy9wb3N0Z3Jlcy1yZWFsbSIsInN1YiI6IjBmYzcyYjZmLTYyMjEtNGVkOC1hOTE2LTA2OWU3YTA4MWQxNCIsInR5cCI6IlJlZnJlc2giLCJhenAiOiJwb3N0Z3Jlcy1jbGllbnQiLCJzaWQiOiI2NGUwNTMxMy0zZTIyLTQyY2MtYTRiYi05MDE4NTZhMWFiMzMiLCJzY29wZSI6Im9wZW5pZCBwcm9maWxlIH
dlYi1vcmlnaW5zIHBvc3RncmVzIHJvbGVzIGJhc2ljIn0.43pRSq4PBO7ZY86jt8dIL7xZJylntY_CZXllcRfwfh41IRCOft6iqIWdJQp7TJv_JIDI-_-QeOSf1EC_wzABNg","token_type":"Bearer","id_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIwb1RQSV85LXVIQWJ5Q0t3M2luTm5MTW5hU21hc05nWS1OaDVkTzJ4X0lNIn0.eyJleHAiOjE3NDYwMzEyNTMsImlhdCI6MTc0NjAzMDk1MywiYXV0aF90aW1lIjoxNzQ2MDMwOTUyLCJqdGkiOiIzNWM3ZDAyZC05OGFjLTQzMTgtOTg3NC0zYzY0Mjg5NjFhMjgiLCJpc3MiOiJodHRwOi8vMTkyLjE2OC4wLjE1Njo4MDgwL3JlYWxtcy9wb3N0Z3Jlcy1yZWFsbSIsImF1ZCI6InBvc3RncmVzLWNsaWVudCIsInN1YiI6IjBmYzcyYjZmLTYyMjEtNGVkOC1hOTE2LTA2OWU3YTA4MWQxNCIsInR5cCI6IklEIiwiYXpwIjoicG9zdGdyZXMtY2xpZW50Iiwic2lkIjoiNjRlMDUzMTMtM2UyMi00MmNjLWE0YmItOTAxODU2YTFhYjMzIiwiYXRfaGFzaCI6ImphOXRKZ1E0VkVPTTNBZGc0VWJBVGciLCJuYW1lIjoiYWxpY2UgcG9zdGdyZXMiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhbGljZSIsImdpdmVuX25hbWUiOiJhbGljZSIsImZhbWlseV9uYW1lIjoicG9zdGdyZXMifQ.dH5hM21-ygBiCXoA9NVOou-L3esUVJUFFUmt_1fU0jc9al4Lk7EUN-cqHicHZPD48HhkIWMJK7WZjxnZLDlkG7ORGdDKPGccMU4-sGsRuVu-GDuNFo_5kHAh_NJZsLBXz9UkpNBkd8ROxK3-fbmyYdwsuwNeg6KNhSOj0FxEnxLc0-HrjEE92P7hzq0PD29oY2jhRKcqpbtknMwxFkkMBi8xPgdpuyTmLtJD3-xxYuwMKP7WUGzwzVAFqfrhFm5O5dJxeld5fTFE4Kyl9fR24JcjtfxBeIHVJqLiQkl9Et_KNGiFoXDG4Xwcc7eIUaBnauhY5_froYvKS8NbQxCOUg","not-before-policy":0,"session_state":"64e05313-3e22-42cc-a4bb-901856a1ab33","scope":"openid profile postgres"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
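&lt;p&gt;The access token returned above is an ordinary JWT: each segment is base64url-encoded, and the payload is plain JSON, so the claims (issuer, scopes, &lt;code&gt;preferred_username&lt;/code&gt; and so on) can be inspected locally. A minimal Python sketch of such a decoder follows; the payload here is a small stand-in with the same claims as in the response above, not the real token, and note that this only reads the claims, it does not verify the signature:&lt;/p&gt;

```python
import base64
import json

def decode_jwt_segment(segment: str) -> dict:
    """Decode one base64url JWT segment (header or payload) into a dict."""
    # base64url segments are unpadded: add '=' until length is a multiple of 4
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Hypothetical payload mirroring the claims seen in the Keycloak response
payload = {
    "iss": "http://192.168.0.156:8080/realms/postgres-realm",
    "preferred_username": "alice",
    "scope": "openid profile postgres",
}
segment = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=").decode()

claims = decode_jwt_segment(segment)
print(claims["preferred_username"])  # alice
print(claims["scope"])               # openid profile postgres
```

&lt;p&gt;In a real deployment the signature must be checked against the realm's public keys (the &lt;code&gt;jwks_uri&lt;/code&gt; from the OpenID discovery document) before trusting any claim; the snippet is only a convenience for looking inside a token while debugging.&lt;/p&gt;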



&lt;h3&gt;
  
  
  Authorization via psql
&lt;/h3&gt;

&lt;p&gt;For testing purposes, we will allow authorization over the unsecured HTTP protocol (with HTTPS you would also need to configure certificate chains):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export PGOAUTHDEBUG="UNSAFE"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's launch psql and set the connection details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql "user=alice dbname=postgres oauth_issuer=http://192.168.0.156:8080/realms/postgres-realm/.well-known/openid-configuration oauth_client_id=postgres-client oauth_client_secret=YYi8LqfzHRMnqUUlptpWVC2k7eWNrqjX"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;psql prints a URL and a code that we will need to enter after opening that URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Visit http://192.168.0.156:8080/realms/postgres-realm/device and enter the code: LJDW-RURX

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We enter it in the browser at &lt;a href="http://192.168.0.156:8080/realms/postgres-realm/device" rel="noopener noreferrer"&gt;http://192.168.0.156:8080/realms/postgres-realm/device&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu1iqyvlvznqoslaxct4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu1iqyvlvznqoslaxct4.png" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the code and click Submit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy5i8nq7yvz7zh9wyp8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy5i8nq7yvz7zh9wyp8n.png" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we see the user login window:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpjnzu8q7201zcpxj803.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpjnzu8q7201zcpxj803.png" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the username and password (see "Creating users") and click on Sign In.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbrq4yhhf9kt48fdu31f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbrq4yhhf9kt48fdu31f.png" width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, Keycloak asks whether we consent to share this information with PostgreSQL. Click Yes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc3rwaz7l6rf2oqoet6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc3rwaz7l6rf2oqoet6u.png" width="800" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Successfully logged in...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaz6p3u3z8ctqdo6g0cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaz6p3u3z8ctqdo6g0cd.png" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;... and we can enter commands in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql (18devel) Type "help" for help.
postgres=&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a result, we successfully logged in to PostgreSQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The OAuth 2.0 Device Authorization Flow, introduced in PostgreSQL 18 and in Tantor Postgres 17.5.0, enables single sign-on (SSO). Centralized access control via providers such as Keycloak raises the level of protection by eliminating the risk of password interception, and it simplifies administration in distributed environments. This article walked through the whole path step by step: from configuring Keycloak and PostgreSQL to implementing the token validator and successfully authorizing via psql.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>oauth</category>
    </item>
  </channel>
</rss>
