<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ahmad Tashfeen</title>
    <description>The latest articles on DEV Community by Ahmad Tashfeen (@ahmadtash7).</description>
    <link>https://dev.to/ahmadtash7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F906398%2F58099bfd-d0be-42e0-81c2-72f33428c4ea.jpeg</url>
      <title>DEV Community: Ahmad Tashfeen</title>
      <link>https://dev.to/ahmadtash7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ahmadtash7"/>
    <language>en</language>
    <item>
      <title>Understanding Cost Estimation in Single-Table Queries in PostgreSQL</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Fri, 14 Jul 2023 16:57:55 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/understanding-cost-estimation-in-single-table-queries-in-postgresql-1e92</link>
      <guid>https://dev.to/ahmadtash7/understanding-cost-estimation-in-single-table-queries-in-postgresql-1e92</guid>
      <description>&lt;p&gt;In PostgreSQL, query optimisation plays a vital role in achieving good query performance. This article explores how PostgreSQL estimates costs in single-table queries and provides insights into the different types of costs and their significance in query optimisation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Estimating Costs: Functions and Dimensions
&lt;/h2&gt;

&lt;p&gt;Cost estimation in PostgreSQL is based on relative performance indicators rather than absolute values. The functions defined in costsize.c are responsible for estimating the costs of the different operations executed by the executor. For instance, the cost_seqscan() and cost_index() functions estimate the costs of sequential scans and index scans, respectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Costs: Start-up, Run, and Total
&lt;/h2&gt;

&lt;p&gt;PostgreSQL distinguishes three types of costs: start-up, run, and total. The start-up cost is incurred before the first tuple is fetched, for example the cost of reading index pages. The run cost reflects the cost of fetching all tuples. The total cost is the sum of the start-up and run costs. Note that the start-up and run costs are estimated independently, and the total cost is derived from their combination.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding EXPLAIN and Cost Display
&lt;/h2&gt;

&lt;p&gt;The EXPLAIN command in PostgreSQL provides insight into the cost estimation process. It displays both the start-up and total costs for each operation in a query plan. For instance, when executing a simple single-table query, the EXPLAIN output includes information about the sequential scan operation, with the cost section displaying the start-up and total costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring Specific Operations: Sequential Scan, Index Scan, and Sort
&lt;/h2&gt;

&lt;p&gt;In this article, we delve into the estimation process for three specific operations: sequential scan, index scan, and sort. By examining these operations in detail, we gain a deeper understanding of how PostgreSQL estimates their costs during query optimisation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Example
&lt;/h2&gt;

&lt;p&gt;To illustrate the cost estimation process, we use a specific table and index. We create a table called tbl with columns id and data, and an index called tbl_data_idx on the data column. We then populate the table with data and run ANALYZE on it to generate the statistics needed for accurate cost estimation. The article presents a practical example using this table and index to showcase the cost estimation process in action.&lt;/p&gt;

&lt;p&gt;Cost estimation is a crucial aspect of query optimisation in PostgreSQL. By estimating costs for the various operations, PostgreSQL's optimizer can make informed decisions about the most efficient execution plan for a query. Understanding the different types of costs and their estimation methods empowers developers and database administrators to optimize query performance and ensure efficient utilisation of resources in single-table queries within PostgreSQL.&lt;/p&gt;
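
&lt;p&gt;As a rough sketch, the table and index used in the example could be set up as follows (the column types and the row count are assumptions for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE tbl (id int, data int);
CREATE INDEX tbl_data_idx ON tbl (data);

-- Populate the table, then gather statistics for the planner
INSERT INTO tbl SELECT i, i FROM generate_series(1, 10000) AS i;
ANALYZE tbl;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;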

</description>
      <category>apacheage</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Understanding the Planner and Executor in PostgreSQL: Optimizing Query Performance</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Fri, 14 Jul 2023 16:56:09 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/understanding-the-planner-and-executor-in-postgresql-optimizing-query-performance-5fa5</link>
      <guid>https://dev.to/ahmadtash7/understanding-the-planner-and-executor-in-postgresql-optimizing-query-performance-5fa5</guid>
      <description>&lt;h2&gt;
  
  
  The Planner: Generating the Optimal Query Plan
&lt;/h2&gt;

&lt;p&gt;The planner acts as the brain behind query optimization in PostgreSQL. It receives a query tree from the rewriter, which represents the parsed and normalized query, and its primary objective is to generate a query plan tree that can be executed most efficiently by the executor. Unlike rule-based optimization or hints, PostgreSQL's planner relies solely on cost-based optimization strategies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Composing the Plan Tree: Plan Nodes and Structure
&lt;/h2&gt;

&lt;p&gt;The plan tree is the result of the planner's work and consists of plan nodes, which are elements defining various operations and steps involved in query execution. These plan nodes hold information required by the executor for processing. The plan tree, connected to the PlannedStmt structure, is represented as a list of plan nodes defined in plannodes.h. For a more detailed understanding of the plan nodes, we refer to Section 3.3.3 and Section 3.5.4.2 of the PostgreSQL documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Executor: Bringing the Plan to Life
&lt;/h2&gt;

&lt;p&gt;The executor is responsible for executing the query plan generated by the planner. It operates on the plan tree, starting from the end and working its way up to the root. In the case of a single-table query, the executor processes the plan nodes accordingly, leveraging the information provided by each node. For instance, if the plan tree includes a sort node followed by a sequential scan node, the executor will first perform a sequential scan on the table and then sort the obtained result.&lt;/p&gt;
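
&lt;p&gt;As a sketch of that case, a sort over a sequential scan shows up in EXPLAIN output along the following lines (the table and column names are hypothetical, and the cost figures are elided):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testdb=# EXPLAIN SELECT * FROM tbl ORDER BY data;
                 QUERY PLAN
---------------------------------------------
 Sort  (cost=... rows=... width=...)
   Sort Key: data
   -&amp;gt;  Seq Scan on tbl  (cost=... rows=... width=...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Reading this plan from the innermost node outwards, the executor first performs the Seq Scan on tbl and then feeds its result to the Sort node.&lt;/p&gt;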

&lt;h2&gt;
  
  
  Interacting with the Database Cluster: Reading, Writing, and Memory Management
&lt;/h2&gt;

&lt;p&gt;To fulfill its tasks, the executor interacts with the database cluster, reading and writing tables and indexes via the buffer manager, as described in Chapter 8 of the PostgreSQL documentation. During query processing, the executor utilizes allocated memory areas such as temp_buffers and work_mem, which are pre-allocated for efficient memory management. Additionally, the executor may create temporary files if necessary to handle intermediate results during execution.&lt;/p&gt;

&lt;p&gt;The planner and executor are fundamental components of PostgreSQL's query processing system. While the planner generates the optimal query plan based on cost-based optimization, the executor brings that plan to life, executing the operations defined by the plan nodes. By understanding the intricacies of these components and their interactions, developers and database administrators can fine-tune query performance, optimize resource usage, and ultimately improve the efficiency of their PostgreSQL databases.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>apacheage</category>
    </item>
    <item>
      <title>Multi-Table Query Execution in FDW</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Wed, 12 Jul 2023 17:14:30 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/multi-table-query-execution-in-fdw-39n8</link>
      <guid>https://dev.to/ahmadtash7/multi-table-query-execution-in-fdw-39n8</guid>
      <description>&lt;p&gt;In PostgreSQL, when executing a multi-table query using the postgres_fdw extension, each foreign table is fetched individually using single-table SELECT statements, and then the join operation is performed on the local server.&lt;/p&gt;

&lt;p&gt;In versions 9.5 or earlier, even if the foreign tables are stored on the same remote server, postgres_fdw fetches them individually and performs the join locally.&lt;/p&gt;

&lt;p&gt;However, in versions 9.6 or later, significant improvements have been made to postgres_fdw. It can now execute the remote join operation on the remote server when the foreign tables are on the same server and the use_remote_estimate option is enabled.&lt;/p&gt;

&lt;p&gt;Let's explore the execution details for a multi-table query involving two foreign tables, tbl_a and tbl_b:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;tbl_a&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tbl_b&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result of the EXPLAIN command for this query is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;EXPLAIN&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;tbl_a&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tbl_b&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                                &lt;span class="n"&gt;QUERY&lt;/span&gt; &lt;span class="n"&gt;PLAN&lt;/span&gt;
&lt;span class="c1"&gt;------------------------------------------------------------------------------&lt;/span&gt;
 &lt;span class="n"&gt;Merge&lt;/span&gt; &lt;span class="k"&gt;Join&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;532&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;700&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10918&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;Merge&lt;/span&gt; &lt;span class="n"&gt;Cond&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="n"&gt;Sort&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;59&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;202&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;72&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;853&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="n"&gt;Sort&lt;/span&gt; &lt;span class="k"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
         &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="k"&gt;Foreign&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;tbl_a&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;159&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;853&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="n"&gt;Sort&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;331&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;72&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;338&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2560&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="n"&gt;Sort&lt;/span&gt; &lt;span class="k"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
         &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="k"&gt;Foreign&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;tbl_b&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;186&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2560&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The execution plan reveals that the planner selects a merge join, which the executor carries out in the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The executor fetches the rows from tbl_a using a foreign table scan (the Foreign Scan node on tbl_a).&lt;/li&gt;
&lt;li&gt;The fetched rows of tbl_a are sorted on the local server (the first Sort node).&lt;/li&gt;
&lt;li&gt;The executor fetches the rows from tbl_b using a foreign table scan (the Foreign Scan node on tbl_b).&lt;/li&gt;
&lt;li&gt;The fetched rows of tbl_b are sorted on the local server (the second Sort node).&lt;/li&gt;
&lt;li&gt;The executor performs a merge join on the local server (the Merge Join node at the top of the plan).&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Fetching the Rows from the Remote Server
&lt;/h2&gt;

&lt;p&gt;During the execution, the following steps are taken to fetch the rows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start the remote transaction.&lt;/li&gt;
&lt;li&gt;Declare cursor c1 with the SELECT statement: &lt;code&gt;SELECT id, data FROM public.tbl_a WHERE (id &amp;lt; 200)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Execute FETCH commands to obtain the result from cursor c1.&lt;/li&gt;
&lt;li&gt;Declare cursor c2 with the SELECT statement: &lt;code&gt;SELECT id, data FROM public.tbl_b&lt;/code&gt;.
Note that the original WHERE clause of the multi-table query is "tbl_a.id = tbl_b.id AND tbl_a.id &amp;lt; 200." However, postgres_fdw cannot infer the WHERE clause "tbl_b.id &amp;lt; 200" logically. Therefore, the executor executes a SELECT statement without any WHERE clauses and fetches all rows of the foreign table tbl_b. This process is inefficient as unnecessary rows are read from the remote server over the network, and the received rows must be sorted for the merge join operation.&lt;/li&gt;
&lt;li&gt;Execute FETCH commands to obtain the result from cursor c2.&lt;/li&gt;
&lt;li&gt;Close cursor c1.&lt;/li&gt;
&lt;li&gt;Close cursor c2.&lt;/li&gt;
&lt;li&gt;Commit the transaction.&lt;/li&gt;
&lt;/ol&gt;
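
&lt;p&gt;In terms of the SQL sent to the remote server, the sequence above corresponds roughly to the following (the FETCH batch size of 100 is postgres_fdw's default fetch_size; the exact statements may vary by version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;START TRANSACTION;
DECLARE c1 CURSOR FOR SELECT id, data FROM public.tbl_a WHERE (id &amp;lt; 200);
FETCH 100 FROM c1;   -- repeated until all matching rows are fetched
DECLARE c2 CURSOR FOR SELECT id, data FROM public.tbl_b;
FETCH 100 FROM c2;   -- repeated; note the missing WHERE clause: all rows travel
CLOSE c1;
CLOSE c2;
COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;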

&lt;p&gt;When executing multi-table queries with the postgres_fdw extension in PostgreSQL, the foreign tables are fetched individually, and the join operation is performed on the local server. In earlier versions, this approach was followed regardless of whether the foreign tables were on the same remote server. However, in recent versions, postgres_fdw has been improved to execute the remote join operation when the foreign tables are on the same server and the use_remote_estimate option is enabled. These optimisations enhance the performance and efficiency of multi-table query execution in FDW.&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Creating a Query Tree, Connecting to the Remote Server, and Plan Tree Creation in FDW</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Wed, 12 Jul 2023 17:07:23 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/creating-a-query-tree-connecting-to-the-remote-server-and-plan-tree-creation-in-fdw-1dea</link>
      <guid>https://dev.to/ahmadtash7/creating-a-query-tree-connecting-to-the-remote-server-and-plan-tree-creation-in-fdw-1dea</guid>
      <description>&lt;h2&gt;
  
  
  1. Creating a Query Tree:
&lt;/h2&gt;

&lt;p&gt;During query analysis, the analyzer constructs the query tree based on the input SQL statement. To build the query tree for FDW, the definitions of the foreign tables are used. These definitions are stored in the pg_catalog.pg_class and pg_catalog.pg_foreign_table catalogs and are populated by commands such as CREATE FOREIGN TABLE or IMPORT FOREIGN SCHEMA.&lt;/p&gt;
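
&lt;p&gt;As a sketch, these catalogs are populated by commands along the following lines (the server name, column types, and schema are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Define a single foreign table by hand
CREATE FOREIGN TABLE tbl_a (
    id   int,
    data int
) SERVER remote_server_name OPTIONS (schema_name 'public', table_name 'tbl_a');

-- Or import whole table definitions from a remote schema in one go
IMPORT FOREIGN SCHEMA public FROM SERVER remote_server_name INTO public;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;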

&lt;h2&gt;
  
  
  2. Connecting to the Remote Server:
&lt;/h2&gt;

&lt;p&gt;To establish a connection with the remote server, the planner (or executor) relies on specific libraries designed for connecting to the respective database servers. For instance, when connecting to a remote PostgreSQL server, the postgres_fdw extension utilizes the libpq library. On the other hand, mysql_fdw, developed by EnterpriseDB, employs the libmysqlclient library for connecting to MySQL servers. The connection parameters, including the username, server's IP address, and port number, are stored in the pg_catalog.pg_user_mapping and pg_catalog.pg_foreign_server catalogs using commands like CREATE USER MAPPING and CREATE SERVER.&lt;/p&gt;
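
&lt;p&gt;A minimal sketch of populating those catalogs for postgres_fdw might look like this (the host, port, database, and credentials are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION postgres_fdw;

-- Stored in pg_catalog.pg_foreign_server
CREATE SERVER remote_server_name
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '192.0.2.10', port '5432', dbname 'testdb');

-- Stored in pg_catalog.pg_user_mapping
CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_server_name
    OPTIONS (user 'remote_user', password 'secret');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;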

&lt;h2&gt;
  
  
  3. Creating a Plan Tree Using EXPLAIN Commands (Optional):
&lt;/h2&gt;

&lt;p&gt;PostgreSQL's FDW provides support for obtaining statistics of foreign tables, which are utilized to estimate the plan tree of a query. Some FDW extensions, such as postgres_fdw, mysql_fdw, tds_fdw, and jdbc2_fdw, make use of this feature. If the use_remote_estimate option is enabled for a server using the ALTER SERVER command, the planner can query the cost of plans from the remote server by executing the EXPLAIN command. By default, embedded constant values are used for estimation. Here's an example of enabling the use_remote_estimate option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="n"&gt;SERVER&lt;/span&gt; &lt;span class="n"&gt;remote_server_name&lt;/span&gt; &lt;span class="k"&gt;OPTIONS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;use_remote_estimate&lt;/span&gt; &lt;span class="s1"&gt;'on'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's important to note that while some FDW extensions can reflect the results of the EXPLAIN command (such as postgres_fdw), other extensions may not be able to utilize the EXPLAIN results for planning. For instance, the EXPLAIN command in MySQL only returns the estimated number of rows, which may not provide enough information for PostgreSQL's planner to estimate the cost accurately, as explained in Chapter 3.&lt;/p&gt;

&lt;p&gt;When using FDW in PostgreSQL, the process involves creating a query tree, establishing a connection to the remote server using specific libraries, and optionally utilizing EXPLAIN commands to estimate the plan tree of a query. These steps allow PostgreSQL to seamlessly interact with foreign tables on remote servers, expanding the capabilities and flexibility of your database environment.&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>postgres</category>
    </item>
    <item>
      <title>FDW Performance in PostgreSQL: An Overview</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Wed, 12 Jul 2023 17:01:50 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/fdw-performance-in-postgresql-an-overview-58kh</link>
      <guid>https://dev.to/ahmadtash7/fdw-performance-in-postgresql-an-overview-58kh</guid>
      <description>&lt;p&gt;To utilize the Foreign Data Wrapper (FDW) feature in PostgreSQL, you need to install the relevant extension and execute setup commands, including creating foreign tables, servers, and user mappings. For detailed instructions, refer to the official PostgreSQL documentation.&lt;/p&gt;

&lt;p&gt;Once the necessary settings are in place, the functions provided by the FDW extension come into play during query processing to access foreign tables.&lt;/p&gt;

&lt;h2&gt;
  
  
  Query Tree Creation:
&lt;/h2&gt;

&lt;p&gt;The query analyzer constructs the query tree based on the input SQL statement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remote Server Connection:
&lt;/h2&gt;

&lt;p&gt;The planner (or executor) establishes a connection with the remote server that hosts the foreign table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Estimation:
&lt;/h2&gt;

&lt;p&gt;If the use_remote_estimate option is enabled (off by default), the planner executes EXPLAIN commands on the remote server to estimate the cost of various plan paths.&lt;/p&gt;

&lt;h2&gt;
  
  
  SQL Statement Generation:
&lt;/h2&gt;

&lt;p&gt;The planner generates a plain text SQL statement from the plan tree, a process known as deparsing.&lt;/p&gt;
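
&lt;p&gt;With postgres_fdw, the deparsed statement can be inspected via EXPLAIN VERBOSE, which prints it as a "Remote SQL" line (the foreign table below is hypothetical, and the cost figures are elided):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testdb=# EXPLAIN VERBOSE SELECT id FROM foreign_tbl WHERE id &amp;lt; 100;
                 QUERY PLAN
---------------------------------------------
 Foreign Scan on public.foreign_tbl  (cost=... rows=... width=...)
   Output: id
   Remote SQL: SELECT id FROM public.foreign_tbl WHERE ((id &amp;lt; 100))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;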

&lt;h2&gt;
  
  
  Remote Execution and Result Retrieval:
&lt;/h2&gt;

&lt;p&gt;The executor sends the plain text SQL statement to the remote server for execution and retrieves the resulting data.&lt;/p&gt;

&lt;p&gt;By following these steps, FDW allows PostgreSQL to seamlessly interact with foreign tables on remote servers. The performance of FDW queries depends on factors such as network latency, the efficiency of the foreign server, and the data volume being transferred.&lt;/p&gt;

&lt;p&gt;PostgreSQL's FDW feature enables efficient access to and manipulation of foreign tables located on remote servers. By executing setup commands and leveraging the provided extension, you can seamlessly integrate remote data sources into your PostgreSQL database.&lt;/p&gt;

&lt;p&gt;The performance of FDW queries involves steps such as query tree creation, remote server connection, cost estimation, SQL statement generation, and remote execution. Taking into account factors like network latency and server efficiency, FDW empowers PostgreSQL to process queries on foreign tables effectively.&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Foreign Data Wrappers (FDW): Accessing Remote Data in PostgreSQL</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Wed, 12 Jul 2023 16:55:59 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/foreign-data-wrappers-fdw-accessing-remote-data-in-postgresql-4fj8</link>
      <guid>https://dev.to/ahmadtash7/foreign-data-wrappers-fdw-accessing-remote-data-in-postgresql-4fj8</guid>
      <description>&lt;p&gt;In 2003, the SQL Management of External Data (SQL/MED) specification was introduced as part of the SQL standard. PostgreSQL has been actively developing its Foreign Data Wrappers (FDW) feature since version 9.1 to implement a portion of SQL/MED.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Concept of FDW:
&lt;/h2&gt;

&lt;p&gt;In SQL/MED, a table residing on a remote server is referred to as a foreign table. PostgreSQL's FDW leverages SQL/MED to manage these foreign tables, making them similar to local tables in terms of access and manipulation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing Foreign Tables
&lt;/h2&gt;

&lt;p&gt;To access foreign tables on remote servers, you need to install the required extension and configure the appropriate settings. Let's consider an example with two remote servers, "postgresql" and "mysql," each hosting a foreign table, "foreign_pg_tbl" and "foreign_my_tbl," respectively. By issuing SELECT queries as shown below, you can access these foreign tables from the local server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Accessing the foreign_pg_tbl on the remote postgresql server
SELECT count(*) FROM foreign_pg_tbl;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Accessing the foreign_my_tbl on the remote mysql server
SELECT count(*) FROM foreign_my_tbl;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Joining Foreign Tables:
&lt;/h2&gt;

&lt;p&gt;FDW enables executing join operations with foreign tables stored on different servers, just like local tables. For instance, you can perform a join operation between "foreign_pg_tbl" and "foreign_my_tbl" using the following query:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT count(*) FROM foreign_pg_tbl AS p, foreign_my_tbl AS m WHERE p.id = m.id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  FDW Extensions:
&lt;/h2&gt;

&lt;p&gt;Various FDW extensions have been developed, and many of them are listed in the Postgres wiki. However, it's worth noting that most of these extensions are not actively maintained. The "postgres_fdw" extension stands out as the FDW extension officially developed and maintained by the PostgreSQL Global Development Group. It allows access to remote PostgreSQL servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;PostgreSQL's Foreign Data Wrappers provide a powerful mechanism for accessing and managing remote data through foreign tables. With FDW, you can seamlessly integrate remote data sources into your local PostgreSQL database, enabling efficient querying and joining of foreign tables. While various FDW extensions exist, it's advisable to use the officially supported "postgres_fdw" extension for accessing remote PostgreSQL servers.&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Estimating Sequential Scan Cost in PostgreSQL</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Thu, 29 Jun 2023 11:30:09 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/estimating-sequential-scan-cost-in-postgresql-545k</link>
      <guid>https://dev.to/ahmadtash7/estimating-sequential-scan-cost-in-postgresql-545k</guid>
      <description>&lt;p&gt;In PostgreSQL, query optimization plays a crucial role in achieving efficient query performance. Cost estimation is a key component of this optimization process, allowing the database to estimate the cost of various operations. This article focuses on the estimation of the sequential scan cost, one of the fundamental operations in PostgreSQL, and explores how it is calculated for a specific query.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Sequential Scan Cost:
&lt;/h2&gt;

&lt;p&gt;The cost of a sequential scan operation is estimated using the cost_seqscan() function. To illustrate the estimation process, let's consider the following query:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testdb=# SELECT * FROM tbl WHERE id &amp;lt; 8000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Estimating Run Cost:
&lt;/h2&gt;

&lt;p&gt;The run cost of a sequential scan is composed of two parts: the CPU run cost and the disk run cost. The CPU run cost is calculated based on the number of tuples (Ntuple) and the CPU costs (cpu_tuple_cost and cpu_operator_cost) set in the postgresql.conf file. Similarly, the disk run cost is determined by the number of pages (Npage) and the sequential page cost (seq_page_cost). The default values of these parameters are cpu_tuple_cost = 0.01, cpu_operator_cost = 0.0025, and seq_page_cost = 1.0.&lt;/p&gt;

&lt;p&gt;To estimate the run cost, we can use the following equation:&lt;/p&gt;

&lt;p&gt;'run cost' = ('cpu run cost' + 'disk run cost') = ((cpu_tuple_cost + cpu_operator_cost) × Ntuple) + (seq_page_cost × Npage)&lt;/p&gt;

&lt;p&gt;To determine Ntuple and Npage, we can execute the following query:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testdb=# SELECT relpages, reltuples FROM pg_class WHERE relname = 'tbl';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In our example, Ntuple is 10,000 and Npage is 45.&lt;/p&gt;

&lt;p&gt;Plugging these values into the equation, we get:&lt;/p&gt;

&lt;p&gt;'run cost' = ((0.01 + 0.0025) × 10,000) + (1.0 × 45) = 170.0&lt;/p&gt;

&lt;h2&gt;
  
  
  Calculating Total Cost:
&lt;/h2&gt;

&lt;p&gt;The total cost is the sum of the start-up cost and the run cost. In the case of a sequential scan, the start-up cost is 0.0. Therefore, the total cost is equal to the run cost. In our example, the total cost is 170.0.&lt;/p&gt;

&lt;h2&gt;
  
  
  Confirming the Cost Estimation:
&lt;/h2&gt;

&lt;p&gt;To validate the cost estimation, we can use the EXPLAIN command on our query:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testdb=# EXPLAIN SELECT * FROM tbl WHERE id &amp;lt; 8000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output will display the query plan with the associated costs. In our case, we can observe that the start-up cost is 0.00 and the total cost is 170.00. Additionally, it is estimated that 8,000 rows (tuples) will be selected by scanning all rows.&lt;/p&gt;
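
&lt;p&gt;Putting the numbers above together, the EXPLAIN output looks roughly like this (the width value shown is an assumption for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testdb=# EXPLAIN SELECT * FROM tbl WHERE id &amp;lt; 8000;
                       QUERY PLAN
--------------------------------------------------------
 Seq Scan on tbl  (cost=0.00..170.00 rows=8000 width=8)
   Filter: (id &amp;lt; 8000)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;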

&lt;p&gt;Understanding how PostgreSQL estimates the cost of a sequential scan is essential for query optimisation. By analysing the CPU and disk costs derived from the number of tuples and pages, PostgreSQL determines the most efficient execution plan, and by comparing these estimates against the query plan, developers and database administrators can make informed decisions to optimise performance.&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>postgres</category>
    </item>
    <item>
      <title>The Rewriter: Enhancing Query Trees Through Rule-Based Transformations</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Wed, 14 Jun 2023 09:54:47 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/the-rewriter-enhancing-query-trees-through-rule-based-transformations-1d15</link>
      <guid>https://dev.to/ahmadtash7/the-rewriter-enhancing-query-trees-through-rule-based-transformations-1d15</guid>
      <description>&lt;p&gt;The rewriter is an integral component of the system that is responsible for implementing the rule system. It operates by examining a query tree and applying transformations based on the rules stored in the pg_rules system catalog, whenever deemed necessary. The rule system itself is a complex system with its own intricacies and functionalities. However, detailed descriptions of the rule system and the rewriter are intentionally omitted in this chapter to maintain a concise length.&lt;/p&gt;

&lt;p&gt;The rewriter plays a crucial role in the query processing pipeline. Its primary function is to analyze the query tree and evaluate whether any rules from the rule system need to be applied. These rules serve as guidelines for modifying the query tree, allowing for optimizations, reordering, or other necessary alterations to improve the overall performance of the query.&lt;/p&gt;

&lt;p&gt;By leveraging the rule system, the rewriter can effectively transform the query tree to enhance efficiency, reduce redundant computations, and ensure optimal resource utilization. The specific transformations performed by the rewriter depend on the nature of the rules defined in the system catalog.&lt;/p&gt;

&lt;p&gt;Both the rule system and the rewriter warrant in-depth exploration in their own right; they are summarised only briefly here to keep the focus on the core query-processing pipeline.&lt;/p&gt;

&lt;p&gt;Apache GitHub: &lt;a href="https://github.com/apache/age"&gt;https://github.com/apache/age&lt;/a&gt;&lt;br&gt;
Apache Website: &lt;a href="https://www.apache.org"&gt;https://www.apache.org&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>apache</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Semantic Analysis and Query Tree Generation by the Analyzer/Analyser</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Wed, 14 Jun 2023 09:53:02 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/semantic-analysis-and-query-tree-generation-by-the-analyzeranalyser-3fgm</link>
      <guid>https://dev.to/ahmadtash7/semantic-analysis-and-query-tree-generation-by-the-analyzeranalyser-3fgm</guid>
      <description>&lt;p&gt;After the parser generates a parse tree, the analyzer/analyser performs a semantic analysis on it and generates a query tree. The root of the query tree is represented by the Query structure defined in parsenodes.h, which contains essential metadata about the command type (SELECT, INSERT, or others) and various leaves. Each leaf in the query tree forms a list or a tree and holds specific data related to individual clauses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GNip57-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x0y50jbv1gapaxi2pvk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GNip57-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x0y50jbv1gapaxi2pvk0.png" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the generated query tree, the targetlist represents the columns that will be included in the query result. For the given example, the targetlist consists of two columns: 'id' and 'data'. In cases where the input query uses an asterisk ('*'), the analyzer/analyser explicitly replaces it with all of the table's columns.&lt;/p&gt;

&lt;p&gt;The range table, another component of the query tree, contains information about the relations used in the query. For the provided query, the range table would hold details about the 'tbl_a' table, including its object identifier (OID) and name.&lt;/p&gt;

&lt;p&gt;The join tree is responsible for storing the FROM clause and the WHERE clauses, capturing the relationships and conditions between tables.&lt;/p&gt;

&lt;p&gt;Lastly, the sort clause is a list of SortGroupClause, which defines the sorting behavior specified in the ORDER BY clause.&lt;/p&gt;

&lt;p&gt;Through its semantic analysis, the analyzer/analyser transforms the parse tree into a query tree, enabling further processing and execution of the SQL query.&lt;/p&gt;
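&lt;p&gt;To make the structure concrete, here is a rough Python model of the query-tree components described above (a conceptual sketch only; the real definitions are C structs in parsenodes.h, and the field names here are simplified stand-ins):&lt;/p&gt;

```python
# Conceptual sketch (not PostgreSQL source): a simplified model of the
# query tree, with fields loosely mirroring the Query struct in parsenodes.h.
from dataclasses import dataclass, field

@dataclass
class RangeTblEntry:
    relname: str    # relation name, e.g. 'tbl_a'
    oid: int        # object identifier of the relation

@dataclass
class Query:
    command_type: str                                # 'SELECT', 'INSERT', ...
    target_list: list = field(default_factory=list)  # result columns
    range_table: list = field(default_factory=list)  # relations referenced
    join_tree: dict = field(default_factory=dict)    # FROM and WHERE clauses
    sort_clause: list = field(default_factory=list)  # ORDER BY keys

# Sketch of the query tree for:
#   SELECT id, data FROM tbl_a WHERE id is below 300 ORDER BY data;
q = Query(
    command_type="SELECT",
    target_list=["id", "data"],                   # a '*' would be expanded here
    range_table=[RangeTblEntry("tbl_a", 16384)],  # the OID value is made up
    join_tree={"from": ["tbl_a"], "where": "id below 300"},
    sort_clause=["data"],
)
print(q.command_type, q.target_list)
```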

&lt;p&gt;Apache GitHub: &lt;a href="https://github.com/apache/age"&gt;https://github.com/apache/age&lt;/a&gt;&lt;br&gt;
Apache Website: &lt;a href="https://www.apache.org"&gt;https://www.apache.org&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>apache</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Parsing SQL Queries and the Role of the Parser</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Wed, 14 Jun 2023 09:49:12 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/parsing-sql-queries-and-the-role-of-the-parser-5bon</link>
      <guid>https://dev.to/ahmadtash7/parsing-sql-queries-and-the-role-of-the-parser-5bon</guid>
      <description>&lt;p&gt;The process of parsing SQL queries involves converting plain text statements into a parse tree, which can be further processed by other subsystems. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TWwZYgP0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pgxw7tppow17736ip3lk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TWwZYgP0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pgxw7tppow17736ip3lk.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example query, such as the one shown below, can be analyzed in this manner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testdb=# SELECT id, data FROM tbl_a WHERE id &amp;lt; 300 ORDER BY data;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The root node of the parse tree is the SelectStmt structure defined in parsenodes.h. Each element of the SELECT query corresponds to a specific element in the parse tree, identified by the same numbering. For instance, (1) represents an item in the target list, specifically the 'id' column of the table, while (4) indicates a WHERE clause.&lt;/p&gt;

&lt;p&gt;It is important to note that the parser solely focuses on checking the syntax of the input query. If there is a syntax error, an error will be returned. However, semantic checks, such as verifying the existence of a table, are not performed by the parser. These checks are carried out by the analyzer/analyser, which goes beyond syntax and evaluates the meaning and validity of the query.&lt;/p&gt;

&lt;p&gt;Apache GitHub: &lt;a href="https://github.com/apache/age"&gt;https://github.com/apache/age&lt;/a&gt;&lt;br&gt;
Apache Website: &lt;a href="https://www.apache.org"&gt;https://www.apache.org&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>apache</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Understanding Process Architecture in PostgreSQL</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Tue, 09 May 2023 17:20:43 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/understanding-process-architecture-in-postgresql-5e0i</link>
      <guid>https://dev.to/ahmadtash7/understanding-process-architecture-in-postgresql-5e0i</guid>
      <description>&lt;p&gt;PostgreSQL is a relational database management system that uses a client/server type architecture with a multi-process approach running on a single host. The PostgreSQL server is a collection of multiple processes that cooperatively manage one database cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Postgres Server Process
&lt;/h2&gt;

&lt;p&gt;A postgres server process, also known as the postmaster in earlier versions, is the parent of all other processes in a PostgreSQL server. It is started by executing the pg_ctl utility with the start option. On startup, it allocates a shared memory area, launches various background processes, starts replication-associated processes and background worker processes if necessary, and then waits for connection requests from clients. Whenever it receives a connection request, it starts a backend process to serve that client.&lt;/p&gt;
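&lt;p&gt;For reference, the startup sequence above is typically driven from the command line; an illustrative invocation, assuming a database cluster at /usr/local/pgsql/data:&lt;/p&gt;

```shell
# Start the postgres server process: it allocates shared memory, launches
# the background processes, and begins waiting for client connections.
pg_ctl -D /usr/local/pgsql/data start

# Shut the whole process tree down again.
pg_ctl -D /usr/local/pgsql/data stop
```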

&lt;h2&gt;
  
  
  Backend Processes
&lt;/h2&gt;

&lt;p&gt;A backend process, also called postgres, is started by the postgres server process and handles all queries issued by a connected client. It communicates with the client through a single TCP connection and terminates when the client gets disconnected. PostgreSQL allows multiple clients to connect simultaneously with the configuration parameter max_connections controlling the maximum number of clients.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background Processes
&lt;/h2&gt;

&lt;p&gt;Various background processes each perform a specific database-management task (for example, the VACUUM and CHECKPOINT processes). Replication-associated processes handle streaming replication, while background worker processes can run any processing implemented by users. Their individual functions depend on specific features and PostgreSQL internals, so they are not described in detail here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; The PostgreSQL server listens to one network port, with the default port being 5432.&lt;/p&gt;

&lt;h2&gt;
  
  
  Helpful Sources:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/apache/age"&gt;https://github.com/apache/age&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.interdb.jp/pg/pgsql02.html"&gt;https://www.interdb.jp/pg/pgsql02.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>postgres</category>
    </item>
    <item>
      <title>Methods of Writing and Reading Tuples</title>
      <dc:creator>Ahmad Tashfeen</dc:creator>
      <pubDate>Tue, 09 May 2023 17:16:50 +0000</pubDate>
      <link>https://dev.to/ahmadtash7/methods-of-writing-and-reading-tuples-3ohi</link>
      <guid>https://dev.to/ahmadtash7/methods-of-writing-and-reading-tuples-3ohi</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to Heap Table File Layout
&lt;/h2&gt;

&lt;p&gt;PostgreSQL stores data in heap tables, which are divided into pages of fixed length. The pages are numbered sequentially, and each page contains heap tuples, line pointers, and header data. The header data holds information about the page, such as the LSN of the last XLOG record that modified it, the checksum value, pd_lower, pd_upper, and pd_special. A tuple identifier (TID) is used to identify a tuple within the table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing Heap Tuples
&lt;/h2&gt;

&lt;p&gt;When a new tuple is added to a page, it is written just below the current pd_upper, and pd_upper is moved down to point to the new tuple. A new line pointer referencing the tuple is appended to the line pointer array, and pd_lower is advanced to the new end of that array. This process is repeated for each tuple added to the page.&lt;/p&gt;
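&lt;p&gt;A toy simulation (not PostgreSQL code) makes the movement of pd_lower and pd_upper easy to see; the header and line-pointer sizes below are illustrative values, not the exact on-disk layout:&lt;/p&gt;

```python
# Toy model of a fixed-length heap page: tuples grow downward from the
# end of the page, line pointers grow upward from the header.
PAGE_SIZE = 8192
HEADER_SIZE = 24       # illustrative page-header size
LINE_POINTER_SIZE = 4  # illustrative line-pointer size

class Page:
    def __init__(self):
        self.pd_lower = HEADER_SIZE   # end of the line pointer array
        self.pd_upper = PAGE_SIZE     # start of the tuple area
        self.line_pointers = []       # offsets of stored tuples

    def add_tuple(self, tuple_len):
        # The tuple is written at the end of free space, moving pd_upper down...
        self.pd_upper -= tuple_len
        # ...and a new line pointer referencing it is appended,
        # moving pd_lower up past the grown array.
        self.line_pointers.append(self.pd_upper)
        self.pd_lower += LINE_POINTER_SIZE

page = Page()
page.add_tuple(100)
page.add_tuple(100)
print(page.pd_lower, page.pd_upper)  # 32 7992
```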

&lt;h2&gt;
  
  
  Reading Heap Tuples
&lt;/h2&gt;

&lt;p&gt;There are two typical access methods for reading heap tuples: sequential scan and B-tree index scan. In a sequential scan, all tuples in all pages are read by scanning every line pointer in each page. In a B-tree index scan, an index file is used to find the TID of the desired heap tuple, which is then used to read that tuple directly, without scanning unnecessary pages.&lt;/p&gt;
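&lt;p&gt;The contrast between the two access methods can be sketched over a toy page list (conceptual Python, not PostgreSQL code; a TID is modelled here as a simple (page number, item number) pair):&lt;/p&gt;

```python
# Two toy pages, each holding a couple of tuples.
pages = [
    ["tuple-A", "tuple-B"],   # page 0
    ["tuple-C", "tuple-D"],   # page 1
]

def seq_scan(pages, predicate):
    """Sequential scan: visit every tuple in every page."""
    return [t for page in pages for t in page if predicate(t)]

def tid_fetch(pages, tid):
    """Index-style access: fetch one tuple directly via its TID."""
    page_no, item_no = tid
    return pages[page_no][item_no]

print(seq_scan(pages, lambda t: t.endswith("C")))  # ['tuple-C']
print(tid_fetch(pages, (1, 0)))                    # tuple-C
```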

&lt;h2&gt;
  
  
  Helpful Sources:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/apache/age"&gt;https://github.com/apache/age&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.interdb.jp/pg/pgsql01.html"&gt;https://www.interdb.jp/pg/pgsql01.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>postgres</category>
    </item>
  </channel>
</rss>
