<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shaiby014</title>
    <description>The latest articles on DEV Community by Shaiby014 (@shoaib014).</description>
    <link>https://dev.to/shoaib014</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1098331%2F8f3a27bb-571a-40b2-a3cd-7b0a9642aa05.png</url>
      <title>DEV Community: Shaiby014</title>
      <link>https://dev.to/shoaib014</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shoaib014"/>
    <language>en</language>
    <item>
      <title>Vacuum Processing in PostgreSQL: Chapter 6</title>
      <dc:creator>Shaiby014</dc:creator>
      <pubDate>Tue, 27 Jun 2023 18:53:31 +0000</pubDate>
      <link>https://dev.to/shoaib014/vacuum-processing-in-postgresql-chapter-6-9gl</link>
      <guid>https://dev.to/shoaib014/vacuum-processing-in-postgresql-chapter-6-9gl</guid>
      <description>&lt;p&gt;As we move towards the conclusion of our blog covering 6 chapters from the Internals of PostgreSQL, here is the last blog covering the 6th chapter of the book covering vacuum processing in PostgreSQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The performance and integrity of PostgreSQL databases depend crucially on vacuum processing. By eliminating dead tuples and freezing transaction IDs, vacuum processing ensures dependable operation and effective data retrieval. In this article, we'll examine the subtleties of vacuum processing in PostgreSQL, including its two modes, the Visibility Map, freeze processing, handling of clog files, the autovacuum daemon, and the requirement for Full VACUUM.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Understanding Vacuum Processing:
&lt;/h2&gt;

&lt;p&gt;In PostgreSQL, vacuum processing is a maintenance procedure that removes dead tuples and freezes transaction IDs. Dead tuples are table rows that are no longer visible to any transaction, and freezing transaction IDs prevents transaction ID wraparound failures. Vacuum processing can be carried out in two modes: Concurrent VACUUM and Full VACUUM.&lt;/p&gt;
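
&lt;p&gt;As a quick sketch, the two modes correspond to the following SQL commands (the table name &lt;code&gt;accounts&lt;/code&gt; is a hypothetical example):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Concurrent (plain) VACUUM: reclaims dead tuples without blocking readers/writers
VACUUM VERBOSE accounts;

-- Full VACUUM: rewrites the table into a new file, shrinking it on disk,
-- but holds an ACCESS EXCLUSIVE lock for the duration
VACUUM FULL accounts;
&lt;/code&gt;&lt;/pre&gt;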

&lt;h2&gt;
  
  
  2. Concurrent Vacuum:
&lt;/h2&gt;

&lt;p&gt;Concurrent VACUUM (also called plain VACUUM) removes dead tuples page by page while allowing other transactions to access the table. It also removes index tuples that point to dead tuples and defragments the live tuples within each page. PostgreSQL 8.4 introduced the Visibility Map (VM) to streamline this procedure by tracking page visibility and avoiding pointless scans.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. The Visibility Map:
&lt;/h2&gt;

&lt;p&gt;The Visibility Map (VM), added in PostgreSQL 8.4, is a per-table map that complements the Free Space Map (FSM) and boosts the effectiveness of vacuum processing. The VM records, for each page, whether the page contains only tuples visible to all transactions, i.e., whether it is free of dead tuples. Using the VM, the vacuum process can skip pages without dead tuples, which speeds up processing and uses fewer resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Freeze Processing:
&lt;/h2&gt;

&lt;p&gt;An essential component of vacuum processing is freeze processing, which rewrites old transaction IDs (txids) in tuples with a special frozen txid so they remain valid after the txid counter wraps around. PostgreSQL performs freezing in two modes: lazy and eager. In lazy mode, only pages that the VM reports as containing dead tuples are scanned. In eager mode, all pages are scanned and a thorough freeze operation is completed. PostgreSQL 9.6 significantly optimized eager mode by skipping pages whose tuples are already all frozen.&lt;/p&gt;
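
&lt;p&gt;A freeze can be forced manually, and wraparound pressure can be monitored from the catalogs (the table name &lt;code&gt;accounts&lt;/code&gt; is again hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Aggressively freeze all tuples, as if vacuum_freeze_min_age were 0
VACUUM (FREEZE, VERBOSE) accounts;

-- How close is each table to requiring a wraparound freeze?
SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC;
&lt;/code&gt;&lt;/pre&gt;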

&lt;h2&gt;
  
  
  5. Removing Unnecessary Clog Files:
&lt;/h2&gt;

&lt;p&gt;Transaction states are kept in PostgreSQL's clog (commit log). Whenever the minimum frozen transaction ID advances, PostgreSQL attempts to delete clog files that are no longer needed: by locating the clog file that contains the minimum frozen transaction ID, older clog files and their pages can be safely erased while the transaction information required for data integrity is preserved. This frees up disk space.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. The Autovacuum Daemon:
&lt;/h2&gt;

&lt;p&gt;The autovacuum daemon automates the vacuum procedure in PostgreSQL. At regular intervals, the daemon launches autovacuum_worker processes, which vacuum different tables concurrently. This gradual, incremental removal of dead tuples minimizes the impact on ongoing database operations and streamlines database administration.&lt;/p&gt;
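
&lt;p&gt;The daemon's behaviour can be inspected from SQL; the following queries show its configuration parameters and per-table activity:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Show the autovacuum-related configuration parameters
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'autovacuum%';

-- When was each table last autovacuumed, and how many dead tuples remain?
SELECT relname, last_autovacuum, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
&lt;/code&gt;&lt;/pre&gt;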

&lt;h2&gt;
  
  
  7. Full VACUUM:
&lt;/h2&gt;

&lt;p&gt;Despite being crucial, concurrent VACUUM cannot shrink a table: it reclaims space within pages, but the table's physical size does not change, which results in wasted disk space and decreased performance. Full VACUUM resolves this by rewriting the table, physically compacting it, and reclaiming the unneeded space; related maintenance such as table clustering or reorganization can achieve a similar effect.&lt;/p&gt;
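
&lt;p&gt;One way to see the effect is to compare a table's on-disk size before and after (table name hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT pg_size_pretty(pg_total_relation_size('accounts'));  -- before
VACUUM FULL accounts;                                       -- rewrite the table
SELECT pg_size_pretty(pg_total_relation_size('accounts'));  -- after: smaller if the table was bloated
&lt;/code&gt;&lt;/pre&gt;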

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Vacuum processing is an indispensable part of PostgreSQL database maintenance. By understanding the different modes, utilizing the Visibility Map, optimizing freeze processing, managing clog files, leveraging the autovacuum daemon, and considering Full VACUUM, database administrators can ensure efficient performance and data integrity. Implementing effective vacuum processing strategies is essential for maintaining the reliability and optimal functioning of PostgreSQL databases.&lt;/p&gt;




&lt;p&gt;For a more extensive study, use this link: &lt;a href="https://www.interdb.jp/pg/pgsql06.html"&gt;https://www.interdb.jp/pg/pgsql06.html&lt;/a&gt;&lt;br&gt;
Similar articles to this: &lt;a href="https://dev.to/hannan2910/vacuum-processing-11m7"&gt;https://dev.to/hannan2910/vacuum-processing-11m7&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>sql</category>
      <category>database</category>
    </item>
    <item>
      <title>Foreign Data Wrapper: Internals of PostgreSQL Chapter 4</title>
      <dc:creator>Shaiby014</dc:creator>
      <pubDate>Fri, 23 Jun 2023 13:10:39 +0000</pubDate>
      <link>https://dev.to/shoaib014/foreign-data-wrapper-internals-of-postgresql-chapter-4-5bgo</link>
      <guid>https://dev.to/shoaib014/foreign-data-wrapper-internals-of-postgresql-chapter-4-5bgo</guid>
      <description>&lt;p&gt;Foreign Data Wrappers (FDWs) enable PostgreSQL to access data from external databases, offering various benefits such as integrating data from different sources or offloading processing to a separate database. FDWs are implemented as shared libraries that PostgreSQL loads, providing a standardized interface for accessing data from diverse databases. This allows PostgreSQL to treat data from different sources uniformly.&lt;/p&gt;

&lt;p&gt;One widely used FDW is postgres_fdw, developed and maintained by the PostgreSQL Global Development Group. It empowers PostgreSQL to access data from remote PostgreSQL servers. When creating a postgres_fdw foreign table, PostgreSQL generates a local copy of the remote table's schema, enabling seamless interaction as if the foreign table were local. Queries involving foreign tables are processed similarly to local tables, but PostgreSQL sends the query to the remote server for execution and subsequently processes the returned results.&lt;/p&gt;

&lt;p&gt;PostgreSQL leverages the capabilities of postgres_fdw to optimize queries that reference foreign tables. For instance, it can push down predicates to the remote server, so that filtering happens remotely and the quantity of data transferred between the servers is minimized.&lt;/p&gt;

&lt;p&gt;postgres_fdw provides several features to enhance query performance when accessing data from remote PostgreSQL servers. It supports predicate pushdown, allowing PostgreSQL to delegate filtering operations to the remote server, reducing data transfer requirements. With its comprehensive maintenance and documentation, postgres_fdw serves as a robust tool for applications seeking to access data from remote PostgreSQL servers. It offers a reliable solution for seamlessly integrating external data sources into PostgreSQL-based systems.&lt;/p&gt;
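
&lt;p&gt;As a minimal sketch, a postgres_fdw setup looks like the following (the server name, connection options, credentials, and table definition are all hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE EXTENSION postgres_fdw;

-- Register the remote server and map the local role to a remote role
CREATE SERVER remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', port '5432', dbname 'sales');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_pg
    OPTIONS (user 'app', password 'secret');

-- Define a foreign table mirroring the remote schema
CREATE FOREIGN TABLE orders (
    id    integer,
    total numeric
) SERVER remote_pg OPTIONS (schema_name 'public', table_name 'orders');

-- Queried like a local table; eligible predicates are pushed down
SELECT count(*) FROM orders WHERE total > 100;
&lt;/code&gt;&lt;/pre&gt;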

&lt;h2&gt;
  
  
  Architecture:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jmayH8_0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdsiyltfvojgifxtzguf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jmayH8_0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdsiyltfvojgifxtzguf.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture of FDWs in PostgreSQL can be described as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Query Tree Construction:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;During query processing, the PostgreSQL analyzer constructs 
the query tree for the input SQL statement.&lt;/li&gt;
&lt;li&gt;The definitions of foreign tables, stored in the 
pg_catalog.pg_class and pg_catalog.pg_foreign_table 
catalogs, are incorporated into the query tree.&lt;/li&gt;
&lt;li&gt;These definitions can be created using commands such as 
&lt;strong&gt;CREATE FOREIGN TABLE or IMPORT FOREIGN SCHEMA&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Connection Establishment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The planner (executor) establishes a connection to the 
remote server using a specific library designed for 
connecting to the corresponding remote database server.&lt;/li&gt;
&lt;li&gt;For example, the postgres_fdw extension utilizes the libpq 
library when connecting to a remote PostgreSQL server, while 
the mysql_fdw extension relies on the libmysqlclient library 
when connecting to a MySQL server.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cost Estimation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the use_remote_estimate option is enabled (disabled by 
default), the planner executes EXPLAIN commands to estimate 
the cost associated with each potential plan path.&lt;/li&gt;
&lt;li&gt;This estimation helps evaluate and select the most optimal 
plan for query execution.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deparsing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The PostgreSQL planner converts the plan tree into a plain 
text SQL statement through deparsing.&lt;/li&gt;
&lt;li&gt;Deparsing transforms the internal representation of the 
query plan into a human-readable SQL statement that reflects 
the intended operations and structure of the query.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Execution and Result Handling:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the planning phase, the executor sends the plain text 
SQL statement to the remote server for execution and awaits 
the result.&lt;/li&gt;
&lt;li&gt;Upon receiving the data from the remote server, the executor 
may perform additional processing based on the query type.&lt;/li&gt;
&lt;li&gt;For example, in a multi-table query, join processing is 
carried out to combine the received data with other relevant 
tables.&lt;/li&gt;
&lt;li&gt;The specific processing steps are determined by the query 
requirements and data relationships.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By following this architecture, PostgreSQL's FDWs facilitate the seamless integration and processing of data from remote sources, enhancing the capabilities of the database system.&lt;/p&gt;

&lt;p&gt;Now let's look at some of the benefits of using FDWs in PostgreSQL:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data integration&lt;/strong&gt;: FDWs can be used to integrate data from different databases into a single PostgreSQL database. This can be useful for applications that need to access data from multiple sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offloading processing&lt;/strong&gt;: FDWs can be used to offload processing to a different database. This can be useful for applications that need to perform computationally expensive operations on large datasets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: FDWs can help to improve the performance of queries that reference data from remote databases. This is because FDWs can push down predicates to the remote database, which can help to reduce the amount of data that needs to be transferred between the two databases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are a few things to consider if you're thinking about using FDWs in PostgreSQL:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;: FDW performance varies depending on the specific FDW and the remote database's settings. It is crucial to assess performance in your environment before deploying FDWs to production.&lt;br&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Improperly configured FDWs can introduce security vulnerabilities. It is crucial to analyze the security implications before using FDWs in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In general, FDWs can be an effective tool for PostgreSQL users to retrieve data from distant databases. However, before implementing FDWs in production, it's crucial to thoroughly weigh their advantages and disadvantages.&lt;/p&gt;

&lt;p&gt;In this article, we have explored the fundamental concepts of FDW and its role within PostgreSQL. We have discussed how FDW enables the execution of join operations across multiple servers, enabling seamless data integration. By incorporating the FDW extension, PostgreSQL expands its capabilities, allowing users to effectively manage and analyze remote data alongside local data.&lt;/p&gt;

</description>
      <category>apache</category>
      <category>apacheage</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Introduction to Database Clusters, Databases, and Tables for Effective Data Management: PostgreSQL</title>
      <dc:creator>Shaiby014</dc:creator>
      <pubDate>Tue, 20 Jun 2023 20:05:31 +0000</pubDate>
      <link>https://dev.to/shoaib014/introduction-to-database-clusters-databases-and-tables-for-effective-data-management-postgresql-20gi</link>
      <guid>https://dev.to/shoaib014/introduction-to-database-clusters-databases-and-tables-for-effective-data-management-postgresql-20gi</guid>
      <description>&lt;p&gt;The main objective of this blog is to get readers with some brief insights of the concept of Database Clusters and provide a comprehensive overview of PostgreSQL, which serves as an illustrative example of a database cluster. The article covers the following topics in detail:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The logical structure of a database cluster.&lt;/li&gt;
&lt;li&gt;The physical structure of a database cluster.&lt;/li&gt;
&lt;li&gt;The internal structure of a heap table file.&lt;/li&gt;
&lt;li&gt;The procedures involved in writing and reading data into a table.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By exploring these aspects, readers will gain a fundamental understanding of how database clusters are organized and how data is managed within them, specifically focusing on PostgreSQL as a practical implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  1-Database Logical Structure Overview:
&lt;/h2&gt;

&lt;p&gt;First of all, PostgreSQL is a server that manages a database cluster (a collection of databases).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ikxZdrwa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7lyk8flpggayib3mtuf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ikxZdrwa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7lyk8flpggayib3mtuf.png" alt="Logical Overview of Db" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A database is a compilation of various database objects that serve the purpose of storing and referencing data. These objects can take the form of different data structures. One commonly used example is a table (often referred to as a heap), but there are also other types of objects such as indexes, sequences, views, functions, and more. In PostgreSQL, the database itself is considered a database object and is logically distinct from other objects. To manage these objects internally, PostgreSQL utilizes Object Identifiers (OIDs), which are 4-byte unsigned integers assigned to each object. The relationships between database objects and their corresponding OIDs are stored in designated system catalogues.&lt;/p&gt;
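
&lt;p&gt;The OID of any object can be looked up in the system catalogs; for example (the table name is hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- OID of a database
SELECT oid, datname FROM pg_database WHERE datname = 'postgres';

-- OID of a table
SELECT oid, relname FROM pg_class WHERE relname = 'accounts';
&lt;/code&gt;&lt;/pre&gt;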

&lt;h2&gt;
  
  
  2-Lets Talk about the Physical Structure:
&lt;/h2&gt;

&lt;p&gt;Physically, a database cluster is a single directory (the data directory) that contains many files and sub-directories holding everything related to the cluster's databases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database Cluster's layout&lt;/strong&gt;:&lt;br&gt;
 In terms of file and sub-directory descriptions, the following details can be provided:&lt;/p&gt;

&lt;p&gt;Files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PG_VERSION: This file holds the major version number of 
PostgreSQL.&lt;/li&gt;
&lt;li&gt;postgresql.conf: It is a file utilized for configuring 
parameters.&lt;/li&gt;
&lt;li&gt;postgresql.auto.conf: This file is responsible for storing 
configuration parameters that are set in ALTER SYSTEM (version 
9.4 or later).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sub-directories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;base/: This sub-directory encompasses per-database subdirectories.&lt;/li&gt;
&lt;li&gt;global/: Within this sub-directory, cluster-wide tables like 
pg_database and pg_control are stored.&lt;/li&gt;
&lt;li&gt;pg_stat/: This sub-directory is dedicated to permanent files 
for the statistics subsystem.&lt;/li&gt;
&lt;li&gt;pg_tblspc/: Here, symbolic links to tablespaces can be found.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Db's Layout:&lt;/strong&gt;&lt;br&gt;
Under the base sub-directory, each database in PostgreSQL is represented as a separate sub-directory. The names of these database directories correspond to their respective Object Identifiers (OIDs).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Information regarding Files Associated with Tables and Indexes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here are key points regarding the files associated with tables and indexes,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tables and indexes in PostgreSQL are internally managed by individual Object Identifiers (OIDs).&lt;/li&gt;
&lt;li&gt;The data files of tables and indexes are managed by a variable called "relfilenode".&lt;/li&gt;
&lt;li&gt;Certain commands like TRUNCATE, REINDEX, and CLUSTER are used to change the relfilenode values of tables and indexes.&lt;/li&gt;
&lt;li&gt;When the file size of a table or index exceeds 1GB, PostgreSQL creates a new file with the name relfilenode.1. Subsequent files are created with names like relfilenode.2 when the previous file is full.&lt;/li&gt;
&lt;li&gt;Each table has two associated files: one suffixed with '_fsm' for the free space map and another suffixed with '_vm' for the visibility map.&lt;/li&gt;
&lt;li&gt;Free space maps store information about the free space capacity of each page in the table or index, while visibility maps store visibility information.&lt;/li&gt;
&lt;li&gt;Indexes only have individual free space maps and do not have a visibility map.&lt;/li&gt;
&lt;li&gt;These maps are referred to internally as forks of each relation.&lt;/li&gt;
&lt;li&gt;The free space map is the first fork of the table/index data file (fork number 1), and the visibility map is the second fork of the table data file (fork number 2).&lt;/li&gt;
&lt;li&gt;The fork number of the data file itself is 0.&lt;/li&gt;
&lt;/ul&gt;
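
&lt;p&gt;These file locations can be inspected from SQL (table name hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- relfilenode of a table
SELECT relfilenode FROM pg_class WHERE relname = 'accounts';

-- Path of the table's data file, relative to the data directory,
-- e.g. base/16384/16385
SELECT pg_relation_filepath('accounts');
&lt;/code&gt;&lt;/pre&gt;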

&lt;p&gt;&lt;strong&gt;Tablespaces:&lt;/strong&gt;&lt;br&gt;
In PostgreSQL, a tablespace is an extra storage area located outside the base directory. When creating a tablespace using the CREATE TABLESPACE statement, a version-specific subdirectory is established within the specified directory. For instance, a subdirectory named PG_14_202011044 may be created. When a new table is created within a database belonging to the base directory, a new directory is generated under the version-specific subdirectory, bearing the same name as the existing database OID. Subsequently, the new table file is stored within the created directory.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Internal Layout of a Heap Table File:
&lt;/h2&gt;

&lt;p&gt;In PostgreSQL, the data file, including the heap table, index, free space map, and visibility map, is partitioned into fixed-length pages or blocks. By default, each page is 8192 bytes or 8 KB in size. These pages are assigned sequential block numbers starting from 0 within each file. When a file reaches its capacity, PostgreSQL appends a new empty page at the end of the file to expand its size and accommodate additional data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--926lOOSw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6dscc7rzp6kd7k6ply38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--926lOOSw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6dscc7rzp6kd7k6ply38.png" alt="Heap File's Layout" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Heap tuple(s): These are records stored in a table page, 
arranged in order from the bottom of the page.&lt;/li&gt;
&lt;li&gt;Line pointer(s): Each line pointer is a 4-byte element that 
points to a heap tuple within the page.&lt;/li&gt;
&lt;li&gt;Header data: This 24-byte section stores general information 
about the page, providing an overview of its properties.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A table page in PostgreSQL contains heap tuples, line pointers, header data, free space or holes, and Tuple Identifiers (TIDs). Heap tuples are ordered from the bottom of the page, line pointers indicate their locations, and the header data provides general information. Free space refers to empty areas, and TIDs include block and offset numbers. Larger heap tuples use TOAST for storage.&lt;/p&gt;
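
&lt;p&gt;The page layout described above can be examined directly with the pageinspect extension (requires superuser; table name hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE EXTENSION pageinspect;

-- Header data of page 0 (pd_lower, pd_upper, etc.)
SELECT lower, upper, special
FROM page_header(get_raw_page('accounts', 0));

-- Line pointers and heap tuples in page 0
SELECT lp, lp_off, lp_len, t_xmin, t_xmax
FROM heap_page_items(get_raw_page('accounts', 0));
&lt;/code&gt;&lt;/pre&gt;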

&lt;h2&gt;
  
  
  Methods of Writing and Reading Tuples:
&lt;/h2&gt;

&lt;p&gt;In a single-page table holding one heap tuple, inserting a second tuple adds a new line pointer that points to the second tuple. The pd_lower and pd_upper pointers are updated accordingly, along with other necessary header modifications in the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Uk2k7Xk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/en6t2enx8r346j27zshw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Uk2k7Xk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/en6t2enx8r346j27zshw.png" alt="For better clarification of the above paragraph" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn how to read the Heap Tables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sequential Scan&lt;/strong&gt;: All tuples on each page are read sequentially by scanning the line pointers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;B-tree Index Scan&lt;/strong&gt;: The index file contains index tuples with an index key and a TID pointing to the target heap tuple. When an index tuple matching the searched key is found, PostgreSQL utilizes the TID to retrieve the desired heap tuple.&lt;/p&gt;
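
&lt;p&gt;The choice between the two access methods can be observed with EXPLAIN (table and column names hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE TABLE accounts (id integer PRIMARY KEY, balance numeric);

-- Whole-table read: the planner chooses a sequential scan
EXPLAIN SELECT * FROM accounts;

-- Lookup by the indexed key: the planner can use a B-tree index scan
EXPLAIN SELECT * FROM accounts WHERE id = 42;
&lt;/code&gt;&lt;/pre&gt;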




&lt;p&gt;For more detailed information, visit the book's website: &lt;a href="https://www.interdb.jp/pg/"&gt;https://www.interdb.jp/pg/&lt;/a&gt;&lt;br&gt;
Similar articles like this: &lt;a href="https://medium.com/@nimratahir1212/chapter-01-81c49052a73"&gt;https://medium.com/@nimratahir1212/chapter-01-81c49052a73&lt;/a&gt;&lt;/p&gt;

</description>
      <category>postgressql</category>
      <category>clusters</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
