<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Suhaib Salarzai</title>
    <description>The latest articles on DEV Community by Suhaib Salarzai (@salarzaisuhaib).</description>
    <link>https://dev.to/salarzaisuhaib</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1102816%2F69a7feff-3a35-4612-882b-a146ecad88d6.jpg</url>
      <title>DEV Community: Suhaib Salarzai</title>
      <link>https://dev.to/salarzaisuhaib</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/salarzaisuhaib"/>
    <language>en</language>
    <item>
      <title>Understanding Background Processes in PostgreSQL</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Sat, 12 Aug 2023 15:06:24 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/understanding-background-processes-in-postgresql-li5</link>
      <guid>https://dev.to/salarzaisuhaib/understanding-background-processes-in-postgresql-li5</guid>
      <description>&lt;p&gt;When PostgreSQL starts, many background tasks are made by the main server, also known as the "postmaster" process. These processes perform a variety of functions that are crucial to the database's seamless and effective operation. These background operations usually have a distinct function. This article delves into the specifics of these procedures and clarifies why they are crucial.&lt;/p&gt;

&lt;h4&gt;
  
  
  1.  Essential Background Processes:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Checkpointer:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Among the most important processes, the checkpointer guarantees that changes made in memory (the buffer cache) are written (flushed) to disk at regular intervals. By doing this in the background on a schedule, it ensures the system need not write everything out at once during crash recovery.&lt;/p&gt;
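
&lt;p&gt;Checkpoint behaviour is driven by a few configuration settings. As an illustrative sketch (the values are examples, not recommendations):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# postgresql.conf -- checkpoint tuning (example values)
checkpoint_timeout = 5min            # maximum time between automatic checkpoints
max_wal_size = 1GB                   # start a checkpoint once this much WAL accumulates
checkpoint_completion_target = 0.9   # spread checkpoint I/O across the interval
&lt;/code&gt;&lt;/pre&gt;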

&lt;p&gt;&lt;strong&gt;Background Writer:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This process writes dirty buffer pages to disk before they are needed, reducing the number of buffers that the checkpointer or backends must write themselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WAL Writer:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This process flushes the write-ahead log (WAL) buffers to disk. The WAL keeps a record of all modifications made to the database, which makes crash recovery possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stats Collector:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This process gathers information about database activity, such as accesses to tables and indexes and the internal workings of the system. This data helps DBAs (Database Administrators) fine-tune performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Archiver:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If archiving of completed WAL segments is enabled, this process handles that responsibility. Archiving is pivotal for point-in-time recovery and certain replication setups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vacuum:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This process cleans up and reclaims storage. As data is updated or deleted over time, the database accumulates 'dead rows.' The vacuum process removes these, keeping space usage efficient and performance steady. Autovacuum automates this task, running vacuum as required.&lt;/p&gt;
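
&lt;p&gt;Vacuuming can also be triggered manually; a minimal sketch (the table name is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Reclaim dead-row space and refresh planner statistics for one table
VACUUM (VERBOSE, ANALYZE) orders;

-- See when vacuum and autovacuum last ran on it
SELECT relname, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'orders';
&lt;/code&gt;&lt;/pre&gt;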

&lt;p&gt;&lt;strong&gt;WAL Sender and Receiver:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These processes come into action when replication is configured. The WAL Sender process sends WAL records to standby servers, while the WAL receiver process on the standby server receives and writes those records.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Impact of Background Processes on Performance:
&lt;/h3&gt;

&lt;p&gt;Running these processes concurrently keeps PostgreSQL functioning seamlessly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By delegating routine maintenance and data security tasks to these processes, the central server process remains free to manage incoming client connections and queries.&lt;/li&gt;
&lt;li&gt;Operations like log writing, archiving, or flushing buffers to disk would add latency if executed synchronously. By handling these tasks in the background, PostgreSQL sustains optimal performance and responsiveness.&lt;/li&gt;
&lt;li&gt;Processes like autovacuum are imperative to ensure the database doesn't become bloated with dead rows and keeps using storage space efficiently.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Conclusion:
&lt;/h3&gt;

&lt;p&gt;For any database administrator or developer, understanding the background processes in PostgreSQL is of prime importance. These processes keep the database responsive, secure, and efficient. By digging into each process's details, one can optimize configurations, fine-tune performance, and assure the system's availability and resilience.&lt;br&gt;
For those delving into PostgreSQL, appreciating these inconspicuous background processes offers important insight into the elaborate and robust architecture of this renowned RDBMS.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>postgressql</category>
      <category>database</category>
    </item>
    <item>
      <title>Exploring AgensSQL Enterprise Plus: A Brief Analysis against Different RDBMS</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Wed, 09 Aug 2023 19:23:04 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/exploring-agenssql-enterprise-plus-a-brief-analysis-against-different-rdbms-42fg</link>
      <guid>https://dev.to/salarzaisuhaib/exploring-agenssql-enterprise-plus-a-brief-analysis-against-different-rdbms-42fg</guid>
      <description>&lt;p&gt;Relational database Management Systems (RDBMSs) play a crucial part in arranging and handling data in the changing environment of database administration. AgensSQL Enterprise Plus appears as a game-changer in this field, with a range of distinct characteristics that set it against competing RDBMSs. In this blog post article, we comprehended the distinguishing characteristics and distinctive aspects that distinguish AgensSQL Enterprise Plus as a better alternative, enabling enterprises to achieve flexibility, efficiency, and customization.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Data Modelling Paradigm Shift:
&lt;/h3&gt;

&lt;p&gt;A notable feature of AgensSQL Enterprise Plus is its strong support for graph database workloads. Unlike typical RDBMSs, AgensSQL excels at handling complicated relationships and interconnected data. These graph database abilities make it an attractive option for projects requiring sophisticated relationship mappings, such as social networks, recommendation systems, fraud detection, and knowledge structures. AgensSQL's native graph support allows for more efficient traversals, increasing query performance and easing the analysis of complicated relationships.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customizing the RDBMS to Your Specific Requirements:
&lt;/h3&gt;

&lt;p&gt;With its adaptable architecture, AgensSQL Enterprise Plus extends versatility by enabling customers to build and deploy custom modules and components. This customizability allows enterprises to shape the RDBMS to their own needs and operations. Unlike numerous other RDBMSs that do not provide such extensive development options, AgensSQL enables companies to smoothly extend the platform by integrating custom data structures, operations, and business logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Finding the Geospatial Frontier:
&lt;/h3&gt;

&lt;p&gt;Geographical information continues to grow in importance for a variety of businesses, such as transportation, city planning, and retail. Built-in support for geospatial data sets AgensSQL Enterprise Plus apart. This feature allows for efficient storage, access, and analysis of location data. Organizations may use sophisticated spatial analysis to make well-informed decisions based on geographical information. Unlike other RDBMSs that may require third-party extensions for geospatial capability, AgensSQL's integrated spatial storage and retrieval provides a comprehensive, seamless service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Protection in an Uncertain World
&lt;/h3&gt;

&lt;p&gt;Strong protections are vital in an era of hacking and cyber-attacks. With an extensive array of added safety attributes, AgensSQL Enterprise Plus rises to the responsibility. Fine-grained authorization, encryption of data at rest and in transit, and extensive auditing capabilities are among them. Through these security mechanisms, AgensSQL provides data safety and regulatory compliance, lowering the threat of unauthorized access and data loss.&lt;/p&gt;

&lt;h3&gt;
  
  
  Full Potential of Data Handling:
&lt;/h3&gt;

&lt;p&gt;Performance enhancement is essential for meeting the demands of huge amounts of data. AgensSQL Enterprise Plus excels in this area by supporting concurrency as well as modern speed-optimization strategies. Corporations may obtain substantially quicker data handling and analysis by running commands in parallel, especially for cost-intensive operations. These efficiency features lead to fast responses, making AgensSQL an excellent choice for massive data volumes and complicated applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;AgensSQL Enterprise Plus sits at the cutting edge of the RDBMS environment, providing an array of distinctive qualities that enable enterprises to handle their information more effectively, safely, and innovatively. AgensSQL stands out for its graph database assets, flexibility, integrated geospatial storage, enhanced safety features, and strong performance-optimization techniques. As enterprises continue to face the challenges of handling data, AgensSQL Enterprise Plus arises as an essential tool, enabling progress, insight, and accomplishment in a modern, data-driven world.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>postgressql</category>
      <category>database</category>
      <category>agens</category>
    </item>
    <item>
      <title>A Journey Through Key Features in Each Version of PostgreSQL (Continue)</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Tue, 08 Aug 2023 19:46:14 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/a-journey-through-key-features-in-each-version-of-postgresql-continue-328j</link>
      <guid>https://dev.to/salarzaisuhaib/a-journey-through-key-features-in-each-version-of-postgresql-continue-328j</guid>
      <description>&lt;p&gt;This blog article follows on earlier &lt;a href="https://dev.to/salarzaisuhaib/a-journey-through-key-features-in-each-version-of-postgresql-5cm1"&gt;one&lt;/a&gt; and we will go through the other remaining PostgreSQL versions.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Version 13:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL 13 included numerous new features aimed at improving performance, security, and usability:&lt;/p&gt;

&lt;h4&gt;
  
  
  1.  Incremental Sorting:
&lt;/h4&gt;

&lt;p&gt;This version added incremental sorting, which speeds up queries that need sorted output when part of the required ordering is already available, reducing sort time and memory use.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.  Parallel Vacuuming:
&lt;/h4&gt;

&lt;p&gt;Parallel vacuuming was implemented, improving the efficiency of vacuum operations, which are critical for database health.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.  Enhanced Security:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL 13 also included several security upgrades, including enhanced authentication techniques.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Version 14:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL 14 refined performance features further, making database operations more efficient. Among the important enhancements are:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Efficient Hash Join:
&lt;/h4&gt;

&lt;p&gt;The use of "batched" hash joins improves the speed of join operations, particularly in data-warehousing settings.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Logical Replication:
&lt;/h4&gt;

&lt;p&gt;Performance improvements to logical replication make it an even more appealing option for synchronizing data across different databases.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Role Administration Enhancements:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL 14 adds more versatility and control to role and permission administration, making it easier to monitor user access.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Enhanced Monitoring:
&lt;/h4&gt;

&lt;p&gt;Improved monitoring utilities provide more detail about database performance and activity.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Version 15:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL 15 continues this progress, improving data handling and query performance.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. MERGE Command:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL 15 adds the SQL-standard MERGE command, which lets a single statement conditionally insert, update, or delete rows based on a comparison with source data.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. GIN Indexes:
&lt;/h4&gt;

&lt;p&gt;By letting indexes include commonly used columns, the addition of covering Generalized Inverted Index (GIN) functionality increases search speed.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. BRIN Index Enhancements:
&lt;/h4&gt;

&lt;p&gt;Block Range Indexes (BRIN) received efficiency enhancements, especially when scanning huge datasets.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Procedural languages:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL 15 also improves support for procedural languages, making it easier to write stored procedures and custom functions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL's path has been one of ongoing innovation and improvement. With each new version, PostgreSQL has addressed the increasing needs of developers, administrators, and users, delivering greater performance, sophisticated capabilities, and improved usability. As we've seen, the key features introduced with each version have played an important part in developing the database system's capabilities, making PostgreSQL a dependable and versatile choice for a variety of applications. PostgreSQL is a cornerstone in the field of relational databases, setting the standard for open-source database management systems as it evolves.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Journey Through Key Features in Each Version of PostgreSQL</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Mon, 07 Aug 2023 19:21:26 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/a-journey-through-key-features-in-each-version-of-postgresql-5cm1</link>
      <guid>https://dev.to/salarzaisuhaib/a-journey-through-key-features-in-each-version-of-postgresql-5cm1</guid>
      <description>&lt;p&gt;Powerful open-source relational database management system PostgreSQL, sometimes known as Postgres, has seen constant development since its foudation in the 1980s. PostgreSQL adds cutting-edge innovations and upgrades that increase its functionality, speed, and usability with every new release. In this blog post, we'll go over the main characteristics of each major release of PostgreSQL while highlighting the turning points in its history.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Version Series 9.x:
&lt;/h3&gt;

&lt;p&gt;The 9.x series was a pivotal stage in the development of the database system. Key elements introduced during this time include:&lt;/p&gt;

&lt;h4&gt;
  
  
  1.  Support for JSON and JSONB:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL 9.2 included native support for JSON data types, making it possible to store and query JSON documents effectively. When PostgreSQL 9.4 was released, JSONB, a binary representation of JSON data with better speed, was introduced.&lt;/p&gt;
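
&lt;p&gt;A minimal sketch of these types in use (the table and keys are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- jsonb stores a parsed binary form that is faster to query than json
CREATE TABLE events (id serial PRIMARY KEY, payload jsonb);

INSERT INTO events (payload)
VALUES ('{"type": "login", "user": "alice"}');

-- the -&gt;&gt; operator extracts a field as text
SELECT payload-&gt;&gt;'user' AS username
FROM events
WHERE payload-&gt;&gt;'type' = 'login';
&lt;/code&gt;&lt;/pre&gt;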

&lt;h4&gt;
  
  
  2.  Index Improvements:
&lt;/h4&gt;

&lt;p&gt;Version 9.2 added index-only scans, enabling queries to extract data straight from indexes without touching the primary table. This feature substantially improves query performance.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.  Parallel Query:
&lt;/h4&gt;

&lt;p&gt;With the introduction of parallel query execution in PostgreSQL 9.6, the database was able to use several CPU cores to process complicated queries more quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Version 10:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL 10 carried on the trend of innovation by introducing several standout features:&lt;/p&gt;

&lt;h4&gt;
  
  
  1.  Native Partitioning:
&lt;/h4&gt;

&lt;p&gt;Declarative table partitioning was included in this release, enabling huge tables to be broken up into smaller, easier-to-maintain chunks.&lt;/p&gt;
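
&lt;p&gt;A brief sketch of declarative partitioning (the table names and ranges are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Parent table, partitioned by a date range
CREATE TABLE measurements (
    city_id  int  NOT NULL,
    logdate  date NOT NULL,
    peaktemp int
) PARTITION BY RANGE (logdate);

-- One child partition per year; rows route to the matching partition
CREATE TABLE measurements_2023 PARTITION OF measurements
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
&lt;/code&gt;&lt;/pre&gt;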

&lt;h4&gt;
  
  
  2.  Logical Replication:
&lt;/h4&gt;

&lt;p&gt;With the addition of logical replication capability in PostgreSQL 10, it is now possible to selectively replicate database changes between servers. This feature is important for data warehousing and other scenarios.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.  Improved Query Performance:
&lt;/h4&gt;

&lt;p&gt;Quorum commit for synchronous replication was included in this version, which improved the speed of synchronous replication systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Version 11:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL 11 improved upon existing features and added new ones:&lt;/p&gt;

&lt;h4&gt;
  
  
  1.  Stored Procedures:
&lt;/h4&gt;

&lt;p&gt;Procedure support was implemented in this release, enabling programmers to put business logic within the database.&lt;/p&gt;
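
&lt;p&gt;As a minimal sketch (the procedure and table names are hypothetical), a procedure is created and then invoked with CALL:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Assumes a visits(name, at) table exists
CREATE PROCEDURE log_visit(visitor text)
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO visits (name, at) VALUES (visitor, now());
END;
$$;

CALL log_visit('alice');
&lt;/code&gt;&lt;/pre&gt;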

&lt;h4&gt;
  
  
  2.  Just-In-Time (JIT) Compilation:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL 11 introduced JIT compilation to speed up query execution, which is especially useful for complicated queries.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.  Transactions and Durability:
&lt;/h4&gt;

&lt;p&gt;This release's enhancements increased the efficiency of managing transactions and the durability of data.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Version 12:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL 12 expanded on the basis built by its predecessors:&lt;/p&gt;

&lt;h4&gt;
  
  
  1.  Partitioning Enhancements:
&lt;/h4&gt;

&lt;p&gt;This release improved partitioned table performance, making partitioned tables more effective at handling massive datasets.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.  B-tree Index Improvements:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL 12 enhanced B-tree index management, resulting in faster query performance and lower storage needs.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.  Generated Columns:
&lt;/h4&gt;

&lt;p&gt;Developers can now define columns that are dynamically calculated from other columns in the same table using generated columns.&lt;/p&gt;
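
&lt;p&gt;A short sketch (the table is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- height_in is always computed from height_cm and cannot be written directly
CREATE TABLE people (
    name      text,
    height_cm numeric,
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
&lt;/code&gt;&lt;/pre&gt;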

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Continue learning about further versions in &lt;a href="https://dev.to/salarzaisuhaib/a-journey-through-key-features-in-each-version-of-postgresql-continue-328j"&gt;next blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>postgressql</category>
      <category>database</category>
      <category>apacheage</category>
    </item>
    <item>
      <title>PostgreSQL's Features and Storage Mechanism</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Sat, 05 Aug 2023 15:23:18 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/postgresqls-features-and-storage-mechanism-20pb</link>
      <guid>https://dev.to/salarzaisuhaib/postgresqls-features-and-storage-mechanism-20pb</guid>
      <description>&lt;p&gt;An open-source object-relational database management system noted for its sturdiness and versatility is PostgreSQL, sometimes known as Postgres. Support for user-defined functions is one of PostgreSQL's most potent features since it enables programmers to write original code blocks and increase the database's capabilities. In this blog article, we will dig into the realm of PostgreSQL functions, investigating their usefulness and examining the underlying storage system that enables them to operate without a hitch.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Functions:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL functions, called stored procedures in some other database management systems, are blocks of reusable code that may be run with a single call. These functions encapsulate complicated SQL queries, logic, or computations, making them simpler to manage and maintain. By factoring functionality into functions, developers can encourage code reuse, lessen duplication, and enhance readability.&lt;/p&gt;

&lt;h4&gt;
  
  
  Types of Functions:
&lt;/h4&gt;

&lt;p&gt;Depending on their use and return types, several types of functions are supported by PostgreSQL:&lt;/p&gt;

&lt;h4&gt;
  
  
  Scalar Function:
&lt;/h4&gt;

&lt;p&gt;Functions that return a single result, such as an integer, text, or date, are referred to as scalar functions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Set-returning Functions:
&lt;/h4&gt;

&lt;p&gt;Such functions can return a collection of rows that may be utilised, much like a table, in the FROM clause.&lt;/p&gt;
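
&lt;p&gt;The built-in generate_series is a familiar example of a set-returning function:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Used in FROM like a table: yields one row per value, here 1 through 5
SELECT n FROM generate_series(1, 5) AS n;
&lt;/code&gt;&lt;/pre&gt;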

&lt;h4&gt;
  
  
  Aggregate function:
&lt;/h4&gt;

&lt;p&gt;Aggregate functions, such as SUM, COUNT, AVG, etc., operate on a group of rows and return a single value.&lt;/p&gt;

&lt;h4&gt;
  
  
  Window functions:
&lt;/h4&gt;

&lt;p&gt;They are frequently employed for analytical purposes and work on a subset of rows connected to the current row.&lt;/p&gt;
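
&lt;p&gt;A minimal sketch of a window function (the employees table is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Each row keeps its detail while AVG is computed over that row's department
SELECT name, dept, salary,
       avg(salary) OVER (PARTITION BY dept) AS dept_avg
FROM employees;
&lt;/code&gt;&lt;/pre&gt;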

&lt;h4&gt;
  
  
  Writing Functions:
&lt;/h4&gt;

&lt;p&gt;The CREATE FUNCTION command from PostgreSQL is available for defining new functions. The function name, input parameters, return type, and function body—which contains the actual SQL logic—are all specified in the syntax.&lt;/p&gt;
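
&lt;p&gt;A minimal sketch of that syntax (the function itself is hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Name, input parameter, return type, and body
CREATE FUNCTION add_tax(amount numeric)
RETURNS numeric
LANGUAGE sql
AS $$ SELECT amount * 1.15 $$;

-- Invoked like any other expression
SELECT add_tax(100);
&lt;/code&gt;&lt;/pre&gt;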

&lt;h4&gt;
  
  
  Calling a Function:
&lt;/h4&gt;

&lt;p&gt;A function can be invoked once it has been written, just like any other SQL statement. The SELECT statement or other SQL structures are used to call functions, which can receive parameters as inputs and provide dynamic behaviour based on the arguments supplied.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage Mechanisms:
&lt;/h3&gt;

&lt;p&gt;To understand how PostgreSQL manages functions internally, we must investigate its storage system.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tables in Catalogues:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL keeps metadata about functions, such as their names, parameters, return types, and other characteristics, in system catalogue tables (notably pg_proc). These tables, managed by the database itself, let PostgreSQL keep track of all the functions declared inside a database.&lt;/p&gt;

&lt;h4&gt;
  
  
  Shared Libraries:
&lt;/h4&gt;

&lt;p&gt;To hold the compiled code of functions written in languages such as C, PostgreSQL uses shared libraries (shared object files). When such a function is first called, PostgreSQL loads its library and keeps it in memory, so the code need not be reloaded for every call and can be shared across uses within a session.&lt;/p&gt;

&lt;h4&gt;
  
  
  Function Caching:
&lt;/h4&gt;

&lt;p&gt;PostgreSQL also uses caching to enhance performance. Within a session, the execution plans of procedural-language functions are cached, and calls to functions marked IMMUTABLE can be folded to constants during planning when their inputs are known. This reduces the overhead of repeatedly executing a function for identical inputs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL functions expand the database's capabilities while also helping organize code and make it easier to comprehend. By understanding the underlying storage mechanism and adhering to recommended practices, developers can fully utilize PostgreSQL's features and create effective, scalable applications. Using functions well can result in codebases that are easier to maintain, improved performance, and a smooth user experience. So, explore PostgreSQL functions and raise the bar for your database applications.&lt;/p&gt;

</description>
      <category>postgressql</category>
      <category>postgres</category>
      <category>database</category>
      <category>apacheage</category>
    </item>
    <item>
      <title>Features and Comparing the Special Aspects of AgensSQL with PostgreSQL</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Wed, 26 Jul 2023 18:33:41 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/features-and-comparing-the-special-aspects-of-agenssql-with-postgresql-jl9</link>
      <guid>https://dev.to/salarzaisuhaib/features-and-comparing-the-special-aspects-of-agenssql-with-postgresql-jl9</guid>
      <description>&lt;p&gt;In the world of contemporary database management systems, &lt;a href="https://www.postgresql.org/docs/"&gt;PostgreSQL&lt;/a&gt; has become one of the finest well-liked, and reliable options for programmers, companies, and organizations. It is the preferred choice for a variety of applications because of its open-source status, outstanding efficiency, and numerous array of features. Alternative options, like &lt;a href="https://bitnine.net/agensgraph/"&gt;AgensSQL&lt;/a&gt;, has risen, giving special characteristics that PostgreSQL is lacking in the pursuit of innovation and meeting particular demands.&lt;br&gt;
In this blog, we'll examine the advantages of PostgreSQL and look into some of the distinctive features that make AgensSQL stand out.&lt;/p&gt;

&lt;h2&gt;
  
  
  PostgreSQL Advantages
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Open-source and active Community:
&lt;/h3&gt;

&lt;p&gt;The fact that PostgreSQL is open-source is among its greatest benefits. It enjoys enthusiastic, widespread support from a significant development community, which leads to continuous updates, bug fixes, and upgrades. This lively community makes sure the database stays safe, reliable, and up to date with modern technology.&lt;/p&gt;

&lt;h3&gt;
  
  
  Significant Feature Set:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL is a flexible option for a variety of applications because of its exceptional feature set. It ensures versatility in data modelling by supporting a broad variety of data formats, indexing strategies, and foreign key relationships. Additionally, PostgreSQL enables sophisticated features like full-text search, geographical data support, JSON and XML processing, and much more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reliable efficiency:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL handles large-scale datasets and intricate queries with ease, showing outstanding performance and reliable efficiency. Thanks to its powerful query optimizer and ability to execute operations in parallel, it processes tasks quickly even under heavy demand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Availability and backup:
&lt;/h3&gt;

&lt;p&gt;For data backups and high availability, PostgreSQL provides a variety of replication techniques, including synchronous and asynchronous replication. This ensures that crucial applications continue to run even when hardware fails.&lt;/p&gt;

&lt;h2&gt;
  
  
  Special Aspects of AgensSQL over PostgreSQL
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Support for Graph Databases:
&lt;/h3&gt;

&lt;p&gt;AgensSQL distinguishes itself from PostgreSQL by incorporating graph database features. Applications like social networks, recommendation engines, and fraud detection systems, which involve complex linkages and interrelated data, are a perfect fit for graph databases. AgensSQL's property graph model and efficient graph algorithms make high-speed querying of graph data possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cypher Query:
&lt;/h3&gt;

&lt;p&gt;AgensSQL's compatibility with the Cypher query language is a significant feature. Developed specifically for querying graph databases, Cypher is an easy and effective way to express graph patterns and traversals. Its pattern-focused approach simplifies intricate searches over connected data.&lt;/p&gt;
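
&lt;p&gt;As a sketch of a Cypher pattern query (the labels and properties are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Find the people Alice knows, expressed as a pattern rather than joins
MATCH (a:person {name: 'Alice'})-[:knows]-&gt;(b:person)
RETURN b.name;
&lt;/code&gt;&lt;/pre&gt;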

&lt;h3&gt;
  
  
  Optimized Performance of Graph Workloads:
&lt;/h3&gt;

&lt;p&gt;As a specialized graph database extension, AgensSQL is tuned to address graph workloads effectively. Because its storage and indexing algorithms are designed around graph structures, graph data can be scanned and traversed more quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adaptability:
&lt;/h3&gt;

&lt;p&gt;AgensSQL is designed to be highly extensible and adaptable. Developers may build extensions that add capabilities and features appropriate to their particular demands. Because of this adaptability, AgensSQL can be easily modified to fit a variety of use cases and enterprises.&lt;/p&gt;

&lt;p&gt;In the end, AgensSQL distinguishes itself as a formidable contender in the field of database management, providing obvious benefits for applications that depend on tightly linked data. AgensSQL shines at handling relationships and traversals efficiently because of its graph database features, making it a strong choice. Graph querying is made simpler by its support for the user-friendly Cypher query language, increasing developer productivity. While PostgreSQL continues to be a flexible and reliable solution for a wide variety of applications, AgensSQL is a tempting alternative for those looking for outstanding performance and flexibility in graph-based applications.&lt;/p&gt;

</description>
      <category>postgressql</category>
      <category>apacheage</category>
      <category>agenssql</category>
      <category>database</category>
    </item>
    <item>
      <title>PostgreSQL and Apache Age as a stand-alone Approach</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Fri, 14 Jul 2023 15:34:34 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/postgresql-and-apache-age-as-a-stand-alone-approach-35mi</link>
      <guid>https://dev.to/salarzaisuhaib/postgresql-and-apache-age-as-a-stand-alone-approach-35mi</guid>
      <description>&lt;p&gt;When your data contains complicated interrelationships and interdependence, switching from a relational data model to a graph data model might be useful. A stand-alone approach for this migration may be offered by Apache Age and PostgreSQL. Let's study in this blog how these approaches would communicate with one another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Graph Data:&lt;/strong&gt;&lt;br&gt;
Data is represented by nodes and edges, where nodes stand for entities and edges for connections between those entities. Each node and edge may be associated with one or more characteristics. Social networks, fraud detection, algorithms for recommendations, and knowledge graphs are just a few examples of applications where this architecture performs exceptionally well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apache AGE:&lt;/strong&gt;&lt;br&gt;
Apache AGE is a PostgreSQL extension that provides the property graph data model on top of PostgreSQL to offer graph functionality. It makes use of the well-known SQL language and PostgreSQL's built-in capabilities, in addition to adding support for operations and queries that are unique to graphs.&lt;br&gt;
Some of the important characteristics offered by Apache AGE are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It makes use of the relational database's underlying storage capabilities for storing graph data as tables in PostgreSQL.&lt;/li&gt;
&lt;li&gt;Apache Age adds a graph schema that specifies how nodes and edges are organized, along with their properties and relationship types.&lt;/li&gt;
&lt;li&gt;It is capable of performing a number of graph operations, including traversals, matching patterns, shortest routes, and graph algorithms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Unified Solution:&lt;/strong&gt;&lt;br&gt;
You may have a unified solution for both relational and graph data models inside the same database system through the integration of Apache Age and PostgreSQL. This strategy has the following advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single database can manage relational and graph data, which reduces complexities and maintenance costs.&lt;/li&gt;
&lt;li&gt;Without having to learn a new graph database, developers and database administrators who are already familiar with PostgreSQL may make use of their current expertise.&lt;/li&gt;
&lt;li&gt;You can preserve data integrity and coherence across relational and graph data with an identical solution. Both models update synchronously with changes to the data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Migration Procedure:&lt;/strong&gt;&lt;br&gt;
You generally perform the following actions when utilizing Apache Age to switch from a relational model to a graph:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Examine your current relational schema and take note of any entities and relationships that would benefit from being represented as nodes and edges.&lt;/li&gt;
&lt;li&gt;Design an Apache Age graph schema by defining the node labels, edge labels, and relationship types.&lt;/li&gt;
&lt;li&gt;Create PostgreSQL tables based on the specified graph schema for storing the graph data.&lt;/li&gt;
&lt;li&gt;Transform and load the data from the existing relational tables into the new graph tables.&lt;/li&gt;
&lt;li&gt;Update your application's code and queries to make use of Apache Age's graph capabilities.&lt;/li&gt;
&lt;/ul&gt;
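
&lt;p&gt;As a minimal sketch of the last steps, the graph can be created and populated directly from SQL. The graph, label, and property names below are illustrative, not taken from a real migration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE EXTENSION IF NOT EXISTS age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;

-- Create the target graph
SELECT create_graph('migration_graph');

-- Store a row from a relational table as a node in the graph
SELECT * FROM cypher('migration_graph', $$
    CREATE (e:Employee {name: 'Alice', department: 'Engineering'})
    RETURN e
$$) AS (e agtype);
&lt;/code&gt;&lt;/pre&gt;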

&lt;p&gt;In the end, the combination of &lt;a href="https://age.apache.org/age-manual/master/intro/overview.html"&gt;Apache Age&lt;/a&gt; and &lt;a href="https://www.postgresql.org/about/"&gt;PostgreSQL&lt;/a&gt; offers a comprehensive approach for migrating from a relational to a graph data model, allowing you to make use of graph features while retaining the stability and maturity of PostgreSQL as a relational database.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>postgrs</category>
      <category>apacheage</category>
    </item>
    <item>
      <title>Chapter 3 Query Processing: Join Operations in PostgreSQL (continue)</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Tue, 11 Jul 2023 17:39:48 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/chapter-3-query-processing-join-operations-in-postgresql-continue-2ai6</link>
      <guid>https://dev.to/salarzaisuhaib/chapter-3-query-processing-join-operations-in-postgresql-continue-2ai6</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/salarzaisuhaib/chapter-3-query-processing-join-operations-in-postgresql-4iia"&gt;previous&lt;/a&gt; blog we started discussing join operations in PostgreSQL; join algorithms are essential for efficiently merging data from multiple tables.&lt;br&gt;
In this blog we will move on to Merge Join and Hash Join, two popular join algorithms, and discuss their variations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Merge Join:
&lt;/h3&gt;

&lt;p&gt;This join algorithm is appropriate for equi-joins and natural joins. It makes effective use of sorting to merge sorted data from two tables. Let's investigate its runtime order and its variations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Runtime Order:
&lt;/h4&gt;

&lt;p&gt;The total cost of sorting the inner and outer tables makes up the start-up cost of a merge join; it can be represented as O(N_outer * log2(N_outer) + N_inner * log2(N_inner)), where N_outer and N_inner are the numbers of tuples in the outer and inner tables. The run cost is O(N_outer + N_inner).&lt;/p&gt;

&lt;h3&gt;
  
  
  Variations:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Merge Join:&lt;/strong&gt; This is the simplest type of Merge Join. To join the tuples based on the supplied join condition, it runs a merge operation on two sorted tables.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WjisHkZQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mvk8vwsg8wxnel7aybfx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WjisHkZQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mvk8vwsg8wxnel7aybfx.png" alt="Merge Join" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Materialized Merge Join:&lt;/strong&gt; In this kind, the inner table is materialized, which increases the effectiveness of inner table scanning. A merge join operation is carried out after the inner table has been sorted and materialized.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WAgFrvfS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4zbbei5vfvjbdrfmukm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WAgFrvfS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4zbbei5vfvjbdrfmukm.png" alt="Materialized Merge Join" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Variations:&lt;/strong&gt; As with the nested loop join, the merge join in PostgreSQL also has variations in which the outer table can be scanned using its index.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1GL1j3zq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ry54bjrj8stf0z5nmcoz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1GL1j3zq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ry54bjrj8stf0z5nmcoz.png" alt="variations" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;
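
&lt;p&gt;As a rough illustration of how to observe a merge join in a plan (the table and column names are hypothetical), the planner's other join methods can be disabled for a session:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Steer the planner toward a merge join (session-local settings)
SET enable_hashjoin = off;
SET enable_nestloop = off;

EXPLAIN SELECT * FROM tbl_a AS a JOIN tbl_b AS b ON a.id = b.id;
-- The plan should contain a "Merge Join" node, typically fed by
-- Sort nodes or index scans on a.id and b.id.
&lt;/code&gt;&lt;/pre&gt;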

&lt;h3&gt;
  
  
  Hash Join:
&lt;/h3&gt;

&lt;p&gt;Another join method used in PostgreSQL for natural joins and equi-joins is called Hash Join. To effectively merge data from two tables, it uses hash tables. Let’s talk about the variations of hash join and its execution.&lt;/p&gt;

&lt;h4&gt;
  
  
  Hash Join Execution:
&lt;/h4&gt;

&lt;p&gt;Depending on the size of the tables, Hash Join performs differently. It does a straightforward two-phase in-memory hash join if the inner table's size is 25% or less of work_mem. If not, it employs the hybrid hash join with skew technique.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;In-Memory Hash Join:&lt;/strong&gt;&lt;br&gt;
This technique performs the join operation in work_mem using a single hash table (batch). The process consists of two phases: the build phase, during which the inner table tuples are inserted into the batch's hash table, and the probe phase, during which each outer table tuple is compared with the inner tuples in the batch to perform the join.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hybrid Hash Join with Skew:&lt;/strong&gt;&lt;br&gt;
PostgreSQL makes use of the hybrid hash join with skew technique when the tuples of the inner table cannot fit into a single batch in work_mem. It involves a skew batch, multiple regular batches, and temporary batch files. The skew batch stores the inner table tuples that will be joined with outer table tuples whose join-key values occur with high frequency.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
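
&lt;p&gt;The choice between the two modes depends on work_mem, and the batch count can be observed with EXPLAIN ANALYZE. A hedged sketch with hypothetical table names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- A larger work_mem makes it more likely the inner table fits
-- in a single in-memory batch (the value is illustrative)
SET work_mem = '64MB';

EXPLAIN ANALYZE
SELECT * FROM orders AS o JOIN customers AS c ON o.customer_id = c.id;
-- The Hash node reports "Batches: 1" for an in-memory hash join,
-- and a higher batch count for the hybrid variant.
&lt;/code&gt;&lt;/pre&gt;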

&lt;h4&gt;
  
  
  Join Access Paths and Join Nodes:
&lt;/h4&gt;

&lt;p&gt;JoinPath structures in PostgreSQL serve as a representation of join access paths, and join nodes use these structures to carry out join algorithms effectively. The join nodes are NestedLoopNode, MergeJoinNode, and HashJoinNode.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZrY4JLO7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u41mh96e5bhocistvmcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZrY4JLO7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u41mh96e5bhocistvmcs.png" alt="join access paths" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Reading the &lt;a href="https://dev.to/salarzaisuhaib/chapter-3-query-processing-join-operations-in-postgresql-4iia"&gt;previous&lt;/a&gt; blog will help with a better understanding.&lt;br&gt;
This blog is a summary of the book &lt;a href="https://www.interdb.jp/pg/index.html"&gt;The Internals of PostgreSQL&lt;/a&gt; written by Hironobu SUZUKI.&lt;/p&gt;

</description>
      <category>database</category>
      <category>apacheage</category>
      <category>postgres</category>
      <category>postgressql</category>
    </item>
    <item>
      <title>Chapter 3 Query Processing: Join Operations in PostgreSQL</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Thu, 06 Jul 2023 17:36:10 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/chapter-3-query-processing-join-operations-in-postgresql-4iia</link>
      <guid>https://dev.to/salarzaisuhaib/chapter-3-query-processing-join-operations-in-postgresql-4iia</guid>
      <description>&lt;p&gt;In relational databases, join operations play a very important role in combining data from multiple tables. PostgreSQL provides a variety of join operations to handle different join conditions. In this blog post, which is a summary of Chapter 3 Part 3 of the book &lt;a href="https://www.interdb.jp/pg/pgsql0303.html"&gt;The Internals of PostgreSQL&lt;/a&gt;, we will discuss three important join operations in PostgreSQL:&lt;/p&gt;

&lt;p&gt;Nested Loop Join&lt;br&gt;
Materialized Nested Loop Join &lt;br&gt;
Indexed Nested Loop Join&lt;/p&gt;

&lt;h3&gt;
  
  
  Nested Loop Join:
&lt;/h3&gt;

&lt;p&gt;In PostgreSQL, the nested loop join is the fundamental join operation: it can handle any kind of join condition, although more efficient variations exist for specific cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_7_D5TVw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oc3y5pwpte6k77xw5mm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_7_D5TVw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oc3y5pwpte6k77xw5mm0.png" alt="Nested Loop Join" width="512" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The nested loop join adapts to many join conditions and has no start-up cost. Its run cost is proportional to the product of the sizes of the inner and outer tables. The run cost equation takes into account the tuple costs, CPU costs, and scanning costs of the inner and outer tables.&lt;/p&gt;
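
&lt;p&gt;Because the nested loop join can evaluate any join condition, it is the plan the planner falls back to for non-equi-joins. A small sketch with hypothetical tables:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- A non-equi-join: merge and hash joins require equality conditions,
-- so a nested loop join is used here
EXPLAIN SELECT * FROM tbl_a AS a JOIN tbl_b AS b ON a.id &lt; b.id;
-- The plan typically shows a "Nested Loop" node
-- with a Join Filter on (a.id &lt; b.id).
&lt;/code&gt;&lt;/pre&gt;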

&lt;h3&gt;
  
  
  Materialized Nested Loop Join:
&lt;/h3&gt;

&lt;p&gt;Since scanning the whole inner table for each outer table tuple is an expensive process, PostgreSQL offers the materialized nested loop join to reduce the collective scanning cost of the inner table. This variation scans the inner table only once, and the executor writes the inner table tuples to work_mem or to temporary files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jRpRI8TP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8ivce2cclzzfwilmzha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jRpRI8TP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8ivce2cclzzfwilmzha.png" alt="Materialized Nested Loop Join" width="512" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By eliminating the need to rescan the inner table for each tuple of the outer table, the materialized nested loop join reduces the overall scanning cost.&lt;/li&gt;
&lt;li&gt;Before carrying out the join operation, the executor writes the inner table tuples to memory or temporary files.&lt;/li&gt;
&lt;li&gt;PostgreSQL internally offers a temporary tuple storage module for materializing tables, creating batches, and more.&lt;/li&gt;
&lt;li&gt;To give an accurate estimate of the whole join operation's cost, the materialized nested loop join cost estimation takes in variables like the start-up cost, run cost, and rescan cost.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Indexed Nested Loop Join:
&lt;/h3&gt;

&lt;p&gt;When the inner table has an index that can directly look up the tuples satisfying the join condition for each tuple of the outer table, the variation is known as the indexed nested loop join.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZwvxORjN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w1xmjifui9pq7wd9iqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZwvxORjN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w1xmjifui9pq7wd9iqy.png" alt="Indexed Nested Loop Join" width="512" height="135"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This variation performs the join with a single loop over the outer table, which enhances join performance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The indexed nested loop join uses an index search on the inner table instead of a sequential scan to find tuples fulfilling the join condition.&lt;/li&gt;
&lt;li&gt;It improves join performance by looking up tuples directly, which is crucial when managing large datasets.&lt;/li&gt;
&lt;li&gt;Its cost estimation takes in variables like the index lookup cost and the scanning cost of the outer table, which ensures an accurate estimate of the join operation's cost.&lt;/li&gt;
&lt;/ul&gt;
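
&lt;p&gt;The points above can be tried out directly. Assuming hypothetical tables tbl_a and tbl_b, creating an index on the inner table's join column is often enough to obtain this plan:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE INDEX tbl_b_id_idx ON tbl_b (id);
ANALYZE tbl_b;

EXPLAIN SELECT * FROM tbl_a AS a JOIN tbl_b AS b ON a.id = b.id;
-- If the planner chooses this variation, the plan shows a "Nested Loop"
-- node with an inner "Index Scan using tbl_b_id_idx on tbl_b".
&lt;/code&gt;&lt;/pre&gt;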

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Efficient join operations are important for optimal query processing in PostgreSQL. Understanding them helps users such as developers and database administrators choose the most suitable approach for their specific use case.&lt;br&gt;
To conclude, in this blog we discussed several join operations, each of which offers unique properties and advantages; using these join operations well makes data processing with PostgreSQL more efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; More in the next blog.&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>apacheage</category>
    </item>
    <item>
      <title>Query Processing: Cost Estimation in Single-Table query</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Fri, 30 Jun 2023 17:26:23 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/query-processing-cost-estimation-in-single-table-query-2og1</link>
      <guid>https://dev.to/salarzaisuhaib/query-processing-cost-estimation-in-single-table-query-2og1</guid>
      <description>&lt;p&gt;PostgreSQL, a powerful open-source relational database management system, uses a sophisticated query optimisation method that relies on cost estimation. Cost-based optimisation lets the database engine compare the relative performance of different operations and select the most effective execution plan. In this blog article, we will examine the concept of cost estimation in single-table queries and how PostgreSQL computes the costs associated with various operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Costs:
&lt;/h3&gt;

&lt;p&gt;PostgreSQL costs are dimensionless values that give the user a way to compare the relative performance of operations. It is crucial to remember that costs are relative indicators rather than absolute performance metrics. 'costsize.c' contains the routines that the query optimizer uses to estimate costs, and each operation the executor performs has a corresponding cost function.&lt;/p&gt;

&lt;p&gt;There are three main types of costs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Start-up cost&lt;/li&gt;
&lt;li&gt; Run cost&lt;/li&gt;
&lt;li&gt; Total cost&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The start-up cost is the cost incurred before the first tuple is fetched, and the run cost is the cost incurred while fetching all the tuples. The total cost is the sum of the start-up and run costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sequential Scan
&lt;/h3&gt;

&lt;p&gt;Let's investigate the cost estimation procedure for the sequential scan, a fundamental PostgreSQL operation. A sequential scan has no start-up cost, and the run cost is determined as follows:&lt;/p&gt;

&lt;p&gt;run cost is calculated as follows:&lt;br&gt;
(cpu_tuple_cost + cpu_operator_cost) * Ntuple + seq_page_cost * Npage&lt;/p&gt;

&lt;p&gt;Here, 'Ntuple' denotes the total number of tuples in the table, while 'Npage' denotes the number of pages in the table. The 'cpu_tuple_cost', 'cpu_operator_cost', and 'seq_page_cost' parameters are configurable variables defined in the 'postgresql.conf' file; their defaults are 0.01, 0.0025, and 1.0, respectively.&lt;/p&gt;

&lt;p&gt;Let's use a table named "tbl" with 10,000 tuples stored in 45 pages as an example. Applying the formula above with the default parameter values, the run cost is:&lt;/p&gt;

&lt;p&gt;run cost = (0.01 + 0.0025) * 10,000 + 1.0 * 45 = 170.0&lt;/p&gt;

&lt;p&gt;As a result, the total cost of the sequential scan would be 170.0 (a start-up cost of 0.0 plus a run cost of 170.0). This figure is what PostgreSQL's 'EXPLAIN' command displays.&lt;/p&gt;
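
&lt;p&gt;The estimate can be checked with EXPLAIN. Assuming the default parameter values and the example table above, the output should look roughly like this (the rows and width figures depend on the data and the WHERE clause):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;EXPLAIN SELECT * FROM tbl WHERE id &lt; 8000;
--                        QUERY PLAN
-- --------------------------------------------------------
--  Seq Scan on tbl  (cost=0.00..170.00 rows=8000 width=8)
--    Filter: (id &lt; 8000)
&lt;/code&gt;&lt;/pre&gt;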

&lt;h3&gt;
  
  
  Index Scan
&lt;/h3&gt;

&lt;p&gt;In PostgreSQL, the 'cost_index()' function is used to estimate the cost of an index scan. The cost estimation procedure is the same regardless of the index access method used (for example, B-tree, GiST, GIN, or BRIN).&lt;/p&gt;

&lt;p&gt;PostgreSQL takes both the start-up cost and the run cost into account when estimating the cost of an index scan. The start-up cost represents the cost of reading the index pages needed to access the first tuple in the target table. It is established by the following equation:&lt;/p&gt;

&lt;p&gt;start-up cost = {ceil(log2(Nindex_tuple)) + (Hindex + 1) * 50} * cpu_operator_cost&lt;/p&gt;

&lt;p&gt;Here, 'Nindex_tuple' stands for the number of tuples in the index, and 'Hindex' stands for the height of the index tree.&lt;/p&gt;

&lt;p&gt;The run cost of an index scan is the sum of the CPU and I/O costs for both the index and the table. The cost calculation depends on variables such as selectivity, tuple counts, and configuration settings.&lt;/p&gt;

&lt;p&gt;Cost estimation relies heavily on selectivity, the fraction of the search range selected by the supplied WHERE clause. PostgreSQL calculates selectivity using information from the pg_stats system view, such as histogram bounds and most common values.&lt;/p&gt;
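
&lt;p&gt;The statistics that feed the selectivity calculation can be inspected directly; a sketch with a hypothetical table and column:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- The planner derives selectivity from these per-column statistics
SELECT most_common_vals, most_common_freqs, histogram_bounds
FROM pg_stats
WHERE tablename = 'tbl' AND attname = 'data';
&lt;/code&gt;&lt;/pre&gt;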

&lt;p&gt;By considering these variables, PostgreSQL can estimate the cost of index scans precisely and make sound query optimization choices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;Cost estimation is a key component of PostgreSQL's query optimisation mechanism. By evaluating the costs associated with various operations, the query optimizer can choose the most effective execution plan for single-table queries.&lt;/p&gt;

&lt;p&gt;In this blog article, we looked at the cost estimation processes for sequential scans and index scans, as well as the major variables involved.&lt;/p&gt;

&lt;p&gt;This blog is a summary of chapter 3 part 2 from the book &lt;a href="https://www.interdb.jp/pg/pgsql0302.html"&gt;The Internals of PostgreSQL&lt;/a&gt; written by Hironobu SUZUKI.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Chapter 3: Query Processing Parser to Executor in PostgreSQL</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Fri, 30 Jun 2023 17:01:24 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/chapter-3-query-processing-parser-to-executor-in-postgresql-379d</link>
      <guid>https://dev.to/salarzaisuhaib/chapter-3-query-processing-parser-to-executor-in-postgresql-379d</guid>
      <description>&lt;p&gt;PostgreSQL, a robust and feature-rich relational database management system (RDBMS), uses several subsystems to handle queries effectively. In this blog we'll explore each stage of query processing in PostgreSQL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Parser:
&lt;/h3&gt;

&lt;p&gt;The parser, which accepts a SQL statement as input and produces a parse tree, is at the heart of query processing. The parser does not perform any semantic checks; instead, it concentrates on examining the query's syntax. The parse tree, rooted in the SelectStmt structure, represents the structure of the query and its individual components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Analyzer:
&lt;/h3&gt;

&lt;p&gt;Once the parse tree is generated, it is passed to the analyzer. The analyzer performs semantic analysis, ensuring that the query is valid and meaningful. It generates a query tree, with the Query structure as its root, capturing metadata about the query's type and various clauses. The target list, range table, join tree, and sort clause are some key components of the query tree.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rewriter:
&lt;/h3&gt;

&lt;p&gt;The rewriter intervenes to apply system catalog-stored rules to the query tree. These rules can be used to alter the query tree, enabling capabilities like PostgreSQL views. The rewriter examines the range table node when it comes across a view and replaces it with the relevant parse tree specified by the view's rule.&lt;/p&gt;

&lt;h3&gt;
  
  
  Planner:
&lt;/h3&gt;

&lt;p&gt;The planner creates an optimised plan tree for query execution, using the modified query tree as its starting point. Using cost-based optimisation, the planner examines alternative execution strategies and chooses the most effective one. It builds a plan tree made up of plan nodes that contain the data required for query execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Executor:
&lt;/h3&gt;

&lt;p&gt;The planner then hands the plan tree to the executor to be carried out. The executor processes the plan nodes from the leaf nodes up to the root node. It uses memory areas, creates temporary files as necessary, and reads and writes data through the buffer manager. Using concurrency control methods, the executor ensures that isolation and consistency are maintained while accessing tuples.&lt;/p&gt;

&lt;h4&gt;
  
  
  pg_hint_plan/Planner:
&lt;/h4&gt;

&lt;p&gt;While PostgreSQL does not provide planner hints in SQL by default, the pg_hint_plan extension offers a workaround. With this extension, users can add hints to their queries to influence the planner's decisions. Users can consult the extension's official documentation to learn how to optimise queries with it.&lt;/p&gt;
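
&lt;p&gt;A small sketch of how such a hint looks (the table name is hypothetical; see the pg_hint_plan documentation for the full hint list):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;LOAD 'pg_hint_plan';

/*+ SeqScan(tbl) */
EXPLAIN SELECT * FROM tbl WHERE id = 100;
-- The comment hint asks the planner to use a sequential scan on "tbl"
-- even where an index scan would otherwise be chosen.
&lt;/code&gt;&lt;/pre&gt;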

&lt;p&gt;It's essential to understand PostgreSQL's query processing flow to maximise database performance. The subsystems collaborate seamlessly to translate a SQL statement into actionable results, starting with the initial parsing phase and ending with the final execution. The parser checks the syntax, the analyzer performs semantic analysis, the rewriter applies rules, the planner creates an optimised plan tree, and the executor carries out the plan. Developers and database administrators can improve the effectiveness of their PostgreSQL databases by understanding these procedures and making informed decisions.&lt;/p&gt;

&lt;p&gt;This blog is a summary of Chapter 3 Part 1 of the book &lt;a href="https://www.interdb.jp/pg/pgsql03.html"&gt;The Internals of PostgreSQL&lt;/a&gt; written by Hironobu SUZUKI.&lt;/p&gt;

</description>
      <category>apacheage</category>
      <category>postgres</category>
      <category>postgressql</category>
    </item>
    <item>
      <title>Apache Age: A New Contender in the Graph Database Space?</title>
      <dc:creator>Suhaib Salarzai</dc:creator>
      <pubDate>Fri, 30 Jun 2023 08:59:46 +0000</pubDate>
      <link>https://dev.to/salarzaisuhaib/apache-age-a-new-contender-in-the-graph-database-space-1op6</link>
      <guid>https://dev.to/salarzaisuhaib/apache-age-a-new-contender-in-the-graph-database-space-1op6</guid>
      <description>&lt;p&gt;The need for more effective and efficient data storage solutions grows as technology develops. While conventional relational databases have long been the norm, graph databases are becoming more and more common. This article examines the Apache Age graph database and explains why it may be a fresh new player in this field.&lt;/p&gt;

&lt;h3&gt;
  
  
  Graph Databases:
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Graph Theory:
&lt;/h3&gt;

&lt;p&gt;To store, map, and query relationships between data items, &lt;a href="https://en.wikipedia.org/wiki/Graph_database"&gt;graph databases&lt;/a&gt; employ graph theory. Unlike traditional databases, which rely on predefined table structures, graph databases provide more flexible and dynamic data modelling.&lt;/p&gt;

&lt;h4&gt;
  
  
  Comparison to Traditional Relational Databases:
&lt;/h4&gt;

&lt;p&gt;Traditional relational databases are excellent at maintaining organised data, but when dealing with intricate interactions between data items, they can become slow and ineffective. A more effective and efficient way to manage these kinds of data structures is using graph databases.&lt;/p&gt;

&lt;h4&gt;
  
  
  Advantages of Graph Databases:
&lt;/h4&gt;

&lt;p&gt;Compared with traditional relational databases, graph databases provide several advantages. These consist of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improved query efficiency&lt;/li&gt;
&lt;li&gt;More adaptable data modelling&lt;/li&gt;
&lt;li&gt;Improved scalability for complicated connections&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Apache Age: The New Challenger
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Origins of Apache Age:
&lt;/h4&gt;

&lt;p&gt;Apache Age is a graph database built on top of the PostgreSQL RDBMS (Relational Database Management System). The project started in 2019 and was made publicly available in early 2020.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Architecture of Apache Age:
&lt;/h4&gt;

&lt;p&gt;The Apache Age architecture consists of a storage layer, a transaction layer, and a query layer. The storage layer is responsible for storing the graph data, the transaction layer provides ACID (Atomicity, Consistency, Isolation, Durability) properties for data integrity, and the query layer allows the graph data to be queried efficiently.&lt;/p&gt;

&lt;h4&gt;
  
  
  Comparison of Apache Age and Other Popular Graph Databases:
&lt;/h4&gt;

&lt;p&gt;Despite being a relative newcomer to the graph database market, Apache Age already has some formidable rivals. Established graph databases Neo4j and Amazon Neptune both provide features comparable to Apache Age.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Features of Apache Age:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Apache AGE data Modeling:
&lt;/h4&gt;

&lt;p&gt;The usage of nodes and edges in Apache Age provides flexible data modelling. Data elements are represented by nodes, and interactions between those elements are represented by edges.&lt;/p&gt;

&lt;h4&gt;
  
  
  Apache Age query language:
&lt;/h4&gt;

&lt;p&gt;Apache Age supports both the well-known SQL query language and the openCypher query language. These languages provide sophisticated querying and manipulation of graph data.&lt;/p&gt;

&lt;h4&gt;
  
  
  Compatibility with PostgreSQL:
&lt;/h4&gt;

&lt;p&gt;Apache Age is completely compatible with all PostgreSQL tools and libraries because it is built on top of the PostgreSQL RDBMS.&lt;/p&gt;

&lt;h4&gt;
  
  
  Neo4j vs. Apache Age:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://neo4j.com/"&gt;Neo4j&lt;/a&gt; is an established graph database that has been in existence since 2007. It is a popular choice for handling intricate connections between data items.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Key Differences Between Apache Age and Neo4j:
&lt;/h4&gt;

&lt;p&gt;The key distinctions between Neo4j and Apache Age lie in their underlying architectures: Apache Age is built on top of PostgreSQL and stores its property graph through the relational engine, whereas Neo4j is a native graph database with its own storage engine and data modelling approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started with Apache Age:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Setting up Apache Age:
&lt;/h4&gt;

&lt;p&gt;Installing Apache Age is fairly simple; you can obtain the necessary binaries from the Apache Age website or via package managers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuring a Graph Database:
&lt;/h4&gt;

&lt;p&gt;Once Apache Age has been installed, setting up a graph database is as simple as creating a new database and enabling the Apache Age extension.&lt;/p&gt;
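
&lt;p&gt;In SQL this typically comes down to a few statements (the graph name is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE EXTENSION IF NOT EXISTS age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;

SELECT create_graph('demo_graph');
&lt;/code&gt;&lt;/pre&gt;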

&lt;h4&gt;
  
  
  Inserting/Updating/Retrieving Data:
&lt;/h4&gt;

&lt;p&gt;Apache Age supports both the SQL and Cypher query languages, making it simple to insert, update, and retrieve data in the graph database.&lt;/p&gt;
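
&lt;p&gt;Assuming a graph named demo_graph has already been created with create_graph, inserting and retrieving a node might look like this sketch (the label and property names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Insert a node
SELECT * FROM cypher('demo_graph', $$
    CREATE (p:Person {name: 'Bob'})
$$) AS (result agtype);

-- Retrieve it
SELECT * FROM cypher('demo_graph', $$
    MATCH (p:Person) RETURN p.name
$$) AS (name agtype);
&lt;/code&gt;&lt;/pre&gt;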

&lt;h4&gt;
  
  
  Querying Data:
&lt;/h4&gt;

&lt;p&gt;Apache Age offers several ways to query data, including command-line tools and a straightforward web interface.&lt;/p&gt;

&lt;p&gt;With various benefits over conventional relational databases and rival graph databases like Neo4j, Apache Age is a promising new player in the graph database market. Its interoperability with the PostgreSQL ecosystem and support for distributed setups make it a strong option for many use cases. However, like any technology, Apache Age has its limits, so specific use cases should be considered carefully before deploying this solution.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>apacheage</category>
      <category>postgressql</category>
      <category>database</category>
    </item>
  </channel>
</rss>
