<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sadeed Ahmad</title>
    <description>The latest articles on DEV Community by Sadeed Ahmad (@sadeeddahmad).</description>
    <link>https://dev.to/sadeeddahmad</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1069049%2F4444f933-f1d5-463d-8f09-d41c45f56075.jpeg</url>
      <title>DEV Community: Sadeed Ahmad</title>
      <link>https://dev.to/sadeeddahmad</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sadeeddahmad"/>
    <language>en</language>
    <item>
      <title>PostgreSQL: Space-Partitioned Generalized Search Tree</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Tue, 25 Jul 2023 11:54:00 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/postgresql-space-partitioned-generalized-search-tree-1log</link>
      <guid>https://dev.to/sadeeddahmad/postgresql-space-partitioned-generalized-search-tree-1log</guid>
      <description>&lt;p&gt;PostgreSQL is an impressive open-source relational database management system that offers developers numerous advanced capabilities. PostgreSQL offers support for custom index types. These index types enable developers to create specialized indexes tailored to specific data types, resulting in improved query performance. &lt;/p&gt;

&lt;h2&gt;
  
  
  SP-GiST
&lt;/h2&gt;

&lt;p&gt;The Space-Partitioned Generalized Search Tree (SP-GiST) is a specialized index type designed to efficiently manage complex data types such as geometric shapes, network addresses, and text. Whereas GiST (Generalized Search Tree) indexes are balanced trees, SP-GiST supports non-balanced, space-partitioned structures such as quadtrees, k-d trees, and radix trees, making it a versatile and powerful index option in PostgreSQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working Mechanism
&lt;/h2&gt;

&lt;p&gt;SP-GiST divides the search space into separate regions, each with a label for efficient navigation. The split algorithm is used to divide the regions optimally, considering data distribution and size. &lt;/p&gt;

&lt;p&gt;SP-GiST can handle complex data types, making it suitable for various applications. It's also highly customizable, allowing developers to adjust it according to their needs. Moreover, SP-GiST performs better with high-dimensional data, improving query efficiency and database performance.&lt;/p&gt;
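
&lt;p&gt;As a minimal sketch (table and column names are hypothetical), an SP-GiST index is created like any other PostgreSQL index type:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- hypothetical table with a geometric column
CREATE TABLE points (id serial PRIMARY KEY, p point);

-- point has a default SP-GiST operator class, so this is all that is needed
CREATE INDEX points_p_spgist ON points USING spgist (p);
&lt;/code&gt;&lt;/pre&gt;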

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;SP-GiST is a useful custom index in PostgreSQL, specially designed to boost performance with complex data types. Its space partitioning technique and split algorithm make it efficient and easily adaptable. Developers can use SP-GiST to optimize their PostgreSQL databases for top-notch performance across various applications.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>apache</category>
      <category>database</category>
    </item>
    <item>
      <title>PostgreSQL indexing: Hash Index</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Tue, 25 Jul 2023 11:18:11 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/postgresql-indexing-hash-index-130p</link>
      <guid>https://dev.to/sadeeddahmad/postgresql-indexing-hash-index-130p</guid>
      <description>&lt;p&gt;PostgreSQL offers a wide array of options for database indexing, hash indexing being one of them. In this article, we'll explore the fundamental theory and structure of hash indexing and its implementation in PostgreSQL.&lt;/p&gt;

&lt;p&gt;A hash index is a database indexing method that uses a hash function to map keys to corresponding values. In PostgreSQL, hashing associates a small integer with a value of any data type, providing fast data access in situations where keys are relatively small and the data set remains largely static.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture of Hash Index
&lt;/h2&gt;

&lt;p&gt;The Meta Page is the initial page of the hash index and holds essential information about the index's contents. Bucket Pages are the primary pages of the index, where data is stored in the form of "hash code - TID" pairs; TID stands for "tuple identifier" and locates the actual row in the table. Overflow Pages have a structure similar to bucket pages and are used when a single page is insufficient to hold all the data for a specific bucket. Finally, Bitmap Pages maintain a record of overflow pages that are currently clear and can be reused for other buckets, improving the efficiency of the index.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hash Index in PostgreSQL
&lt;/h2&gt;

&lt;p&gt;Hash indexing in PostgreSQL uses the hash function to calculate an integer hash code for each value in the indexed column. At first, the hash index contains a small number of buckets, but this dynamically increases as the index grows to accommodate data efficiently. When inserting data into the hash index, the hash code determines the specific bucket where the data is stored. The data is organized as "hash code - TID" pairs, with the TID serving as a pointer to the actual data location in the table. &lt;/p&gt;

&lt;p&gt;Different values can produce the same hash code. Hash tables in general resolve such collisions through either open addressing (probing a specific sequence of slots until an empty one is found) or chaining (keeping multiple entries per slot). PostgreSQL's hash index follows the chaining approach: colliding "hash code - TID" pairs are stored in the same bucket, spilling into overflow pages when a bucket page fills, and bitmap pages are employed to manage the allocation of those overflow pages effectively. &lt;/p&gt;

&lt;p&gt;Hash indexing is most effective for static datasets with relatively small keys, providing rapid data access for read-intensive workloads in PostgreSQL.&lt;/p&gt;
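
&lt;p&gt;A minimal sketch (table and column names are hypothetical) of creating and using a hash index:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE TABLE users (id integer, email text);
CREATE INDEX users_email_hash ON users USING hash (email);

-- equality lookups can use the hash index; range predicates cannot
SELECT id FROM users WHERE email = 'alice@example.com';
&lt;/code&gt;&lt;/pre&gt;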

</description>
      <category>postgres</category>
      <category>database</category>
      <category>apache</category>
      <category>age</category>
    </item>
    <item>
      <title>Understanding AgensSQL Functionalities</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Mon, 24 Jul 2023 10:47:08 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/agenssql-why-use-it-j03</link>
      <guid>https://dev.to/sadeeddahmad/agenssql-why-use-it-j03</guid>
      <description>&lt;p&gt;Graph databases are used for storing and performing queries on interconnected data with complex relationships between entities. Graph databases use nodes, edges, and properties to represent and store data.&lt;/p&gt;

&lt;p&gt;AgensSQL adds these capabilities to PostgreSQL, allowing developers to use familiar SQL-like syntax to query and operate on graph data stored alongside their relational data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Functionalities of AgensSQL
&lt;/h2&gt;

&lt;p&gt;With the help of AgensSQL, graph nodes and edges can be created and stored in PostgreSQL, and queries very similar to SQL can be used to retrieve specific patterns in the graph.&lt;br&gt;
This adds the ability to perform graph analytics, gain insights from connected data, and navigate the relationships within it.&lt;/p&gt;
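
&lt;p&gt;A small sketch of what this looks like in practice, using AgensGraph-style openCypher syntax (assuming AgensSQL inherits it from its AgensGraph lineage; all names are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE GRAPH social;
SET graph_path = social;

-- create two nodes and an edge between them
CREATE (:person {name: 'Alice'})-[:follows]-&gt;(:person {name: 'Bob'});

-- retrieve a pattern from the graph with an SQL-like query
MATCH (a:person)-[:follows]-&gt;(b:person) RETURN a.name, b.name;
&lt;/code&gt;&lt;/pre&gt;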

&lt;h2&gt;
  
  
  Practical Implementation of AgensSQL
&lt;/h2&gt;

&lt;p&gt;Organizations can use AgensSQL effectively by implementing its capabilities to manage and analyze highly interconnected data efficiently.&lt;/p&gt;

&lt;p&gt;Complex relationships can be modelled using nodes and edges, which represent entities and the connections between them. This supports data-driven analytics tasks such as finding shortest paths, identifying patterns, and detecting anomalies within the graph data. AgensSQL can be used alongside traditional relational data in PostgreSQL, providing both graph and non-graph data handling and storage in one compact platform.&lt;/p&gt;

&lt;p&gt;AgensSQL can be very helpful in finding connections and relevance in data, for example to monitor user preferences, and it can be used to detect patterns that indicate fraudulent activity.&lt;br&gt;
It can also be integrated with machine learning libraries and tools to enhance data analysis and enable predictive modeling on graph data, even in real time.&lt;/p&gt;

&lt;p&gt;Organizations should invest in training and upskilling their teams on graph database concepts and AgensSQL's specific features in accordance with their use case. &lt;/p&gt;

</description>
      <category>agenssql</category>
      <category>postgres</category>
      <category>database</category>
      <category>apache</category>
    </item>
    <item>
      <title>PostgreSQL: An Introduction to Indexes</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Sat, 15 Jul 2023 19:03:20 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/postgresql-an-introduction-to-indexes-3ih3</link>
      <guid>https://dev.to/sadeeddahmad/postgresql-an-introduction-to-indexes-3ih3</guid>
      <description>&lt;p&gt;Indexes serve as auxiliary structures within a database, serving two main purposes: enhancing data retrieval speed and enforcing integrity rules. Lets explore indexes in PostgreSQL in detail, look into the different types of indexes available, explain why there is such diversity, and see how they can be utilized to optimize query performance. &lt;/p&gt;

&lt;h2&gt;
  
  
  Index Types
&lt;/h2&gt;

&lt;p&gt;As of version 9.6, PostgreSQL offers six built-in types of indexes. All index types associate a specific key with the corresponding table rows that contain that key. Each row is uniquely identified by a tuple id (TID), which comprises the block number within the file and the row's position within that block.&lt;/p&gt;
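
&lt;p&gt;As a quick illustration (assuming any existing table, here called &lt;code&gt;tbl&lt;/code&gt;), each row's TID is exposed through the hidden system column &lt;code&gt;ctid&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- ctid shows (block number, position within block) for each row
SELECT ctid, * FROM tbl LIMIT 3;
&lt;/code&gt;&lt;/pre&gt;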

&lt;p&gt;Any operation performed on indexed data, such as inserting, deleting, or updating table rows, necessitates updating the indexes for that particular table within the same transaction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extensibility of Indexes
&lt;/h2&gt;

&lt;p&gt;PostgreSQL's general indexing engine exposes an interface that makes it easy to add new access methods to the system. The engine obtains tuple ids (TIDs) from an access method and then performs the remaining work: it retrieves the appropriate row versions, either individually by TID or in batches using a prebuilt bitmap, and it determines the visibility of each row version for the current transaction, taking its isolation level into account.&lt;/p&gt;

&lt;p&gt;The indexing engine is involved in query execution and is invoked based on a plan generated during the optimization stage. The optimizer evaluates various ways to execute the query, taking into account the capabilities of all potential access methods.&lt;/p&gt;

&lt;p&gt;By using the indexing engine, PostgreSQL ensures uniform handling of different access methods while considering their specific characteristics. The primary scanning techniques are index scan and bitmap scan. In an index scan, TID values are sequentially returned until the last matching row is reached, and the indexing engine accesses the table rows indicated by these TIDs. On the other hand, in a bitmap scan, all TIDs matching the condition are initially returned, and a bitmap of row versions is constructed based on these TIDs before reading the corresponding row versions from the table. The choice between these scanning techniques depends on factors such as the number of retrieved rows and is determined by the optimizer.&lt;/p&gt;
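
&lt;p&gt;As a rough sketch (table, column, and index names are hypothetical, and the planner's actual choice depends on table statistics), EXPLAIN reveals which scanning technique the optimizer selected:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;EXPLAIN SELECT * FROM tbl WHERE id = 42;
-- a selective predicate typically yields: Index Scan using tbl_id_idx on tbl

EXPLAIN SELECT * FROM tbl WHERE id BETWEEN 1 AND 50000;
-- many matching rows typically yield: Bitmap Index Scan feeding a Bitmap Heap Scan
&lt;/code&gt;&lt;/pre&gt;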

</description>
      <category>postgres</category>
      <category>postgressql</category>
      <category>database</category>
      <category>apach</category>
    </item>
    <item>
      <title>PostgreSQL: GIST Indexes</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Thu, 13 Jul 2023 13:39:12 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/postgresql-gist-indexes-2jl2</link>
      <guid>https://dev.to/sadeeddahmad/postgresql-gist-indexes-2jl2</guid>
      <description>&lt;p&gt;PostgreSQL, a widely used open-source relational database management system. One of its notable capabilities is its ability to handle a wide array of types of indexes, including the Generalized Search Tree (GIST) index for enhancing query performance within PostgreSQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  GIST Indexes
&lt;/h2&gt;

&lt;p&gt;GIST indexes are a specialized index type that handles complex data types such as geometric shapes, full-text documents, and network addresses in PostgreSQL. They are structured as balanced search trees capable of handling intricate data structures. A GIST index divides the data into overlapping rectangles or boxes that contain the data points, and these boxes are organized into a hierarchical tree structure.&lt;/p&gt;

&lt;p&gt;During query execution, PostgreSQL navigates the GIST index tree: it starts at the root node and moves down the branches based on the query conditions, which lets GIST indexes handle such queries with both efficiency and accuracy.&lt;/p&gt;
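
&lt;p&gt;A brief sketch (table, column, and index names are hypothetical) of a GIST index on geometric data:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE TABLE shapes (id serial, area box);
CREATE INDEX shapes_area_gist ON shapes USING gist (area);

-- containment query that the GIST index can accelerate
SELECT id FROM shapes WHERE area @&gt; box '((1,1),(2,2))';
&lt;/code&gt;&lt;/pre&gt;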

&lt;h2&gt;
  
  
  How GIST indexes improve accuracy
&lt;/h2&gt;

&lt;p&gt;GIST indexes enhance query performance by letting PostgreSQL locate the relevant data quickly: the index narrows the search to the required rows instead of forcing a scan of the entire table. GIST indexes can speed up full-text search queries, streamline network address queries, and optimize queries involving multi-dimensional data, allowing PostgreSQL to handle these complex queries and deliver results more promptly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;GIST indexes in PostgreSQL offer substantial benefits for optimizing queries involving complex data types and multi-dimensional data. With their ability to divide data into overlapping rectangles or boxes and establish a hierarchical tree structure, GIST indexes enable PostgreSQL to efficiently locate the necessary data without the need for complete table scans. By leveraging GIST indexes, you can significantly enhance query performance and streamline your database operations.&lt;/p&gt;

</description>
      <category>apach</category>
      <category>postgres</category>
      <category>database</category>
    </item>
    <item>
      <title>AgensSQL: An All-New Relational DBMS</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Tue, 11 Jul 2023 20:29:05 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/agenssql-an-all-new-relational-dbms-20m9</link>
      <guid>https://dev.to/sadeeddahmad/agenssql-an-all-new-relational-dbms-20m9</guid>
      <description>&lt;p&gt;AgensSQL is a robust DBMS that is primarily based on PostgreSQL and it provides a wide range of effective data management functionalities. It empowers users by offering easy access to efficient data management capabilities. AgensSQL incorporates various features to secure the user data in the best way possible. These measures are in place to safeguard valuable information and maintain its confidentiality.&lt;/p&gt;

&lt;p&gt;Additionally, AgensSQL presents an array of innovative features as an upgrade on its predecessors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enhanced Security
&lt;/h2&gt;

&lt;p&gt;The protection of user data is a key focus for AgensSQL. It implements data redaction, auditing capabilities, and password policies: a user password policy reinforces login security, excessive failed login attempts lock an account, and administrators can define rules for password complexity. Together these measures ensure that your valuable information remains safe and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Sharding
&lt;/h2&gt;

&lt;p&gt;AgensSQL's data sharding works well with large datasets. It allows for the distribution of data across multiple databases, enabling efficient management and analysis of huge amounts of information, which helps address the challenges posed by substantial data volumes. &lt;/p&gt;

&lt;h2&gt;
  
  
  High Availability
&lt;/h2&gt;

&lt;p&gt;The Agens Enterprise Package includes Agens HA Manager (AHM), which guarantees uninterrupted operations. AHM is equipped with failover, failback, and load balancing capabilities, ensuring maximum availability and minimizing downtime by allowing seamless transition and distribution of workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  User-Friendly Management
&lt;/h2&gt;

&lt;p&gt;Agens Enterprise Manager (AEM) streamlines database management by offering a user-friendly interface that simplifies operations such as backups, audits, SQL monitoring, performance optimization, and scheduling.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>postgressql</category>
      <category>agenssql</category>
      <category>database</category>
    </item>
    <item>
      <title>Optimizing The Efficiency of Vacuum Processing - Visibility Map</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Mon, 19 Jun 2023 14:23:33 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/optimizing-the-efficiency-of-vacuum-processing-visibility-map-44ab</link>
      <guid>https://dev.to/sadeeddahmad/optimizing-the-efficiency-of-vacuum-processing-visibility-map-44ab</guid>
      <description>&lt;p&gt;The concept of a visibility map is a feature introduced in PostgreSQL to optimize the efficiency of vacuum processing. It tracks the visibility information of individual data pages within a table.&lt;/p&gt;

&lt;p&gt;The main purpose of the visibility map is to reduce the need for scanning entire tables during the vacuum process. When vacuuming a table, one of the primary tasks is to identify and remove dead tuples (rows) from the table's data pages. However, scanning every page of a table can be a time-consuming and resource-intensive operation, especially for large tables. The visibility map addresses this challenge by storing a bit for each data page in a separate map file. This bit represents the visibility status of the tuples on the corresponding data page. It can have two states: "All Visible" or "Some Tuples May Be Invisible."&lt;/p&gt;

&lt;p&gt;During vacuum processing, instead of scanning every data page, PostgreSQL first checks the visibility map to determine if a particular page has all visible tuples. If the visibility map indicates that all tuples on a page are visible to all transactions, PostgreSQL can skip the scanning of that page altogether. This optimization significantly reduces the amount of I/O and processing required during vacuuming.&lt;/p&gt;
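
&lt;p&gt;The visibility map can be inspected directly with the &lt;code&gt;pg_visibility&lt;/code&gt; extension (the table name here is hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE EXTENSION pg_visibility;

-- one row per data page: is the page all-visible / all-frozen?
SELECT blkno, all_visible, all_frozen FROM pg_visibility_map('tbl');
&lt;/code&gt;&lt;/pre&gt;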

&lt;h2&gt;
  
  
  Freeze Processing
&lt;/h2&gt;

&lt;p&gt;Freeze processing in PostgreSQL offers two modes, known as lazy mode and eager mode, which are chosen based on specific conditions.&lt;/p&gt;

&lt;p&gt;By default, freeze processing operates in lazy mode, where it scans only the pages containing dead tuples using the visibility map (VM) of the target tables. On the other hand, eager mode is triggered under certain circumstances. In eager mode, freeze processing scans all pages, regardless of whether they contain dead tuples or not. &lt;/p&gt;

&lt;h2&gt;
  
  
  Improving Freeze Processing in Eager Mode
&lt;/h2&gt;

&lt;p&gt;Earlier versions of the eager mode in PostgreSQL were inefficient as they would scan all pages, even if all the tuples within a page were already frozen.&lt;/p&gt;

&lt;p&gt;To address this inefficiency, improvements were made to the visibility map (VM) and freeze process in later versions. The enhanced VM now includes information about whether all tuples within a page are frozen or not. Consequently, when freeze processing is executed in eager mode, pages that solely consist of frozen tuples can be skipped, resulting in improved efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ulmashOi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tm5r9jqcr3c3wfc8zvi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ulmashOi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tm5r9jqcr3c3wfc8zvi9.png" alt="Image description" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apachea</category>
      <category>postgressql</category>
      <category>apache</category>
    </item>
    <item>
      <title>Vacuum Processing in PostgreSQL</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Mon, 19 Jun 2023 13:05:34 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/vacuum-processing-in-postgresql-19ae</link>
      <guid>https://dev.to/sadeeddahmad/vacuum-processing-in-postgresql-19ae</guid>
      <description>&lt;p&gt;Vacuum processing in PostgreSQL ensures the smooth operation of the database system. Its two primary tasks are removing dead tuples and freezing transaction IDs. For the removal of dead tuples, vacuum processing has two modes: Concurrent VACUUM and Full VACUUM. &lt;/p&gt;

&lt;p&gt;Concurrent VACUUM, commonly referred to as VACUUM, eliminates dead tuples on a per-page basis, allowing other transactions to read the table concurrently. On the other hand, Full VACUUM removes dead tuples and performs defragmentation for the entire file, but during this process, other transactions are unable to access the tables.&lt;/p&gt;

&lt;p&gt;Despite its significance in PostgreSQL, the advancement of vacuum processing has been relatively slow compared to other functionalities. Due to the requirement of scanning entire tables, vacuum processing can be resource-intensive.&lt;/p&gt;
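
&lt;p&gt;The two modes are invoked as follows (the table name is hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;VACUUM tbl;       -- concurrent VACUUM: reclaims dead tuples, table stays readable
VACUUM FULL tbl;  -- rewrites the whole table compactly; takes an exclusive lock
&lt;/code&gt;&lt;/pre&gt;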

&lt;h2&gt;
  
  
  First Block
&lt;/h2&gt;

&lt;p&gt;This section of the process involves performing freeze processing and removing index tuples that point to dead tuples. At first, PostgreSQL scans the target table to create a list of dead tuples and potentially freeze old tuples. This list is saved in local memory. PostgreSQL uses the dead tuple list to remove corresponding index tuples once the scanning is complete. This step is called the "cleanup stage" and is resource-intensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Second Block
&lt;/h2&gt;

&lt;p&gt;This block removes dead tuples and updates both the FSM and VM on a page-by-page basis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bYMP8mpF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2gwvtq4etddvbtp6zks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bYMP8mpF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2gwvtq4etddvbtp6zks.png" alt="Image description" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Third Block
&lt;/h2&gt;

&lt;p&gt;After the index tuples are deleted, the third block carries out the cleanup process, updating the relevant statistics and system catalogs.&lt;/p&gt;

&lt;p&gt;Once this post-processing finishes, the vacuum operation is considered complete.&lt;/p&gt;

</description>
      <category>postgressql</category>
      <category>apacheage</category>
      <category>age</category>
    </item>
    <item>
      <title>Network Address Data Types in PostgreSQL - Part 2</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Wed, 14 Jun 2023 09:20:01 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/network-address-data-types-in-postgresql-part-2-27pe</link>
      <guid>https://dev.to/sadeeddahmad/network-address-data-types-in-postgresql-part-2-27pe</guid>
      <description>&lt;p&gt;In continuation of the last blog, further data types for storing IPv4, IPv6, and MAC addresses in PostgreSQL are discussed. Rather than utilizing basic text types, it is better to use these specific types for storing network addresses.&lt;/p&gt;

&lt;h2&gt;
  
  
  macaddr
&lt;/h2&gt;

&lt;p&gt;The macaddr type is designed to store MAC (Media Access Control) addresses, the identifiers assigned to network interfaces.&lt;/p&gt;

&lt;p&gt;The macaddr type in PostgreSQL allows you to store MAC addresses in a compact manner. The standard format of six groups of two hexadecimal digits is followed, separated by colons or hyphens. For example, a MAC address may look like "08:00:2b:04:02:05" or "08-00-2b-01-02-03".&lt;/p&gt;

&lt;p&gt;The macaddr type provides several benefits. First off, it ensures that the stored values are in the correct format. This helps prevent input errors when working with MAC addresses. It also provides specialized operators and functions for manipulating and comparing MAC addresses, making it easier to perform operations such as sorting and searching based on MAC addresses.&lt;/p&gt;

&lt;p&gt;MAC addresses are commonly associated with Ethernet networks. They can also be used for other purposes, such as in Bluetooth devices and Wi-Fi networks.&lt;/p&gt;

&lt;h2&gt;
  
  
  macaddr8
&lt;/h2&gt;

&lt;p&gt;macaddr8 is an extension of the macaddr type, introduced in PostgreSQL version 10 to address the limitation of the original type, which only supported 48-bit MAC addresses.&lt;/p&gt;

&lt;p&gt;It is designed to store 64-bit MAC addresses which provides a larger address space to accommodate more unique identifiers. The format is similar to macaddr, with eight groups of two hexadecimal digits. For example, "00:0a:95:9d:28:12:23:52" or "00-0A-45-9D-64-12-23-22".&lt;/p&gt;

&lt;p&gt;By using macaddr8, PostgreSQL allows for the storage, manipulation, and comparison of 64-bit MAC addresses, ensuring data integrity and providing specialized operators and functions for working with them.&lt;/p&gt;

&lt;p&gt;To convert a traditional 48 bit MAC address in EUI-48 format to modified EUI-64 format to be included as the host portion of an IPv6 address, use macaddr8_set7bit as shown:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jfm32SgU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghxvrugi9fy9nw7ib1w0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jfm32SgU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghxvrugi9fy9nw7ib1w0.jpg" alt="Image description" width="782" height="272"&gt;&lt;/a&gt;&lt;/p&gt;
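
&lt;p&gt;The same conversion can also be written as a plain SQL call: per the PostgreSQL documentation, casting a 48-bit address to macaddr8 inserts the FF:FE bytes, and macaddr8_set7bit then sets the 7th (universal/local) bit:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT macaddr8_set7bit(macaddr8 '08:00:2b:01:02:03');
-- documented result: 0a:00:2b:ff:fe:01:02:03
&lt;/code&gt;&lt;/pre&gt;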

</description>
      <category>apacheage</category>
      <category>age</category>
      <category>postgres</category>
      <category>postgressql</category>
    </item>
    <item>
      <title>Network Address Data Types in PostgreSQL - Part 1</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Wed, 14 Jun 2023 08:44:45 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/network-address-data-types-in-postgresql-part-1-4i91</link>
      <guid>https://dev.to/sadeeddahmad/network-address-data-types-in-postgresql-part-1-4i91</guid>
      <description>&lt;p&gt;PostgreSQL provides multiple data types for storing IPv4, IPv6, and MAC addresses. Instead of using basic text types, it is better to use these specific types for storing network addresses. The reason being the fact that these data types provide error validation during input and offer dedicated operators and functions for manipulating them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ifWs6Sly--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8j7qnxtvmytk6f981b9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ifWs6Sly--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8j7qnxtvmytk6f981b9.jpg" alt="Image description" width="642" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  inet
&lt;/h2&gt;

&lt;p&gt;The inet type can store both IPv4 and IPv6 host addresses, along with their subnets, within a single field. The subnet is defined by the number of network-address bits included in the host address (the "netmask"). &lt;/p&gt;

&lt;p&gt;In the case of IPv4 addresses with a netmask of 32, it signifies a single host rather than a subnet. For IPv6, the address length is fixed at 128 bits, resulting in a unique host address when all 128 bits are specified. &lt;/p&gt;

&lt;p&gt;It is recommended to use the cidr type instead of inet if you want to store network addresses and not individual hosts.&lt;/p&gt;

&lt;h2&gt;
  
  
  cidr
&lt;/h2&gt;

&lt;p&gt;The cidr type in PostgreSQL is designed to store specifications of IPv4 or IPv6 networks. The input and output formats follow the conventions of Classless Internet Domain Routing (CIDR). The format follows the pattern of "address/y," where "address" represents the lowest address of the network, expressed as an IPv4 or IPv6 address, and "y" indicates the number of bits in the netmask.&lt;/p&gt;

&lt;p&gt;If the "y" value is not provided, it is calculated based on assumptions derived from the older classful network numbering system. However, the calculated value will always be large enough to encompass all the octets specified in the input. It is important to note that specifying network addresses with bits set to the right of the specified netmask is considered an error and should be avoided.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X1SlGxZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mow4p13vhejcrfo79rcm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X1SlGxZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mow4p13vhejcrfo79rcm.jpg" alt="Image description" width="800" height="813"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The important difference between inet and cidr data types is that inet accepts values with nonzero bits to the right of the netmask, whereas cidr does not.&lt;/p&gt;
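
&lt;p&gt;A short sketch of that difference:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT '192.168.0.1/24'::inet;  -- accepted: a host address with its netmask
SELECT '192.168.0.0/24'::cidr;  -- accepted: a network specification
SELECT '192.168.0.1/24'::cidr;  -- error: nonzero bits to the right of the netmask
&lt;/code&gt;&lt;/pre&gt;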

</description>
      <category>apache</category>
      <category>apacheage</category>
      <category>age</category>
      <category>postgressql</category>
    </item>
    <item>
      <title>The Internals of PostgreSQL: Query Processing - Part 2</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Mon, 15 May 2023 13:06:05 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/the-internals-of-postgresql-query-processing-part-2-2e92</link>
      <guid>https://dev.to/sadeeddahmad/the-internals-of-postgresql-query-processing-part-2-2e92</guid>
      <description>&lt;p&gt;We will be discussing rest of the three subsystems in detail in this article.&lt;/p&gt;

&lt;p&gt;Rewriter&lt;br&gt;
Planner&lt;br&gt;
Executor&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wp2zlVV5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/splgvde6fpj4buo9vhnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wp2zlVV5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/splgvde6fpj4buo9vhnl.png" alt="Image description" width="800" height="811"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Rewriter
&lt;/h2&gt;

&lt;p&gt;The rewriter is responsible for implementing the rule system and modifying a query tree based on the rules stored in the pg_rules system catalog, if required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qyvNrE-i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ive7sweifyuk7tt03bq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qyvNrE-i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ive7sweifyuk7tt03bq.png" alt="Image description" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Planner
&lt;/h2&gt;

&lt;p&gt;After receiving the query tree from the rewriter, the planner generates a plan tree that the executor can process as efficiently as possible.&lt;br&gt;
It is important to note that the planner in PostgreSQL relies exclusively on cost-based optimization; it supports neither rule-based optimization nor hints.&lt;br&gt;
A plan tree is composed of elements called plan nodes and is connected to the planTree field of the PlannedStmt structure. These nodes are defined in plannodes.h.&lt;/p&gt;
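
&lt;p&gt;The plan tree the planner produces can be inspected with EXPLAIN. A hedged sketch (the table tbl_a and its columns follow the example used below; costs are elided, and the exact plan depends on the data and available indexes):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;EXPLAIN SELECT id, data FROM tbl_a WHERE id &amp;lt; 300 ORDER BY data;
--  Sort  (cost=...)
--    Sort Key: data
--    -&amp;gt;  Seq Scan on tbl_a  (cost=...)
--          Filter: (id &amp;lt; 300)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Each line of the EXPLAIN output corresponds to one plan node: here a sort node sitting on top of a sequential scan node.&lt;/p&gt;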

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_DEeV_EJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p9bs5ei23841a5sjyp8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_DEeV_EJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p9bs5ei23841a5sjyp8k.png" alt="Image description" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Executor
&lt;/h2&gt;

&lt;p&gt;The executor in PostgreSQL relies on the information in each plan node to carry out its processing. In the case of a single-table query, the executor processes the plan tree from the bottom (the leaf nodes) up to the root.&lt;/p&gt;

&lt;p&gt;For instance, consider the simple plan tree illustrated above, consisting of a sort node on top of a sequential scan node. Here the executor scans the table "tbl_a" with a sequential scan and then sorts the result.&lt;br&gt;
To access tables and indexes in the database cluster, the executor goes through the buffer manager. During query processing it uses pre-allocated memory areas such as temp_buffers and work_mem, and creates temporary files when necessary.&lt;br&gt;
Moreover, PostgreSQL employs a concurrency control mechanism to maintain the consistency and isolation of running transactions while tuples are accessed. &lt;/p&gt;
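
&lt;p&gt;The effect of work_mem is easy to observe with EXPLAIN ANALYZE: a sort that fits in work_mem runs in memory, while a larger one spills to temporary files. A hedged sketch (the reported sort method depends on the size of the data):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SET work_mem = '64kB';  -- deliberately small
EXPLAIN ANALYZE SELECT * FROM tbl_a ORDER BY data;
-- A large sort typically reports: Sort Method: external merge  Disk: ...

SET work_mem = '64MB';
EXPLAIN ANALYZE SELECT * FROM tbl_a ORDER BY data;
-- A sort that fits in memory reports: Sort Method: quicksort  Memory: ...
&lt;/code&gt;&lt;/pre&gt;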

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y3qOox1q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dngaofcha39l7x0iukic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y3qOox1q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dngaofcha39l7x0iukic.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>apacheage</category>
      <category>age</category>
      <category>apache</category>
    </item>
    <item>
      <title>The Internals of PostgreSQL: Query Processing - Part 1</title>
      <dc:creator>Sadeed Ahmad</dc:creator>
      <pubDate>Mon, 15 May 2023 12:50:07 +0000</pubDate>
      <link>https://dev.to/sadeeddahmad/the-internals-of-postgresql-query-processing-1e0a</link>
      <guid>https://dev.to/sadeeddahmad/the-internals-of-postgresql-query-processing-1e0a</guid>
      <description>&lt;p&gt;Query processing is a critical aspect of any database management system, and PostgreSQL excels in this domain with its efficient and versatile query processing capabilities. In this article, we delve into some valuable insights to gain a comprehensive understanding of PostgreSQL's query processing mechanisms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backend Subsystems
&lt;/h2&gt;

&lt;p&gt;We cover the first two subsystems in this part:&lt;/p&gt;

&lt;p&gt;Parser&lt;br&gt;
The parser generates a parse tree from an SQL statement in plain text.&lt;/p&gt;

&lt;p&gt;Analyzer&lt;br&gt;
The analyzer carries out a semantic analysis of a parse tree and generates a query tree.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---l0u7a8u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p5vp53bvb50lrbc141xz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---l0u7a8u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p5vp53bvb50lrbc141xz.png" alt="Image description" width="800" height="811"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Parser
&lt;/h2&gt;

&lt;p&gt;The parser in PostgreSQL is responsible for transforming an SQL statement written in plain text into a parse tree. This parse tree serves as a structured representation of the SQL statement, allowing subsequent subsystems to interpret and analyze it. The root node of the parse tree is the SelectStmt structure, defined in parsenodes.h.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--noAoow8G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/keeljis7zucfdwup76d1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--noAoow8G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/keeljis7zucfdwup76d1.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;
The numbering of elements in the SELECT query and the corresponding parse tree elements are identical. For instance, the first item in the target list, which is the column 'id' of the table, is numbered (1), while the WHERE clause is numbered (4), and so on.&lt;br&gt;
The parser is responsible for checking the syntax of the input query when generating a parse tree. Therefore, it only reports an error if there is a syntax error in the query.&lt;br&gt;
However, the parser does not verify the semantics of the input query. For example, if the query contains a table name that does not exist, the parser does not report an error. The analyzer is responsible for performing semantic checks.&lt;/p&gt;
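
&lt;p&gt;This division of labour between the parser and the analyzer is easy to demonstrate (error messages abbreviated):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Syntax error: rejected by the parser
SELECT * FROM;
-- ERROR:  syntax error at or near ";"

-- Syntactically valid but semantically wrong: rejected by the analyzer
SELECT * FROM no_such_table;
-- ERROR:  relation "no_such_table" does not exist
&lt;/code&gt;&lt;/pre&gt;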

&lt;h2&gt;
  
  
  Analyzer
&lt;/h2&gt;

&lt;p&gt;After the parser generates a parse tree, the analyzer performs a semantic analysis and produces a query tree.&lt;br&gt;
The Query structure, which is defined in parsenodes.h, serves as the root of the query tree. This structure contains metadata about the query.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lajE3DGP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35k42uud1d2kapk8lnkr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lajE3DGP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35k42uud1d2kapk8lnkr.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a brief description of the query tree:&lt;br&gt;
The targetlist is the list of columns that form the output of the query. In this example it consists of two columns: 'id' and 'data'. If the input query uses the '*' (asterisk) symbol, the analyzer replaces it explicitly with all of the table's columns.&lt;br&gt;
The range table is a list of the relations used in the query.&lt;br&gt;
The join tree stores the FROM and WHERE clauses.&lt;br&gt;
The sort clause is a list of SortGroupClause nodes.&lt;/p&gt;

</description>
      <category>apache</category>
      <category>postgres</category>
      <category>age</category>
    </item>
  </channel>
</rss>
