<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Denis Magda</title>
    <description>The latest articles on DEV Community by Denis Magda (@denismagda).</description>
    <link>https://dev.to/denismagda</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F273363%2Fb216979d-1cea-446a-94d1-30a4f61d9f09.jpg</url>
      <title>DEV Community: Denis Magda</title>
      <link>https://dev.to/denismagda</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/denismagda"/>
    <language>en</language>
    <item>
      <title>Geo-Distributed Microservices and Their Database: Fighting the High Latency</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Mon, 07 Nov 2022 16:57:01 +0000</pubDate>
      <link>https://dev.to/yugabyte/geo-distributed-microservices-and-their-database-fighting-the-high-latency-3fld</link>
      <guid>https://dev.to/yugabyte/geo-distributed-microservices-and-their-database-fighting-the-high-latency-3fld</guid>
      <description>&lt;p&gt;Ahoy, mateys!&lt;/p&gt;

&lt;p&gt;My development journey with the first version of the &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger" rel="noopener noreferrer"&gt;geo-distributed messenger&lt;/a&gt; has come to an end. So, in this final article of the &lt;a href="https://dev.to/denismagda/series/19150"&gt;series&lt;/a&gt;, I’d like to talk about the multi-region database deployment options that were validated for the messenger.&lt;/p&gt;

&lt;p&gt;The high-level architecture of the geo-messenger is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq2f9n1mkn1n2xg7a6ai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdq2f9n1mkn1n2xg7a6ai.png" alt="Image description" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The microservice instances of the application layer are scattered across the globe in cloud regions of choice. The API layer, powered by Kong Gateway, lets the microservices communicate with each other via simple REST endpoints.&lt;/p&gt;

&lt;p&gt;The global load balancer intercepts user requests at the nearest PoP (point-of-presence) and forwards the requests to microservice instances that are geographically closest to the user.&lt;/p&gt;

&lt;p&gt;Once the microservice instance receives a request from the load balancer, it’s very likely that the service will need to read data from the database or write changes to it. And this step can become a painful bottleneck: an issue you will need to solve if the database is located far away from the microservice instance and the user.&lt;/p&gt;

&lt;p&gt;In this article, I’ll select a few multi-region database deployment options and demonstrate how to keep the read and write latency for database queries low regardless of the user’s location.&lt;/p&gt;

&lt;p&gt;So, if you’re still with me on this journey, then, as the pirates used to say, “Weigh anchor and hoist the mizzen!” which means, “Pull up the anchor and get this ship sailing!” &lt;/p&gt;

&lt;h1&gt;
  
  
  Multi-Region Database Deployment Options
&lt;/h1&gt;

&lt;p&gt;There is no silver bullet for multi-region database deployments. It’s all about tradeoffs. Each option gives you certain advantages and costs you something in return.&lt;/p&gt;

&lt;p&gt;YugabyteDB, my &lt;a href="https://www.yugabyte.com/tech/distributed-sql/" rel="noopener noreferrer"&gt;distributed SQL database&lt;/a&gt; of choice, supports four &lt;a href="https://dzone.com/articles/exploring-multi-region-database-deployment-options" rel="noopener noreferrer"&gt;multi-region database deployment options&lt;/a&gt; that are used by geo-distributed applications:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Single Stretched Cluster&lt;/strong&gt;: The database cluster is “stretched” across multiple regions. The regions are usually located in relatively close proximity (e.g., US Midwest and East regions). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Single Stretched Cluster With Read Replicas&lt;/strong&gt;: As with the previous option, the cluster is deployed in one geographic location (e.g., North America) across cloud regions in relatively close proximity (e.g., US Midwest and East regions). But with this option, you can add read replica nodes to distant geographic locations (e.g., Europe and Asia) to improve performance for read workloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Single Geo-Partitioned Cluster&lt;/strong&gt;: The database cluster is spread across multiple distant geographic locations (e.g., North America, Europe, and Asia). Each geographic location has its own group of nodes deployed across one or more local regions in close proximity (e.g., US Midwest and East regions). Data is automatically pinned to a specific group of nodes based on the value of a geo-partitioning column. With this deployment option, you achieve low latency for both read and write workloads across the globe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multiple Clusters With Async Replication&lt;/strong&gt;: Multiple standalone clusters are deployed in various regions. The regions can be located in relatively close proximity (e.g., US Midwest and East regions) or in distant locations (e.g., US East and Asia South regions). The changes are replicated asynchronously between the clusters. You will achieve low latency for both read and write workloads across the globe, the same as with the previous option, but you will deal with multiple standalone clusters that exchange changes asynchronously.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Alright, matey, let’s move forward and review the first three deployment options for the geo-messenger’s use case. I will skip the fourth one since it doesn’t fit the messenger’s architecture, which requires a single database cluster.&lt;/p&gt;

&lt;h1&gt;
  
  
  Single Stretched Cluster in the USA
&lt;/h1&gt;

&lt;p&gt;The first cluster spans three regions in the USA—US West, Central, and East.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5lwrj9bfi5pvffz0bd0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5lwrj9bfi5pvffz0bd0.png" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Application/microservice instances and the API servers (not shown in the picture) are running in those same locations plus in the Europe West and Asia East regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Read/Write Latency for the US Traffic
&lt;/h2&gt;

&lt;p&gt;Suppose Ms. Blue is working from Iowa, US this week. She opens the geo-messenger to send a note to a corporate channel. Her traffic will be processed by the microservice instance deployed in the US Central region. &lt;/p&gt;

&lt;p&gt;Before Ms. Blue can send the message, the US Central microservice instance has to load the channel’s message history. Which database node will be serving that read request?&lt;/p&gt;

&lt;p&gt;In my case, the US Central region is configured as the &lt;a href="https://docs.yugabyte.com/preview/admin/yb-admin/#set-preferred-zones" rel="noopener noreferrer"&gt;preferred region&lt;/a&gt; for YugabyteDB, meaning a database node from that region will handle all the read requests from the application layer. It takes &lt;em&gt;10-15 ms&lt;/em&gt; to load the channel’s message history from that US Central database node to the application instance in the same region. Here is the output of my Spring Boot log, with the last line showing the query execution time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt; &lt;span class="n"&gt;by&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;asc&lt;/span&gt;

&lt;span class="no"&gt;INFO&lt;/span&gt; &lt;span class="mi"&gt;11744&lt;/span&gt; &lt;span class="o"&gt;---&lt;/span&gt; &lt;span class="o"&gt;[-&lt;/span&gt;&lt;span class="n"&gt;nio&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;exec&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StatisticalLoggingSessionEventListener&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Session&lt;/span&gt; &lt;span class="nc"&gt;Metrics&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="mi"&gt;1337790&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;1413081&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;14788369&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;!)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
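
&lt;p&gt;By the way, if you’re wondering where this &lt;code&gt;Session Metrics&lt;/code&gt; output comes from: it’s printed by Hibernate’s &lt;code&gt;StatisticalLoggingSessionEventListener&lt;/code&gt; once you enable statistics. In a Spring Boot app, that’s a one-line property (shown with the standard property name; adjust it to your own setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# application.properties: make Hibernate log per-session query timings
spring.jpa.properties.hibernate.generate_statistics=true
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;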


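&lt;p&gt;As for the preferred region itself, it’s a cluster-level setting rather than application code. On a self-managed cluster it can be changed with the &lt;code&gt;yb-admin&lt;/code&gt; command from the docs linked above; the master addresses here are placeholders, not my actual deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical master addresses; pins tablet leaders to the US Central zone
yb-admin -master_addresses ip1:7100,ip2:7100,ip3:7100 \
    set_preferred_zones gcp.us-central1.us-central1-b
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;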

&lt;p&gt;Next, when Ms. Blue sends the message into the channel, the latency between the microservice and the database will be around &lt;em&gt;90 ms&lt;/em&gt;. It takes more time than the previous operation because Hibernate generates multiple SQL queries for my &lt;code&gt;JpaRepository.save(message)&lt;/code&gt; method call (which can be optimized; see the sketch after this log), and then YugabyteDB needs to store a copy of the message on nodes across the US West, Central, and East regions. Here is what the output and latency look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_0_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="nf"&gt;nextval&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;message_id_seq&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;insert&lt;/span&gt; &lt;span class="n"&gt;into&lt;/span&gt; &lt;span class="nf"&gt;message&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attachment&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sent_at&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;values&lt;/span&gt; &lt;span class="o"&gt;(?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?)&lt;/span&gt;

&lt;span class="mi"&gt;31908&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;461058&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;91272173&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


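&lt;p&gt;A note on the optimization hinted at above: the extra &lt;code&gt;select&lt;/code&gt; before the &lt;code&gt;insert&lt;/code&gt; appears because Spring Data JPA calls &lt;code&gt;merge()&lt;/code&gt; for entities whose identifier is already populated, so Hibernate first checks whether the row exists. One standard fix is to implement the &lt;code&gt;Persistable&lt;/code&gt; interface. Here is a minimal sketch (simplified to a single-column key for brevity; it’s not the messenger’s actual entity):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.PostPersist;
import javax.persistence.Transient;

import org.springframework.data.domain.Persistable;

@Entity
public class Message implements Persistable&lt;Integer&gt; {

    @Id
    private Integer id;

    // Not a column: only tracks whether this instance was just created.
    @Transient
    private boolean isNew = true;

    @Override
    public Integer getId() {
        return id;
    }

    // Returning true makes Spring Data JPA call persist() instead of merge(),
    // which skips the existence-check select before the insert.
    @Override
    public boolean isNew() {
        return isNew;
    }

    // Once the entity has been loaded or saved, it is no longer new.
    @PostLoad
    @PostPersist
    void markNotNew() {
        this.isNew = false;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;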

&lt;h2&gt;
  
  
  Database Read/Write Latency for the APAC Traffic
&lt;/h2&gt;

&lt;p&gt;Remember that with every multi-region database deployment there will be some tradeoffs. With the current database deployment, the read/write latency is low for the US-based traffic but high for the traffic originating from other, more distant locations. Let’s look at an example.&lt;/p&gt;

&lt;p&gt;Imagine Mr. Red, a colleague of Ms. Blue, receives a push notification about the latest message from Ms. Blue. Since Mr. Red is on a business trip in Taiwan, his traffic will be handled by the instance of the app deployed on that island. &lt;/p&gt;

&lt;p&gt;However, there is no database node deployed in or near Taiwan, so the microservice instance has to query the database nodes running in the US. This is why it takes &lt;em&gt;165 ms&lt;/em&gt; on average to load the entire channel’s history before Mr. Red sees Ms. Blue’s message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt; &lt;span class="n"&gt;by&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;asc&lt;/span&gt;

&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;nio&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;exec&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StatisticalLoggingSessionEventListener&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Session&lt;/span&gt; &lt;span class="nc"&gt;Metrics&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="mi"&gt;153152267&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;153217915&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="mi"&gt;165798894&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;165&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When Mr. Red responds to Ms. Blue in the same channel, it takes about &lt;em&gt;450 ms&lt;/em&gt; for Hibernate to prepare, send, and store the message in the database. Well, thanks to the laws of physics, the packets with the message have to travel across the Pacific Ocean from Taiwan, and then the message copy has to be stored across the US West, Central, and East regions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_0_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt;
&lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="nf"&gt;nextval&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;message_id_seq&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;insert&lt;/span&gt; &lt;span class="n"&gt;into&lt;/span&gt; &lt;span class="nf"&gt;message&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attachment&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sent_at&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;values&lt;/span&gt; &lt;span class="o"&gt;(?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?)&lt;/span&gt;

 &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StatisticalLoggingSessionEventListener&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Session&lt;/span&gt; &lt;span class="nc"&gt;Metrics&lt;/span&gt; 
    &lt;span class="mi"&gt;23488&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;137247&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;454281135&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="nf"&gt;statements&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;450&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, high latency for APAC-based traffic might not be a big deal for applications that don’t target users from that region, but that’s not so in my case. My geo-messenger has to function smoothly across the globe. Let’s fight this high latency, matey! We’ll start with the reads!&lt;/p&gt;

&lt;h1&gt;
  
  
  Deploying Read Replicas in Distant Locations
&lt;/h1&gt;

&lt;p&gt;The most straightforward way to improve the latency for read workloads in YugabyteDB is by deploying read replica nodes in distant locations. This is a purely operational task that can be performed on a live cluster.&lt;/p&gt;

&lt;p&gt;So, I “attached” a replica node to my live US-based database cluster, and that replica was placed in the Asia East region, near the microservice instance that serves requests for Mr. Red.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jdawwzorjomv1vp6o7v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jdawwzorjomv1vp6o7v.png" alt="Image description" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then configured the application instance in Taiwan to use that replica node for database queries. The latency for preloading the channel’s history for Mr. Red dropped from 165 ms to &lt;em&gt;10-15 ms&lt;/em&gt;! This is as fast as for Ms. Blue, who is based in the USA.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt; &lt;span class="n"&gt;by&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;asc&lt;/span&gt;

 &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StatisticalLoggingSessionEventListener&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Session&lt;/span&gt; &lt;span class="nc"&gt;Metrics&lt;/span&gt; 
    &lt;span class="mi"&gt;1210615&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;1255989&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;12772870&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt; &lt;span class="n"&gt;mseconds&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


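&lt;p&gt;How exactly does an application instance “use” a replica node? In YugabyteDB, such reads are called follower reads: the session must be read-only and has to opt in via the &lt;code&gt;yb_read_from_followers&lt;/code&gt; flag. Below is a minimal JDBC sketch with a hypothetical hostname and credentials (the messenger wires this up through its regular data source configuration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplicaReads {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint of the Asia East replica node (YugabyteDB
        // speaks the PostgreSQL protocol on port 5433).
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://asia-east-replica:5433/geo_messenger",
                "app_user", "app_password");
             Statement st = conn.createStatement()) {

            // Follower reads require a read-only session plus the opt-in flag.
            st.execute("SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY");
            st.execute("SET yb_read_from_followers = true");

            // This query is now served by the local replica node.
            try (ResultSet rs = st.executeQuery(
                    "SELECT id, message FROM message WHERE channel_id = 1 ORDER BY id")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + ": " + rs.getString("message"));
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Keep in mind that follower reads can return slightly stale data (the staleness window is bounded and configurable), which is the price you pay for reading locally.&lt;/p&gt;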

&lt;p&gt;As a result, with read replicas, my geo-messenger can serve read requests at low latency regardless of the user’s location!&lt;/p&gt;

&lt;p&gt;But it’s too soon for a celebration. The writes are still way too slow.&lt;/p&gt;

&lt;p&gt;Imagine that Mr. Red sends another message to the channel. The microservice instance from Taiwan will ask the replica node to execute the query, and the replica will forward almost all requests generated by Hibernate to the US-based nodes that store the primary copies of the records. Thus, the latency can still be as high as &lt;em&gt;640 ms&lt;/em&gt; for the APAC traffic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt; &lt;span class="n"&gt;transaction&lt;/span&gt; &lt;span class="n"&gt;read&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_0_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="nf"&gt;nextval&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;message_id_seq&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;insert&lt;/span&gt; &lt;span class="n"&gt;into&lt;/span&gt; &lt;span class="nf"&gt;message&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attachment&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sent_at&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;values&lt;/span&gt; &lt;span class="o"&gt;(?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?)&lt;/span&gt;

 &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StatisticalLoggingSessionEventListener&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Session&lt;/span&gt; &lt;span class="nc"&gt;Metrics&lt;/span&gt; 
    &lt;span class="mi"&gt;23215&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;141888&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="mi"&gt;640199316&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;640&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At last, matey, let’s solve this issue once and for all! &lt;/p&gt;

&lt;h1&gt;
  
  
  Switching to a Global Geo-Partitioned Cluster
&lt;/h1&gt;

&lt;p&gt;A global geo-partitioned cluster can deliver fast reads and writes across distant locations, but it requires you to introduce a special geo-partitioning column into the database schema. Based on the value of that column, the database automatically decides which database node in which geography a record belongs to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Schema Changes
&lt;/h2&gt;

&lt;p&gt;In a nutshell, my tables, such as the Message table, define the &lt;code&gt;country_code&lt;/code&gt; column:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;nextval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'message_id_seq'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;channel_id&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;sender_id&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;attachment&lt;/span&gt; &lt;span class="nb"&gt;boolean&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="k"&gt;FALSE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;sent_at&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;country_code&lt;/span&gt; &lt;span class="nb"&gt;varchar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;LIST&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Depending on the value of that column, a record can be placed in one of the following database partitions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;Message_USA&lt;/span&gt;
    &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;Message&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sent_at&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_country_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'USA'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;us_central1_ts&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;Message_EU&lt;/span&gt;
    &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;Message&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sent_at&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_country_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'DEU'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;europe_west3_ts&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;Message_APAC&lt;/span&gt;
    &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;Message&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sent_at&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_country_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'TWN'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;asia_east1_ts&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each partition is mapped to one of the tablespaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;us_central1_ts&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;replica_placement&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"num_replicas": 1, "placement_blocks":
  [{"cloud":"gcp","region":"us-central1","zone":"us-central1-b","min_num_replicas":1}]}'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;europe_west3_ts&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;replica_placement&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"num_replicas": 1, "placement_blocks":
  [{"cloud":"gcp","region":"europe-west3","zone":"europe-west3-b","min_num_replicas":1}]}'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;asia_east1_ts&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;replica_placement&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"num_replicas": 1, "placement_blocks":
  [{"cloud":"gcp","region":"asia-east1","zone":"asia-east1-b","min_num_replicas":1}]}'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And each tablespace belongs to a group of database nodes from a particular geography. For instance, all the records with the country code of Taiwan (&lt;code&gt;country_code=TWN&lt;/code&gt;) will be stored on cluster nodes from the Asia East cloud region because those nodes hold the partition and tablespace for the APAC data. Check out the &lt;a href="https://dzone.com/articles/what-developers-need-to-know-about-table-geo-parti" rel="noopener noreferrer"&gt;following article&lt;/a&gt; if you want to learn the details of geo-partitioning.&lt;/p&gt;
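
&lt;p&gt;On the application side, the geo-partitioning column becomes part of the entity’s composite primary key, which is why the Hibernate queries above filter on both &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;country_code&lt;/code&gt;. A minimal sketch of such a mapping (the real entity carries more fields):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.io.Serializable;
import java.util.Objects;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;

// Key class mirroring the two @Id fields; equals/hashCode are mandatory.
class MessageKey implements Serializable {
    private Integer id;
    private String countryCode;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MessageKey)) return false;
        MessageKey other = (MessageKey) o;
        return Objects.equals(id, other.id)
                &amp;&amp; Objects.equals(countryCode, other.countryCode);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, countryCode);
    }
}

@Entity
@IdClass(MessageKey.class)
public class Message {
    @Id
    private Integer id;

    // The geo-partitioning column: its value decides which regional
    // partition (and, therefore, which group of nodes) stores the row.
    @Id
    @Column(name = "country_code")
    private String countryCode;

    private String message;
    // the remaining columns are omitted for brevity
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;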

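&lt;p&gt;To sanity-check the routing, you can insert a record with a given country code and then query the regional partition directly. A quick hypothetical example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- A row tagged with Taiwan's country code...
INSERT INTO Message (channel_id, sender_id, message, country_code)
VALUES (10, 1, 'Ahoy from Taipei!', 'TWN');

-- ...lands in the APAC partition backed by the asia_east1_ts tablespace.
SELECT id, message FROM Message_APAC WHERE country_code = 'TWN';
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;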
&lt;h2&gt;
  
  
  Low Read/Write Latency Across Continents
&lt;/h2&gt;

&lt;p&gt;So, I deployed a three-node geo-partitioned cluster across the US Central, Europe West, and Asia East regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq6nz028kuh6uflpcqcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq6nz028kuh6uflpcqcz.png" alt="Image description" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's make sure the read latency for Mr. Red's requests stays low. As with the read replica deployment, it still takes &lt;em&gt;5-15 ms&lt;/em&gt; to load the channel's message history (for the channels and messages belonging to the APAC region):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt; &lt;span class="n"&gt;by&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;asc&lt;/span&gt;

 &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StatisticalLoggingSessionEventListener&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Session&lt;/span&gt; &lt;span class="nc"&gt;Metrics&lt;/span&gt; 
&lt;span class="mi"&gt;1516450&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;1640860&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;7495719&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And… drumroll please… when Mr. Red posts a message into an APAC channel, the write latency drops from 400-650 ms to &lt;em&gt;6 ms&lt;/em&gt; on average!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_0_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="nf"&gt;nextval&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;message_id_seq&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;insert&lt;/span&gt; &lt;span class="n"&gt;into&lt;/span&gt; &lt;span class="nf"&gt;message&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attachment&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sent_at&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;values&lt;/span&gt; &lt;span class="o"&gt;(?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?)&lt;/span&gt;

&lt;span class="mi"&gt;1123280&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;123249&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;6597471&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mission accomplished, matey! Now my geo-messenger’s database can serve reads and writes with low latency across countries and continents. All that’s left is to decide where to deploy the microservice instances and database nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Case for Cross-Continent Queries
&lt;/h2&gt;

&lt;p&gt;Now for a quick comment on why I skipped the database deployment option with multiple standalone YugabyteDB clusters.&lt;/p&gt;

&lt;p&gt;It was important to me to have a single database for the messenger so that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users could join discussion channels belonging to any geography.&lt;/li&gt;
&lt;li&gt;Microservice instances from any location could access data in any location. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, if Mr. Red joins a discussion channel stored on the US nodes (&lt;code&gt;country_code='USA'&lt;/code&gt;) and sends a message there, the microservice instance from Taiwan will send the request to the Taiwan-based database node, and that node will forward the request to its US-based counterpart. The operation comprises three SQL statements, so its latency will be around &lt;em&gt;165 ms&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;country_1_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;id2_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attachment&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;attachme3_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;channel_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;channel_4_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;message5_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_country_code&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_c6_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sender_id&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sender_i7_1_0_&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sent_at&lt;/span&gt; &lt;span class="n"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sent_at8_1_0_&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;message0_&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=?&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt; &lt;span class="nf"&gt;nextval&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;message_id_seq&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nl"&gt;Hibernate:&lt;/span&gt; &lt;span class="n"&gt;insert&lt;/span&gt; &lt;span class="n"&gt;into&lt;/span&gt; &lt;span class="nf"&gt;message&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attachment&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sender_id&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sent_at&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;country_code&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;values&lt;/span&gt; &lt;span class="o"&gt;(?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?,&lt;/span&gt; &lt;span class="o"&gt;?)&lt;/span&gt;

 &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;StatisticalLoggingSessionEventListener&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Session&lt;/span&gt; &lt;span class="nc"&gt;Metrics&lt;/span&gt; 
&lt;span class="mi"&gt;1310550&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;acquiring&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;releasing&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;159080&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;preparing&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="mi"&gt;164660288&lt;/span&gt; &lt;span class="n"&gt;nanoseconds&lt;/span&gt; &lt;span class="n"&gt;spent&lt;/span&gt; &lt;span class="n"&gt;executing&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="no"&gt;JDBC&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;164&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;165 ms is undoubtedly higher than 6 ms (the latency when Mr. Red posts a message into a local APAC-based channel), but what’s important here is the ability to make cross-continent requests via a single database connection when necessary. Plus, as the session metrics show, there is plenty of room for optimization at the Hibernate level: presently, Hibernate translates my &lt;code&gt;JpaRepository.save(message)&lt;/code&gt; call into three JDBC statements (a lookup by primary key, a sequence call, and the insert itself). Eliminating the extra statements would bring the latency of cross-continent requests down from 165 ms to a much lower value; one database-side lever is sketched below.&lt;/p&gt;
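
&lt;p&gt;For example, YugabyteDB lets you enlarge a sequence’s cache so that most &lt;code&gt;nextval&lt;/code&gt; calls are served from the connection’s memory instead of making an extra trip to the sequence’s tablet, which may live in a distant region. A minimal sketch, assuming &lt;code&gt;ysqlsh&lt;/code&gt; access to any database node; the sequence name comes from the logs above, and the cache size is purely illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Illustrative tuning, not the project's actual configuration: a larger cache
# lets each connection hand out ids locally between fetches from the sequence.
ysqlsh -h $DB_NODE_IP -c "ALTER SEQUENCE message_id_seq CACHE 1000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;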

&lt;h1&gt;
  
  
  Wrapping Up
&lt;/h1&gt;

&lt;p&gt;Alright, matey! &lt;/p&gt;

&lt;p&gt;As you can see, distributed databases such as YugabyteDB can function seamlessly across geographies. You just need to pick the deployment option that works best for your application. In my case, the &lt;a href="https://www.yugabyte.com/blog/geo-partitioning-of-data-in-yugabytedb/" rel="noopener noreferrer"&gt;geo-partitioned YugabyteDB cluster&lt;/a&gt; best fits the messenger's requirements.&lt;/p&gt;

&lt;p&gt;Well, this article concludes the series about my geo-messenger's development journey. The application &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger" rel="noopener noreferrer"&gt;source code&lt;/a&gt; is available on GitHub so that you can explore the logic and run the app in your environment.&lt;/p&gt;

&lt;p&gt;Next, I'll take a little break and then start preparing for my &lt;a href="https://springone.io/2022/sessions/weathering-the-cloud-storm-building-resilient-geo-distributed-apps-with-spring-cloud" rel="noopener noreferrer"&gt;upcoming SpringOne session&lt;/a&gt; that will feature the geo-messenger's version running on Spring Cloud components. So, if it's relevant to you, &lt;a href="https://twitter.com/denismagda" rel="noopener noreferrer"&gt;follow me&lt;/a&gt; to get notified about future articles related to Spring Cloud and geo-distributed apps.&lt;/p&gt;

</description>
      <category>yugabytedb</category>
      <category>distributedsql</category>
      <category>cloud</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Using Global Cloud Load Balancer to Route User Requests to the Closest App Instance</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Tue, 25 Oct 2022 15:17:12 +0000</pubDate>
      <link>https://dev.to/yugabyte/using-global-cloud-load-balancer-to-route-user-requests-to-the-closest-app-instance-21d3</link>
      <guid>https://dev.to/yugabyte/using-global-cloud-load-balancer-to-route-user-requests-to-the-closest-app-instance-21d3</guid>
      <description>&lt;p&gt;Ahoy, Mateys! &lt;/p&gt;

&lt;p&gt;In my last two articles, I showed you how to deploy &lt;a href="https://dev.to/yugabyte/automating-java-application-deployment-across-multiple-cloud-regions-27db"&gt;microservice instances across several cloud regions&lt;/a&gt; and how to set up a &lt;a href="https://dev.to/yugabyte/geo-distributed-api-layer-with-kong-gateway-4d25"&gt;geo-distributed API layer&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;All seems good, except that I’ve now got several standalone application instances running across the US West, Central, and East regions, and each instance has its own private IP address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmda2hai8ljqg0lihwwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmda2hai8ljqg0lihwwz.png" alt="Image description" width="800" height="691"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My next challenge is figuring out how to forward application requests to the instances closest to the user. &lt;/p&gt;

&lt;p&gt;I don’t want my Web UI and mobile apps to keep track of all the IPs and decide how and where to route user requests. This is way too complicated and becomes unmanageable. Instead, the Web UI and mobile apps need to connect to a single IP address, which lets other components of my geo-distributed solution decide where to route a particular user request.&lt;/p&gt;

&lt;p&gt;What could those components be, matey? &lt;/p&gt;

&lt;p&gt;You’ve guessed it: usually, a proxy and load balancer. &lt;/p&gt;

&lt;p&gt;In this article, I’ll show you how to configure a &lt;a href="https://cloud.google.com/load-balancing" rel="noopener noreferrer"&gt;global cloud load balancer&lt;/a&gt; that serves as both a proxy and a load balancer. This type of load balancer comes with a single IP address that can be accessed from any location on earth and can route a request to the nearest (active!) application instance.&lt;/p&gt;

&lt;p&gt;So, if you’re still with me on this journey, then, as the pirates used to say, “Weigh Anchor and Hoist the Mizzen!” which means, “Pull up the anchor and get this ship sailing!”&lt;/p&gt;

&lt;h1&gt;
  
  
  Explaining Global Cloud Load Balancer Usage
&lt;/h1&gt;

&lt;p&gt;Before diving into details and technicalities, let’s use a few illustrations to see the value of the load balancer.&lt;/p&gt;

&lt;p&gt;Suppose Ms. Blue, one of my application users, is vacationing in the Tampa Bay area of Florida. She promised not to open my corporate Slack-like messenger (the &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger" rel="noopener noreferrer"&gt;geo-distributed app I’m building&lt;/a&gt;) while on vacation. &lt;/p&gt;

&lt;p&gt;She failed to keep her promise, but for a good reason. She couldn’t resist sharing a few pictures from the pristine beaches of Tampa Bay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm6aallmqcuo7haad6bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm6aallmqcuo7haad6bu.png" alt="Image description" width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above shows what happens when Ms. Blue opens the app and sends a few gorgeous photos. &lt;/p&gt;

&lt;p&gt;The app connects to the global load balancer's public IP address, &lt;code&gt;35.186.215.175&lt;/code&gt;. Next, the load balancer determines that the application instance with IP &lt;code&gt;10.1.12.8&lt;/code&gt; is closest to Ms. Blue (both Ms. Blue and the instance are on the US East coast). Finally, the load balancer hands the upload over to the East Coast instance.&lt;/p&gt;

&lt;p&gt;Next, Mr. Green, a colleague of Ms. Blue, working from his Seattle-based company office, receives a push notification from the messenger and can’t resist checking what’s been posted. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9i8ddnbodge7l9qgpi97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9i8ddnbodge7l9qgpi97.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mr. Green opens the mobile app, and the app connects to the same load balancer using the same public IP address, &lt;code&gt;35.186.215.175&lt;/code&gt;. This time, the load balancer forwards Mr. Green’s request to the West Coast application instance with the IP address &lt;code&gt;10.1.11.16&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The load balancer is also instrumental during cloud outages. Let’s consider a final example.&lt;/p&gt;

&lt;p&gt;A few days later, Ms. Blue decides to make her colleagues jealous again and opens the app to send another set of fabulous pictures. The application connects to the load balancer on the same public IP address (&lt;code&gt;35.186.215.175&lt;/code&gt;) and…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn07sdps3kj4it2cwlmsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn07sdps3kj4it2cwlmsi.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;…this time the load balancer forwards Ms. Blue’s request to the application instance in the US Central region with the IP address &lt;code&gt;10.1.10.10&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is because a storm in the Atlantic impacted the availability of the entire US East Coast cloud region. The US Central region is up and running as normal and is now the closest one to Ms. Blue, so her request is routed there automatically.&lt;/p&gt;

&lt;p&gt;Alright, matey, now let’s move forward and look into the architecture, or the main building blocks, of the global cloud load balancer.&lt;/p&gt;

&lt;h1&gt;
  
  
  Global Load Balancer Architecture
&lt;/h1&gt;

&lt;p&gt;The global load balancer comprises several building blocks. Take a look at the picture below and review the components from left to right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9hovsz75m72igzdboff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9hovsz75m72igzdboff.png" alt="Image description" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This picture represents the architecture of the load balancer in Google Cloud. The components might vary depending on your cloud environment, but every major cloud provider offers a comparable global load balancer.&lt;/p&gt;

&lt;p&gt;So, moving from left to right in the picture: as you already know, users connect to the public IP address assigned to the load balancer (or a DNS name that translates to this address). After that, the load balancer consults its forwarding rules to determine which target proxy needs to handle the request.&lt;/p&gt;

&lt;p&gt;Once the target proxy is selected, the proxy checks its URL map to decide which backend services are responsible for the request.&lt;/p&gt;

&lt;p&gt;Finally, the backend service delegates the request to one of its backends (actual application instances). In our case, during normal operations, requests from Mr. Green, based in Seattle, will be processed by the US West backend instance. The traffic from Ms. Blue will go to the backend in the US East region.&lt;/p&gt;

&lt;p&gt;If you wish to learn more about the global load balancer’s architecture, check out &lt;a href="https://cloud.google.com/load-balancing/docs/https" rel="noopener noreferrer"&gt;this documentation&lt;/a&gt; for an extensive overview of all its components. &lt;/p&gt;

&lt;h1&gt;
  
  
  Global Load Balancer Deployment
&lt;/h1&gt;

&lt;p&gt;At last, matey, let’s go ahead and deploy an instance of the global external load balancer in Google Cloud.&lt;/p&gt;

&lt;p&gt;Assuming that application instances are already running across US West, Central, and East, these are our next steps (explore &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/gcloud_deployment.md" rel="noopener noreferrer"&gt;how to deploy the instances in Google Cloud&lt;/a&gt;):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Reserve a static public IP address for the load balancer:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute addresses create load-balancer-public-ip &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--ip-version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;IPV4 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--network-tier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;PREMIUM &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--global&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure a health check for a backend service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute health-checks create http load-balancer-http-basic-check &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--check-interval&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;20s &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5s &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--healthy-threshold&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nt"&gt;--unhealthy-threshold&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--request-path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/login &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--port&lt;/span&gt; 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the backend service for the HTTP traffic and provide the just-created health check to ensure the application instances are up and running:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute backend-services create load-balancer-backend-service &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--load-balancing-scheme&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;EXTERNAL_MANAGED &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--protocol&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;HTTP &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--port-name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--health-checks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;load-balancer-http-basic-check &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--global&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provide the backend service with backends (application instances) across several cloud regions:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute backend-services add-backend load-balancer-backend-service &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--balancing-mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;UTILIZATION &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--max-utilization&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.8 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--capacity-scaler&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--instance-group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ig-us-central &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--instance-group-zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-b &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--global&lt;/span&gt;

gcloud compute backend-services add-backend load-balancer-backend-service &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--balancing-mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;UTILIZATION &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--max-utilization&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.8 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--capacity-scaler&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--instance-group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ig-us-east &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--instance-group-zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east4-b &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--global&lt;/span&gt;

gcloud compute backend-services add-backend load-balancer-backend-service &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--balancing-mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;UTILIZATION &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--max-utilization&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.8 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--capacity-scaler&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--instance-group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ig-us-west &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--instance-group-zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-west2-b &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--global&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Finally, configure the default URL map to route all requests to the created backend service:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute url-maps create load-balancer-url-map &lt;span class="nt"&gt;--default-service&lt;/span&gt; load-balancer-backend-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After the backend service is ready, go ahead and configure the frontend part of the load balancer, which consists of the forwarding rule and the target proxy.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create an HTTP proxy providing the URL map:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute target-http-proxies create load-balancer-http-frontend &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--url-map&lt;/span&gt; load-balancer-url-map &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--global&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Define a forwarding rule to direct the traffic from the public Internet to the just-created proxy:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute forwarding-rules create load-balancer-http-frontend-forwarding-rule &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--load-balancing-scheme&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;EXTERNAL_MANAGED &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--network-tier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;PREMIUM &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;load-balancer-public-ip  &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--global&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--target-http-proxy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;load-balancer-http-frontend &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--ports&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s it! The global external load balancer is fully configured. It might just take a few minutes for your configuration to propagate worldwide.&lt;/p&gt;
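
&lt;p&gt;Once the configuration propagates, you can sanity-check the frontend from any terminal. A quick sketch reusing the names created above; &lt;code&gt;/login&lt;/code&gt; is the same path the health check probes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Look up the reserved public IP, then probe the app through the load balancer.
LB_IP=$(gcloud compute addresses describe load-balancer-public-ip \
    --global --format="get(address)")
curl -I http://$LB_IP/login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;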

&lt;h1&gt;
  
  
  Testing the Load Balancer
&lt;/h1&gt;

&lt;p&gt;Now, it’s testing time!&lt;/p&gt;

&lt;p&gt;I’ll play the role of Ms. Blue, who wants to share gorgeous photos from Tampa Bay. A colleague of mine who is based in Seattle will be Mr. Green.&lt;/p&gt;

&lt;p&gt;Manual testing is straightforward: I open the messenger in my browser using the load balancer’s public IP address and upload a few pictures:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdos7jal6jknit3lomrfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdos7jal6jknit3lomrfk.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How do I confirm that those pictures were uploaded through an instance in the US East?&lt;/p&gt;

&lt;p&gt;Well, I open the monitoring tab for the load balancer and confirm that all my requests are delegated to the &lt;code&gt;ig-us-east&lt;/code&gt; instance group:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9inunh7eui6igx9bg7rn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9inunh7eui6igx9bg7rn.png" alt="Image description" width="731" height="803"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, my Seattle colleague opens the app to check the pictures. In a few moments, I can see that the load balancer directs his traffic to the &lt;code&gt;ig-us-west&lt;/code&gt; instance group!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5y0no3k8e8q9hrwqw8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5y0no3k8e8q9hrwqw8y.png" alt="Image description" width="725" height="797"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Job done! &lt;/p&gt;

&lt;p&gt;The global load balancer is fully configured and operational. This means my geo-distributed messenger can seamlessly process user requests with high performance across the globe and withstand various cloud outages.&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s on the Horizon?
&lt;/h1&gt;

&lt;p&gt;Alright, matey! &lt;/p&gt;

&lt;p&gt;My geo-distributed app now functions across multiple cloud regions and availability zones. But the app, API layer, and database still run within the boundaries of the USA. &lt;/p&gt;

&lt;p&gt;My next move is to scale the messenger to other countries and continents and to experiment with several deployment options for YugabyteDB (my geo-distributed database).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/denismagda" rel="noopener noreferrer"&gt;Follow me&lt;/a&gt; to be notified as soon as the next update is published! And check out the &lt;a href="https://dev.to/denismagda/series/19150"&gt;previous articles&lt;/a&gt; in my geo-distributed messenger application development journey.&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>webdev</category>
      <category>distributedsystems</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Geo-Distributed API Layer With Kong Gateway</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Tue, 18 Oct 2022 19:26:14 +0000</pubDate>
      <link>https://dev.to/yugabyte/geo-distributed-api-layer-with-kong-gateway-4d25</link>
      <guid>https://dev.to/yugabyte/geo-distributed-api-layer-with-kong-gateway-4d25</guid>
      <description>&lt;p&gt;Ahoy, Mateys! &lt;/p&gt;

&lt;p&gt;It’s been a while since I published my &lt;a href="https://dev.to/yugabyte/automating-java-application-deployment-across-multiple-cloud-regions-27db"&gt;last article&lt;/a&gt; on my geo-distributed application development journey. A hot tech conference season put my development on hold, but now I’m back home and ready to share my experience in building a geo-distributed API layer with Kong Gateway.&lt;/p&gt;

&lt;p&gt;Let me take a moment to recap where the project left off. This is what my geo-distributed messenger’s architecture looked like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt9vf8ydw8dz083c3prj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt9vf8ydw8dz083c3prj.png" alt="Image description" width="800" height="701"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I managed to &lt;a href="https://dev.to/yugabyte/automating-java-application-deployment-across-multiple-cloud-regions-27db"&gt;automate the deployment of application instances&lt;/a&gt; across several cloud regions (to see how the project started and continued, here is a list of &lt;a href="https://dev.to/denismagda/series/19150"&gt;all previous articles&lt;/a&gt;). However, the application is not a monolith; it consists of several microservices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0zjwcuz4pf5cns42z60.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0zjwcuz4pf5cns42z60.png" alt="Image description" width="800" height="743"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Messaging microservice implements the key functionality that every messenger must possess—the ability to send messages across channels and workspaces. The Attachments microservice uploads pictures and other files. Finally, the Reminders microservice comes in handy when a user wants to ignore a message for now but get back to it later.&lt;/p&gt;

&lt;p&gt;So now, mateys, we reach the question, “What do all microservices need?” &lt;/p&gt;

&lt;p&gt;Well, they need to talk to each other! Microservices were not born to live in isolation. After following &lt;a href="https://konghq.com/" rel="noopener noreferrer"&gt;Kong&lt;/a&gt; for more than a year, I selected it to create a geo-distributed API layer that can be used as a synchronous communication layer by the microservices.&lt;/p&gt;

&lt;p&gt;In this article, we’ll review several ways to build a geo-distributed API layer with Kong. My geo-messenger needs an API layer that can withstand various cloud outages (including regional ones) and service requests from the locations closest to the users. &lt;/p&gt;

&lt;p&gt;So, if you’re still with me on this journey, then, as the pirates used to say, “Weigh Anchor and Hoist the Mizzen!” which means, “Pull up the anchor and get this ship sailing!” &lt;/p&gt;

&lt;h1&gt;
  
  
  Connecting Microservices With Kong Gateway
&lt;/h1&gt;

&lt;p&gt;For the sake of simplicity, I’ll use two microservices as an example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl4da91lrixee0sfy0g7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl4da91lrixee0sfy0g7.png" alt="Image description" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A user request always goes to the Messaging microservice first, the key one. But, if the user wants to share a picture in a chat channel, for example, the Messaging microservice delegates this job to its Attachments counterpart. In this case, the Kong Gateway is used as a communication channel. &lt;/p&gt;

&lt;p&gt;Since my microservice instances are deployed across multiple cloud regions, I have to go through the same exercise for Kong Gateway.&lt;/p&gt;

&lt;p&gt;Let’s discuss the options for running Kong in a geo-distributed way.&lt;/p&gt;

&lt;h1&gt;
  
  
  Option #1: Deploying Kong as a Sidecar Alongside Microservices
&lt;/h1&gt;

&lt;p&gt;The most straightforward way is to deploy Kong Gateway as a &lt;a href="https://dzone.com/articles/sidecar-design-pattern-in-your-microservices-ecosy-1" rel="noopener noreferrer"&gt;sidecar&lt;/a&gt; on the same virtual machine (VM) with your microservices. Then you can deploy the VMs with Kong and the microservices across several cloud regions. (NOTE: you can use Kubernetes instead if you like; there are no major architectural differences). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhyo0sl51es0hggbh4y7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhyo0sl51es0hggbh4y7.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, the geo-messenger uses two VM instances—one in the US East and another in the US West. Each VM runs instances of the microservices and Kong Gateway. In addition, every VM has its own PostgreSQL instance, which is used solely by Kong to store information about registered services and routes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/storage" rel="noopener noreferrer"&gt;Google Cloud Storage&lt;/a&gt; and &lt;a href="https://www.yugabyte.com/yugabytedb/" rel="noopener noreferrer"&gt;YugabyteDB&lt;/a&gt; are geo-distributed by definition. Google Cloud Storage keeps attachments that users upload through the Attachments microservice, while YugabyteDB keeps all other application data (messages, channels, reminders, etc.)&lt;/p&gt;

&lt;p&gt;The user is connected to the VM closest to their location. In this example, the user is based near New York, so her requests are routed to the VM in the US East region. This is how I achieve the lowest latency possible.&lt;/p&gt;

&lt;p&gt;If the US East region becomes unavailable due to a region-level outage, then I’ll lose the VM from that region. But the messenger will continue working with no interruptions because the user requests will now be routed to the VM in the US West. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqpnsm75b5epkaq67qbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqpnsm75b5epkaq67qbs.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Show, Don’t Tell!
&lt;/h2&gt;

&lt;p&gt;Now, let me demonstrate this deployment option. &lt;/p&gt;

&lt;p&gt;When I start a VM in Google Cloud, the VM executes a &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/gcloud/startup_script.sh" rel="noopener noreferrer"&gt;special startup script&lt;/a&gt; that installs required libraries, builds and runs the microservices, and configures Kong Gateway. These are the steps for Kong setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download and install Kong Gateway:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [trusted=yes] https://download.konghq.com/gateway-3.x-ubuntu-&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-sc&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/ &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  default all"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/kong.list
  &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
  &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--yes&lt;/span&gt; &lt;span class="nt"&gt;--force-yes&lt;/span&gt; kong-enterprise-edition&lt;span class="o"&gt;=&lt;/span&gt;3.0.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Configure Postgres and run Kong migrations:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; postgres psql &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"CREATE USER kong WITH PASSWORD 'password'"&lt;/span&gt;
  &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; postgres psql &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"CREATE DATABASE kong OWNER kong"&lt;/span&gt;

  &lt;span class="nb"&gt;sudo touch&lt;/span&gt; /etc/kong/kong.conf
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'pg_user = kong'&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;--append&lt;/span&gt; /etc/kong/kong.conf
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'pg_password = password'&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;--append&lt;/span&gt; /etc/kong/kong.conf
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'pg_database = kong'&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;--append&lt;/span&gt; /etc/kong/kong.conf

  &lt;span class="nb"&gt;sudo &lt;/span&gt;kong migrations bootstrap &lt;span class="nt"&gt;-c&lt;/span&gt; /etc/kong/kong.conf
  &lt;span class="nb"&gt;sudo &lt;/span&gt;kong migrations up &lt;span class="nt"&gt;-c&lt;/span&gt; /etc/kong/kong.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a Kong route for attachments uploading:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--url&lt;/span&gt; http://localhost:8001/services/ &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s1"&gt;'name=attachments-service'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s1"&gt;'url=http://127.0.0.1:8081'&lt;/span&gt;

  curl &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8001/services/attachments-service/routes &lt;span class="se"&gt;\&lt;/span&gt;
       &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"name=upload-route"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
       &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"paths[]=/upload"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
       -d &lt;span class="s2"&gt;"strip_path=false"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that all IPs are local because Kong runs on the same VM as my microservices.&lt;/p&gt;
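
&lt;p&gt;To double-check the routing without the UI, you can call Kong’s proxy listener (port 8000 by default) right on the VM. A minimal sketch; a real upload is a multipart request, so this only verifies that Kong matches the path and forwards it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# The request hits Kong's proxy port; Kong matches upload-route and forwards
# it to the Attachments service at http://127.0.0.1:8081/upload (strip_path=false).
curl -i http://localhost:8000/upload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;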

&lt;p&gt;Now that you have a startup script that prepares a VM with the required configuration, you need to figure out how to deploy such VMs across multiple cloud regions.&lt;/p&gt;

&lt;p&gt;Easy! This step can be automated as well:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Call &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/gcloud/create_instance_template.sh" rel="noopener noreferrer"&gt;this script&lt;/a&gt; to create a VM instance template in the regions of your choice. I create templates for US West, Central, and East.&lt;/li&gt;
&lt;li&gt;Provision VMs with Kong and microservices in all those regions:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  gcloud compute instance-groups managed create ig-us-west &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;template-us-west &lt;span class="nt"&gt;--size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-west2-b

  gcloud compute instance-groups managed create ig-us-central &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;template-us-central &lt;span class="nt"&gt;--size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-b

  gcloud compute instance-groups managed create ig-us-east &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;template-us-east &lt;span class="nt"&gt;--size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east4-b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, make sure that the VMs have booted and are running normally across the cloud regions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuzjevrzw3yngatkplth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuzjevrzw3yngatkplth.png" alt="Image description" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, pick any of the VMs and test the application by uploading a picture into a chat channel. The Messaging microservice calls the configured Kong route to delegate this task to the Attachments microservice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4iryr0xvzho2erehy5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4iryr0xvzho2erehy5j.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Easy, mateys, isn’t it? Now, let’s review two other alternatives for Kong geo-distributed deployments.&lt;/p&gt;

&lt;h1&gt;
  
  
  Option #2: Kong Standalone Deployment With Shared Database
&lt;/h1&gt;

&lt;p&gt;Another approach works better for applications with a significant number of microservices, or for those that need to deploy microservices and the API layer on separate VMs (or Kubernetes nodes/pods).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbde37dth2d255dxsc7l8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbde37dth2d255dxsc7l8.png" alt="Image description" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the diagram shows, we continue running our application across two cloud regions – US East and West. We also continue using Google Cloud Storage and YugabyteDB for application data and attachments. But now, the microservice and Kong Gateway instances are deployed in standalone VMs.&lt;/p&gt;

&lt;p&gt;Every microservice and Kong instance is location-aware. For example, when the user from New York wants to share a picture, the request will go to the Messaging instance in the US East (which is closest to the user). Then, that Messaging instance must forward the request to the Attachments service from the same cloud region. &lt;/p&gt;

&lt;p&gt;This happens by connecting to the Kong instance from US East and calling the &lt;code&gt;/api/us/east/upload&lt;/code&gt; API endpoint. That endpoint is a regional one (note the &lt;code&gt;/us/east&lt;/code&gt; segment in the URL) and is set up to forward requests to the Attachments microservice in US East; a sketch of such a route definition follows below.&lt;/p&gt;
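
&lt;p&gt;Declaring a regional route follows the same admin API pattern as in option #1. A sketch with illustrative names (the project’s real service and route names may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical regional route: requests to /api/us/east/upload go to the
# US East Attachments service; strip_path removes the regional prefix.
curl -i -X POST http://localhost:8001/services/attachments-service-us-east/routes \
     -d "name=upload-route-us-east" \
     -d "paths[]=/api/us/east/upload" \
     -d "strip_path=true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;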

&lt;p&gt;Finally, Kong stores all the information about routes and services in a shared instance of PostgreSQL. If you use a PostgreSQL-managed service like Google Cloud SQL, it’s possible to configure PostgreSQL replica instances in a different availability zone. &lt;/p&gt;

&lt;p&gt;Cloud SQL will fail over to the replica if the zone running the primary PostgreSQL instance becomes unavailable. The primary and the replica must be in the same cloud region, though: presently, Google Cloud SQL doesn’t tolerate region-level outages.&lt;/p&gt;
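
&lt;p&gt;For reference, provisioning such a highly available Cloud SQL instance looks roughly like this (a sketch; the instance name and sizing are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# REGIONAL availability keeps a standby in another zone of the same region;
# Cloud SQL fails over to it automatically during a zone outage.
gcloud sql instances create kong-postgres \
    --database-version=POSTGRES_14 \
    --region=us-east4 \
    --availability-type=REGIONAL \
    --cpu=2 --memory=8GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;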

&lt;h1&gt;
  
  
  Option #3: Kong Standalone Deployment With Distributed Database
&lt;/h1&gt;

&lt;p&gt;Deployment option #3 is close to option #2. The only difference is the database where Kong Gateway stores details about routes, services, and other system information: option #2 used PostgreSQL, while this third option uses Apache Cassandra.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x6lu04cxoh659yu4rp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x6lu04cxoh659yu4rp5.png" alt="Image description" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kong supports Apache Cassandra for geo-distributed use cases. A Cassandra cluster can span multiple cloud regions, distributing the load across several distant locations and tolerating the region-level outages that Google Cloud SQL (managed PostgreSQL) cannot survive. &lt;/p&gt;
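
&lt;p&gt;Switching Kong to such a Cassandra cluster is, again, a configuration change. Here’s a sketch with placeholder contact points (these settings apply only to Kong versions that still ship the Cassandra backend):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Point Kong at a multi-region Cassandra cluster
# (contact points and keyspace are placeholders)
export KONG_DATABASE=cassandra
export KONG_CASSANDRA_CONTACT_POINTS=cass-us-east-1,cass-us-west-1
export KONG_CASSANDRA_KEYSPACE=kong

kong migrations bootstrap
kong start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;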

&lt;p&gt;However, starting with Kong 4.0, PostgreSQL will be the only officially supported database. The Cassandra integration is already &lt;a href="https://konghq.com/blog/cassandra-support-deprecated" rel="noopener noreferrer"&gt;deprecated&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The Kong community will likely find a replacement for Cassandra in the foreseeable future. That replacement might be YugabyteDB, which is built on the PostgreSQL source code and is frequently referred to as distributed PostgreSQL.&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s on the Horizon?
&lt;/h1&gt;

&lt;p&gt;Alright, mateys! Now that I’ve got my microservices and API layer working across multiple cloud regions, what’s next? &lt;/p&gt;

&lt;p&gt;Well, I need to figure out how to configure the Global Cloud Load Balancer that intercepts user requests at the nearest PoP (point-of-presence) and then routes the requests to one of my application instances (VMs). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/denismagda" rel="noopener noreferrer"&gt;Follow me&lt;/a&gt; to be notified as soon as the next update is published! And check out the &lt;a href="https://dev.to/denismagda/series/19150"&gt;previous articles&lt;/a&gt; in my development journey.&lt;/p&gt;

</description>
      <category>api</category>
      <category>distributedsystems</category>
      <category>googlecloud</category>
      <category>yugabytedb</category>
    </item>
    <item>
      <title>Automating Java Application Deployment Across Multiple Cloud Regions</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Fri, 02 Sep 2022 20:10:05 +0000</pubDate>
      <link>https://dev.to/yugabyte/automating-java-application-deployment-across-multiple-cloud-regions-27db</link>
      <guid>https://dev.to/yugabyte/automating-java-application-deployment-across-multiple-cloud-regions-27db</guid>
      <description>&lt;p&gt;Ahoy, matey! &lt;/p&gt;

&lt;p&gt;What’s good about habits, I hear you ask? &lt;/p&gt;

&lt;p&gt;Well, some habits are better (and more socially acceptable) than others! In terms of coding, a habit or routine can help you progress more quickly toward your end goal. It’s better to take daily baby steps than wait for that “perfect” moment. &lt;/p&gt;

&lt;p&gt;I developed the habit of coding the &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger" rel="noopener noreferrer"&gt;geo-distributed messenger&lt;/a&gt; two days a week (strictly during business hours, as evenings are for my mischievous sons and dear wife!). So, here is the latest summary, for those of you who are following my development journey.&lt;/p&gt;

&lt;p&gt;This is what my end goal looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdb9cwk1jq3ts6snhei8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdb9cwk1jq3ts6snhei8.png" alt="Image description" width="800" height="723"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My geo-messenger has to function across the globe, storing data and serving user requests from multiple cloud regions. &lt;/p&gt;

&lt;p&gt;And, how far have I progressed since the beginning of the project? Well, a week ago, my application looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4goenxfvcz4zu68scg5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4goenxfvcz4zu68scg5s.png" alt="Image description" width="800" height="701"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The app ran in &lt;a href="https://dev.to/yugabyte/how-to-connect-a-heroku-java-app-to-a-cloud-native-database-58h1"&gt;Heroku and used a multi-node YugabyteDB database cluster&lt;/a&gt;. The app instance and database were deployed in a single cloud region and could tolerate zone-level outages.&lt;/p&gt;

&lt;p&gt;What happened a few days ago? Well, I continued taking baby steps toward my end goal and managed to deploy several geo-messenger instances across multiple cloud regions. So, now my app looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwgjgf0wnclytdwtb22s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwgjgf0wnclytdwtb22s.png" alt="Image description" width="800" height="701"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Several app instances run in the US West, Central, and East regions, and I can use any of them to serve user traffic. In the case of a regional outage, one instance could go down, but the availability of the other instances won’t be disrupted. However, the database still runs in a single cloud region and remains vulnerable to region-level incidents; I’ll handle this later.&lt;/p&gt;

&lt;p&gt;So, if you’re still with me on this journey, then, as the pirates used to say, “All Hand Hoy!” which means, “Everyone on deck!” &lt;/p&gt;

&lt;p&gt;Let’s review the deployment approach that I validated with my app: I’ll show you how I deploy those instances across several regions of Google Cloud. As a spoiler, I’ll give you advance notice that I don’t use the Google Cloud console (UI). &lt;/p&gt;

&lt;h1&gt;
  
  
  Creating a Google Cloud Project
&lt;/h1&gt;

&lt;p&gt;All of the resources used by my application instances are located in a dedicated Google Cloud project. &lt;/p&gt;

&lt;p&gt;Speaking of the deployment automation approach, try to guess what I’ve selected so far…  &lt;/p&gt;

&lt;p&gt;“Terraform?” No! &lt;/p&gt;

&lt;p&gt;“Maybe an Ansible script?” Nope, wrong again. &lt;/p&gt;

&lt;p&gt;“Come on, tell me that you at least deploy the app in Kubernetes!” Sorry, matey, nothing like that.&lt;/p&gt;

&lt;p&gt;In fact, I chose the &lt;a href="https://cloud.google.com/sdk/gcloud" rel="noopener noreferrer"&gt;gcloud CLI&lt;/a&gt;, which allows me to create a project, configure the network, and provision VMs via easy-to-follow commands. This is how I created the project for the geo-messenger:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud projects create geo-distributed-messenger &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Geo-Distributed Messenger"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
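

&lt;p&gt;One follow-up step worth noting: the new project either becomes the default target for subsequent gcloud commands or gets passed explicitly via &lt;code&gt;--project&lt;/code&gt;, as the firewall command below does. It also needs a billing account linked before any VMs can be provisioned. Setting the default looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Make the new project the default for the gcloud commands that follow
gcloud config set project geo-distributed-messenger
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;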



&lt;p&gt;Call me old-fashioned, but I don’t need to worry about Terraform, Ansible, or K8s yet. My priority is to get my app working across geographies in Google Cloud, so I want to go with the fastest approach. Once I reach my initial end goal, I can set another goal to support AWS, Azure, and Oracle Cloud, then use Terraform and other technologies to make the deployment cloud-agnostic.&lt;/p&gt;

&lt;h1&gt;
  
  
  Defining Firewall Rules
&lt;/h1&gt;

&lt;p&gt;Whenever you create a new project in Google Cloud, you get the &lt;code&gt;default&lt;/code&gt; virtual private cloud (VPC). The virtual machines (VMs) that will host my application instances will run within that VPC.&lt;/p&gt;

&lt;p&gt;I created a firewall rule to allow SSH, HTTP, and HTTPS traffic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute &lt;span class="nt"&gt;--project&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;geo-distributed-messenger firewall-rules create geo-messenger-allowed-traffic &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--direction&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;INGRESS &lt;span class="nt"&gt;--priority&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1000 &lt;span class="nt"&gt;--network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--action&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ALLOW &lt;span class="nt"&gt;--rules&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tcp:22,tcp:80,tcp:443 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--source-ranges&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0/0 &lt;span class="nt"&gt;--target-tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;geo-messenger-instance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rule applies to all the VMs tagged “geo-messenger-instance.” The other parameters are self-explanatory.&lt;/p&gt;

&lt;p&gt;BTW, the &lt;code&gt;default&lt;/code&gt; network uses the Premium network service tier. How is this different from the Standard tier? Check out this illustration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9pmejlcgnl8b8fbrd64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9pmejlcgnl8b8fbrd64.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For my multi-region app, I need the Premium tier so that user traffic enters Google’s global network at the nearest point-of-presence (PoP) and gets to my app instance fast.&lt;/p&gt;
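
&lt;p&gt;If a project’s default tier ever gets switched, Premium can be pinned back project-wide with a single command (a sketch; individual VMs can also override the tier with a per-instance flag):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Keep Premium as the project-wide default network tier
gcloud compute project-info update --default-network-tier=PREMIUM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;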

&lt;h1&gt;
  
  
  Deploying the First App Instance
&lt;/h1&gt;

&lt;p&gt;A gcloud command for VM provisioning comes with more parameters than the two commands reviewed above. So, I put together a &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/gcloud/start_single_app_instance.sh" rel="noopener noreferrer"&gt;start_single_app_instance.sh&lt;/a&gt; script that starts a VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start_single_app_instance.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;INSTANCE_NAME&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;CLOUD_ZONE_NAME&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;APP_HTTP_PORT_NUMBER&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"{DATABASE_CONNECTION_ENDPOINT}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;DATABASE_USER&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;DATABASE_PWD&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As an example, this is how I start a VM in the US West and connect it to my &lt;a href="https://www.yugabyte.com/" rel="noopener noreferrer"&gt;YugabyteDB&lt;/a&gt; cluster in the US East:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start_single_app_instance.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-n&lt;/span&gt; messenger-us-west-instance &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-z&lt;/span&gt; us-west2-a &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-a&lt;/span&gt; 80 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"jdbc:postgresql://us-east1.my-instance-id.gcp.ybdb.io:5433/yugabyte?ssl=true&amp;amp;sslmode=require"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-u&lt;/span&gt; my-user-name &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; my-super-complex-password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the VM is booted, it will execute &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/gcloud/startup_script.sh" rel="noopener noreferrer"&gt;the startup script&lt;/a&gt; that installs the required software and launches a geo-messenger instance on port &lt;code&gt;80&lt;/code&gt;.&lt;/p&gt;
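
&lt;p&gt;Under the hood, the essential call in that script is &lt;code&gt;gcloud compute instances create&lt;/code&gt;. Here’s a simplified, hypothetical sketch of such a call (the machine type is illustrative, and the real script presumably also passes the database parameters through to the startup script):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Boot a tagged VM that runs the startup script on first boot
gcloud compute instances create messenger-us-west-instance \
    --zone=us-west2-a \
    --machine-type=e2-medium \
    --tags=geo-messenger-instance \
    --metadata-from-file=startup-script=startup_script.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;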

&lt;h1&gt;
  
  
  Deploying More Instances
&lt;/h1&gt;

&lt;p&gt;The previous step was the most time-consuming. I had to recall how to write bash scripts and advance my knowledge of the gcloud API. But once I had the app-deployment script working, it only took a few seconds to start app instances in other geographic locations.&lt;/p&gt;

&lt;p&gt;I used this command to launch an instance in the US Central region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start_single_app_instance.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-n&lt;/span&gt; messenger-us-central-instance &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-z&lt;/span&gt; us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-a&lt;/span&gt; 80 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"jdbc:postgresql://us-east1.my-instance-id.gcp.ybdb.io:5433/yugabyte?ssl=true&amp;amp;sslmode=require"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-u&lt;/span&gt; my-user-name &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; my-super-complex-password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one launched an instance in the US East region, near Washington, DC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start_single_app_instance.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-n&lt;/span&gt; messenger-us-east-instance &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-z&lt;/span&gt; us-east4-a &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-a&lt;/span&gt; 80 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"jdbc:postgresql://us-east1.my-instance-id.gcp.ybdb.io:5433/yugabyte?ssl=true&amp;amp;sslmode=require"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-u&lt;/span&gt; my-user-name &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; my-super-complex-password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, I confirmed that every instance could serve user traffic on port &lt;code&gt;80&lt;/code&gt; using…my own hands and the browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cfv72p90ivrp8n1v345.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cfv72p90ivrp8n1v345.jpg" alt="Image description" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s on the Horizon?
&lt;/h1&gt;

&lt;p&gt;Alright, matey! Now that I’ve got my app working across multiple cloud regions, what’s next? &lt;/p&gt;

&lt;p&gt;In my next blog, I’ll talk through setting up the Global Cloud Load Balancer to intercept user requests at the nearest PoP (point-of-presence) and then route them to one of my application instances. I’ll also provision a multi-region YugabyteDB cluster so that my app can withstand region-level outages. &lt;/p&gt;

&lt;p&gt;If you want to read back over my progress, check out &lt;a href="https://dev.to/denismagda/series/19150"&gt;the series here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>cloudnative</category>
      <category>java</category>
      <category>database</category>
    </item>
    <item>
      <title>How To Connect a Heroku Java App to a Cloud-Native Database</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Thu, 25 Aug 2022 20:11:00 +0000</pubDate>
      <link>https://dev.to/yugabyte/how-to-connect-a-heroku-java-app-to-a-cloud-native-database-58h1</link>
      <guid>https://dev.to/yugabyte/how-to-connect-a-heroku-java-app-to-a-cloud-native-database-58h1</guid>
      <description>&lt;p&gt;Ahoy, matey! I'm back from a short vacation and ready to continue my pet project: &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger" rel="noopener noreferrer"&gt;geo-distributed&lt;/a&gt; messenger in Java! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/yugabyte/how-to-build-a-multi-zone-java-app-in-one-day-1ajn"&gt;In the last article&lt;/a&gt;, I launched the first version of my app, which runs in Heroku and uses YugabyteDB Managed as a cloud-native distributed database. I now feel confident that the geo-messenger can tolerate zone-level outages. Today, I’ll look at several connectivity options for Heroku and YugabyteDB Managed.&lt;/p&gt;

&lt;p&gt;You might ask, "What's wrong with the connectivity between those two SaaS products?" &lt;/p&gt;

&lt;p&gt;First, your &lt;a href="http://cloud.yugabyte.com" rel="noopener noreferrer"&gt;YugabyteDB Managed&lt;/a&gt; instance won't be visible to the whole internet once deployed. You must provide the IP addresses of the apps, services, VMs, etc. that are allowed to connect to the database. This is a common and reasonable requirement for cloud-native databases. It's true for MongoDB Atlas, Amazon Aurora, and any other cloud-native database that takes security seriously.  &lt;/p&gt;

&lt;p&gt;By default, the YugabyteDB Managed IP allow list is empty. This means that my geo-messenger's requests will be rejected. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw26mreek4wepotn6mgk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw26mreek4wepotn6mgk.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You might smile and say, “Come on, Denis, just find the static IP address of your app in Heroku and add it to YugabyteDB!” &lt;/p&gt;

&lt;p&gt;This was my exact thinking, matey! However, Heroku does not provide static IP addresses in the &lt;a href="https://devcenter.heroku.com/articles/dyno-runtime#common-runtime" rel="noopener noreferrer"&gt;Common Runtime Environment&lt;/a&gt;. So I either needed to switch to an enterprise plan that includes &lt;a href="https://devcenter.heroku.com/articles/private-spaces" rel="noopener noreferrer"&gt;private spaces&lt;/a&gt; or find another option. Being cost-conscious (i.e., cheap), I selected the latter.&lt;/p&gt;

&lt;p&gt;So if you’re still with me on this journey, then, as the pirates used to say, “All Hand Hoy!” which means, “Everyone on deck!” Let’s review the various connectivity options that I validated with my app.&lt;/p&gt;

&lt;h1&gt;
  
  
  Allow All Connections
&lt;/h1&gt;

&lt;p&gt;A brute-force solution is to allow all connections to my YugabyteDB Managed instance. I did that by adding the &lt;code&gt;0.0.0.0/0&lt;/code&gt; CIDR block to the IP allow list. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0h8ji34kkohc8axp6uz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0h8ji34kkohc8axp6uz.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recommend this option if you’re in early development (and want to focus on coding rather than infrastructure setup) or if you are deploying the complete solution in a VPC network. I was coding nonstop and didn’t want to be distracted, so I used the &lt;code&gt;0.0.0.0/0&lt;/code&gt; solution as a shortcut.&lt;/p&gt;

&lt;h1&gt;
  
  
  Use a SOCKS5 Proxy
&lt;/h1&gt;

&lt;p&gt;Once my coding slowed down, I decided to find a more elegant solution to the Heroku and YugabyteDB Managed connectivity issue. Obviously, I didn’t want my database instance to remain open to the entire internet.&lt;/p&gt;

&lt;p&gt;What was my next step? Well, as an experienced engineer with nearly 20 years in the tech industry, I knew exactly what to do: I opened a browser and Googled “heroku static ip address java.” &lt;/p&gt;

&lt;p&gt;I soon realized that this problem is so widespread that the Heroku marketplace is full of add-ons that proxy Heroku requests via static IP addresses. (By the way, that’s a good business opportunity if you want to offer your own proxy: how about founding a startup?) &lt;/p&gt;

&lt;p&gt;I then wasted an hour trying out different proxies. Nothing worked. YugabyteDB rejected the requests from my geo-messenger. I scratched my head, then scratched it again. Before scratching my head for the third time, I realized that all those proxy add-ons work for HTTP traffic only: REST, GraphQL, and other protocols that rely on HTTP. My app uses a JDBC driver that opens a direct socket connection to the database and exchanges messages using the PostgreSQL wire protocol.&lt;/p&gt;

&lt;p&gt;What was my next step? I’m sure you’ve guessed! I turned to Google again, this time searching for “heroku socks5 proxy for postgres," and finally came across the &lt;a href="https://elements.heroku.com/addons/fixie-socks" rel="noopener noreferrer"&gt;Fixie Socks&lt;/a&gt; add-on that worked for me.&lt;/p&gt;

&lt;p&gt;The step-by-step instructions are as follows (the exact commands are consolidated in a single snippet after the list):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I installed the Fixie Socks add-on: &lt;code&gt;heroku addons:create fixie-socks:handlebar -a geo-distributed-messenger&lt;/code&gt;. You can swap out “handlebar” for another tier; this one costs me $9 a month for 2,000 requests. There is also a free tier called “grip” that allows 100 requests a month.&lt;/li&gt;
&lt;li&gt;I found my static IPs on the dashboard that you can launch with this command: &lt;code&gt;heroku addons:open fixie-socks&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;I added those IPs to the YugabyteDB Managed &lt;a href="https://docs.yugabyte.com/preview/yugabyte-cloud/cloud-secure-clusters/add-connections/" rel="noopener noreferrer"&gt;IP allow lists&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;Then I introduced the custom environment variable &lt;code&gt;USE_FIXIE_SOCKS&lt;/code&gt;, which instructs my app to either use or bypass the proxy: &lt;code&gt;heroku config:set USE_FIXIE_SOCKS=true -a geo-distributed-messenger&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
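
&lt;p&gt;Here are those commands in one place (swap in your own app name and add-on tier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# 1. Install the Fixie Socks add-on ("handlebar" tier)
heroku addons:create fixie-socks:handlebar -a geo-distributed-messenger

# 2. Open the dashboard that lists the static IPs
heroku addons:open fixie-socks

# 3. After adding the IPs to the YugabyteDB Managed allow list,
#    tell the app to route database traffic through the proxy
heroku config:set USE_FIXIE_SOCKS=true -a geo-distributed-messenger
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;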

&lt;p&gt;When &lt;code&gt;USE_FIXIE_SOCKS&lt;/code&gt; is set to true, the app configures two JVM-level properties (&lt;code&gt;socksProxyHost&lt;/code&gt; and &lt;code&gt;socksProxyPort&lt;/code&gt;), instructing Java to send all network requests through the proxy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getenv&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"USE_FIXIE_SOCKS"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;useFixie&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;valueOf&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getenv&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"USE_FIXIE_SOCKS"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;

   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;useFixie&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
     &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Setting up Fixie Socks Proxy"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

     &lt;span class="c1"&gt;// The FIXIE_SOCKS_HOST variable is added by the add-on to the environment&lt;/span&gt;
     &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;fixieData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getenv&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"FIXIE_SOCKS_HOST"&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
     &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;fixieCredentials&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieData&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;].&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
     &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;fixieUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieData&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;].&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

     &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;fixieHost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieUrl&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;
     &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;fixiePort&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieUrl&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;
     &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;fixieUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieCredentials&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;
     &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;fixiePassword&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieCredentials&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;

     &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setProperty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"socksProxyHost"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fixieHost&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
     &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setProperty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"socksProxyPort"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fixiePort&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

     &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Enabled Fixie Socks Proxy:"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;fixieHost&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

     &lt;span class="nc"&gt;Authenticator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setDefault&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ProxyAuthenticator&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fixieUser&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fixiePassword&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
   &lt;span class="o"&gt;}&lt;/span&gt;
 &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
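

&lt;p&gt;The &lt;code&gt;ProxyAuthenticator&lt;/code&gt; on the last line is a small custom helper from the project. Here’s a minimal sketch of what such a class needs to do, namely hand the Fixie credentials to whatever connection asks for proxy authentication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.net.Authenticator;
import java.net.PasswordAuthentication;

// Minimal sketch: supplies the Fixie Socks credentials whenever the JVM
// negotiates authentication with the SOCKS proxy.
public class ProxyAuthenticator extends Authenticator {
    private final String user;
    private final String password;

    public ProxyAuthenticator(String user, String password) {
        this.user = user;
        this.password = password;
    }

    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication(user, password.toCharArray());
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;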



&lt;p&gt;After restarting the app in Heroku, I could connect to YugabyteDB Managed and see application workspaces, channels, and messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzlvk5cxltd72pyeenjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzlvk5cxltd72pyeenjc.png" alt="Image description" width="800" height="870"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Proxy for YugabyteDB Connections Only
&lt;/h2&gt;

&lt;p&gt;Even though the SOCKS5 proxy method worked, I still wasn’t fully satisfied with my implementation. What’s wrong? Just look at these two lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setProperty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"socksProxyHost"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fixieHost&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setProperty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"socksProxyPort"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fixiePort&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those are JVM-wide settings that force every TCP/IP connection to go through my proxy. That’s overkill: I only need the proxy for socket connections to YugabyteDB Managed.&lt;/p&gt;

&lt;p&gt;Luckily, this task is easy to solve in Java: you just need to provide a custom &lt;a href="https://docs.oracle.com/javase/7/docs/api/java/net/ProxySelector.html" rel="noopener noreferrer"&gt;ProxySelector&lt;/a&gt; implementation. This is what I did by introducing my own &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/src/main/java/com/yugabyte/app/messenger/DatabaseProxySelector.java" rel="noopener noreferrer"&gt;DatabaseProxySelector&lt;/a&gt; class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DatabaseProxySelector&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;ProxySelector&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;proxyHost&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;proxyPort&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;DatabaseProxySelector&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;proxyHost&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;proxyPort&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;proxyHost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;proxyHost&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;proxyPort&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;proxyPort&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Proxy&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;URI&lt;/span&gt; &lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// YugabyteDB Managed host always ends with `ybdb.io`&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;contains&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ybdb.io"&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Using the proxy for YugabyteDB Managed: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

            &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;InetSocketAddress&lt;/span&gt; &lt;span class="n"&gt;proxyAddress&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;InetSocketAddress&lt;/span&gt;
                    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;createUnresolved&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;proxyHost&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;proxyPort&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Collections&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;singletonList&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Proxy&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Type&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;SOCKS&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;proxyAddress&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Collections&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;singletonList&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Proxy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;NO_PROXY&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;connectFailed&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;URI&lt;/span&gt; &lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;SocketAddress&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;IOException&lt;/span&gt; &lt;span class="n"&gt;ioe&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;IOException&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to connect to the proxy"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ioe&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;printStackTrace&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the &lt;code&gt;DatabaseProxySelector&lt;/code&gt; enables the Fixie Socks proxy only for YugabyteDB Managed URIs. Finally, the selector instance is created on application startup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getenv&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"USE_FIXIE_SOCKS"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;useFixie&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;valueOf&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getenv&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"USE_FIXIE_SOCKS"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;useFixie&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Setting up Fixie Socks Proxy"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;fixieData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getenv&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"FIXIE_SOCKS_HOST"&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;fixieCredentials&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieData&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;].&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;fixieUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieData&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;].&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;fixieHost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieUrl&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;fixiePort&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieUrl&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;fixieUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieCredentials&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;fixiePassword&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixieCredentials&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;

    &lt;span class="nc"&gt;DatabaseProxySelector&lt;/span&gt; &lt;span class="n"&gt;proxySelector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DatabaseProxySelector&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fixieHost&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Integer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;parseInt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fixiePort&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
    &lt;span class="nc"&gt;ProxySelector&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setDefault&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;proxySelector&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="nc"&gt;Authenticator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setDefault&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ProxyAuthenticator&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fixieUser&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fixiePassword&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;

    &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Enabled Fixie Socks Proxy:"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;fixieHost&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Explore Private Spaces
&lt;/h1&gt;

&lt;p&gt;At the start, I mentioned that the Heroku Private Spaces feature should also work (if Heroku’s technical documentation is to be believed). For the sake of completeness, I’ve added this option to the article. Theoretically, you can set up private spaces in Heroku and peer them with a YugabyteDB Managed VPC network, but I’ll let you validate that!&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s on the Horizon?
&lt;/h1&gt;

&lt;p&gt;Alright, matey. This article concludes my thoughts on the current app version running in a single cloud region. Now, let me take a short break before I move on to the next milestone: making the geo-messenger function across several cloud regions. I’ll be looking at Google Cloud tools to automate the deployment. I’ll keep you posted!&lt;/p&gt;

&lt;p&gt;In the meantime, if you’re interested in how my dev journey began (and is going), check out the previous articles in &lt;a href="https://dev.to/denismagda/series/19150"&gt;this series&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>database</category>
      <category>java</category>
      <category>heroku</category>
    </item>
    <item>
      <title>How to Build a Multi-Zone Java App in One Day</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Thu, 18 Aug 2022 20:09:00 +0000</pubDate>
      <link>https://dev.to/yugabyte/how-to-build-a-multi-zone-java-app-in-one-day-1ajn</link>
      <guid>https://dev.to/yugabyte/how-to-build-a-multi-zone-java-app-in-one-day-1ajn</guid>
      <description>&lt;p&gt;Ahoy, matey! At last, the time has come to build and launch the first version of my geo-distributed Java application. &lt;/p&gt;

&lt;p&gt;It took me around 24 hours in total to create this version. The app currently runs on &lt;a href="https://vaadin.com" rel="noopener noreferrer"&gt;Vaadin&lt;/a&gt; and Spring, it can use PostgreSQL or &lt;a href="https://www.yugabyte.com" rel="noopener noreferrer"&gt;YugabyteDB&lt;/a&gt; as a database, and it either works locally or can be deployed in &lt;a href="https://heroku.com/" rel="noopener noreferrer"&gt;Heroku&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Don’t have a clue what I'm talking about? Welcome to my dev journal, where I’ve been documenting my experience of building a geo-distributed app in Java from scratch. In my two previous articles, I talked about &lt;a href="https://dev.to/yugabyte/geo-what-a-quick-introduction-to-geo-distributed-apps-295b"&gt;geo-distributed apps&lt;/a&gt; and what &lt;a href="https://dev.to/yugabyte/what-makes-the-architecture-of-geo-distributed-apps-different-106c"&gt;their architecture&lt;/a&gt; looks like. In this blog, I’ll actually get things done, sharing my first results and challenges.&lt;/p&gt;

&lt;p&gt;So, if you’re with me on this journey, then, as the pirates used to say - “All Hand Hoy!” which means “Everyone on deck!”&lt;/p&gt;

&lt;h1&gt;
  
  
  My App - The Big Picture
&lt;/h1&gt;

&lt;p&gt;Some of you might have noticed that I keep talking about building a generic geo-distributed Java app. But, what is my app gonna do? Well, I’m building a Slack-like corporate messenger. &lt;/p&gt;

&lt;p&gt;Yep, it may sound like I’m reinventing the wheel, but perhaps I have a killer feature in mind, my special secret sauce that will overturn the dominance of Slack!&lt;/p&gt;

&lt;p&gt;On a serious note, I’m just curious about how to design and build a geo-distributed corporate messenger (like Slack) that functions with low latency across the globe, tolerates cloud outages, and honors &lt;a href="https://en.wikipedia.org/wiki/General_Data_Protection_Regulation" rel="noopener noreferrer"&gt;GDPR&lt;/a&gt;-like requirements. So, I thought I would try to make my own! Thus, at the end of my journey, I’m expecting to have this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58y1dmn57jmh64ho7ecu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58y1dmn57jmh64ho7ecu.png" alt="Image description" width="800" height="722"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The messenger application instances and database nodes will be spread worldwide and run in the cloud regions closest to most users. The global load balancer will intercept user traffic and route requests to the application instances closest to the user.&lt;/p&gt;

&lt;p&gt;But, that’s the big picture. The finish line, as they say. However, we start with baby steps…&lt;/p&gt;

&lt;h1&gt;
  
  
  My App - Day 1
&lt;/h1&gt;

&lt;p&gt;While I have an ambitious goal for my geo-distributed messenger, the first version will be much more modest. &lt;/p&gt;

&lt;p&gt;I bet that on day 1, my Slack-killer, like most startup companies out there, will not be a big success. In fact, it might be barely noticed. If I get 100 active users daily within the first two months, then it’s already a big win.&lt;/p&gt;

&lt;p&gt;So, if I expect to have around 100 active users daily, why should I burn money on infrastructure that spans the globe? It makes little sense. It’s far more prudent to start small and then grow big. That’s why the first version of my app looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frd0cjv9zw6xlejvcwpr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frd0cjv9zw6xlejvcwpr2.png" alt="Image description" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My messenger will be deployed within a single cloud region in the US. It’s going to be a monolith running in Heroku, with a three-node YugabyteDB cluster running across three availability zones. With this architecture, I can withstand zone-level outages and serve the requests of Ms. Blue and other US-based users at low latency.&lt;/p&gt;

&lt;p&gt;Unfortunately, the latency for Mr. Green from Berlin and Mr. Red from Sydney will be high. But, as a startup, I can live with that, especially if I put most of my marketing dollars into growing the US-based user base. &lt;/p&gt;

&lt;p&gt;Once the time comes, I can easily scale my current multi-zone geo-distributed app to a multi-region one. At least, I hope that it’s going to be easy. Time will tell…&lt;/p&gt;

&lt;p&gt;Let me walk you through the main components of the first version of my messenger application. Here are its &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger" rel="noopener noreferrer"&gt;GitHub coordinates&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Vaadin: Backend and Frontend
&lt;/h1&gt;

&lt;p&gt;As an engineer, I always look for shortcuts when creating web applications, especially on the frontend side. I can’t be called a web developer, because the Web is not what I’m nerding on daily. &lt;/p&gt;

&lt;p&gt;It’s more like this… every two years I join or start a web project and learn something new. I still remember creating my first web app in raw JavaScript with HTML+CSS; then jQuery launched and became the big thing. Later, I used GWT for a few projects, and during the last project I got a chance to work with Angular.  &lt;/p&gt;

&lt;p&gt;Why did I pick Vaadin for this project? Well, even though I’m a polyglot, Java is my mother tongue. I built a special bond with Java after spending many years at Sun Microsystems and Oracle working on the JVM and JDK. Thus, as a guy who knows Java inside out, it’s always my first choice.&lt;/p&gt;

&lt;p&gt;But, putting my personal attachment to Java aside, what do I like about Vaadin after using it for 4 days in a row? Vaadin is a full-stack framework. You can use it to build both the backend and frontend logic.&lt;/p&gt;

&lt;p&gt;Firstly, as with many other frontend frameworks, you create/design pages and views using essential building blocks like &lt;code&gt;Button&lt;/code&gt;, &lt;code&gt;Label&lt;/code&gt;, &lt;code&gt;HorizontalLayout&lt;/code&gt;, etc. Then the framework translates your code, written in Java, into its JavaScript counterpart with a nice-looking UI. This is what I created: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0p6milfrpfcqvk1ntbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0p6milfrpfcqvk1ntbr.png" alt="Image description" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not bad for a guy whose design and front-end skills require some work, right? Right?!&lt;/p&gt;
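
&lt;p&gt;To make the “everything in Java” point concrete, here is a minimal, hypothetical Vaadin view. It’s a plain Java class that Vaadin renders in the browser with no hand-written HTML or JavaScript; the route name and texts are made up for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.html.Label;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.Route;

// A made-up view for illustration: a label and a button, assembled in Java.
@Route("hello")
public class HelloView extends VerticalLayout {

    public HelloView() {
        Label greeting = new Label("Ahoy, matey!");
        // The click listener runs on the server; Vaadin syncs the UI change back
        Button send = new Button("Send", click -&amp;gt; greeting.setText("Message sent!"));
        add(greeting, send);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;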

&lt;p&gt;Secondly, Vaadin doesn’t require me to run a separate backend instance and implement APIs for the frontend there. Instead, I just create my views (displayed in the browser) and add the server-side logic that the views ask the backend to execute. For instance, this is the Button that sends a message on a click event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;sendMessageButton&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Button&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Send"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;sendMessageButton&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addClickListener&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="err"&gt;…&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt; 
                &lt;span class="nc"&gt;Profile&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;userOptional&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

                &lt;span class="nc"&gt;Message&lt;/span&gt; &lt;span class="n"&gt;newMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

                &lt;span class="n"&gt;newMessage&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setChannelId&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;currentChannel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="n"&gt;newMessage&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setCountryCode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;currentChannel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getCountryCode&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="n"&gt;newMessage&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setSenderId&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="n"&gt;newMessage&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setSenderCountryCode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getCountryCode&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

                &lt;span class="n"&gt;newMessage&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setMessage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newMessageArea&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getValue&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;trim&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

                &lt;span class="n"&gt;newMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;messagingService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addMessage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;newMessage&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The listener logic is implemented in my custom UI component - &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/src/main/java/com/yugabyte/app/messenger/views/MessagesView.java" rel="noopener noreferrer"&gt;MessageView&lt;/a&gt;. The click itself happens in the browser; Vaadin then sends the event to the backend for processing, where a new Message is constructed and added to the database via a call to a Spring service - &lt;code&gt;messagingService.addMessage(newMessage)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Nice! No need to create a separate REST API layer. Vaadin can do this for me.&lt;/p&gt;
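&lt;p&gt;For the curious, here’s a minimal sketch of what the service side of that call can look like. It’s not the messenger’s actual code; the repository below is an assumed Spring Data interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import org.springframework.stereotype.Service;

// A plain Spring bean. Vaadin views call it directly; no REST controller
// or HTTP client sits in between.
@Service
public class MessagingService {

    private final MessageRepository repository; // assumed Spring Data repository

    public MessagingService(MessageRepository repository) {
        this.repository = repository;
    }

    public Message addMessage(Message newMessage) {
        // Persist the message and return the stored entity to the view.
        return repository.save(newMessage);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;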

&lt;h1&gt;
  
  
  PostgreSQL and YugabyteDB: Database
&lt;/h1&gt;

&lt;p&gt;My database choice was driven by past experience and personal preferences.&lt;/p&gt;

&lt;p&gt;PostgreSQL needs no introduction. It has been my default go-to database for any project for the last 15+ years.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.yugabyte.com" rel="noopener noreferrer"&gt;YugabyteDB&lt;/a&gt; might still be a dark horse for many. That’s the database I’m nerding on these days. It’s a distributed SQL database built on PostgreSQL. Basically, it’s a distributed PostgreSQL that can work across geographies. Exactly what I need for my geo-distributed app.&lt;/p&gt;

&lt;p&gt;Since YugabyteDB is Postgres-compliant, the app supports both databases out of the box. I use Postgres for my development environment and can always switch to a multi-node YugabyteDB cluster by tweaking the connectivity settings.&lt;/p&gt;
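&lt;p&gt;If you want to see that compatibility for yourself, here’s a tiny connectivity check. It’s a minimal sketch that assumes the stock PostgreSQL JDBC driver on the classpath and YugabyteDB’s local defaults (port 5433, database and user &lt;code&gt;yugabyte&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // Point the stock PostgreSQL driver at YugabyteDB: only the port
        // (5433 instead of Postgres' 5432) and the credentials change.
        String url = "jdbc:postgresql://127.0.0.1:5433/yugabyte";
        try (Connection conn = DriverManager.getConnection(url, "yugabyte", "yugabyte");
             ResultSet rs = conn.createStatement().executeQuery("SELECT version()")) {
            rs.next();
            System.out.println(rs.getString(1)); // a PostgreSQL-compatible version string
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;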

&lt;p&gt;Wanna try? No problem, matey, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Launch a Postgres instance locally and create the messenger’s schema following &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger#start-postgresql" rel="noopener noreferrer"&gt;these instructions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Then start the app using a familiar syntax: &lt;code&gt;mvn spring-boot:run&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Open &lt;a href="http://localhost:8080" rel="noopener noreferrer"&gt;http://localhost:8080&lt;/a&gt; and enjoy this view&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3sfnkxz7nmdvzgk9ggi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3sfnkxz7nmdvzgk9ggi.png" alt="Image description" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first startup takes a while because the &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/src/main/java/com/yugabyte/app/messenger/data/generator/DataGenerator.java" rel="noopener noreferrer"&gt;DataGenerator&lt;/a&gt; creates hundreds of Workspaces and Channels with thousands of Messages. After that, the app starts quickly.&lt;/p&gt;

&lt;p&gt;Next, if you want to use YugabyteDB for the dev environment as well, just:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger#start-yugabytedb-locally" rel="noopener noreferrer"&gt;YugabyteDB locally&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Tweak connectivity settings in the &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger/blob/main/src/main/resources/application-dev.properties" rel="noopener noreferrer"&gt;application-dev.properties&lt;/a&gt; file&lt;/li&gt;
&lt;li&gt;Start the app! Enjoy!&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Heroku and YugabyteDB Managed: Deploying to Prod
&lt;/h1&gt;

&lt;p&gt;The deployment to the prod was the most straightforward part for me. I’m a lazy guy who uses cloud services whenever possible (or reasonable). My geo-messenger was no exception.&lt;/p&gt;

&lt;p&gt;As you remember, the first version of the app is supposed to work within a single cloud region with database nodes in at least three availability zones. Zones fail frequently, and I need to be ready for that. So, what have I done?&lt;/p&gt;

&lt;p&gt;First, I deployed a multi-node YugabyteDB Managed cluster in the South Carolina (&lt;code&gt;us-east1&lt;/code&gt;) region with nodes in three availability zones - &lt;code&gt;us-east1-b&lt;/code&gt;, &lt;code&gt;us-east1-c&lt;/code&gt;, and &lt;code&gt;us-east1-d&lt;/code&gt;. Just to be clear: each node handles both reads and writes; there is no such thing as “active” and “standby” nodes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3a9wyb0tsq2sx13lc8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3a9wyb0tsq2sx13lc8j.png" alt="Image description" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, after creating a Heroku project, I deployed my Java app somewhere in the USA:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhwxqgyo4cr5wmzsa0zd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhwxqgyo4cr5wmzsa0zd.png" alt="Image description" width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently, Heroku is not transparent about the cloud region my app is assigned to. I can only hope that the app instance will be running close enough to my YugabyteDB cluster and that Heroku will take care of the app’s availability in case of cloud outages. I probably need to dig into Heroku internals here. Please share your thoughts in the comments if you know Heroku better than I do!&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s On the Horizon?
&lt;/h1&gt;

&lt;p&gt;The development of the first version of my geo-distributed messenger was a swift and fun experience. It’s nice to get a multi-zone app functioning within just four days of coding.&lt;/p&gt;

&lt;p&gt;What do we have coming up next? &lt;/p&gt;

&lt;p&gt;Performance in prod sucks. It takes 3-6 seconds to load messages when I switch between channels. I can’t blame Heroku or YugabyteDB for that, as I was coding too fast and used Spring Data in a less-than-optimal way.&lt;/p&gt;
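&lt;p&gt;I haven’t profiled the exact culprit yet, but a classic mistake of this kind looks like the hypothetical sketch below: loading a channel’s entire message history instead of asking the database for a single page. The names are illustrative, not the app’s actual code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.List;

import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical repository; Message is assumed to be a mapped entity.
public interface MessageRepository extends JpaRepository&lt;Message, Long&gt; {

    // Suboptimal: pulls every message of the channel over the network,
    // even though the UI shows only the most recent few dozen.
    List&lt;Message&gt; findByChannelId(Integer channelId);

    // Better: let the database sort and limit, returning a single page.
    List&lt;Message&gt; findByChannelId(Integer channelId, Pageable page);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;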

&lt;p&gt;So, let me work on the performance issues next and then scale the app across multiple regions. Until I do that, you can deploy the &lt;a href="https://github.com/YugabyteDB-Samples/geo-distributed-messenger#deploy-to-heroku" rel="noopener noreferrer"&gt;current version of the app&lt;/a&gt; in Heroku and see how “fast” it is. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>database</category>
      <category>architecture</category>
      <category>java</category>
    </item>
    <item>
      <title>What Makes the Architecture of Geo-Distributed Apps Different?</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Mon, 08 Aug 2022 12:55:00 +0000</pubDate>
      <link>https://dev.to/yugabyte/what-makes-the-architecture-of-geo-distributed-apps-different-106c</link>
      <guid>https://dev.to/yugabyte/what-makes-the-architecture-of-geo-distributed-apps-different-106c</guid>
      <description>&lt;p&gt;Ahoy, matey! &lt;/p&gt;

&lt;p&gt;Welcome back to my journal, where I’ve been documenting my experience of building a geo-distributed app in Java from scratch. In the &lt;a href="https://dev.to/yugabyte/geo-what-a-quick-introduction-to-geo-distributed-apps-295b"&gt;previous article&lt;/a&gt;, I broke down the definition of geo-distributed apps. If you missed that part of the journey, turn the “page” back to catch up. &lt;/p&gt;

&lt;p&gt;Today, I’ll compare and contrast geo-distributed apps with regular apps (the sort of apps you deploy within a single data center or availability zone). &lt;/p&gt;

&lt;p&gt;To understand the difference and find similarities, I need to drill down into the building blocks (architecture) of a typical geo-distributed app. So, if you’re with me on this journey, then, as the pirates used to say - “All Hand Hoy!” which means “Everyone on deck!”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmucc2hld9bsjmvm82nu4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmucc2hld9bsjmvm82nu4.gif" alt="Image description" width="480" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Geo-Distributed App Architecture
&lt;/h1&gt;

&lt;p&gt;What are the main building blocks (architecture) of a geo-distributed app? The answer will surprise you!&lt;/p&gt;

&lt;p&gt;Fundamentally, the architecture is the same as with regular applications. A geo-distributed app comes with data and application layers, just as you get with a standard app. It might have a dedicated API layer, consist of several microservices, and use a load balancer to better handle user traffic. It can rely on some middleware and…the list goes on and on!&lt;/p&gt;

&lt;p&gt;So, what's the big difference then? Well, the key to the right answer is engraved in the definition of geo-distributed apps:&lt;/p&gt;

&lt;p&gt;A geo-distributed app is an app that spans multiple geographic locations for &lt;strong&gt;high availability&lt;/strong&gt;, &lt;strong&gt;resiliency&lt;/strong&gt;, &lt;strong&gt;compliance&lt;/strong&gt; and &lt;strong&gt;performance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;All of those building blocks and components have to possess the characteristics in &lt;strong&gt;bold&lt;/strong&gt;. For the sake of simplicity, let's review those characteristics in relation to the data layer, application layer and load balancer. And some visuals might also help!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w3dpdj7gbtj8y7fpdiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w3dpdj7gbtj8y7fpdiw.png" alt="Image description" width="800" height="722"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Users&lt;/strong&gt; of a geo-distributed app live around the world. Depending on the app, you might be dealing with users residing in a single continent (or a large part of it) or, literally anywhere else on planet Earth. In our example (above), we have three users. Let’s call them Ms. Blue, Mr. Green, and Mr. Red. &lt;/p&gt;

&lt;p&gt;These users open up their laptops, launch a browser and go to the &lt;strong&gt;Internet&lt;/strong&gt;. They type our app address in the browser window and expect to see the app’s page within a few seconds, if not immediately. If the page doesn’t load within 1-3 seconds, they’ll consider the app slow (and may go elsewhere). So, how does the geo-distributed app handle this?&lt;/p&gt;

&lt;h1&gt;
  
  
  Global Load Balancer
&lt;/h1&gt;

&lt;p&gt;The geo-distributed app in our illustration relies on a global load balancer that receives requests, figures out the sender's location, and forwards the requests to the &lt;strong&gt;application instance&lt;/strong&gt; closest to the user.&lt;/p&gt;

&lt;p&gt;The picture shows that Ms. Blue's request is routed to an application instance running in the US West.  Mr. Green's to an instance in Europe. And Mr. Red's to an instance in Australia. The closer the application instance to the user, the faster the app can process their request. This is how the global load balancer contributes to the performance characteristic of this geo-distributed app.&lt;/p&gt;

&lt;p&gt;But, how does the load balancer help with high availability and reliability? &lt;/p&gt;

&lt;p&gt;Imagine that the US West region becomes unavailable, and the load balancer can no longer forward Ms. Blue's requests there. The balancer won't panic. Instead, it will figure out another closest location for Ms. Blue (which should be US East) and start forwarding her traffic there. Automatically!&lt;/p&gt;
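&lt;p&gt;In code terms, the routing decision boils down to something like this toy sketch. It’s a deliberate oversimplification (real global load balancers rely on anycast, health checks, and live latency measurements), and the region names are made up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.List;
import java.util.Map;
import java.util.Set;

public class NearestRegionPicker {

    // Hypothetical proximity table: a user's location maps to regions
    // ordered from closest to farthest.
    static final Map&lt;String, List&lt;String&gt;&gt; PROXIMITY = Map.of(
            "us-west", List.of("us-west", "us-east", "europe", "australia"),
            "europe", List.of("europe", "us-east", "us-west", "australia"),
            "australia", List.of("australia", "us-west", "us-east", "europe"));

    static String route(String userLocation, Set&lt;String&gt; healthyRegions) {
        // The first healthy region in proximity order wins.
        for (String region : PROXIMITY.get(userLocation)) {
            if (healthyRegions.contains(region)) {
                return region;
            }
        }
        throw new IllegalStateException("No healthy regions left");
    }

    public static void main(String[] args) {
        // US West is down: Ms. Blue's traffic automatically goes to US East.
        Set&lt;String&gt; healthy = Set.of("us-east", "europe", "australia");
        System.out.println(route("us-west", healthy)); // prints us-east
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;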

&lt;h1&gt;
  
  
  Geo-Distributed Application Layer
&lt;/h1&gt;

&lt;p&gt;Alright, let’s move to the application layer of the architecture. This is where the global load balancer forwards a user request.&lt;/p&gt;

&lt;p&gt;In the picture you can see that the application layer spans continents. The application can be a monolith or can be split into several microservices. It doesn’t really matter for now. What’s most important is that multiple instances of the app are running across the globe. I guess it’s already apparent why it’s done this way, so let’s just recap.&lt;/p&gt;

&lt;p&gt;Multiple app instances ensure that the geo-distributed app can tolerate various cloud outages, including major incidents. This makes the application layer reliable and highly available. On top of that, with multiple instances, the geo-distributed app can serve user requests at low latency, regardless of the user’s location. This is what makes the app layer performant. &lt;/p&gt;

&lt;h1&gt;
  
  
  Horizontally Scalable Data Layer
&lt;/h1&gt;

&lt;p&gt;Finally, the application instance selected to serve a user request needs to read data from, or write it to, the data layer (database). With geo-distributed apps, you usually use a distributed database that can scale horizontally. This is why the picture above has multiple database nodes scattered worldwide. &lt;/p&gt;

&lt;p&gt;Distributed databases greatly improve the reliability and availability of geo-distributed apps. These databases store redundant copies of data, allowing the app to remain operational during various outages. If a database node becomes unavailable due to a cloud incident, the application requests can be handled by the remaining healthy nodes.&lt;/p&gt;

&lt;p&gt;From a performance standpoint, the closer the user data to the application instance, the better. You don’t want an application instance from Sydney handling requests for Mr. Red to go to a database node in Asia or South America. Right? Right! With distributed databases, it’s possible to arrange data close to the user’s location, minimizing latencies.&lt;/p&gt;

&lt;p&gt;Importantly, the ability to place user data in specific locations also enables the geo-distributed app to comply with data residency requirements. GDPR is a serious thing. So, when the global load balancer forwards Mr. Green’s requests to an app instance in Europe, the instance has to read personal data from and write it to the database node(s) deployed in the European Union. &lt;/p&gt;

&lt;p&gt;That’s it. As you see, the architecture of a geo-distributed app comes with the same components you use in regular applications - including the data layer, application layer, and load balancer. The only difference is that, in the case of geo-distributed apps, all of those components have to function across several distant locations.&lt;/p&gt;

&lt;p&gt;Still confused? Any questions? Nudge me in the comments and we’ll get things sorted together!&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s on the Horizon?
&lt;/h1&gt;

&lt;p&gt;Alright, with this article I’ve finished discussing the basic concepts related to geo-distributed applications. Now you know what beast a geo-distributed app is, and what components it includes.&lt;/p&gt;

&lt;p&gt;What do we have coming up next? &lt;/p&gt;

&lt;p&gt;Well, I’ll introduce you to my first geo-distributed app prototype. It’s still raw, but that’s OK as long as there’s a foundation to build upon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkke0cfnb1fuyc7htfxs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkke0cfnb1fuyc7htfxs.png" alt="Image description" width="603" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a Java web application built on the Vaadin framework and deployed in Heroku. The database is YugabyteDB. The prototype functions within a single cloud region, but across multiple availability zones.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>database</category>
      <category>architecture</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Geo What? A Quick Introduction to Geo-Distributed Apps</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Wed, 03 Aug 2022 02:02:24 +0000</pubDate>
      <link>https://dev.to/yugabyte/geo-what-a-quick-introduction-to-geo-distributed-apps-295b</link>
      <guid>https://dev.to/yugabyte/geo-what-a-quick-introduction-to-geo-distributed-apps-295b</guid>
<description>&lt;p&gt;Have you heard of geo-distributed apps? According to my statistics, around 50% of us haven’t! &lt;/p&gt;

&lt;p&gt;Microsoft defines a geo-distributed app as an &lt;a href="https://docs.microsoft.com/en-us/learn/modules/design-a-geographically-distributed-application/" rel="noopener noreferrer"&gt;app that spans multiple geographic locations&lt;/a&gt; for high availability and resiliency. Geo-distributed is a relatively new term, coined around the time when many of us jumped on the cloud bandwagon and started building cloud-native apps like crazy. &lt;/p&gt;

&lt;p&gt;Dealing with geo-distributed apps is now my day duty. While I reside on the data layer side of things - working with &lt;a href="https://www.yugabyte.com" rel="noopener noreferrer"&gt;YugabyteDB&lt;/a&gt; - I'm a proponent of the "eat your own dog food" approach! &lt;br&gt;
Many developers use YugabyteDB as the database component in geo-distributed apps. But, the database is just a piece of the puzzle. There are many more layers to take care of - middleware, backend, API, networking, frontend, and global cloud load balancer, to name just a few. &lt;/p&gt;

&lt;p&gt;This article is the first in my journal, where I'll be documenting my experience of building a geo-distributed app in Java from scratch. I know it’s going to be tough,  but it’s going to be a completely open and uncensored story. It will include wins, tips &amp;amp; tricks, struggles, and things to avoid! &lt;/p&gt;

&lt;p&gt;This story may last a few months, or might even take a bit longer. The one thing I know for sure is that I'll finish the project much faster than Stephen King who worked on the &lt;a href="https://en.wikipedia.org/wiki/The_Dark_Tower_(series)" rel="noopener noreferrer"&gt;Dark Tower series&lt;/a&gt; for 34 years!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnl1zgk7xeunfrlqmnstk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnl1zgk7xeunfrlqmnstk.gif" alt="Image description" width="480" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, if you’re with me, then, as the pirates used to say - “All Hand Hoy!” which means “Everyone on deck!”&lt;/p&gt;

&lt;h1&gt;
  
  
  Clarifying the Definition of Geo-Distributed Apps
&lt;/h1&gt;

&lt;p&gt;Personally, I would expand Microsoft’s definition of geo-distributed apps by adding a couple of extra words - an app that spans multiple geographic locations for high availability, resiliency, &lt;strong&gt;compliance&lt;/strong&gt; and &lt;strong&gt;performance&lt;/strong&gt;. Now that’s too many buzzwords, right? So, let’s break it into pieces because, frankly, the definition is accurate and concise if you know the full context.&lt;/p&gt;

&lt;p&gt;The high-availability/resiliency characteristic implies that the app can withstand all types of cloud outages, including major incidents. A typical cloud environment comes with abundant resources and services, but you shouldn’t expect all those resources to be available 100% of the time. &lt;a href="https://aws.amazon.com/premiumsupport/technology/pes/" rel="noopener noreferrer"&gt;Starting in 2011&lt;/a&gt;, AWS alone had a major outage at least once a year. It took an average of four hours to recover. And minor outages happen all the time. The purpose of geo-distributed apps is to remain available and resilient even in the event of major cloud incidents.&lt;/p&gt;

&lt;p&gt;So, what does &lt;strong&gt;compliance&lt;/strong&gt; mean in relation to geo-distributed apps? This one is simpler. Remember &lt;a href="https://en.wikipedia.org/wiki/General_Data_Protection_Regulation" rel="noopener noreferrer"&gt;GDPR&lt;/a&gt; and similar data residency requirements? That’s about it. Suppose you create an app for US-based users, and it becomes a big success. The next milestone is to offer the same service in Europe. Is your architecture ready for that? Or do you need to postpone the launch because the app is not yet GDPR-ready? With a geo-distributed app, the expansion happens naturally and as quickly as you like, because such apps are capable of keeping users’ personal data in their countries of origin.&lt;/p&gt;

&lt;p&gt;Yo Ho Ho! We’ve got to the &lt;strong&gt;performance&lt;/strong&gt; characteristic of the apps. Let’s use some visuals for this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb79t7empfz4gctyxkht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb79t7empfz4gctyxkht.png" alt="Image description" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Suppose an app’s data resides in a single cloud region, like the US West. Users who live nearby can experience a latency as low as 5 ms (Google Cloud strives to deliver 5ms roundtrip latencies between availability zones within a region). But, if the user interacts with the app from South Asia, latency might be 40x higher (&lt;a href="https://docs.google.com/spreadsheets/u/1/d/1lCUjdT-JNoATftGshtUIPQIl0CLb2Z8DCL-k8UAMtec/pubhtml" rel="noopener noreferrer"&gt;~220ms in Google Cloud&lt;/a&gt; because the traffic needs to travel through cables under the ocean surface). &lt;/p&gt;

&lt;p&gt;The geo-distributed app’s promise and requirement are to compete against the laws of physics and brutal reality (such as the cables in the ocean) in a way that allows the user from South Asia to experience the same (or comparable) latency as the user from the US:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhziv8rnadw8ajlt3poj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhziv8rnadw8ajlt3poj.png" alt="Image description" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next articles in my developer journal, I’ll dive into how to drop latency from 220ms to 8ms for folks in South Asia. In the meantime, as the picture shows, let me add a database instance to that region, which is a bare minimum.&lt;/p&gt;

&lt;p&gt;With those examples in place, the definition of a geo-distributed app should make more sense to you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A geo-distributed app is an app that spans multiple geographic locations for high availability, resiliency, compliance and performance.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Still confused or do you have any questions on the above? Nudge me in the comments and we’ll get things sorted together!&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s on the Horizon?
&lt;/h1&gt;

&lt;p&gt;Alright, that’s enough for my opening chapter. What do we have on the horizon? &lt;/p&gt;

&lt;p&gt;In the next chapters, I’ll introduce you to the typical building blocks of geo-distributed apps (aka. architecture) and share the results of the prototype I’ve developed over the last four days. The prototype is promising, but, arrrgghh, the performance sucks, and the UI is as ugly as...&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>database</category>
      <category>architecture</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Distributing Data Across Distant Locations With Table Geo-Partitioning</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Wed, 20 Jul 2022 13:27:34 +0000</pubDate>
      <link>https://dev.to/yugabyte/distributing-data-across-distant-locations-with-table-geo-partitioning-d0d</link>
      <guid>https://dev.to/yugabyte/distributing-data-across-distant-locations-with-table-geo-partitioning-d0d</guid>
      <description>&lt;p&gt;In the first two articles of the table partitioning series, we reviewed how the &lt;a href="https://dev.to/yugabyte/optimizing-application-queries-with-partition-pruning-4g4b"&gt;partition pruning&lt;/a&gt; and &lt;a href="https://dev.to/yugabyte/managing-data-placement-with-table-partitioning-3pjm"&gt;maintenance capabilities&lt;/a&gt; of PostgreSQL can speed up and facilitate the design of our applications. In this final post in the &lt;a href="https://dev.to/denismagda/series/18783"&gt;series&lt;/a&gt;, we’ll experiment with the table geo-partitioning feature that automatically distributes application data across multiple locations.&lt;/p&gt;

&lt;p&gt;We’ll continue using a large pizza chain as an example. This pizza chain has branches in New York, London, and Hong Kong. It also uses a single centralized database cluster to track the orders of its delighted customers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsorhjyyrxysqjtthn5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsorhjyyrxysqjtthn5r.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time, the geo-distributed database cluster runs on &lt;a href="https://www.yugabyte.com" rel="noopener noreferrer"&gt;YugabyteDB&lt;/a&gt;. For those who are not familiar with this database yet, YugabyteDB is a PostgreSQL-compliant distributed database that distributes data evenly across a cluster of nodes. Additionally, it scales both read and write operations by utilizing all the cluster resources. YugabyteDB supports several &lt;a href="https://dzone.com/articles/exploring-multi-region-database-deployment-options" rel="noopener noreferrer"&gt;multi-region deployments&lt;/a&gt;; however, in this article, we’ll focus on the geo-partitioned option.&lt;/p&gt;

&lt;h1&gt;
  
  
  Geo-Partitioned Table
&lt;/h1&gt;

&lt;p&gt;Let’s take the &lt;code&gt;PizzaOrders&lt;/code&gt; table again but now partition it by the &lt;code&gt;Region&lt;/code&gt; column. The table tracks the order’s progress (introduced in the &lt;a href="https://dev.to/yugabyte/optimizing-application-queries-with-partition-pruning-4g4b"&gt;first article&lt;/a&gt;), and the newly added &lt;code&gt;Region&lt;/code&gt; column defines the location of an order:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh1lgcwvue1gxj5a8133.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh1lgcwvue1gxj5a8133.png" alt="Image description" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Orders_US&lt;/code&gt;: this partition stores the orders placed in New York and other cities in the US region. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Orders_EU&lt;/code&gt;: the branch in London will keep all its orders in this partition. Once the pizza chain opens new locations in Europe, orders from those branches will go to the &lt;code&gt;Orders_EU&lt;/code&gt; partition as well. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Orders_APAC&lt;/code&gt;: when Hong Kong customers order a pizza, the order goes to this partition. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, it’s straightforward to split the &lt;code&gt;PizzaOrders&lt;/code&gt; into partitions that will be distributed by the database across distant geographical locations. But, before we get to the step-by-step instructions, let’s provide some rationale for this type of partitioning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;It’s good for performance &amp;amp; user experience&lt;/em&gt; — the speed and responsiveness of your pizza application will be similar for all the customers, regardless of their whereabouts. For instance, orders from Hong Kong will go through the &lt;code&gt;Orders_APAC&lt;/code&gt; table, where data is stored on the database nodes in APAC.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;It’s good for data regulation&lt;/em&gt; — all the orders and personal data of European customers won’t leave the boundaries of the EU. The data that belongs to the &lt;code&gt;Orders_EU&lt;/code&gt; partition will be located on database nodes in Europe which satisfies the GDPR requirements.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;It’s good from a business perspective&lt;/em&gt; — the pizza chain headquarters has full control of a single database cluster that can be scaled in specific geographies once the load increases. Also, the app can easily query and join geo-distributed data through a single database endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Starting the Geo-Partitioned Cluster
&lt;/h1&gt;

&lt;p&gt;Now, let’s experiment with geo-partitioned tables. First, deploy a geo-partitioned instance of YugabyteDB. Here you have two options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option #1:&lt;/strong&gt; You can deploy and configure a geo-partitioned cluster through the &lt;a href="https://cloud.yugabyte.com" rel="noopener noreferrer"&gt;YugabyteDB Managed&lt;/a&gt; interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy9c4z7r71xr9f9jy8s5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy9c4z7r71xr9f9jy8s5.png" alt="Image description" width="800" height="1023"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option #2:&lt;/strong&gt; You can simulate a geo-partitioned cluster on your local machine with YugabyteDB open source and Docker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;mkdir ~/yb_docker_data&lt;/span&gt;

&lt;span class="s"&gt;docker network create yugabytedb_network&lt;/span&gt;
&lt;span class="c1"&gt;# Starting a node in the US&lt;/span&gt;
&lt;span class="s"&gt;docker run -d --name yugabytedb_node_us --net yugabytedb_network -p 7001:7000 -p 9000:9000 -p 5433:5433 \&lt;/span&gt;
  &lt;span class="s"&gt;-v ~/yb_docker_data/node_us:/home/yugabyte/yb_data --restart unless-stopped \&lt;/span&gt;
  &lt;span class="s"&gt;yugabytedb/yugabyte:latest bin/yugabyted start --listen=yugabytedb_node_us \&lt;/span&gt;
  &lt;span class="s"&gt;--base_dir=/home/yugabyte/yb_data --daemon=false \&lt;/span&gt;
  &lt;span class="s"&gt;--master_flags="placement_zone=A,placement_region=US,placement_cloud=CLOUD" \&lt;/span&gt;
  &lt;span class="s"&gt;--tserver_flags="placement_zone=A,placement_region=US,placement_cloud=CLOUD"&lt;/span&gt;

&lt;span class="c1"&gt;# Starting a node in Europe&lt;/span&gt;
&lt;span class="s"&gt;docker run -d --name yugabytedb_node_eu --net yugabytedb_network \&lt;/span&gt;
  &lt;span class="s"&gt;-v ~/yb_docker_data/node_eu:/home/yugabyte/yb_data --restart unless-stopped \&lt;/span&gt;
  &lt;span class="s"&gt;yugabytedb/yugabyte:latest bin/yugabyted start --join=yugabytedb_node_us --listen=yugabytedb_node_eu \&lt;/span&gt;
  &lt;span class="s"&gt;--base_dir=/home/yugabyte/yb_data --daemon=false \&lt;/span&gt;
  &lt;span class="s"&gt;--master_flags="placement_zone=A,placement_region=EU,placement_cloud=CLOUD" \&lt;/span&gt;
  &lt;span class="s"&gt;--tserver_flags="placement_zone=A,placement_region=EU,placement_cloud=CLOUD"&lt;/span&gt;

&lt;span class="c1"&gt;# Starting a node in APAC&lt;/span&gt;
&lt;span class="s"&gt;docker run -d --name yugabytedb_node_apac --net yugabytedb_network \&lt;/span&gt;
  &lt;span class="s"&gt;-v ~/yb_docker_data/node_apac:/home/yugabyte/yb_data --restart unless-stopped \&lt;/span&gt;
  &lt;span class="s"&gt;yugabytedb/yugabyte:latest bin/yugabyted start --join=yugabytedb_node_us --listen=yugabytedb_node_apac \&lt;/span&gt;
  &lt;span class="s"&gt;--base_dir=/home/yugabyte/yb_data --daemon=false \&lt;/span&gt;
 &lt;span class="s"&gt;--master_flags="placement_zone=A,placement_region=APAC,placement_cloud=CLOUD" \&lt;/span&gt;
  &lt;span class="s"&gt;--tserver_flags="placement_zone=A,placement_region=APAC,placement_cloud=CLOUD"&lt;/span&gt;

&lt;span class="c1"&gt;# Updating the nodes’ placement&lt;/span&gt;
&lt;span class="s"&gt;docker exec -i yugabytedb_node_us \&lt;/span&gt;
&lt;span class="s"&gt;yb-admin -master_addresses yugabytedb_node_us:7100,yugabytedb_node_eu:7100,yugabytedb_node_apac:7100 \&lt;/span&gt;
&lt;span class="s"&gt;modify_placement_info CLOUD.US.A,CLOUD.EU.A,CLOUD.APAC.A &lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’ll use the latter option, which starts a three-node cluster with one node in each geographic location:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;yugabytedb_node_us&lt;/code&gt;: the node is placed in the US (&lt;code&gt;placement_region=US&lt;/code&gt;) and will keep the data of the &lt;code&gt;Orders_US&lt;/code&gt; partition.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;yugabytedb_node_eu&lt;/code&gt;: the node is located in Europe (&lt;code&gt;placement_region=EU&lt;/code&gt;) and will store orders from the &lt;code&gt;Orders_EU&lt;/code&gt; partition.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;yugabytedb_node_apac&lt;/code&gt;: as you can guess, this node is for the orders of the APAC customers. Thus, it’s placed in the &lt;code&gt;placement_region=APAC&lt;/code&gt; region.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the cluster is started, you can connect to it using the following psql command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;psql &lt;span class="nt"&gt;-h&lt;/span&gt; 127.0.0.1 &lt;span class="nt"&gt;-p&lt;/span&gt; 5433 yugabyte &lt;span class="nt"&gt;-U&lt;/span&gt; yugabyte &lt;span class="nt"&gt;-w&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Creating Tablespaces
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.postgresql.org/docs/current/manage-ag-tablespaces.html" rel="noopener noreferrer"&gt;Tablespaces&lt;/a&gt; is a handy PostgreSQL feature. It allows you to define locations on the filesystem where the files representing a database object are stored. As a Postgres-compliant database, YugabyteDB supports this feature and lets you use the tablespaces for geo-partitioning needs.&lt;/p&gt;

&lt;p&gt;So, your next step is to create tablespaces for the geo-partitioned &lt;code&gt;PizzaOrders&lt;/code&gt; table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;us_tablespace&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;replica_placement&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"num_replicas": 1, "placement_blocks":
  [{"cloud":"CLOUD","region":"US","zone":"A","min_num_replicas":1}]}'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;eu_tablespace&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;replica_placement&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"num_replicas": 1, "placement_blocks":
  [{"cloud":"CLOUD","region":"EU","zone":"A","min_num_replicas":1}]}'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;apac_tablespace&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;replica_placement&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"num_replicas": 1, "placement_blocks":
  [{"cloud":"CLOUD","region":"APAC","zone":"A","min_num_replicas":1}]}'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands create a tablespace for each location: the US, EU, and APAC. Each tablespace gets assigned to the node with matching placement information. For instance, the &lt;code&gt;us_tablespace&lt;/code&gt; will be assigned to the &lt;code&gt;yugabytedb_node_us&lt;/code&gt; node that you started earlier, because the placement info of that node (&lt;code&gt;placement_zone=A,placement_region=US,placement_cloud=CLOUD&lt;/code&gt;) corresponds to the placement of &lt;code&gt;us_tablespace&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating Partitions
&lt;/h1&gt;

&lt;p&gt;The final configuration step is to split the &lt;code&gt;PizzaOrders&lt;/code&gt; table into the three previously discussed partitions: &lt;code&gt;Orders_US&lt;/code&gt;, &lt;code&gt;Orders_EU&lt;/code&gt;, and &lt;code&gt;Orders_APAC&lt;/code&gt;. You can use the commands below to do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TYPE&lt;/span&gt; &lt;span class="n"&gt;status_t&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nb"&gt;ENUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;
 &lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="n"&gt;order_id&lt;/span&gt;   &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;order_status&lt;/span&gt;   &lt;span class="n"&gt;status_t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;order_time&lt;/span&gt;   &lt;span class="nb"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;LIST&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;Orders_US&lt;/span&gt;
    &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;
    &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'US'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;us_tablespace&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;Orders_EU&lt;/span&gt;
    &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;
    &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'EU'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;eu_tablespace&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;Orders_APAC&lt;/span&gt;
    &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'APAC'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;TABLESPACE&lt;/span&gt; &lt;span class="n"&gt;apac_tablespace&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In fact, a combination of several capabilities enables geo-partitioning in YugabyteDB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, the original table (&lt;code&gt;PizzaOrders&lt;/code&gt;) is partitioned using the LIST Partitioning method (&lt;code&gt;PARTITION BY LIST (region)&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Second, each partition is assigned to one of the tablespaces. For instance, in the command above, the &lt;code&gt;Orders_APAC&lt;/code&gt; partition is assigned to the &lt;code&gt;TABLESPACE apac_tablespace&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Lastly, each tablespace with its partitions is automatically mapped to (i.e., placed on) YugabyteDB nodes from the corresponding geography. &lt;/li&gt;
&lt;/ul&gt;
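
&lt;p&gt;From the application side, nothing special is required to benefit from this layout. The query just has to include the partition key so the planner can prune it down to the one geo-partition near the user. Here’s a minimal sketch in Java, assuming the local Docker cluster from above and the stock PostgreSQL JDBC driver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ApacOrders {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://127.0.0.1:5433/yugabyte";
        try (Connection conn = DriverManager.getConnection(url, "yugabyte", "yugabyte");
             // Because region is part of the predicate, the planner prunes
             // the query down to the Orders_APAC partition only.
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT order_id, order_status FROM PizzaOrders WHERE region = ?")) {
            stmt.setString(1, "APAC");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("order_id") + ": " + rs.getString("order_status"));
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;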

&lt;p&gt;Alright, now let’s run this command just to make sure that the &lt;code&gt;PizzaOrders&lt;/code&gt; table was, in fact, partitioned properly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                                       &lt;span class="n"&gt;Partitioned&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="nv"&gt;"public.pizzaorders"&lt;/span&gt;
    &lt;span class="k"&gt;Column&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt;            &lt;span class="k"&gt;Type&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Collation&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Nullable&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Default&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Storage&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Stats&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Description&lt;/span&gt; 
&lt;span class="c1"&gt;--------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------&lt;/span&gt;
 &lt;span class="n"&gt;order_id&lt;/span&gt;     &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt;                     &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
 &lt;span class="n"&gt;order_status&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;status_t&lt;/span&gt;                    &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;          &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
 &lt;span class="n"&gt;order_time&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;timestamp&lt;/span&gt; &lt;span class="k"&gt;without&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="k"&gt;zone&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;          &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
 &lt;span class="n"&gt;region&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;text&lt;/span&gt;                        &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;extended&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
&lt;span class="k"&gt;Partition&lt;/span&gt; &lt;span class="k"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;LIST&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;Indexes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nv"&gt;"pizzaorders_pkey"&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lsm&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt; &lt;span class="n"&gt;HASH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="k"&gt;ASC&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;Partitions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;orders_apac&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'APAC'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;orders_eu&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'EU'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="n"&gt;orders_us&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'US'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Testing Table Geo-Partitioning
&lt;/h1&gt;

&lt;p&gt;You are now ready for the final test. Go ahead and add a few orders to the database. To start, put in some data for the US-based pizza chain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; 
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2021-12-27 22:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'US'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-05-15 13:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'US'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 8:45:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'US'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 9:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'US'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Double-check that the data got placed in the &lt;code&gt;Orders_US&lt;/code&gt; partition (refer to the &lt;code&gt;tableoid&lt;/code&gt; column in the result):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;tableoid&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="n"&gt;tableoid&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;   &lt;span class="n"&gt;order_status&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="n"&gt;order_time&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; 
&lt;span class="c1"&gt;-----------+----------+-------------------+---------------------+--------&lt;/span&gt;
 &lt;span class="n"&gt;orders_us&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;US&lt;/span&gt;
 &lt;span class="n"&gt;orders_us&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;US&lt;/span&gt;
 &lt;span class="n"&gt;orders_us&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;US&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;orders_us&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;09&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;US&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, attempt to insert pizza orders for customers from London (the EU region) and Hong Kong (the APAC region):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; 
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-05-23 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'EU'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-23 19:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'APAC'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 8:30:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'APAC'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'EU'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; 

&lt;span class="n"&gt;ERROR&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;Illegal&lt;/span&gt; &lt;span class="k"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Nonlocal&lt;/span&gt; &lt;span class="n"&gt;tablet&lt;/span&gt; &lt;span class="n"&gt;accessed&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="k"&gt;local&lt;/span&gt; &lt;span class="n"&gt;transaction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;tablet&lt;/span&gt; &lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="n"&gt;fb2ab17298424096d215f5f1f32515&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’ve been following these instructions, you’ll receive the error message above. It occurs because the psql session is opened through a YugabyteDB node deployed in the US (&lt;code&gt;yugabytedb_node_us&lt;/code&gt;), and, by default, YugabyteDB doesn’t let you perform transactions that span several geographies. So what are your options? You can connect to the nodes in the EU and APAC and insert the data from there. Or, you can turn on the &lt;code&gt;force_global_transaction&lt;/code&gt; setting and insert the data from the US-based node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;force_global_transaction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;TRUE&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; 
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-05-23 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'EU'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-23 19:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'APAC'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 8:30:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'APAC'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'EU'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the command succeeds, confirm that the orders are placed properly across the partitions in their respective geographies (again, refer to the &lt;code&gt;tableoid&lt;/code&gt; column in the output):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;tableoid&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="n"&gt;tableoid&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;order_id&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;   &lt;span class="n"&gt;order_status&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt;     &lt;span class="n"&gt;order_time&lt;/span&gt;      &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; 
&lt;span class="c1"&gt;-------------+----------+-------------------+---------------------+--------&lt;/span&gt;
 &lt;span class="n"&gt;orders_us&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;US&lt;/span&gt;
 &lt;span class="n"&gt;orders_us&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;US&lt;/span&gt;
 &lt;span class="n"&gt;orders_eu&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;EU&lt;/span&gt;
 &lt;span class="n"&gt;orders_apac&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;APAC&lt;/span&gt;
 &lt;span class="n"&gt;orders_apac&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;delivering&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;APAC&lt;/span&gt;
 &lt;span class="n"&gt;orders_us&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;US&lt;/span&gt;
 &lt;span class="n"&gt;orders_us&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;09&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;US&lt;/span&gt;
 &lt;span class="n"&gt;orders_eu&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;        &lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;ordered&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;EU&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
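

&lt;p&gt;One caveat: &lt;code&gt;force_global_transaction&lt;/code&gt; is a session-level setting, so every subsequent transaction in that session also pays the cross-region coordination price. If only a few statements need to be global, here’s a minimal sketch that scopes the setting to a single transaction (assuming the standard PostgreSQL &lt;code&gt;SET LOCAL&lt;/code&gt; semantics that YSQL inherits; order #9 is a hypothetical new order):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;BEGIN;

-- applies only until COMMIT/ROLLBACK; the rest of the session stays region-local
SET LOCAL force_global_transaction = TRUE;

INSERT INTO PizzaOrders VALUES
(9, 'ordered', '2022-06-24 11:00:00', 'EU');

COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;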



&lt;h1&gt;
  
  
  Wrapping Up…
&lt;/h1&gt;

&lt;p&gt;Alright, this is more than enough for starters when it comes to table geo-partitioning. &lt;a href="https://docs.yugabyte.com/preview/explore/multi-region-deployments/row-level-geo-partitioning/#example-scenario" rel="noopener noreferrer"&gt;Check out this article&lt;/a&gt; if you’d like to learn how to add new regions to a pre-existing geo-partitioned cluster or how to withstand region-level outages. &lt;/p&gt;

&lt;p&gt;This article wraps up our &lt;a href="https://dev.to/denismagda/series/18783"&gt;series&lt;/a&gt; dedicated to the topic of table partitioning for application developers. Have fun building these apps, and stay tuned for some new content related to databases, distributed systems, and Java!&lt;/p&gt;

</description>
      <category>yugabytedb</category>
      <category>postgres</category>
      <category>architecture</category>
      <category>database</category>
    </item>
    <item>
      <title>Managing Data Placement With Table Partitioning</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Tue, 12 Jul 2022 20:22:44 +0000</pubDate>
      <link>https://dev.to/yugabyte/managing-data-placement-with-table-partitioning-3pjm</link>
      <guid>https://dev.to/yugabyte/managing-data-placement-with-table-partitioning-3pjm</guid>
      <description>&lt;p&gt;Table partitioning is a very convenient technique supported by several databases including MySQL, Oracle, PostgreSQL, and YugabyteDB. &lt;a href="https://dev.to/yugabyte/optimizing-application-queries-with-partition-pruning-4g4b"&gt;In the first article of this series&lt;/a&gt;, we discussed an application that automates the operations of a large pizza chain. We reviewed how PostgreSQL improves the application’s performance with the partition pruning feature by eliminating unnecessary partitions from the query execution plan. &lt;/p&gt;

&lt;p&gt;In this article, we’ll examine how &lt;a href="https://www.postgresql.org/docs/current/ddl-partitioning.html" rel="noopener noreferrer"&gt;PostgreSQL’s partition maintenance&lt;/a&gt; capabilities can further influence and simplify the architecture of your apps. We’ll again take the pizza chain app as an example, whose database schema comes with the &lt;code&gt;PizzaOrders&lt;/code&gt; table. To remind you, the table tracks the order’s progress (&lt;a href="https://dev.to/yugabyte/optimizing-application-queries-with-partition-pruning-4g4b"&gt;table data is explained in the first article&lt;/a&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbmf9fvc7ksofcw3aq8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbmf9fvc7ksofcw3aq8o.png" alt="Image description" width="480" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, pretend you need to separate the orders for the current, previous, and all other earlier months. So, you go ahead and partition the &lt;code&gt;PizzaOrders&lt;/code&gt; by the &lt;code&gt;OrderTime&lt;/code&gt; column:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yh9qk09wh3hvqq3in5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yh9qk09wh3hvqq3in5c.png" alt="Image description" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a result, the original table gets split into three partitioned tables or partitions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Orders_2022_06&lt;/code&gt; - the table keeps all the orders for the current month (June 2022). Suppose that the customer-facing microservices heavily use the data from this partition.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Orders_2022_05&lt;/code&gt; - the table stores orders for the previous month (May 2022). Assume internal microservices that facilitate short-term planning regularly query this data in combination with the current month’s data.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;OrdersOthers&lt;/code&gt; - the table holds the remaining historical data used by the BI tools for strategic planning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nice, you can partition data by time, and the database will ensure each microservice queries only the data it needs. But, wait, the current and past months are not static notions. Once the calendar page flips to July 1st, July 2022 will become the current month. But how do you reflect this change at the database level? Let’s talk about partition maintenance techniques.&lt;/p&gt;

&lt;h1&gt;
  
  
  Partition Maintenance
&lt;/h1&gt;

&lt;p&gt;The structure of your partitions might be dynamic. Quite frequently, you might want to remove partitions holding old data and add new partitions for incoming data. That’s the case with our pizza chain. And this maintenance task can be fulfilled entirely at the database level, with no code changes on the application side.&lt;/p&gt;

&lt;p&gt;In PostgreSQL, partitions are regular tables that you can query or alter directly. So, whenever necessary you can use standard DDL commands to &lt;code&gt;CREATE&lt;/code&gt;, &lt;code&gt;ATTACH&lt;/code&gt;, &lt;code&gt;DETACH&lt;/code&gt;, and &lt;code&gt;DROP&lt;/code&gt; partitions. &lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Original Partitions
&lt;/h2&gt;

&lt;p&gt;First, let’s create the original partitions that we’ve discussed above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TYPE&lt;/span&gt; &lt;span class="n"&gt;status_t&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nb"&gt;ENUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;
 &lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="n"&gt;id&lt;/span&gt;   &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;status&lt;/span&gt;   &lt;span class="n"&gt;status_t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;ordertime&lt;/span&gt;   &lt;span class="nb"&gt;timestamp&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;RANGE&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ordertime&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-06-01'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-07-01'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders_2022_05&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-05-01'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-06-01'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders_others&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;PARTITION BY RANGE (ordertime)&lt;/code&gt; clause requests to split the &lt;code&gt;PizzaOrders&lt;/code&gt; table using the &lt;a href="https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-OVERVIEW" rel="noopener noreferrer"&gt;Range Partitioning&lt;/a&gt; method. The resulting partitions will keep the orders based on the value of the &lt;code&gt;ordertime&lt;/code&gt; column. For instance, if the &lt;code&gt;ordertime&lt;/code&gt; is between '2022-06-01' (inclusive) and '2022-07-01' (exclusive), then a pizza order goes into the current month’s partition (which is &lt;code&gt;orders_2022_06&lt;/code&gt;). The &lt;code&gt;orders_others&lt;/code&gt; partition is the DEFAULT one: it keeps all the orders whose &lt;code&gt;ordertime&lt;/code&gt; value doesn’t fit into the range of any other partition.&lt;/p&gt;
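
&lt;p&gt;To make the boundary rule concrete, here’s a quick check you could run (a throwaway sketch; order #100 is hypothetical). A timestamp of exactly '2022-07-01 00:00:00' misses the June partition’s exclusive upper bound, so at this point in the schema it falls through to the DEFAULT partition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- '2022-07-01' is excluded from orders_2022_06, so the row
-- lands in the DEFAULT partition (orders_others)
INSERT INTO PizzaOrders VALUES (100, 'ordered', '2022-07-01 00:00:00');

SELECT tableoid::regclass FROM PizzaOrders WHERE id = 100;
-- returns: orders_others

DELETE FROM PizzaOrders WHERE id = 100; -- clean up the throwaway row
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;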

&lt;p&gt;Second, all the created partitions are regular tables that you can work with using DDL and DML commands. For instance, let’s load sample data and query the current month’s partitioned table directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; 
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2021-12-27 22:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-05-15 13:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-05-23 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-23 19:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 8:30:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 8:45:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 9:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; 

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ordertime&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="s1"&gt;'2022_06_20'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="s1"&gt;'2022_06_30'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;status&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;ordertime&lt;/span&gt;      
&lt;span class="c1"&gt;----+-------------------+---------------------&lt;/span&gt;
  &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
  &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;delivering&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
  &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
  &lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;09&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;ordered&lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s certainly handy that we can query partitioned tables directly. However, you don’t want your customer-facing microservices to remember the actual current month and which partition to query. Instead, the microservices query the top-level &lt;code&gt;PizzaOrders&lt;/code&gt; table, and PostgreSQL applies the partition pruning optimization the following way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;EXPLAIN&lt;/span&gt; &lt;span class="k"&gt;ANALYZE&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
    &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ordertime&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="s1"&gt;'2022_06_20'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="s1"&gt;'2022_06_30'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                                                                     &lt;span class="n"&gt;QUERY&lt;/span&gt; &lt;span class="n"&gt;PLAN&lt;/span&gt;                                                                      
&lt;span class="c1"&gt;-----------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
 &lt;span class="n"&gt;Seq&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="n"&gt;pizzaorders&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;37&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;75&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;010&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;012&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;ordertime&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-20 00:00:00'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;timestamp&lt;/span&gt; &lt;span class="k"&gt;without&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="k"&gt;zone&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ordertime&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-30 00:00:00'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;timestamp&lt;/span&gt; &lt;span class="k"&gt;without&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="k"&gt;zone&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
 &lt;span class="n"&gt;Planning&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;122&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
 &lt;span class="n"&gt;Execution&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;040&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the same query, but PostgreSQL (and not your application layer) decides which partition keeps the data. The execution plan shows that the query ran against the &lt;code&gt;orders_2022_06&lt;/code&gt; partition, bypassing the others.&lt;/p&gt;
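
&lt;p&gt;If a query’s filter spans several partitions, the planner keeps just those and still skips the rest. For instance, a range covering both May and June (a sketch; the exact plan output depends on your PostgreSQL version) is answered by the two monthly partitions only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;EXPLAIN SELECT * FROM PizzaOrders
  WHERE ordertime BETWEEN '2022-05-20' AND '2022-06-30';
-- expect an Append over scans of orders_2022_05 and orders_2022_06;
-- orders_others is pruned from the plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;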

&lt;p&gt;However, this ability to work with partitioned tables directly is extremely useful when you need to change the structure of your partitions. Now, assume that tomorrow is July 1st, 2022. You need to add a new partition to keep the orders for the new current month (July), and introduce a few other changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Detaching Old Partitions
&lt;/h2&gt;

&lt;p&gt;Let’s first deal with the &lt;code&gt;orders_2022_05&lt;/code&gt; partition, which presently holds the data for the “previous month” (May 2022). You do this because once July becomes the “current month”, June becomes the “previous month”, according to the application logic. &lt;/p&gt;

&lt;p&gt;First, let’s remove the May partition from the partition structure using the &lt;code&gt;DETACH&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="n"&gt;DETACH&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="n"&gt;orders_2022_05&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you do this, read all the records from the &lt;code&gt;PizzaOrders&lt;/code&gt; table to confirm that no May records are left:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;status&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;ordertime&lt;/span&gt;      
&lt;span class="c1"&gt;----+-------------------+---------------------&lt;/span&gt;
  &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
  &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;delivering&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
  &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
  &lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;09&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
  &lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;ordered&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t get scared: the May data didn’t evaporate! It’s still in the same partitioned table, which you can query directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders_2022_05&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

 &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;status&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;ordertime&lt;/span&gt;      
&lt;span class="c1"&gt;----+-------------------+---------------------&lt;/span&gt;
  &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, remember the &lt;code&gt;orders_others&lt;/code&gt; partition that keeps all the orders whose &lt;code&gt;ordertime&lt;/code&gt; value doesn’t fit into the ranges of the other partitions? Now go ahead and move the May records there. You can do this by inserting the data into the top-level &lt;code&gt;PizzaOrders&lt;/code&gt; table and letting PostgreSQL arrange the records across partitions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt; &lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;ordertime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
    &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;detached&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;detached&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;detached&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ordertime&lt;/span&gt; 
&lt;span class="err"&gt; &lt;/span&gt;  &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders_2022_05&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;detached&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, you can safely drop the detached &lt;code&gt;orders_2022_05&lt;/code&gt; table because you already have a copy of the May orders in the &lt;code&gt;orders_others&lt;/code&gt; partition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders_2022_05&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;tableoid&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="n"&gt;tableoid&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;status&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;ordertime&lt;/span&gt;      
&lt;span class="c1"&gt;----------------+----+-------------------+---------------------&lt;/span&gt;
 &lt;span class="n"&gt;orders_others&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orders_others&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orders_others&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;delivering&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;09&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;ordered&lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
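
&lt;p&gt;One practical note on the three steps above (detach, copy, drop): they can run inside a single transaction, so concurrent readers of &lt;code&gt;PizzaOrders&lt;/code&gt; never observe the May orders missing mid-move. Here’s a sketch, assuming the plain &lt;code&gt;DETACH PARTITION&lt;/code&gt; form (which, unlike &lt;code&gt;DETACH ... CONCURRENTLY&lt;/code&gt;, is allowed inside a transaction block):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;BEGIN;

ALTER TABLE PizzaOrders DETACH PARTITION orders_2022_05;

-- re-route the May rows through the top-level table into orders_others
INSERT INTO PizzaOrders (id, status, ordertime)
  SELECT id, status, ordertime FROM orders_2022_05;

DROP TABLE orders_2022_05;

COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;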



&lt;h2&gt;
  
  
  Attaching New Partitions
&lt;/h2&gt;

&lt;p&gt;Finally, let’s create a partition for July that’s about to become the “current month”, in accordance with the application logic. &lt;/p&gt;

&lt;p&gt;The most straightforward way to do this is by creating and attaching a new partition to the &lt;code&gt;PizzaOrders&lt;/code&gt; table in a single step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders_2022_07&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-07-01'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-08-01'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new partition is named &lt;code&gt;orders_2022_07&lt;/code&gt;, and it’s added to the partition structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                                            &lt;span class="n"&gt;Partitioned&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="nv"&gt;"public.pizzaorders"&lt;/span&gt;
  &lt;span class="k"&gt;Column&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;            &lt;span class="k"&gt;Type&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Collation&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Nullable&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Default&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Storage&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Compression&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Stats&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Description&lt;/span&gt; 
&lt;span class="c1"&gt;-----------+-----------------------------+-----------+----------+---------+---------+-------------+--------------+-------------&lt;/span&gt;
 &lt;span class="n"&gt;id&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt;                     &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;          &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
 &lt;span class="n"&gt;status&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;status_t&lt;/span&gt;                    &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;          &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
 &lt;span class="n"&gt;ordertime&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;timestamp&lt;/span&gt; &lt;span class="k"&gt;without&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="k"&gt;zone&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;          &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
&lt;span class="k"&gt;Partition&lt;/span&gt; &lt;span class="k"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;RANGE&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ordertime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;Partitions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;orders_2022_06&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-06-01 00:00:00'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-07-01 00:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;orders_2022_07&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-07-01 00:00:00'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2022-08-01 00:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="n"&gt;orders_others&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
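

&lt;p&gt;Alternatively, if you’re on PostgreSQL 12 or later, you can inspect the same hierarchy with a plain query instead of a psql meta-command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Lists the parent table and all of its partitions (PostgreSQL 12+).
SELECT relid, parentrelid, isleaf, level
  FROM pg_partition_tree('pizzaorders');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;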



&lt;p&gt;Easy, isn’t it? Let’s test the changes by inserting dummy data for July 2022 and checking which partition those records belong to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; 
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-07-02 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-07-02 9:50:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-07-01 18:10:00'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;tableoid&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;tableoid&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;status&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;ordertime&lt;/span&gt;      
&lt;span class="c1"&gt;----------------+----+-------------------+---------------------&lt;/span&gt;
 &lt;span class="n"&gt;orders_others&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orders_others&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 orders_others  |  3 | yummy-in-my-tummy | 2022-05-23 10:00:00
 orders_2022_06 |  4 | yummy-in-my-tummy | 2022-06-23 19:00:00
 orders_2022_06 |  5 | delivering        | 2022-06-24 08:30:00
 orders_2022_06 |  6 | baking            | 2022-06-24 08:45:00
 orders_2022_06 |  7 | baking            | 2022-06-24 09:00:00
 orders_2022_06 |  8 | ordered           | 2022-06-24 10:00:00
 orders_2022_07 |  9 | ordered           | 2022-07-02 10:00:00
 orders_2022_07 | 10 | baking            | 2022-07-02 09:50:00
 orders_2022_07 | 11 | yummy-in-my-tummy | 2022-07-01 18:10:00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Done! We easily changed the structure of the partitions by detaching the partition for May and attaching a new one for July, and no changes were necessary on the application side. Our microservices continued to query the &lt;code&gt;PizzaOrders&lt;/code&gt; table directly without worrying about the underlying partitions.&lt;/p&gt;
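
&lt;p&gt;For completeness, detaching is also a single command. Here’s a minimal sketch, assuming the May partition followed the same naming scheme (&lt;code&gt;orders_2022_05&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Detach the partition; it becomes a standalone table that keeps its data
-- and can be archived or dropped without touching PizzaOrders.
ALTER TABLE PizzaOrders DETACH PARTITION orders_2022_05;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;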

&lt;h1&gt;
  
  
  To Be Continued…
&lt;/h1&gt;

&lt;p&gt;Alright, with this article we finished the review of partition pruning and maintenance capabilities that can improve performance and facilitate the design of your application. Check out this &lt;a href="https://www.postgresql.org/docs/current/ddl-partitioning.html" rel="noopener noreferrer"&gt;PostgreSQL resource&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;In a follow-up article, you’ll learn how to &lt;a href="https://dzone.com/articles/how-to-geo-partition-data-in-distributed-sql" rel="noopener noreferrer"&gt;use geo-partitioning&lt;/a&gt; to pin pizza orders to a specific geographic location. After all, we’ve been working on the application for a large pizza chain that feeds and delights customers across countries and continents. Stay tuned!&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>architecture</category>
      <category>devops</category>
      <category>database</category>
    </item>
    <item>
      <title>Optimizing Application Queries With Partition Pruning</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Thu, 07 Jul 2022 16:16:42 +0000</pubDate>
      <link>https://dev.to/yugabyte/optimizing-application-queries-with-partition-pruning-4g4b</link>
      <guid>https://dev.to/yugabyte/optimizing-application-queries-with-partition-pruning-4g4b</guid>
      <description>&lt;p&gt;&lt;a href="https://dzone.com/articles/sql-database-table-and-data-partitioning-when-and" rel="noopener noreferrer"&gt;Table partitioning&lt;/a&gt; is a very handy feature supported by several databases, including PostgreSQL, MySQL, Oracle, and YugabyteDB. This feature is useful when you need to split a large table into smaller independent pieces called partitioned tables or partitions. If you’re not familiar with this feature yet, consider the following simple example.&lt;/p&gt;

&lt;p&gt;Let’s pretend you develop an application that automates operations for a large pizza chain. Your database schema has a &lt;code&gt;PizzaOrders&lt;/code&gt; table that tracks the order’s progress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F195fcwtutdurph2znuzz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F195fcwtutdurph2znuzz.png" alt="Image description" width="480" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each row in the table represents a customer order with a unique identifier (&lt;code&gt;ID&lt;/code&gt;), the time the customer put the order in (&lt;code&gt;OrderTime&lt;/code&gt;), and the current order’s &lt;code&gt;Status&lt;/code&gt;. The status ranges from &lt;code&gt;ORDERED&lt;/code&gt; to &lt;code&gt;YUMMY-IN-MY-TUMMY&lt;/code&gt;. The latter status means that a pizza has already been delivered to the customer’s door and, hopefully, consumed.&lt;/p&gt;

&lt;p&gt;As a large pizza chain, the application needs to deal with thousands of orders daily. This means the size of the &lt;code&gt;PizzaOrders&lt;/code&gt; table is getting out of control. Eventually, you &lt;a href="https://dzone.com/articles/mysql-partition-pruning" rel="noopener noreferrer"&gt;take advantage of the table partitioning feature&lt;/a&gt; and split the table into several smaller independent tables called partitions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdonbcduanrhkf0nq4sxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdonbcduanrhkf0nq4sxr.png" alt="Image description" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The original table gets split by the &lt;code&gt;Status&lt;/code&gt; column into three partitioned tables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;OrdersCompleted&lt;/code&gt; table stores all the orders that were handed over to the customer.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;OrdersInDelivery&lt;/code&gt; table tracks the orders on the road to the customer’s location.&lt;/li&gt;
&lt;li&gt;And the &lt;code&gt;OrdersInProgress&lt;/code&gt; table is for the pizzas that are yet to be baked.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nice! As a quick win, now you can manage the size and state of each partition independently and control the cost, for instance, by using a cheaper storage medium for historical data (&lt;code&gt;OrdersCompleted&lt;/code&gt;). Also, you might see an immediate performance gain after the split for queries that request only pizzas with a specific status. For example, if the app requests all the orders that are in progress, then the request will go to the &lt;code&gt;OrdersInProgress&lt;/code&gt; table, bypassing other partitions.&lt;/p&gt;
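
&lt;p&gt;To make the storage-cost idea concrete, moving the historical partition to a cheaper storage medium is a one-liner. Here’s a minimal sketch, assuming a tablespace with the hypothetical name &lt;code&gt;archive_storage&lt;/code&gt; has already been created on that medium:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Hypothetical tablespace backed by a cheaper disk:
-- CREATE TABLESPACE archive_storage LOCATION '/mnt/cold_storage/pgdata';

ALTER TABLE OrdersCompleted SET TABLESPACE archive_storage;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;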

&lt;p&gt;Alright, now you have a basic understanding of table partitioning. Next, let’s dive deeper and see how table partitioning can improve your apps’ performance. In this article, we start with the partition pruning feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Partition Pruning
&lt;/h2&gt;

&lt;p&gt;In PostgreSQL, &lt;a href="https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITION-PRUNING" rel="noopener noreferrer"&gt;partition pruning&lt;/a&gt; is an optimization technique that allows the SQL engine to exclude unnecessary partitions from query execution. As a result, your query will run against a smaller data set, bypassing the excluded partitions. The smaller the data set to query, the better the performance.&lt;/p&gt;

&lt;p&gt;Let’s see what happens when the planner and executor apply this optimization technique.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Partitions
&lt;/h3&gt;

&lt;p&gt;Earlier, we discussed how to use the status column to partition the &lt;code&gt;PizzaOrders&lt;/code&gt; table into three tables: &lt;code&gt;OrdersInProgress&lt;/code&gt;, &lt;code&gt;OrdersInDelivery&lt;/code&gt;, and &lt;code&gt;OrdersCompleted&lt;/code&gt;. These are the DDL commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TYPE&lt;/span&gt; &lt;span class="n"&gt;status_t&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nb"&gt;ENUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;
 &lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="n"&gt;id&lt;/span&gt;   &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;status&lt;/span&gt;   &lt;span class="n"&gt;status_t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;ordertime&lt;/span&gt;   &lt;span class="nb"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;LIST&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;OrdersInProgress&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;OrdersInDelivery&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;OrdersCompleted&lt;/span&gt; &lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;PARTITION BY LIST (status)&lt;/code&gt; clause instructs PostgreSQL to split the table using the &lt;a href="https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-OVERVIEW" rel="noopener noreferrer"&gt;List Partitioning&lt;/a&gt; method. With this method, the table is partitioned by explicitly listing which value(s) appear in each partition. PostgreSQL also supports &lt;a href="https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-OVERVIEW" rel="noopener noreferrer"&gt;Hash, Range, and Multilevel partitioning methods&lt;/a&gt; that you can review later.&lt;/p&gt;
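
&lt;p&gt;For comparison, here is a minimal sketch of the Range method applied to a hypothetical &lt;code&gt;PizzaOrdersByMonth&lt;/code&gt; table, splitting orders by month on the &lt;code&gt;ordertime&lt;/code&gt; column:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;CREATE TABLE PizzaOrdersByMonth
 (
   id        int,
   status    status_t,
   ordertime timestamp
 ) PARTITION BY RANGE (ordertime);

-- One partition per month; rows are routed by the ordertime value.
CREATE TABLE orders_june_2022 PARTITION OF PizzaOrdersByMonth
  FOR VALUES FROM ('2022-06-01') TO ('2022-07-01');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;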

&lt;p&gt;After the split, you can use the command below to see the information about the table with its partitions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;Partitioned&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="nv"&gt;"public.pizzaorders"&lt;/span&gt;
  &lt;span class="k"&gt;Column&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;            &lt;span class="k"&gt;Type&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Collation&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Nullable&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Default&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Storage&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Compression&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Stats&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Description&lt;/span&gt; 
&lt;span class="c1"&gt;-----------+-----------------------------+-----------+----------+---------+---------+-------------+--------------+-------------&lt;/span&gt;
 &lt;span class="n"&gt;id&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;integer&lt;/span&gt;                     &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
 &lt;span class="n"&gt;status&lt;/span&gt;    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;status_t&lt;/span&gt;                    &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
 &lt;span class="n"&gt;ordertime&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;timestamp&lt;/span&gt; &lt;span class="k"&gt;without&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="k"&gt;zone&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;           &lt;span class="o"&gt;|&lt;/span&gt;          &lt;span class="o"&gt;|&lt;/span&gt;         &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;plain&lt;/span&gt;   &lt;span class="o"&gt;|&lt;/span&gt;             &lt;span class="o"&gt;|&lt;/span&gt;              &lt;span class="o"&gt;|&lt;/span&gt; 
&lt;span class="k"&gt;Partition&lt;/span&gt; &lt;span class="k"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;LIST&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;Indexes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nv"&gt;"pizzaorders_pkey"&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;btree&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;Partitions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;orderscompleted&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;ordersindelivery&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;ordersinprogress&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let’s insert sample data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; 
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2021-12-27 22:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-05-15 13:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-05-23 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-23 19:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'delivering'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 8:30:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 8:45:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'baking'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 9:00:00'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'ordered'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'2022-06-24 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And run the query below to see in which partition each record is stored:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;tableoid&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

     &lt;span class="n"&gt;tableoid&lt;/span&gt;     &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;status&lt;/span&gt;       &lt;span class="o"&gt;|&lt;/span&gt;      &lt;span class="n"&gt;ordertime&lt;/span&gt;      
&lt;span class="c1"&gt;------------------+----+-------------------+---------------------&lt;/span&gt;
 &lt;span class="n"&gt;orderscompleted&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orderscompleted&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orderscompleted&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;orderscompleted&lt;/span&gt;  &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;yummy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tummy&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;ordersindelivery&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;delivering&lt;/span&gt;        &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;ordersinprogress&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
 &lt;span class="n"&gt;ordersinprogress&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;  &lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;baking&lt;/span&gt;            &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;09&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;ordersinprogress&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;ordered&lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;2022&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;06&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;tableoid&lt;/code&gt; column shows the name of the partition that stores each record. As you can see, all the records were properly arranged across the partitioned tables based on the value of their &lt;code&gt;Status&lt;/code&gt; column.&lt;/p&gt;

&lt;h2&gt;
  
  
  Partition Pruning in Action
&lt;/h2&gt;

&lt;p&gt;Lastly, let’s assume you want to get all the orders that were handed over to customers and, hopefully, consumed by them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;EXPLAIN&lt;/span&gt; &lt;span class="k"&gt;ANALYZE&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

                                                           &lt;span class="n"&gt;QUERY&lt;/span&gt; &lt;span class="n"&gt;PLAN&lt;/span&gt;                                                           
&lt;span class="c1"&gt;--------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
 &lt;span class="n"&gt;Bitmap&lt;/span&gt; &lt;span class="n"&gt;Heap&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;orderscompleted&lt;/span&gt; &lt;span class="n"&gt;pizzaorders&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;57&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;019&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;020&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="k"&gt;Recheck&lt;/span&gt; &lt;span class="n"&gt;Cond&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;status_t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;Heap&lt;/span&gt; &lt;span class="n"&gt;Blocks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;exact&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
   &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="n"&gt;Bitmap&lt;/span&gt; &lt;span class="k"&gt;Index&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;orderscompleted_pkey&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;012&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;013&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="k"&gt;Index&lt;/span&gt; &lt;span class="n"&gt;Cond&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;status_t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;Planning&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;321&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;Execution&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;052&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The execution plan shows that the query was executed over a single partition (&lt;code&gt;OrdersCompleted&lt;/code&gt;) bypassing the others. This is the partition pruning feature in action! And it’s enabled by default in PostgreSQL.&lt;/p&gt;
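
&lt;p&gt;You can confirm that pruning is responsible for this plan by switching it off for the current session and re-running the query; the plan then falls back to an &lt;code&gt;Append&lt;/code&gt; over every partition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Pruning is governed by a session-level setting (on by default).
SET enable_partition_pruning = off;

EXPLAIN SELECT * FROM PizzaOrders
  WHERE status = 'yummy-in-my-tummy';

-- Restore the default when done experimenting.
SET enable_partition_pruning = on;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;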

&lt;h2&gt;
  
  
  Partition Pruning and Functions
&lt;/h2&gt;

&lt;p&gt;Partition pruning works only if a query filters data on the partition key using a constant value or externally supplied parameters. If you wrap the partition key in an expression or use a non-immutable function within the &lt;code&gt;WHERE&lt;/code&gt; clause, the planner will skip this optimization and scan all the partitions instead. Don’t be surprised if you see the following execution plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;EXPLAIN&lt;/span&gt; &lt;span class="k"&gt;ANALYZE&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;PizzaOrders&lt;/span&gt; 
  &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

                                                           &lt;span class="n"&gt;QUERY&lt;/span&gt; &lt;span class="n"&gt;PLAN&lt;/span&gt;                                                           
&lt;span class="c1"&gt;--------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
 &lt;span class="n"&gt;Append&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;127&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;27&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;074&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;076&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="n"&gt;Seq&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;ordersinprogress&lt;/span&gt; &lt;span class="n"&gt;pizzaorders_1&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;040&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;040&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="k"&gt;Rows&lt;/span&gt; &lt;span class="n"&gt;Removed&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
   &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="n"&gt;Seq&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;ordersindelivery&lt;/span&gt; &lt;span class="n"&gt;pizzaorders_2&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;019&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;019&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="k"&gt;Rows&lt;/span&gt; &lt;span class="n"&gt;Removed&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
   &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;  &lt;span class="n"&gt;Seq&lt;/span&gt; &lt;span class="n"&gt;Scan&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;orderscompleted&lt;/span&gt; &lt;span class="n"&gt;pizzaorders_3&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;014&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;015&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="n"&gt;loops&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="n"&gt;Filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'yummy-in-my-tummy'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;Planning&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;435&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="n"&gt;Execution&lt;/span&gt; &lt;span class="nb"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;222&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The planner can’t reduce such an expression to a value it can match against the partition bounds at planning time. Therefore, it queries all the partitions, which might be significantly slower.&lt;/p&gt;
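
&lt;p&gt;If you do need a function in the filter, one workaround is to apply it to the input value rather than to the partition key column, and cast the result back to the key’s type. A minimal sketch; because the &lt;code&gt;status&lt;/code&gt; column stays uncast, the planner folds the right-hand side into a constant and prunes again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;EXPLAIN SELECT * FROM PizzaOrders
  WHERE status = lower('YUMMY-IN-MY-TUMMY')::status_t;
-- The plan is back to a single scan over the orderscompleted partition.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;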

&lt;h2&gt;
  
  
  To Be Continued…
&lt;/h2&gt;

&lt;p&gt;Alright, that’s enough to understand the basics of table partitioning and how your apps can execute faster if the database applies the partition pruning optimization technique. What’s good about pruning is that you, as a developer, just define your tables and respective partitions, and then the database engine will take care of the optimization.&lt;/p&gt;

&lt;p&gt;In the following articles, you’ll learn how partition management can simplify and improve the architecture of event-driven and streaming applications. You’ll also discover how you can use the geo-partitioning technique to pin user data to specific geographies. The latter method comes in handy when an app must comply with data regulatory requirements or serve user requests with low latency, regardless of the user’s location.&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>performance</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How Java Apps Litter Beyond the Heap</title>
      <dc:creator>Denis Magda</dc:creator>
      <pubDate>Mon, 27 Jun 2022 20:03:53 +0000</pubDate>
      <link>https://dev.to/yugabyte/how-java-apps-litter-beyond-the-heap-4jl8</link>
      <guid>https://dev.to/yugabyte/how-java-apps-litter-beyond-the-heap-4jl8</guid>
      <description>&lt;p&gt;As Java developers, we’re no strangers to the concept of garbage collection. Our apps generate garbage all the time, and that garbage is meticulously cleaned out by CMS, G1, Azul C4, and other types of collectors. Basically, our apps are born to bring value to this world, but, nothing is perfect—including our apps that leave litter in the Java heap.&lt;/p&gt;

&lt;p&gt;However, the story doesn’t end with the Java heap. In fact, it only starts there. Let’s take the example of a basic Java application that uses a relational database such as PostgreSQL and solid-state drives (SSDs) as a storage device. From here, we’ll explore how our applications generate garbage beyond the boundaries of the Java runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Filling Up PostgreSQL With Dead Tuples
&lt;/h2&gt;

&lt;p&gt;When your Java application executes a DELETE or UPDATE statement against a PostgreSQL database, a deleted record is not removed immediately, nor is an existing record updated in its place. Instead, the deleted record is marked as a dead tuple and will remain in storage. The updated record is, in fact, a brand new record that PostgreSQL inserts by copying the previous version of the record and updating the requested columns. The previous version of that updated record is considered deleted and, as with the DELETE operation, marked as a dead tuple.&lt;/p&gt;
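
&lt;p&gt;You can watch the dead tuples pile up through PostgreSQL’s statistics views. Here’s a minimal sketch with a throwaway &lt;code&gt;accounts&lt;/code&gt; table (the counters are maintained asynchronously, so give them a moment to catch up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;CREATE TABLE accounts (id int PRIMARY KEY, balance int);
INSERT INTO accounts SELECT n, 100 FROM generate_series(1, 1000) n;

-- Every UPDATE leaves the previous row version behind as a dead tuple.
UPDATE accounts SET balance = balance + 1;

-- Reports roughly 1000 dead tuples for the table.
SELECT relname, n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
 WHERE relname = 'accounts';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;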

&lt;p&gt;There is a good reason why the database engine keeps old versions of the deleted and updated records in its storage. For starters, your application can run a bunch of transactions against PostgreSQL in parallel, and some of those transactions start earlier than others. If a transaction deletes a record that still might be of interest to a few transactions that started earlier, then the record needs to be kept in the database (at least until all of those earlier transactions finish). This is how PostgreSQL implements &lt;a href="https://dzone.com/articles/all-about-two-phase-locking" rel="noopener noreferrer"&gt;MVCC (multi-version concurrency control)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It’s clear that PostgreSQL can’t and doesn’t want to keep the dead tuples forever. This is why the database has its own garbage collection process called &lt;a href="https://www.postgresql.org/docs/9.6/routine-vacuuming.html" rel="noopener noreferrer"&gt;vacuuming&lt;/a&gt;. There are two types of &lt;a href="https://www.postgresql.org/docs/current/sql-vacuum.html" rel="noopener noreferrer"&gt;VACUUM&lt;/a&gt;: the plain one and the full one. The plain VACUUM works in parallel with your application workloads and doesn’t block your queries. This type of vacuuming marks the space occupied by dead tuples as free, making it available for new data that your app will add to the same table later. The plain VACUUM doesn’t return the space to the operating system, where it could be reused by other tables or third-party applications (except in some corner cases when a page includes only dead tuples and sits at the end of a table). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpst7gtj3l0371q6tisr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpst7gtj3l0371q6tisr.png" alt="Image description" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By contrast, the full VACUUM does return the free space to the operating system, but it blocks application workloads. You can think of it as Java’s “stop-the-world” garbage collection pause, except that in PostgreSQL such a pause can last for hours (or even days). Thus, database admins try their best to prevent the full VACUUM from happening at all.&lt;/p&gt;
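
&lt;p&gt;Both flavors of VACUUM are a single command. Continuing the sketch above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Plain VACUUM: runs alongside the workload and marks the space
-- occupied by dead tuples as reusable within the table.
VACUUM (VERBOSE) accounts;

-- Full VACUUM: rewrites the table and returns the space to the OS,
-- but holds an exclusive lock that blocks reads and writes.
VACUUM FULL accounts;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;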

&lt;p&gt;Let me stop here and move down to the next level — SSDs. Check out this &lt;a href="https://www.percona.com/blog/2018/08/06/basic-understanding-bloat-vacuum-postgresql-mvcc/" rel="noopener noreferrer"&gt;demo-driven article&lt;/a&gt; if you’d like to develop a much deeper understanding of vacuuming.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating Stale Data in SSDs
&lt;/h2&gt;

&lt;p&gt;If you thought garbage collection is just for software, then… surprise, surprise! Some hardware devices also need to perform garbage collection routines. &lt;a href="https://en.wikipedia.org/wiki/Solid-state_drive" rel="noopener noreferrer"&gt;SSDs&lt;/a&gt; do garbage collection all the time!&lt;/p&gt;

&lt;p&gt;Whenever your Java application deletes or updates any data on disk, whether through PostgreSQL as discussed above or directly via the Java File API, the app generates garbage on the SSD.&lt;/p&gt;

&lt;p&gt;An SSD stores data in pages (usually between 4KB and 16KB in size), which are grouped into blocks. While your data can be written or read at the page level, the stale (deleted) data can be erased only at the block level. Erasure requires more voltage than read or write operations do, and it’s hard to target that voltage at the page level without impacting the adjacent cells.&lt;/p&gt;

&lt;p&gt;So, if your Java app updates a file, then, in fact, the updated segment will be written to an empty page, potentially in a different block. The segment with the old data will be marked as stale and garbage collected later. First, a &lt;a href="https://www.techtarget.com/searchstorage/definition/solid-state-storage-SSS-garbage-collection" rel="noopener noreferrer"&gt;garbage collector in SSDs&lt;/a&gt; traverses blocks of pages with stale data and moves good data to other blocks (similar to the compaction phase in Java’s G1 collector). Second, the collector erases blocks that have only stale data left and makes those blocks available for future data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpaitt0kw32284kwe6cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpaitt0kw32284kwe6cb.png" alt="Image description" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Curious how SSD manufacturers prevent or minimize the number of “stop-the-world” pauses? There is a concept of SSD over-provisioning, whereby each device comes with extra space that is unavailable to your apps. That space is a sort of safety buffer that allows apps to continue writing or modifying data while the garbage collector erases stale data concurrently. Read more about over-provisioning &lt;a href="https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;So, next time someone asks you to explain the internals of Java garbage collection, go ahead and surprise them by expanding the topic to include databases and hardware. &lt;/p&gt;

&lt;p&gt;On a serious note, garbage collection is a widespread technique that is used far beyond the Java ecosystem. If implemented properly, garbage collection can simplify the architecture of software and hardware without hurting performance. Java, PostgreSQL, and SSDs are all good examples of products that successfully take advantage of garbage collection and still remain among the top products in their categories.&lt;/p&gt;

</description>
      <category>java</category>
      <category>postgres</category>
      <category>programming</category>
      <category>database</category>
    </item>
  </channel>
</rss>
