<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Stephen</title>
    <description>The latest articles on DEV Community by Stephen (@stephen84s).</description>
    <link>https://dev.to/stephen84s</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F359854%2Fd1d04d9b-8285-4aab-85a5-b34f41af0476.jpeg</url>
      <title>DEV Community: Stephen</title>
      <link>https://dev.to/stephen84s</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/stephen84s"/>
    <language>en</language>
    <item>
      <title>Working with Gigantic Google BigQuery Partitioned Tables in DBT</title>
      <dc:creator>Stephen</dc:creator>
      <pubDate>Fri, 20 Sep 2024 16:26:31 +0000</pubDate>
      <link>https://dev.to/stephen84s/working-with-google-bigquery-partitioned-tables-in-dbt-2dg0</link>
      <guid>https://dev.to/stephen84s/working-with-google-bigquery-partitioned-tables-in-dbt-2dg0</guid>
<description>&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;When working with large tables (think TBs), I ran into a very serious problem with my dbt models: we were scanning far too much data from the source tables!&lt;/p&gt;

&lt;p&gt;Despite:-&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Partitioning our tables by day&lt;/li&gt;
&lt;li&gt;Using incremental loading with the &lt;code&gt;is_incremental&lt;/code&gt; macro and filtering on the partition column in our queries&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;we were still scanning a whole lot of data (and getting billed for it as well).&lt;/p&gt;

&lt;p&gt;And that is how I embarked on my quest to optimise our dbt models and save some $$$ as well (honestly though, as per my boss, saving the $$$ was the main quest).&lt;/p&gt;

&lt;h2&gt;The Face Palm Moment&lt;/h2&gt;

&lt;p&gt;When building the initial models, I had expected our tables to grow pretty fast, so day partitioning based on ingestion time was already in place, along with incremental builds of the models. The code for those looked something like this:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{ config(
    materialized="incremental",
    partition_by={
        "field": "ingested_at",
        "data_type": "timestamp",
        "granularity": "day"
    }
}}
with max_ingested_this as (
    select max(ingested_at) as max_ingested_at
    from {{ this }}
),

source1 as (
    select *
    from {{ ref("source1") }} as s
    inner join max_ingested_this as mit
        on s.ingested_at &amp;gt; mit.ingested_at
)

......


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the surface that code looked OK, at least to me, two years ago. The idea was that an &lt;code&gt;ingested_at&lt;/code&gt; column exists in every table, and we just process the new data from the source and append it to our table.&lt;/p&gt;

&lt;p&gt;Except there is clear, succinctly written documentation on why the query above will not help BigQuery prune partitions&lt;sup&gt;*&lt;/sup&gt;&lt;sup&gt;&lt;a href="https://cloud.google.com/bigquery/docs/querying-partitioned-tables" rel="noopener noreferrer"&gt;1&lt;/a&gt;&lt;/sup&gt; on the &lt;code&gt;source1&lt;/code&gt; table (and no, I am not being sarcastic about how good the documentation is): to prune, BigQuery needs the filter on the partitioning column to be a constant expression, not a value that has to be computed at run time from a subquery or join.&lt;/p&gt;

&lt;p&gt;So, for partition pruning to happen, my query had to be something like this:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source1 as (
    select *
    from {{ ref("source1") }}
    ingested_at &amp;gt; "2012-10-10T13:00:00"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not sure how I missed this the first time around (or maybe the documentation was updated to make it clearer&lt;sup&gt;ø&lt;/sup&gt;), but it meant that the nice smart query I had written previously was truly dumb&lt;sup&gt;#&lt;/sup&gt;.&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;Once we had discovered the true cause of our queries scanning so much data (and gotten over a mini panic attack from thinking I had chosen the wrong tool for the job), we set out to figure out a solution.&lt;/p&gt;

&lt;p&gt;We needed the dates to be dynamic (derived at run time), yet static constants in the compiled query that is sent to BigQuery.&lt;/p&gt;

&lt;p&gt;And this is where one of my favourite &lt;code&gt;dbt-utils&lt;/code&gt; macros comes in, &lt;a href="https://github.com/dbt-labs/dbt-utils?tab=readme-ov-file#get_single_value-source" rel="noopener noreferrer"&gt;get_single_value&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So instead of getting the &lt;code&gt;max(ingested_at)&lt;/code&gt; in the query, we switch to something like this:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{% set sql_statement %}
    select max(ingested_at) from {{ this }}
{% endset %}

{%- set max_ingested_this = dbt_utils.get_single_value(sql_statement, default="'1970-01-01'") -%}

with source1 as (
    select *
    from {{ ref("source1") }}
    where ingested_at &amp;gt; {{ '"' ~ max_ingested_this ~ '"' }}
)

...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now on &lt;code&gt;dbt compile&lt;/code&gt; we will get the following output:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;with source1 as (
    select *
    from source1_bq_table
    where ingested_at &amp;gt; "2022-10-10T13:00:00"
)
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, when the final BigQuery job that processes your data is fired, its SQL contains a static date, which means BigQuery can now do partition pruning and scan only the relevant partitions for your job&lt;sup&gt;µ&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;The easiest way to see the reduction is to paste the generated SQL into BigQuery Studio and compare the estimated amount of data scanned (shown in the top right) for the new query versus the old one.&lt;/p&gt;

&lt;h2&gt;Notes&lt;/h2&gt;

&lt;p&gt;* - Partition pruning refers to BigQuery working out in advance, from your query, which partitions could contain the data you are interested in, and not even bothering to scan the data in the other partitions. The direct effect is less data scanned = faster query performance and/or a cheaper query.&lt;/p&gt;
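&lt;p&gt;To make that concrete, here is a minimal illustration (the &lt;code&gt;events&lt;/code&gt; and &lt;code&gt;watermarks&lt;/code&gt; tables and the date are made up): a constant filter on the partitioning column lets BigQuery prune, while a filter whose value must first be computed from another table does not.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- prunes: constant filter on the partitioning column
select * from events
where ingested_at &amp;gt; timestamp "2024-09-01"

-- does not prune: the filter value has to be computed at run time
select * from events
where ingested_at &amp;gt; (select max(ingested_at) from watermarks)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;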

&lt;p&gt;# - This is where I guess the ancient wisdom of the Gods applies: RTFM, and re-read it regularly; you might just stumble upon important new features or rediscover some old ones.&lt;/p&gt;

&lt;p&gt;ø - Just to make myself feel better, I am going to hope that documentation was &lt;em&gt;fixed&lt;/em&gt; after I read it or my queries worked perfectly at the time and the BigQuery team changed something :P&lt;/p&gt;

&lt;p&gt;&lt;u&gt;µ&lt;/u&gt; - Note the compromise here though: dbt now has to invoke two BigQuery jobs, the first to fetch &lt;code&gt;max_ingested_this&lt;/code&gt; at compile time and the second to actually build the model table in BigQuery, so ensure your model's tables are big enough to justify this performance penalty.&lt;br&gt;
If your table is big enough, you can probably take this a step further and use the BigQuery &lt;code&gt;INFORMATION_SCHEMA&lt;/code&gt; views to find the latest partition and query only that partition for the maximum value of the &lt;code&gt;ingested_at&lt;/code&gt; column.&lt;/p&gt;
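&lt;p&gt;That last idea could be sketched roughly like this (a sketch I have not battle-tested: the &lt;code&gt;my_project.my_dataset&lt;/code&gt; path is illustrative, and it assumes day granularity, where &lt;code&gt;partition_id&lt;/code&gt; has the form &lt;code&gt;YYYYMMDD&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{# Find the newest partition of this model via INFORMATION_SCHEMA.PARTITIONS,
   a metadata-only lookup, instead of scanning the whole table #}
{% set latest_partition_sql %}
    select max(partition_id)
    from `my_project.my_dataset.INFORMATION_SCHEMA.PARTITIONS`
    where table_name = '{{ this.identifier }}'
      and partition_id not in ('__NULL__', '__UNPARTITIONED__')
{% endset %}

{%- set latest_partition = dbt_utils.get_single_value(latest_partition_sql, default="19700101") -%}

{# Only the latest partition is then scanned to find the watermark #}
{% set max_ingested_sql %}
    select max(ingested_at)
    from {{ this }}
    where ingested_at &amp;gt;= parse_timestamp('%Y%m%d', '{{ latest_partition }}')
{% endset %}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;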

</description>
      <category>bigquery</category>
      <category>dbt</category>
      <category>dataengineering</category>
      <category>googlecloud</category>
    </item>
  </channel>
</rss>
