<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: DataEngBytes</title>
    <description>The latest articles on DEV Community by DataEngBytes (@dataengbytes).</description>
    <link>https://dev.to/dataengbytes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5530%2Fcea0663b-ea9c-491b-b001-1a2482835e42.png</url>
      <title>DEV Community: DataEngBytes</title>
      <link>https://dev.to/dataengbytes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dataengbytes"/>
    <language>en</language>
    <item>
      <title>Build a simple Data Lakehouse with Fivetran and Databricks</title>
      <dc:creator>Howard</dc:creator>
      <pubDate>Wed, 01 Jun 2022 04:51:56 +0000</pubDate>
      <link>https://dev.to/dataengbytes/build-a-simple-data-lakehouse-with-fivetran-and-databricks-14mn</link>
      <guid>https://dev.to/dataengbytes/build-a-simple-data-lakehouse-with-fivetran-and-databricks-14mn</guid>
      <description>&lt;p&gt;The Modern Data Stack (MDS) is gaining increasing popularity in recent years along with cloud computing. MDS put the cloud data warehouse such as Snowflake and Databricks at its core and uses modern data integration tools to load data into the cloud data warehouse.&lt;/p&gt;

&lt;p&gt;In this post, I will demonstrate how to use some emerging MDS tools on the market to build a simple data lakehouse. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tools I use are:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fivetran - a leading cloud-based data integration tool. It offers many connectors that connect to data sources and load the data into cloud data storage. &lt;/li&gt;
&lt;li&gt;Databricks - a well-known product in the AI/ML space for many years. In 2019 they launched Delta Lake, an open-source storage layer that you can build a data lakehouse on. &lt;/li&gt;
&lt;li&gt;Postgres - an open-source transactional database that I use as my data source. The source data for this demo is the Chinook database; you can find it &lt;a href="https://github.com/lerocha/chinook-database"&gt;here&lt;/a&gt;. I have launched an AWS RDS instance to host the database so that Fivetran can read the data. &lt;/li&gt;
&lt;li&gt;dbt - an open-source application to build the data models.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Task
&lt;/h2&gt;

&lt;p&gt;In this demo, I will convert the OLTP database into a star schema. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Source schema:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--udxrHjk7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zpe4bwa5af3kxgtn2u5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--udxrHjk7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zpe4bwa5af3kxgtn2u5t.png" alt="Image description" width="880" height="939"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The target schema:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ArIl8vsp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/omr2420wnf92j1h9hm69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ArIl8vsp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/omr2420wnf92j1h9hm69.png" alt="Image description" width="880" height="899"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lakehouse layers design
&lt;/h2&gt;

&lt;p&gt;The lakehouse contains the following layers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Landing Layer&lt;/strong&gt;&lt;br&gt;
The landing layer contains all the data loaded by Fivetran. Tables have additional metadata fields for tracking purposes. All data stays in its original format. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;History Layer&lt;/strong&gt;&lt;br&gt;
The history layer contains history tables built with the SCD2 methodology. Every table in the landing layer has a corresponding history table. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration Layer&lt;/strong&gt;&lt;br&gt;
The integration layer contains intermediate objects used for transformation. This is where you denormalize the source data and apply business rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Presentation Layer&lt;/strong&gt;&lt;br&gt;
The presentation layer keeps the transformed and business-friendly star schemas, in other words: the data warehouse. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WvnwXnAX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61eof67qf94kwjisosm7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WvnwXnAX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61eof67qf94kwjisosm7.png" alt="Image description" width="846" height="978"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The build
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Set up the environment&lt;/strong&gt; &lt;br&gt;
AWS RDS&lt;br&gt;
As mentioned before, I used AWS RDS to host a Postgres instance and created the Chinook database on it. The public access option is enabled so that Fivetran can reach the instance. &lt;/p&gt;

&lt;p&gt;Databricks&lt;br&gt;
First, create a cluster in your Databricks account (I am using Databricks on Azure). There are two cluster types; for this exercise, I created a single-node cluster. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HhHW0N-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/77x3a0q726aw7y8vpvq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HhHW0N-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/77x3a0q726aw7y8vpvq1.png" alt="Image description" width="728" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Use Fivetran to load the data into Databricks&lt;/strong&gt; &lt;br&gt;
Set up the destination&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--569eBBOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8zirolalzsfcnv3vrhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--569eBBOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8zirolalzsfcnv3vrhp.png" alt="Image description" width="880" height="347"&gt;&lt;/a&gt;&lt;br&gt;
The port and HTTP path can be found under the Databricks cluster attributes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up the connector&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the Postgres RDS instance. &lt;/li&gt;
&lt;li&gt;Set the schema prefix. I named it source_fivetran_pg, which indicates that the data in this schema is extracted by Fivetran and kept in its original format.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Test the connection&lt;/strong&gt; &lt;br&gt;
During the connection setup, Fivetran may prompt you to select a certificate; select the root-level certificate. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start the initial sync&lt;/strong&gt; &lt;br&gt;
Fivetran will start the initial sync once it can connect to the database. Select the tables or schemas you want to sync, then leave the rest to Fivetran.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yvDhBbrP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/faqkfq836bt6xofeaeqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yvDhBbrP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/faqkfq836bt6xofeaeqe.png" alt="Image description" width="678" height="886"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check your data on Databricks&lt;/strong&gt; &lt;br&gt;
Fivetran will create a new schema in Databricks called source_fivetran_pg_chinook_public, and all the tables I selected are in this schema. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1nNmf7t1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idn17cboz4174y63fn0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1nNmf7t1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idn17cboz4174y63fn0r.png" alt="Image description" width="508" height="820"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: build the history tables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fivetran creates two metadata columns in each source table: _fivetran_deleted and _fivetran_synced. &lt;br&gt;
The _fivetran_synced field contains the timestamp when Fivetran loaded the row into Databricks. I will use this column, together with the primary key of each table, to create the history tables. &lt;/p&gt;

&lt;p&gt;In dbt, create a snapshot model file and use the config block to define the target schema of the history table, the unique_key, the updated_at column and the file_format. The file_format parameter must be set to 'delta' so that dbt creates a Delta table in Databricks. &lt;/p&gt;

&lt;p&gt;Example of a snapshot model in dbt:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{% snapshot CHINOOK_GENRE %}
{{
   config(
     target_schema='dev_dlh_hist_chinook',
     unique_key='genreid',
     strategy='timestamp',
     updated_at='_FIVETRAN_SYNCED',
     file_format='delta',
   )
}}
select * from {{ source('chinook_landing', 'genre') }}
{% endsnapshot %}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then run the &lt;code&gt;dbt snapshot&lt;/code&gt; command. It will create the history table in the target schema.&lt;/p&gt;
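&lt;p&gt;The &lt;code&gt;source()&lt;/code&gt; call in the snapshot needs a matching sources definition. A minimal sketch, assuming the schema Fivetran created earlier (the file name and table list are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# models/sources.yml (sketch)
version: 2

sources:
  - name: chinook_landing
    schema: source_fivetran_pg_chinook_public
    tables:
      - name: genre
      - name: artist
&lt;/code&gt;&lt;/pre&gt;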

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ygVetkDO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwhphf2bnuase232vdg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ygVetkDO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwhphf2bnuase232vdg3.png" alt="Image description" width="880" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is an example of the history Artist table (dbt creates 4 additional metadata columns to maintain the history of each table)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FmVFxVQP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iavm9sq06fezxu7dmxdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FmVFxVQP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iavm9sq06fezxu7dmxdq.png" alt="Image description" width="880" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Integrate the data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After the history tables have been created, we can start to join the tables together, apply the business logic, and create the dimension and fact tables. &lt;/p&gt;

&lt;p&gt;The objects I created in this schema are views, not tables. They contain only the current values of each dimension and fact table. &lt;/p&gt;
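&lt;p&gt;As a sketch, one of these integration views could look like the following dbt model (the model, table and column names are illustrative; only &lt;code&gt;CHINOOK_GENRE&lt;/code&gt; is taken from the snapshot shown earlier):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- models/integration/int_track.sql (sketch)
{{ config(materialized='view') }}

select
    t.trackid,
    t.name as track_name,
    g.name as genre_name
from {{ ref('CHINOOK_TRACK') }} t
left join {{ ref('CHINOOK_GENRE') }} g
  on t.genreid = g.genreid
 and g.dbt_valid_to is null
where t.dbt_valid_to is null
&lt;/code&gt;&lt;/pre&gt;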

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KCp4_22W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xqj2fpxy9m5z57kpu8a6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KCp4_22W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xqj2fpxy9m5z57kpu8a6.png" alt="Image description" width="880" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: create the star schema&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, we can create the star schema. I use dbt's incremental functionality to build the tables. Each dimension table has start_timestamp and end_timestamp columns to maintain history. &lt;/p&gt;
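&lt;p&gt;A dimension model in this layer could be sketched as follows, assuming the snapshot validity columns are simply renamed to the start/end timestamps (the model name and incremental filter are illustrative, not the exact logic used in the build):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- models/presentation/dim_genre.sql (sketch)
{{
  config(
    materialized='incremental',
    file_format='delta'
  )
}}

select
    genreid        as genre_key,
    name           as genre_name,
    dbt_valid_from as start_timestamp,
    dbt_valid_to   as end_timestamp
from {{ ref('CHINOOK_GENRE') }}
{% if is_incremental() %}
where dbt_valid_from &gt; (select max(start_timestamp) from {{ this }})
{% endif %}
&lt;/code&gt;&lt;/pre&gt;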

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OvL93f8L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1feipnmvou9waws7d4ww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OvL93f8L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1feipnmvou9waws7d4ww.png" alt="Image description" width="880" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example of the dim_track table: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kn26v_98--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kp64t6skils5xhbhsccs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kn26v_98--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kp64t6skils5xhbhsccs.png" alt="Image description" width="880" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The MDS makes building a data lakehouse and ELT pipelines a lot easier. Fivetran is great for loading data from source systems into the cloud data warehouse, and Databricks is very powerful for inserting, selecting, and joining data. &lt;/p&gt;

</description>
      <category>databricks</category>
      <category>fivetran</category>
      <category>dbt</category>
      <category>mds</category>
    </item>
    <item>
      <title>Cloud gurus interaction with data engineers</title>
      <dc:creator>Andrew Perelson</dc:creator>
      <pubDate>Tue, 17 May 2022 03:57:22 +0000</pubDate>
      <link>https://dev.to/dataengbytes/cloud-gurus-interaction-with-data-engineers-1m16</link>
      <guid>https://dev.to/dataengbytes/cloud-gurus-interaction-with-data-engineers-1m16</guid>
      <description>&lt;p&gt;In my current role there is a constant interaction with, and need for, cloud techies that configure networking and resources in order to enable the functionality that I need to do my job. Any organisation will normally have restrictions in place where certain roles have the ability to create the resources needed by other roles. It would streamline the process if I had the security rights to simply go ahead and create what I need. But restrictions are put in place for a reason. They're good, but they're also restrictive!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>database</category>
      <category>devops</category>
    </item>
    <item>
      <title>Welcome, DataEngHack online!</title>
      <dc:creator>Peter Hanssens #BlackLivesMatter</dc:creator>
      <pubDate>Thu, 28 Apr 2022 01:57:00 +0000</pubDate>
      <link>https://dev.to/dataengbytes/welcome-dataenghack-online-300</link>
      <guid>https://dev.to/dataengbytes/welcome-dataenghack-online-300</guid>
      <description>&lt;p&gt;Hey folks,&lt;/p&gt;

&lt;p&gt;Peter Hanssens here... welcome to the online DataEngHack blogging competition that we are running in the month of May!&lt;/p&gt;

&lt;p&gt;This blogging competition is designed to get you hands on with some cutting edge data engineering technology and public exposure for your awesome work in the process. You will be featured on the &lt;a href="https://dataengconf.com.au/"&gt;DataEngAu&lt;/a&gt; website and you have the chance to win an awesome set of prizes. You can submit your blog right away and in order to win a prize, your blog must be submitted by the 31st of May.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prizes
&lt;/h2&gt;

&lt;p&gt;So first up, anyone who submits a blog (with a few caveats around it being an appropriate data engineering blog) will be sent a free DataEngHack t-shirt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0evi36Rh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y98ifi9pd00ut5ec747u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0evi36Rh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y98ifi9pd00ut5ec747u.png" alt="DataEngHack T-Shirt" width="591" height="842"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The top 10 blogs, as determined by weighted popularity, will each be given one of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;one of 5 Lego kits valued at around $100, including the Lego Vespa 125&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Zhamak Dehghani's book on Data Mesh&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technology sponsors
&lt;/h2&gt;

&lt;p&gt;This is a sponsored event, and as such we encourage participants to use at least one technology from our partners in their solution. These vendors are leaders in their space and often provide really fun and innovative ways of achieving great Data Engineering outcomes, so why not give them a go:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5eJlyUYk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rvkyhge1s2d9dryxb44e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5eJlyUYk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rvkyhge1s2d9dryxb44e.png" alt="DataEngHack Sponsors" width="880" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So our sponsors are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://datastax.com/"&gt;datastax&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://neo4j.com/"&gt;neo4j&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.snowflake.com/"&gt;snowflake&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fivetran.com/"&gt;fivetran&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://databricks.com/"&gt;databricks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/en-au/"&gt;microsoft azure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/"&gt;aws&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://imply.io/"&gt;imply&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to get involved
&lt;/h2&gt;

&lt;p&gt;See the comments below for the way to register!&lt;/p&gt;

&lt;p&gt;This blog was originally published on &lt;a href="https://dataengconf.com.au/blog/2022-04/welcome-to-dataenghack-online"&gt;DataEngAu&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>dataarchitecture</category>
      <category>dataops</category>
      <category>hackathon</category>
    </item>
  </channel>
</rss>
