<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abdelrahman Adnan</title>
    <description>The latest articles on DEV Community by Abdelrahman Adnan (@abdelrahman_adnan).</description>
    <link>https://dev.to/abdelrahman_adnan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3198006%2F5e4cd24f-14d6-40e8-82a3-bdf5e6cb60cd.png</url>
      <title>DEV Community: Abdelrahman Adnan</title>
      <link>https://dev.to/abdelrahman_adnan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abdelrahman_adnan"/>
    <language>en</language>
    <item>
      <title>Part 14 - Cloud Deployment and Lessons Learned ☁️</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:35:59 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-14-cloud-deployment-and-lessons-learned-2d4n</link>
      <guid>https://dev.to/abdelrahman_adnan/part-14-cloud-deployment-and-lessons-learned-2d4n</guid>
      <description>&lt;h1&gt;
  
  
  Part 14 - Cloud Deployment and Lessons Learned ☁️
&lt;/h1&gt;

&lt;p&gt;This final part continues from the local deployment story and closes the loop with the cloud architecture in &lt;a href="//../../../terraform/main.tf"&gt;terraform/main.tf&lt;/a&gt; and &lt;a href="//../../../terraform/user_data.sh.tftpl"&gt;terraform/user_data.sh.tftpl&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Terraform provisions
&lt;/h2&gt;

&lt;p&gt;The Terraform layer creates the cloud resources needed to run the project in AWS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an S3 bucket for data lake storage,&lt;/li&gt;
&lt;li&gt;an EC2 instance for Airflow and Superset,&lt;/li&gt;
&lt;li&gt;an EMR Serverless application for Spark,&lt;/li&gt;
&lt;li&gt;IAM roles and policies,&lt;/li&gt;
&lt;li&gt;and SSM parameters that publish runtime configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is enough to reproduce the same pipeline outside a local Docker environment.&lt;/p&gt;
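
&lt;p&gt;One concrete touchpoint is the SSM parameters: the running services read their runtime configuration from them instead of hard-coded values. A minimal sketch of that lookup with boto3, assuming a hypothetical parameter name rather than the project's actual key, could look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: read runtime configuration published via SSM.
# The parameter name below is an assumption, not the project's actual key.
import boto3


def read_bucket_name(region: str = "us-east-1") -&gt; str:
    ssm = boto3.client("ssm", region_name=region)
    response = ssm.get_parameter(Name="/air-quality/data-lake-bucket")
    return response["Parameter"]["Value"]


if __name__ == "__main__":
    print(read_bucket_name())
&lt;/code&gt;&lt;/pre&gt;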

&lt;h2&gt;
  
  
  Why the EC2 bootstrap matters
&lt;/h2&gt;

&lt;p&gt;The user data template clones the repository, writes the environment file, and starts Docker Compose on the instance. That keeps the cloud setup aligned with the local development workflow.&lt;/p&gt;

&lt;p&gt;The result is a single codebase that can run in two environments with minimal friction.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this project teaches
&lt;/h2&gt;

&lt;p&gt;This repository is a good Zoomcamp final project because it demonstrates the main ideas of modern data engineering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ingestion from external APIs,&lt;/li&gt;
&lt;li&gt;raw zone storage,&lt;/li&gt;
&lt;li&gt;batch transformation with Spark,&lt;/li&gt;
&lt;li&gt;warehouse loading,&lt;/li&gt;
&lt;li&gt;dbt modeling,&lt;/li&gt;
&lt;li&gt;dashboard automation,&lt;/li&gt;
&lt;li&gt;and cloud infrastructure as code.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A few lessons from the implementation
&lt;/h2&gt;

&lt;p&gt;A few practical lessons stand out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;keep environment-specific logic in one config layer,&lt;/li&gt;
&lt;li&gt;isolate API clients from orchestration code,&lt;/li&gt;
&lt;li&gt;use partitioned storage for time-based data,&lt;/li&gt;
&lt;li&gt;model analytics tables in dbt instead of ad hoc SQL,&lt;/li&gt;
&lt;li&gt;and automate the dashboard so the final result can be reproduced.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Closing note
&lt;/h2&gt;

&lt;p&gt;This final article closes the tutorial series. If someone reads the 14 parts in order, they should be able to understand the entire project from raw data collection to dashboard delivery.&lt;/p&gt;

&lt;p&gt;This series is now ready to publish as a continuous learning path, and the &lt;code&gt;data-engineering-zoomcamp&lt;/code&gt; tag appears in every article so the set stays grouped together.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dataengineering</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Part 13 - Local Development and Docker Compose 🐳</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:35:18 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-13-local-development-and-docker-compose-4h4d</link>
      <guid>https://dev.to/abdelrahman_adnan/part-13-local-development-and-docker-compose-4h4d</guid>
      <description>&lt;h1&gt;
  
  
  Part 13 - Local Development and Docker Compose 🐳
&lt;/h1&gt;

&lt;p&gt;This part continues from the Superset automation and explains how the repository is meant to run locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  The local stack
&lt;/h2&gt;

&lt;p&gt;The local environment is defined in &lt;a href="//../../../docker-compose.yml"&gt;docker-compose.yml&lt;/a&gt;. It brings up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL,&lt;/li&gt;
&lt;li&gt;Airflow webserver,&lt;/li&gt;
&lt;li&gt;Airflow scheduler,&lt;/li&gt;
&lt;li&gt;Airflow initialization,&lt;/li&gt;
&lt;li&gt;Superset initialization,&lt;/li&gt;
&lt;li&gt;and the Superset web server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That gives the project a full end-to-end development stack without needing AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Docker Compose is useful here
&lt;/h2&gt;

&lt;p&gt;Docker Compose makes the project easier to understand because every service is declared in one place. A reader can see immediately how Airflow connects to PostgreSQL and how Superset depends on the warehouse.&lt;/p&gt;

&lt;h2&gt;
  
  
  Makefile workflow
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;Makefile&lt;/code&gt; provides short commands for common actions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;make install&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;make local-up&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;make local-init&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;make generate-egypt-stations&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;make demo-run&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those shortcuts make for good developer ergonomics, even if some of the documentation still needs cleanup.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to notice in the local setup
&lt;/h2&gt;

&lt;p&gt;The project makes a few decisions that are worth learning from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the same code works locally and in cloud mode,&lt;/li&gt;
&lt;li&gt;the local container mounts the DAGs, scripts, and Spark jobs,&lt;/li&gt;
&lt;li&gt;the environment file drives runtime values,&lt;/li&gt;
&lt;li&gt;and the demo run can be triggered after the stack is ready.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;The final part explains the Terraform deployment, the EC2 bootstrap flow, and the lessons learned from building the project as a Zoomcamp final project.&lt;/p&gt;

&lt;p&gt;Continue to Part 14: Cloud Deployment and Lessons Learned.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>devops</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Part 12 - Superset Seeding and Dashboards 🎛️</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:34:47 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-12-superset-seeding-and-dashboards-4jmo</link>
      <guid>https://dev.to/abdelrahman_adnan/part-12-superset-seeding-and-dashboards-4jmo</guid>
      <description>&lt;h1&gt;
  
  
  Part 12 - Superset Seeding and Dashboards 🎛️
&lt;/h1&gt;

&lt;p&gt;This part continues from the warehouse models and explains &lt;a href="//../../../scripts/seed_superset_dashboard.py"&gt;scripts/seed_superset_dashboard.py&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this script exists
&lt;/h2&gt;

&lt;p&gt;The script automates the Superset setup so the dashboard can be recreated consistently instead of being assembled manually in the UI.&lt;/p&gt;

&lt;p&gt;That matters for a tutorial project because the dashboard becomes part of the codebase, not just a one-time hand-built artifact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Waiting for the warehouse
&lt;/h2&gt;

&lt;p&gt;The first important behavior is &lt;code&gt;wait_for_warehouse()&lt;/code&gt;. The script checks that the dbt tables exist and that the fact table contains data before seeding the dashboard.&lt;/p&gt;

&lt;p&gt;That avoids a common failure mode where a dashboard points to empty tables or missing datasets.&lt;/p&gt;
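
&lt;p&gt;A minimal sketch of that polling idea, assuming psycopg2 and that the marts land in the &lt;code&gt;airquality_dwh&lt;/code&gt; schema, could look like this (the real script differs in detail):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch of the "wait for the warehouse" idea: poll until the fact table
# exists and contains rows before seeding anything in Superset.
import time

import psycopg2


def wait_for_warehouse(dsn: str, attempts: int = 60, pause: int = 10) -&gt; None:
    for _ in range(attempts):
        try:
            with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
                cur.execute("SELECT count(*) FROM airquality_dwh.fact_air_quality")
                if cur.fetchone()[0] &gt; 0:
                    return
        except psycopg2.Error:
            pass  # table may not exist yet; keep waiting
        time.sleep(pause)
    raise TimeoutError("warehouse tables were not ready in time")
&lt;/code&gt;&lt;/pre&gt;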

&lt;h2&gt;
  
  
  Datasets and charts
&lt;/h2&gt;

&lt;p&gt;The script creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a physical dataset for &lt;code&gt;dim_station&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;multiple virtual SQL datasets for dashboard views,&lt;/li&gt;
&lt;li&gt;and a set of charts that are attached to those datasets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The virtual datasets are especially useful because they encode the exact business questions the dashboard is trying to answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dashboard layout
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;build_dashboard_layout()&lt;/code&gt; function constructs the nested layout structure that Superset expects. Then &lt;code&gt;ensure_dashboard()&lt;/code&gt; replaces or recreates the dashboard so the final result is deterministic.&lt;/p&gt;

&lt;p&gt;That is a neat pattern for automation because it keeps the dashboard in sync with the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to learn from this file
&lt;/h2&gt;

&lt;p&gt;This script is a strong example of how to treat analytics delivery as code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build datasets from warehouse tables,&lt;/li&gt;
&lt;li&gt;define charts in Python,&lt;/li&gt;
&lt;li&gt;assemble the dashboard layout programmatically,&lt;/li&gt;
&lt;li&gt;and keep the output repeatable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;The next part turns to the local developer experience: Docker Compose, the Makefile, and how the project is meant to be run on a laptop.&lt;/p&gt;

&lt;p&gt;Continue to Part 13: Local Development and Docker Compose.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>automation</category>
      <category>dataengineering</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Part 11 - Dimensions and Fact Table 📊</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:34:23 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-11-dimensions-and-fact-table-9cn</link>
      <guid>https://dev.to/abdelrahman_adnan/part-11-dimensions-and-fact-table-9cn</guid>
      <description>&lt;h1&gt;
  
  
  Part 11 - Dimensions and Fact Table 📊
&lt;/h1&gt;

&lt;p&gt;This part continues from the base model and explains the mart layer in &lt;a href="//../../../dags/air_quality_dbt/models/marts/"&gt;dags/air_quality_dbt/models/marts/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the mart layer exists
&lt;/h2&gt;

&lt;p&gt;The mart layer is where the analytics shape becomes obvious. Instead of keeping everything in one large staging table, the project splits the data into a star-schema-style layout.&lt;/p&gt;

&lt;p&gt;That makes downstream querying simpler and more efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  The dimension tables
&lt;/h2&gt;

&lt;p&gt;The project creates two dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;dim_station&lt;/code&gt; from &lt;a href="//../../../dags/air_quality_dbt/models/marts/dim_station.sql"&gt;dim_station.sql&lt;/a&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dim_sensor&lt;/code&gt; from &lt;a href="//../../../dags/air_quality_dbt/models/marts/dim_sensor.sql"&gt;dim_sensor.sql&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are deduplicated reference tables. Each one extracts the stable descriptive fields that are useful for analysis and dashboard filtering.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fact table
&lt;/h2&gt;

&lt;p&gt;The main analytical table is &lt;code&gt;fact_air_quality&lt;/code&gt;, defined in &lt;a href="//../../../dags/air_quality_dbt/models/marts/fact_air_quality.sql"&gt;fact_air_quality.sql&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This table keeps the actual readings and the relevant weather context in one place. That is why the dashboard queries can stay straightforward.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the star schema is useful here
&lt;/h2&gt;

&lt;p&gt;The warehouse design intentionally denormalizes some location fields into the fact table. That reduces the number of joins needed for common dashboard questions while still keeping clean dimension tables for reference and filtering.&lt;/p&gt;

&lt;p&gt;For a tutorial project, this is a solid middle ground between realism and simplicity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tests on the mart layer
&lt;/h2&gt;

&lt;p&gt;The mart schema tests check the key constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;station_id&lt;/code&gt; should be unique and not null in &lt;code&gt;dim_station&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sensor_id&lt;/code&gt; should be unique and not null in &lt;code&gt;dim_sensor&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;&lt;code&gt;station_id&lt;/code&gt; and &lt;code&gt;sensor_id&lt;/code&gt; should not be null in &lt;code&gt;fact_air_quality&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That keeps the model graph honest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;The next part moves from warehouse modeling into the visualization layer and explains how Superset is seeded with datasets, charts, and a dashboard layout.&lt;/p&gt;

&lt;p&gt;Continue to Part 12: Superset Seeding and Dashboards.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>database</category>
      <category>dataengineering</category>
      <category>sql</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Part 10 - Base Model and Data Quality ✅</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:33:58 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-10-base-model-and-data-quality-54b9</link>
      <guid>https://dev.to/abdelrahman_adnan/part-10-base-model-and-data-quality-54b9</guid>
      <description>&lt;h1&gt;
  
  
  Part 10 - Base Model and Data Quality ✅
&lt;/h1&gt;

&lt;p&gt;This part continues from the dbt setup and looks at the base layer in &lt;a href="//../../../dags/air_quality_dbt/models/base/base_air_quality.sql"&gt;dags/air_quality_dbt/models/base/base_air_quality.sql&lt;/a&gt; and &lt;a href="//../../../dags/air_quality_dbt/models/base/schema.yml"&gt;dags/air_quality_dbt/models/base/schema.yml&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The base model
&lt;/h2&gt;

&lt;p&gt;The base model is a view over the staging table. It selects the core fields needed by downstream models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;station identifiers,&lt;/li&gt;
&lt;li&gt;sensor identifiers,&lt;/li&gt;
&lt;li&gt;measurement values,&lt;/li&gt;
&lt;li&gt;coordinates,&lt;/li&gt;
&lt;li&gt;weather context,&lt;/li&gt;
&lt;li&gt;and time partitions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layer is the place where the project standardizes the source before turning it into marts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a view makes sense here
&lt;/h2&gt;

&lt;p&gt;A base view is a good fit because it avoids copying data unnecessarily while still giving dbt a clean object to reference.&lt;/p&gt;

&lt;p&gt;That means the warehouse load handles physical persistence, and dbt handles logical modeling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data quality checks
&lt;/h2&gt;

&lt;p&gt;The schema file adds simple but important tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;station_id&lt;/code&gt; should not be null,&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sensor_id&lt;/code&gt; should not be null,&lt;/li&gt;
&lt;li&gt;&lt;code&gt;target_country_name&lt;/code&gt; should not be null.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tests are not complicated, but they help catch broken ingestion or malformed records early.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this teaches
&lt;/h2&gt;

&lt;p&gt;This is a good example of how data quality in dbt starts small:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;declare the source,&lt;/li&gt;
&lt;li&gt;expose a clean base model,&lt;/li&gt;
&lt;li&gt;assert the essential keys,&lt;/li&gt;
&lt;li&gt;and let the downstream marts depend on the trusted layer.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;The next part explains the mart tables themselves and shows how the project separates stations, sensors, and the final fact table.&lt;/p&gt;

&lt;p&gt;Continue to Part 11: Dimensions and Fact Table.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>data</category>
      <category>dataengineering</category>
      <category>sql</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Part 9 - dbt Project Setup and Contracts 🧱</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:33:36 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-9-dbt-project-setup-and-contracts-13m2</link>
      <guid>https://dev.to/abdelrahman_adnan/part-9-dbt-project-setup-and-contracts-13m2</guid>
      <description>&lt;h1&gt;
  
  
  Part 9 - dbt Project Setup and Contracts 🧱
&lt;/h1&gt;

&lt;p&gt;This part continues from the warehouse load and looks at the dbt project under &lt;a href="//../../../dags/air_quality_dbt/"&gt;dags/air_quality_dbt/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What dbt is doing here
&lt;/h2&gt;

&lt;p&gt;In this repository, dbt is the modeling layer that turns the loaded staging table into structured analytics tables.&lt;/p&gt;

&lt;p&gt;The main project file, &lt;a href="//../../../dags/air_quality_dbt/dbt_project.yml"&gt;dbt_project.yml&lt;/a&gt;, defines the model folders and default materializations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;base models become views,&lt;/li&gt;
&lt;li&gt;mart models become tables.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That split is intentional and easy to reason about.&lt;/p&gt;

&lt;h2&gt;
  
  
  The dbt profile
&lt;/h2&gt;

&lt;p&gt;The profile in &lt;a href="//../../../dags/air_quality_dbt/profiles.yml"&gt;profiles.yml&lt;/a&gt; connects dbt to PostgreSQL using environment variables. That means the same project works in a containerized local environment and in a cloud runtime where the connection values are injected differently.&lt;/p&gt;

&lt;h2&gt;
  
  
  The source contract
&lt;/h2&gt;

&lt;p&gt;The base model starts from the source &lt;code&gt;airquality_dwh.stg_air_quality&lt;/code&gt;. That source declaration creates a clear contract: dbt expects the warehouse load step to create and populate the staging table before modeling begins.&lt;/p&gt;

&lt;p&gt;This is a useful teaching point because it shows how data contracts are formed in practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model layers
&lt;/h2&gt;

&lt;p&gt;The project is split into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;base models that clean or standardize the source,&lt;/li&gt;
&lt;li&gt;mart models that reshape the data for analysis,&lt;/li&gt;
&lt;li&gt;and schema tests that validate important fields.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That organization matches the way many production dbt projects are structured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;In the next part, I will walk through the base model and schema tests so you can see how the loaded warehouse table becomes a clean dbt source for the marts.&lt;/p&gt;

&lt;p&gt;Continue to Part 10: Base Model and Data Quality.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>database</category>
      <category>dataengineering</category>
      <category>postgres</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Part 8 - Staging Load into Postgres 🗃️</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:33:11 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-8-staging-load-into-postgres-1pl</link>
      <guid>https://dev.to/abdelrahman_adnan/part-8-staging-load-into-postgres-1pl</guid>
      <description>&lt;h1&gt;
  
  
  Part 8 - Staging Load into Postgres 🗃️
&lt;/h1&gt;

&lt;p&gt;This part continues from the Spark transform and explains how the parquet output is loaded into PostgreSQL in &lt;a href="//../../../dags/staging_load_dag.py"&gt;dags/staging_load_dag.py&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this DAG is responsible for
&lt;/h2&gt;

&lt;p&gt;The staging load DAG takes the transformed parquet files and inserts them into the warehouse table that dbt will use as its source.&lt;/p&gt;

&lt;p&gt;Its responsibilities are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;read the staging parquet for the requested run hour,&lt;/li&gt;
&lt;li&gt;normalize a few timestamp columns,&lt;/li&gt;
&lt;li&gt;infer reasonable PostgreSQL column types,&lt;/li&gt;
&lt;li&gt;create the schema and table if needed,&lt;/li&gt;
&lt;li&gt;bulk insert the data,&lt;/li&gt;
&lt;li&gt;and trigger the dbt DAG afterward.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Local and cloud reads
&lt;/h2&gt;

&lt;p&gt;The load code can read data from local parquet paths or from S3 using awswrangler. That mirrors the same local/cloud split used elsewhere in the project.&lt;/p&gt;

&lt;p&gt;This is a good example of how to keep warehouse loading logic environment-agnostic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Type preparation
&lt;/h2&gt;

&lt;p&gt;The helper functions in this file convert dataframe columns into PostgreSQL-safe values. The code infers types such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BOOLEAN,&lt;/li&gt;
&lt;li&gt;BIGINT,&lt;/li&gt;
&lt;li&gt;DOUBLE PRECISION,&lt;/li&gt;
&lt;li&gt;TIMESTAMP,&lt;/li&gt;
&lt;li&gt;and TEXT.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That keeps the load process flexible without requiring a large manual schema file for the staging table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bulk insertion
&lt;/h2&gt;

&lt;p&gt;Instead of inserting row by row, the code uses &lt;code&gt;execute_values()&lt;/code&gt; from psycopg2. That is much faster and is the right approach for a batch warehouse load.&lt;/p&gt;

&lt;p&gt;The target table is created in the &lt;code&gt;airquality_dwh&lt;/code&gt; schema, and the inserted table is &lt;code&gt;stg_air_quality&lt;/code&gt;.&lt;/p&gt;
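
&lt;p&gt;A simplified sketch of that bulk load, with the type inference reduced to TEXT columns so the example stays short, might look like this (not the DAG's exact code):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simplified sketch of the bulk-load step: create the staging table if needed,
# then insert the dataframe rows in one batched call with execute_values.
import pandas as pd
import psycopg2
from psycopg2.extras import execute_values


def load_staging(df: pd.DataFrame, dsn: str) -&gt; None:
    columns = list(df.columns)
    col_list = ", ".join(columns)
    # The real DAG infers BOOLEAN/BIGINT/DOUBLE PRECISION/TIMESTAMP/TEXT per column;
    # TEXT keeps this sketch short.
    cols_ddl = ", ".join(f"{c} TEXT" for c in columns)
    rows = [tuple(r) for r in df.itertuples(index=False, name=None)]
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("CREATE SCHEMA IF NOT EXISTS airquality_dwh")
        cur.execute(f"CREATE TABLE IF NOT EXISTS airquality_dwh.stg_air_quality ({cols_ddl})")
        execute_values(
            cur,
            f"INSERT INTO airquality_dwh.stg_air_quality ({col_list}) VALUES %s",
            rows,
        )
&lt;/code&gt;&lt;/pre&gt;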

&lt;h2&gt;
  
  
  Why this design works well for the tutorial
&lt;/h2&gt;

&lt;p&gt;This step shows a simple but realistic warehouse loading pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;transform data into parquet first,&lt;/li&gt;
&lt;li&gt;bulk load the warehouse from parquet,&lt;/li&gt;
&lt;li&gt;then let dbt build the analytical models.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That separation is cleaner than trying to do everything inside one SQL script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;Next, I will explain the dbt project setup, including how the warehouse source is declared and how the model graph is organized.&lt;/p&gt;

&lt;p&gt;Continue to Part 9: dbt Project Setup and Contracts.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>database</category>
      <category>dataengineering</category>
      <category>postgres</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Part 7 - Spark Transform Local vs Cloud ⚡</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:32:24 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-7-spark-transform-local-vs-cloud-45l7</link>
      <guid>https://dev.to/abdelrahman_adnan/part-7-spark-transform-local-vs-cloud-45l7</guid>
      <description>&lt;h1&gt;
  
  
  Part 7 - Spark Transform Local vs Cloud ⚡
&lt;/h1&gt;

&lt;p&gt;This part continues from the API client layer and explains the transformation job in &lt;a href="//../../../spark_jobs/air_quality_to_parquet.py"&gt;spark_jobs/air_quality_to_parquet.py&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Spark job does
&lt;/h2&gt;

&lt;p&gt;The job reads raw OpenAQ and weather JSON, flattens nested structures, joins the datasets, and writes parquet into a staging layer partitioned by time.&lt;/p&gt;

&lt;p&gt;That is the classic lakehouse-style move from raw JSON to structured analytics data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local versus cloud execution
&lt;/h2&gt;

&lt;p&gt;The job can run in two different environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;locally with &lt;code&gt;SparkSession.builder.master("local[*]")&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;or in the cloud through EMR Serverless.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The path resolution logic in &lt;code&gt;resolve_paths()&lt;/code&gt; is what makes that possible. In local mode it reads and writes from the filesystem. In cloud mode it uses the bucket name pulled from SSM and points Spark to S3 locations instead.&lt;/p&gt;
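
&lt;p&gt;A hedged sketch of that resolution logic, with the Hive-style partition naming and the environment variables as assumptions, could look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch of local-versus-cloud path resolution for the Spark job.
# The partition naming and environment variables are assumptions, not exact code.
import os


def resolve_paths(year: int, month: int, day: int, hour: int) -&gt; tuple[str, str]:
    partition = f"year={year}/month={month:02d}/day={day:02d}/hour={hour:02d}"
    if os.getenv("PIPELINE_ENV", "local") == "local":
        return f"local_data/raw/{partition}", f"local_data/staging/{partition}"
    # In cloud mode the real job pulls the bucket name from SSM;
    # an environment variable keeps this sketch self-contained.
    bucket = os.environ["DATA_LAKE_BUCKET"]
    return f"s3://{bucket}/raw/{partition}", f"s3://{bucket}/staging/{partition}"
&lt;/code&gt;&lt;/pre&gt;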

&lt;h2&gt;
  
  
  Flattening the raw payloads
&lt;/h2&gt;

&lt;p&gt;The Spark code expands nested arrays and structs to create a row-per-reading structure. The important pieces are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;air quality readings are exploded from the &lt;code&gt;results&lt;/code&gt; array,&lt;/li&gt;
&lt;li&gt;station metadata is exploded from the station sample,&lt;/li&gt;
&lt;li&gt;weather fields are selected from the current conditions payload,&lt;/li&gt;
&lt;li&gt;and the two data sets are joined on station and hour.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That join is where the project starts to look like an analytics pipeline instead of a raw ingestion job.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schema stability
&lt;/h2&gt;

&lt;p&gt;The job explicitly casts columns into stable types before writing parquet. That protects downstream consumers from schema drift and helps the warehouse load stay predictable.&lt;/p&gt;

&lt;p&gt;This is a very useful lesson: in data engineering, the output contract is often more important than the implementation details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why partitioning matters
&lt;/h2&gt;

&lt;p&gt;The final write uses &lt;code&gt;partitionBy("year", "month", "day", "hour")&lt;/code&gt;. That keeps the staging layer aligned with the raw layer and makes time-based reads efficient.&lt;/p&gt;
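
&lt;p&gt;In PySpark terms, the final write has roughly this shape (the dataframe below is a tiny stand-in for the flattened, joined output):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Rough shape of the partitioned parquet write described above.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("air-quality-transform").getOrCreate()

# Stand-in for the flattened, joined dataframe built earlier in the job.
df = spark.createDataFrame(
    [("EG-001", 12.5, 2026, 4, 21, 0)],
    ["station_id", "value", "year", "month", "day", "hour"],
)

(
    df.write
    .mode("overwrite")
    .partitionBy("year", "month", "day", "hour")
    .parquet("local_data/staging/air_quality")
)
&lt;/code&gt;&lt;/pre&gt;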

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;The next part explains how the staging parquet lands in PostgreSQL and how the pipeline keeps the warehouse tables available for dbt and Superset.&lt;/p&gt;

&lt;p&gt;Continue to Part 8: Staging Load into Postgres.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>dataengineering</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Part 6 - API Client Design and Reliability 🔁</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:31:48 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-6-api-client-design-and-reliability-2ojp</link>
      <guid>https://dev.to/abdelrahman_adnan/part-6-api-client-design-and-reliability-2ojp</guid>
      <description>&lt;h1&gt;
  
  
  Part 6 - API Client Design and Reliability 🔁
&lt;/h1&gt;

&lt;p&gt;This part continues from the ingestion DAG and explains the reusable client functions in &lt;a href="//../../../dags/air_quality_fetchers.py"&gt;dags/air_quality_fetchers.py&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the API layer is separated
&lt;/h2&gt;

&lt;p&gt;Keeping API logic out of the DAG file makes the code easier to test and easier to reuse. The DAG can focus on scheduling and control flow while the fetcher module handles HTTP details.&lt;/p&gt;

&lt;p&gt;That separation is a small design choice, but it matters when the project grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenWeather air quality data
&lt;/h2&gt;

&lt;p&gt;The function &lt;code&gt;fetch_openweather_air_quality()&lt;/code&gt; queries the OpenWeather air pollution endpoint using the station coordinates. It then reshapes the response into the ingestion format expected by downstream code.&lt;/p&gt;

&lt;p&gt;That normalization step is important because the downstream Spark job expects a consistent structure, not a raw vendor payload.&lt;/p&gt;

&lt;h2&gt;
  
  
  Weather fallback behavior
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;fetch_weather()&lt;/code&gt; function prefers the OpenWeather One Call API, but it falls back to the legacy weather endpoint when the primary request is unauthorized or unavailable.&lt;/p&gt;

&lt;p&gt;That is a practical resilience pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;try the richer endpoint first,&lt;/li&gt;
&lt;li&gt;fall back to a simpler endpoint,&lt;/li&gt;
&lt;li&gt;keep the payload shape stable after normalization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Retry strategy
&lt;/h2&gt;

&lt;p&gt;The module also configures a requests session with retry handling for transient failures such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;429 rate limits,&lt;/li&gt;
&lt;li&gt;500-level server errors,&lt;/li&gt;
&lt;li&gt;and similar temporary issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means the ingestion layer is not just making one-off calls. It is designed to survive short-term API instability.&lt;/p&gt;
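
&lt;p&gt;As a sketch of that kind of session setup (the exact retry counts and backoff are assumptions), the standard requests/urllib3 pattern looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch of a requests session with retry handling for transient API failures.
# The retry counts and backoff factor are assumptions, not the module's exact settings.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def build_session() -&gt; requests.Session:
    retry = Retry(
        total=5,
        backoff_factor=1,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET"],
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session
&lt;/code&gt;&lt;/pre&gt;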

&lt;h2&gt;
  
  
  Why the normalized shape matters
&lt;/h2&gt;

&lt;p&gt;The fetchers emit a payload that contains a &lt;code&gt;results&lt;/code&gt; array with station id, sensor id, value, timestamp, and coordinates. That shape is intentionally simple so the Spark job and the raw storage layer can process it with minimal special handling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lesson from this module
&lt;/h2&gt;

&lt;p&gt;The main lesson is that reliable ingestion is not only about calling an API. It is about shaping the response into something downstream systems can trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;The next part moves into the transformation stage and shows how the same data becomes partitioned parquet through Spark, both locally and on EMR Serverless.&lt;/p&gt;

&lt;p&gt;Continue to Part 7: Spark Transform Local vs Cloud.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>api</category>
      <category>architecture</category>
      <category>dataengineering</category>
      <category>python</category>
    </item>
    <item>
      <title>Part 5 - Ingestion DAG and Raw Storage 📥</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:31:23 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-5-ingestion-dag-and-raw-storage-3lci</link>
      <guid>https://dev.to/abdelrahman_adnan/part-5-ingestion-dag-and-raw-storage-3lci</guid>
      <description>&lt;h1&gt;
  
  
  Part 5 - Ingestion DAG and Raw Storage 📥
&lt;/h1&gt;

&lt;p&gt;This part continues from the runtime config and looks at the first real Airflow DAG in the chain: &lt;a href="//../../../dags/api_ingestion_dag.py"&gt;dags/api_ingestion_dag.py&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the DAG does
&lt;/h2&gt;

&lt;p&gt;The ingestion DAG runs every three minutes. Its job is to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;load the station sample,&lt;/li&gt;
&lt;li&gt;pick one or more stations for the current interval,&lt;/li&gt;
&lt;li&gt;fetch OpenAQ and OpenWeather payloads,&lt;/li&gt;
&lt;li&gt;save those payloads as raw JSON,&lt;/li&gt;
&lt;li&gt;and trigger the next DAG in the chain.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is the point where the project stops being a bootstrap script and becomes a scheduled pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  How station rotation works
&lt;/h2&gt;

&lt;p&gt;Instead of hitting all stations every time, the DAG rotates through the sample using the current data interval. That gives the project a simple fairness mechanism:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;different stations are chosen on different runs,&lt;/li&gt;
&lt;li&gt;API usage is spread across the sample,&lt;/li&gt;
&lt;li&gt;and the same DAG can keep running without manual intervention.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This logic is handled in &lt;code&gt;run_ingestion()&lt;/code&gt;.&lt;/p&gt;
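
&lt;p&gt;As a simplified stand-in for what &lt;code&gt;run_ingestion()&lt;/code&gt; does (not the actual implementation), interval-based rotation can be sketched like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simplified sketch of interval-based station rotation: each scheduled run maps
# to a different slice of the station sample, wrapping around with modulo.
from datetime import datetime


def pick_stations(stations: list[dict], interval_start: datetime, per_run: int = 1) -&gt; list[dict]:
    if not stations:
        return []
    # Every 3-minute interval advances the starting offset in the sample.
    run_index = int(interval_start.timestamp() // (3 * 60))
    start = (run_index * per_run) % len(stations)
    return [stations[(start + i) % len(stations)] for i in range(per_run)]
&lt;/code&gt;&lt;/pre&gt;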

&lt;h2&gt;
  
  
  Raw storage layout
&lt;/h2&gt;

&lt;p&gt;The helper &lt;code&gt;save_to_storage()&lt;/code&gt; writes payloads using the same partition logic in both modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;local mode writes JSON into &lt;code&gt;local_data/raw/...&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;cloud mode writes JSON into S3 under &lt;code&gt;raw/...&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The directory structure is time-partitioned by year, month, day, and hour. That makes it easy for the Spark job to read a specific window later.&lt;/p&gt;

&lt;h2&gt;
  
  
  DAG to DAG orchestration
&lt;/h2&gt;

&lt;p&gt;At the end of ingestion, the DAG uses &lt;code&gt;TriggerDagRunOperator&lt;/code&gt; to start the transform DAG. That is a useful Airflow pattern because each stage can stay focused on one responsibility while still being chained in order.&lt;/p&gt;
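
&lt;p&gt;In Airflow terms, the hand-off looks roughly like this (the DAG and task ids below are illustrative, not necessarily the project's exact names):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative hand-off from the ingestion DAG to the transform DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

with DAG(dag_id="example_ingestion", start_date=datetime(2026, 1, 1), schedule=None) as dag:
    trigger_transform = TriggerDagRunOperator(
        task_id="trigger_transform_dag",
        trigger_dag_id="air_quality_transform",  # assumed id of the next stage
    )
&lt;/code&gt;&lt;/pre&gt;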

&lt;h2&gt;
  
  
  Why this is a good learning example
&lt;/h2&gt;

&lt;p&gt;This file demonstrates several pipeline ideas in a small space:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scheduling,&lt;/li&gt;
&lt;li&gt;retry behavior,&lt;/li&gt;
&lt;li&gt;deterministic station rotation,&lt;/li&gt;
&lt;li&gt;raw-zone storage,&lt;/li&gt;
&lt;li&gt;and downstream triggering.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are learning Airflow, this is a good pattern to study because it keeps orchestration readable instead of turning the DAG into a giant script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;The next part zooms in on the API clients themselves so you can see how the project handles retries, normalization, and fallback behavior before data reaches the raw layer.&lt;/p&gt;

&lt;p&gt;Continue to Part 6: API Client Design and Reliability.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>automation</category>
      <category>dataengineering</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Part 4 - Airflow Runtime and Shared Config ⚙️</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:30:47 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-4-airflow-runtime-and-shared-config-48lg</link>
      <guid>https://dev.to/abdelrahman_adnan/part-4-airflow-runtime-and-shared-config-48lg</guid>
      <description>&lt;h1&gt;
  
  
  Part 4 - Airflow Runtime and Shared Config ⚙️
&lt;/h1&gt;

&lt;p&gt;This part continues from the bootstrap logic and explains the configuration layer that keeps the rest of the codebase portable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The role of pipeline_config.py
&lt;/h2&gt;

&lt;p&gt;The file &lt;a href="//../../../dags/pipeline_config.py"&gt;dags/pipeline_config.py&lt;/a&gt; is the central runtime configuration module. It decides whether the project is running locally or in cloud mode and exposes the paths and credentials the other modules need.&lt;/p&gt;

&lt;p&gt;That is a clean design because it avoids repeating environment logic in every DAG or script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local versus cloud behavior
&lt;/h2&gt;

&lt;p&gt;The first important flag is &lt;code&gt;PIPELINE_ENV&lt;/code&gt;. When it is set to &lt;code&gt;local&lt;/code&gt;, the project uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;local filesystem storage,&lt;/li&gt;
&lt;li&gt;local parquet directories,&lt;/li&gt;
&lt;li&gt;Dockerized Postgres,&lt;/li&gt;
&lt;li&gt;and local Spark execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When it is not local, the same code paths shift toward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 for raw and staging data,&lt;/li&gt;
&lt;li&gt;AWS region-based clients,&lt;/li&gt;
&lt;li&gt;and cloud runtime configuration such as SSM parameters.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Paths and partition helpers
&lt;/h2&gt;

&lt;p&gt;The module also creates and manages the local data tree:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;raw data under &lt;code&gt;local_data/raw&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;staging data under &lt;code&gt;local_data/staging&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;configuration under &lt;code&gt;local_data/config&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;and logs under &lt;code&gt;local_data/logs&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two helpers are especially important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;local_raw_path()&lt;/code&gt; builds the raw JSON file path by prefix, station, and timestamp.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;local_staging_path()&lt;/code&gt; builds the parquet partition path in a year/month/day/hour layout.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those helpers define the physical layout used by both the ingestion and transformation stages.&lt;/p&gt;
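
&lt;p&gt;A sketch of what those helpers might produce (the exact signatures and the Hive-style &lt;code&gt;year=/month=&lt;/code&gt; naming are assumptions):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch of the partition-path helpers; the real module exposes similar functions,
# but the exact signatures and file naming here are assumptions.
from datetime import datetime
from pathlib import Path

LOCAL_DATA = Path("local_data")


def local_raw_path(prefix: str, station_id: str, ts: datetime) -&gt; Path:
    return (
        LOCAL_DATA / "raw" / prefix
        / f"year={ts:%Y}" / f"month={ts:%m}" / f"day={ts:%d}" / f"hour={ts:%H}"
        / f"{station_id}_{ts:%Y%m%dT%H%M%S}.json"
    )


def local_staging_path(ts: datetime) -&gt; Path:
    return (
        LOCAL_DATA / "staging"
        / f"year={ts:%Y}" / f"month={ts:%m}" / f"day={ts:%d}" / f"hour={ts:%H}"
    )
&lt;/code&gt;&lt;/pre&gt;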

&lt;h2&gt;
  
  
  Why this module is worth copying in other projects
&lt;/h2&gt;

&lt;p&gt;This file is small, but it is doing real platform work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it standardizes runtime settings,&lt;/li&gt;
&lt;li&gt;it creates expected directories early,&lt;/li&gt;
&lt;li&gt;it keeps the path logic consistent,&lt;/li&gt;
&lt;li&gt;and it reduces duplication across DAGs and scripts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are building your own project, this is the kind of module that saves you time once the pipeline grows beyond a few files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next step
&lt;/h2&gt;

&lt;p&gt;Now that the shared config is clear, the next article explains the ingestion DAG that uses it: how the pipeline fetches station data, stores raw JSON, and triggers the transformation job.&lt;/p&gt;

&lt;p&gt;Continue to Part 5: Ingestion DAG and Raw Storage.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>automation</category>
      <category>dataengineering</category>
      <category>python</category>
    </item>
    <item>
      <title>Part 3 - Station Sampling and Cache Building 🗂️</title>
      <dc:creator>Abdelrahman Adnan</dc:creator>
      <pubDate>Tue, 21 Apr 2026 00:29:41 +0000</pubDate>
      <link>https://dev.to/abdelrahman_adnan/part-3-station-sampling-and-cache-building-22ei</link>
      <guid>https://dev.to/abdelrahman_adnan/part-3-station-sampling-and-cache-building-22ei</guid>
      <description>&lt;h1&gt;
  
  
  Part 3 - Station Sampling and Cache Building 🗂️
&lt;/h1&gt;

&lt;p&gt;This part continues from the data source overview and focuses on the bootstrap script that prepares the station list used by ingestion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this script exists
&lt;/h2&gt;

&lt;p&gt;The pipeline needs a stable list of stations before Airflow starts fetching readings. Rather than hard-coding stations manually, &lt;a href="//../../../scripts/build_station_sample.py"&gt;scripts/build_station_sample.py&lt;/a&gt; discovers them from OpenAQ and stores the result in a cached JSON file.&lt;/p&gt;

&lt;p&gt;That gives the project a real-world bootstrap pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;discover reference data,&lt;/li&gt;
&lt;li&gt;normalize it,&lt;/li&gt;
&lt;li&gt;cache it,&lt;/li&gt;
&lt;li&gt;and reuse it from the DAG.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How the script works
&lt;/h2&gt;

&lt;p&gt;The script is organized into a few focused functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;resolve_country_id()&lt;/code&gt; finds the OpenAQ country id for Egypt.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_fetch_locations()&lt;/code&gt; retrieves station records with retry handling.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;enrich_station_sample()&lt;/code&gt; adds normalized country metadata.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;load_or_fetch_station_sample()&lt;/code&gt; prefers the local cache when it is already valid.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;save_to_storage()&lt;/code&gt; writes the sample either to local disk or to S3.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That structure is easy to follow because each function has one responsibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  The caching behavior
&lt;/h2&gt;

&lt;p&gt;Caching is important here because the station set does not need to be rebuilt every time the pipeline runs. The script checks whether a local cache already exists and whether it is large enough. If the cache is too small, it is discarded and rebuilt.&lt;/p&gt;

&lt;p&gt;This is a small but useful pattern to study:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bootstrap data is cached,&lt;/li&gt;
&lt;li&gt;stale cache can be refreshed,&lt;/li&gt;
&lt;li&gt;and the rest of the pipeline depends on a predictable reference file.&lt;/li&gt;
&lt;/ul&gt;
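
&lt;p&gt;A minimal sketch of that load-or-rebuild behavior, with the cache path and minimum size as assumptions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch of load-or-rebuild caching for the station sample.
# The cache path and minimum size are assumptions, not the script's exact values.
import json
from pathlib import Path

CACHE_PATH = Path("local_data/config/station_sample.json")
MIN_STATIONS = 10


def load_or_fetch_station_sample(fetch) -&gt; list[dict]:
    if CACHE_PATH.exists():
        stations = json.loads(CACHE_PATH.read_text())
        if len(stations) &gt;= MIN_STATIONS:
            return stations  # cache is valid, reuse it
    stations = fetch()  # rebuild from OpenAQ when the cache is missing or too small
    CACHE_PATH.parent.mkdir(parents=True, exist_ok=True)
    CACHE_PATH.write_text(json.dumps(stations))
    return stations
&lt;/code&gt;&lt;/pre&gt;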

&lt;h2&gt;
  
  
  Why this matters for the rest of the pipeline
&lt;/h2&gt;

&lt;p&gt;The ingestion DAG reads the sample from the same location every time. That keeps the flow deterministic. It also means the downstream Spark job can read the same station metadata file and join readings back to the same station definitions.&lt;/p&gt;

&lt;p&gt;In other words, this script is not just a setup helper. It is part of the data contract for the whole repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continue
&lt;/h2&gt;

&lt;p&gt;Next, I will explain the shared configuration module and show how one file controls local paths, environment selection, and warehouse connection settings across the project.&lt;/p&gt;

&lt;p&gt;Continue to Part 4: Airflow Runtime and Shared Config.&lt;/p&gt;

&lt;p&gt;Tag: #dataengineeringzoomcamp&lt;/p&gt;

</description>
      <category>api</category>
      <category>dataengineering</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
