<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rusab Naeem Khan</title>
    <description>The latest articles on DEV Community by Rusab Naeem Khan (@rusab_khan).</description>
    <link>https://dev.to/rusab_khan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F271674%2Fc93dbeee-286d-4430-a988-0cb07cb02d3b.jpeg</url>
      <title>DEV Community: Rusab Naeem Khan</title>
      <link>https://dev.to/rusab_khan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rusab_khan"/>
    <language>en</language>
    <item>
      <title>How We Built OpenETL: A Simple, Scalable Data Migration Tool for Everyone 🚀</title>
      <dc:creator>Rusab Naeem Khan</dc:creator>
      <pubDate>Sat, 25 Jan 2025 14:19:00 +0000</pubDate>
      <link>https://dev.to/rusab_khan/how-we-built-openetl-a-simple-scalable-data-migration-tool-for-everyone-4d59</link>
      <guid>https://dev.to/rusab_khan/how-we-built-openetl-a-simple-scalable-data-migration-tool-for-everyone-4d59</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Why Did We Do It?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A while ago, my friends and I were freelancing as data engineers, and every day we found ourselves migrating data for clients. It was always the same: writing custom scripts for each project, using different tools, and repeating the process. We thought, "Why not create boilerplate code to reuse for all these migrations?" But we didn’t stop there. We decided to make it more accessible by adding a UI and turning it into a full-fledged app!&lt;/p&gt;

&lt;p&gt;You can find the code here: &lt;a href="https://github.com/RusabKhan/OpenETL" rel="noopener noreferrer"&gt;OpenETL Github&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That's how &lt;strong&gt;OpenETL&lt;/strong&gt; was born. It’s an open-source ETL tool designed to simplify data migration with minimal setup.&lt;/p&gt;

&lt;p&gt;We worked hard to maintain code quality while keeping the tool beginner-friendly. Our goal was to help people, whether they are just starting out or are mid-level engineers, who deal with data migration tasks every day. The design makes it easy to understand and use, so you can focus on your work instead of battling complex configurations.&lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;How to Use OpenETL&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before starting, ensure you have the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Python Installed&lt;/strong&gt;: OpenETL is a Python-based tool. Install Python 3.7 or later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Access to HubSpot&lt;/strong&gt;: HubSpot is the source used in this walkthrough; you’ll need to generate an API key or private app token.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PostgreSQL Database&lt;/strong&gt;: Ensure you have a running PostgreSQL instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt;: Install Docker to run OpenETL in a container.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
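&lt;p&gt;If you want a quick sanity check before moving on, the commands below verify each prerequisite from a terminal. This is just a convenience sketch; &lt;code&gt;pg_isready&lt;/code&gt; ships with the PostgreSQL client tools, and the host and port may differ on your setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Check the Python version (3.7 or later is required)
python3 --version

# Check that Docker and the Compose plugin are available
docker --version
docker compose version

# Optional: confirm that the PostgreSQL instance is reachable
# (adjust host and port to match your environment)
pg_isready -h localhost -p 5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;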
&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Configure Environment Settings&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Rename and edit the &lt;code&gt;.env&lt;/code&gt; file in the OpenETL directory to include your environment configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;OPENETL_DOCUMENT_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost  &lt;span class="c"&gt;# Replace with your host&lt;/span&gt;
&lt;span class="nv"&gt;OPENETL_DOCUMENT_DB&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;airflow  &lt;span class="c"&gt;# Replace with your database name&lt;/span&gt;
&lt;span class="nv"&gt;OPENETL_DOCUMENT_SCHEMA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;open_etl  &lt;span class="c"&gt;# Replace with your schema&lt;/span&gt;
&lt;span class="nv"&gt;OPENETL_DOCUMENT_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;MY_USER  &lt;span class="c"&gt;# Replace with your username&lt;/span&gt;
&lt;span class="nv"&gt;OPENETL_DOCUMENT_PASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1234  &lt;span class="c"&gt;# Replace with your password&lt;/span&gt;
&lt;span class="nv"&gt;OPENETL_DOCUMENT_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5432  &lt;span class="c"&gt;# Replace with your port&lt;/span&gt;
&lt;span class="nv"&gt;OPENETL_DOCUMENT_ENGINE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;PostgreSQL  &lt;span class="c"&gt;# Use PostgreSQL (recommended)&lt;/span&gt;
&lt;span class="nv"&gt;OPENETL_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/Users/usr/OpenETL  &lt;span class="c"&gt;# Path to OpenETL repository&lt;/span&gt;
&lt;span class="nv"&gt;CELERY_BROKER_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;redis://redis:6379/0  &lt;span class="c"&gt;# Replace with your Redis URL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
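&lt;p&gt;If the database and schema from the example above don’t exist yet, you can create them with the standard PostgreSQL client tools. This is a minimal sketch assuming the example values (an &lt;code&gt;airflow&lt;/code&gt; database and an &lt;code&gt;open_etl&lt;/code&gt; schema) and a user with create privileges; substitute your own names and credentials:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create the document database referenced in the .env file
createdb -h localhost -p 5432 -U MY_USER airflow

# Create the schema OpenETL will write into
psql -h localhost -p 5432 -U MY_USER -d airflow -c "CREATE SCHEMA IF NOT EXISTS open_etl;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;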



&lt;h3&gt;
  &lt;strong&gt;Step 2: Install and Start OpenETL&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Clone the OpenETL repository and start the application with Docker Compose; the build step installs all dependencies inside the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/RusabKhan/OpenETL
&lt;span class="nb"&gt;cd &lt;/span&gt;OpenETL
docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; backend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
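&lt;p&gt;Once the containers are up, the usual Docker Compose commands are enough to confirm everything started cleanly. These are generic Docker commands, not OpenETL-specific ones:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List the services from the compose file and their current status
docker compose ps

# Follow the backend logs while your first pipeline runs (Ctrl+C to stop)
docker compose logs -f backend

# Stop and remove the containers when you are done experimenting
docker compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;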



&lt;h3&gt;
  &lt;strong&gt;Step 3: Setting Up Connections in OpenETL&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25ajwzrzauxmggy9jstc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25ajwzrzauxmggy9jstc.png" alt="create connection" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenETL makes it easy to configure both source and target connections via its user interface:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Source Connection:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;– Navigate to the Create Connection screen.&lt;/p&gt;

&lt;p&gt;– Select your choice of connector and provide the authentication details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Target Connection:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;– Navigate to the Create Connection screen.&lt;/p&gt;

&lt;p&gt;– Select the target and enter your database credentials.&lt;/p&gt;
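&lt;p&gt;Before entering database credentials into the Create Connection form, it can save a round trip to confirm they work outside OpenETL first. A minimal sketch using &lt;code&gt;psql&lt;/code&gt;, assuming the same example values as the &lt;code&gt;.env&lt;/code&gt; file from Step 1:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Verify the target credentials before typing them into the UI
# (the connection string reuses the example values; replace them with yours)
psql "postgresql://MY_USER:1234@localhost:5432/airflow" -c "\conninfo"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;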

&lt;h3&gt;
  &lt;strong&gt;Step 4: Creating an ETL Pipeline&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffitt7d4gxh2qcilng7j4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffitt7d4gxh2qcilng7j4.png" alt="create integration" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After configuring connections, you can set up the ETL pipeline in OpenETL:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to Create ETL from the sidebar.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Specify the source details, such as the table (Contacts) and type (API).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the target details, including your connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure optional compute settings like Spark or Hadoop if needed, or skip to the next step.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;5. Set ETL parameters:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Load Type: Choose full or incremental.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Batch Size: Define the number of records processed per batch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Schedule: Specify the pipeline frequency (e.g., hourly, daily).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create Integration to finalize your ETL pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  &lt;strong&gt;Step 5: Monitoring the Integration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once the pipeline is created, you can view its status and logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foljxhbcvxw2y8owsx5cv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foljxhbcvxw2y8owsx5cv.png" alt="logging" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Integration Screen:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;– Navigate to the Integrations page.&lt;/p&gt;

&lt;p&gt;– Click on the integration ID to view its history and execution details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. API Logs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;– Navigate to the Logs screen.&lt;/p&gt;

&lt;p&gt;– Click on the integration ID to check logs for troubleshooting or debugging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Dashboard:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79autho9dfmsdjwquyiy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79autho9dfmsdjwquyiy.png" alt="dashboard" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;– Navigate to the Dashboard screen.&lt;/p&gt;

&lt;p&gt;– The Dashboard provides a visual representation of the pipeline’s progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Debugging Issues:&lt;/strong&gt; Use the API logs in OpenETL to identify and resolve errors in the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Data Transformation:&lt;/strong&gt; OpenETL allows for built-in transformations like mapping, normalization, and null-value handling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scheduling Pipelines:&lt;/strong&gt; Leverage OpenETL’s scheduling features or external workflow tools like Apache Airflow for automation.&lt;/p&gt;
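&lt;p&gt;As a final check after a run completes, querying the target schema directly confirms that rows actually landed. The &lt;code&gt;contacts&lt;/code&gt; table name below is only a guess at what a HubSpot Contacts load might be called; list the tables in your target schema first and use whatever name you find there:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List the tables the pipeline created in the target schema
psql "postgresql://MY_USER:1234@localhost:5432/airflow" -c "\dt open_etl.*"

# Count the rows loaded into the (hypothetical) contacts table
psql "postgresql://MY_USER:1234@localhost:5432/airflow" -c "SELECT COUNT(*) FROM open_etl.contacts;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;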

</description>
      <category>python</category>
      <category>programming</category>
      <category>beginners</category>
      <category>dataengineering</category>
    </item>
  </channel>
</rss>
