<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sngvfx</title>
    <description>The latest articles on DEV Community by sngvfx (@sngvfx).</description>
    <link>https://dev.to/sngvfx</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F573797%2F6181130b-c7a8-4d48-afbf-2368a4db33fc.jpg</url>
      <title>DEV Community: sngvfx</title>
      <link>https://dev.to/sngvfx</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sngvfx"/>
    <language>en</language>
    <item>
      <title>AWS: S3 Bucket access IAM policy example</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Thu, 22 Dec 2022 06:43:54 +0000</pubDate>
      <link>https://dev.to/sngvfx/aws-s3-bucket-access-iam-policy-example-f6n</link>
      <guid>https://dev.to/sngvfx/aws-s3-bucket-access-iam-policy-example-f6n</guid>
      <description>&lt;p&gt;Here is an easy way to grant full access to an S3 bucket and its objects to a specific user or group: use an S3 bucket policy similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkebhwi6ibq6kwu4wizo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkebhwi6ibq6kwu4wizo.png" alt="policy_s3" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This policy grants all access (indicated by the "Action": "s3:*" line) to the user or group identified by the "Principal" field (in this case, the user "demoteam" in the AWS account 123456789012) for all objects in the bucket (indicated by the "Resource": "arn:aws:s3:::my-bucket/*" line).&lt;/p&gt;
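
&lt;p&gt;For reference, a bucket policy matching that description might look like the following sketch (the bucket name &lt;code&gt;my-bucket&lt;/code&gt;, the account ID and the user name are the placeholder values from the screenshot):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/demoteam"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;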

&lt;p&gt;It's important to note that this policy grants the specified user or group full access to the bucket and its objects, which may not be appropriate in all cases. &lt;/p&gt;

&lt;p&gt;Always customize the policy to grant users only the least access they need for your specific use case.&lt;/p&gt;

</description>
      <category>career</category>
      <category>cicd</category>
      <category>fullstack</category>
    </item>
    <item>
      <title>AWS: Lambda function</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Fri, 16 Dec 2022 07:20:08 +0000</pubDate>
      <link>https://dev.to/sngvfx/aws-lambda-function-1jn6</link>
      <guid>https://dev.to/sngvfx/aws-lambda-function-1jn6</guid>
      <description>&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt; is a serverless computing platform that allows you to run code in response to specific events or triggers. &lt;/p&gt;

&lt;p&gt;A Lambda function is a piece of code that is executed by the Lambda service in response to a trigger.&lt;/p&gt;

&lt;p&gt;Here is an example of how you might use a Lambda function:&lt;/p&gt;

&lt;p&gt;Imagine that you have a web application that allows users to upload images. When a user uploads an image (an event), you might want to automatically resize the image and create a thumbnail version. To do this, you could create a Lambda function that is triggered by the upload event and performs the image resizing.&lt;/p&gt;

&lt;p&gt;The Lambda function might look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bac3zYLb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fvje87trmxht8nu7zryv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bac3zYLb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fvje87trmxht8nu7zryv.png" alt="AWS lambda function" width="880" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is just one example of how you might use a Lambda function. You can use Lambda functions to perform a wide variety of tasks, such as processing data streams, automating back-end tasks, and integrating with other AWS services.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Apache Airflow - Step by Step</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Fri, 16 Dec 2022 06:00:33 +0000</pubDate>
      <link>https://dev.to/sngvfx/apache-airflow-steps-by-steps-4mg1</link>
      <guid>https://dev.to/sngvfx/apache-airflow-steps-by-steps-4mg1</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbisjinpdsnvjv5macj6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbisjinpdsnvjv5macj6c.png" alt="Airflow logo" width="674" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apache Airflow&lt;/strong&gt; is an open-source platform for managing and scheduling data pipelines. It is commonly used in data engineering and data science to orchestrate and automate complex workflows, such as data ingestion, data transformation, and data analysis.&lt;/p&gt;

&lt;p&gt;Here are the steps to build an Apache Airflow pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install Apache Airflow: You can install it with pip (&lt;code&gt;pip install apache-airflow&lt;/code&gt;) or by downloading the source code and installing it manually. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up a database: Apache Airflow uses a database to store its metadata, such as the list of tasks, their dependencies, and their execution history. You can use either a built-in database (such as SQLite) or a separate database service (such as MySQL or PostgreSQL). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define the pipeline: A pipeline in Apache Airflow is a directed acyclic graph (DAG) of tasks that need to be executed. To define a pipeline, you need to create a Python script and define a DAG object in it. Within the DAG, you can define individual tasks and their dependencies. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iq4171jvsgv0q5crgcl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iq4171jvsgv0q5crgcl.png" alt="bashoperator" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create tasks: Tasks in Apache Airflow are represented by Operators, which are classes that define the behavior of a task. There are various types of operators available in Apache Airflow, such as BashOperator for executing a Bash command, PythonOperator for executing a Python function, and others. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up the execution schedule: To run a pipeline, you need to specify when and how often the pipeline should be executed. This can be done by setting the start_date and schedule_interval parameters of the DAG object (see the sketch after this list). &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
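
&lt;p&gt;Putting the last few steps together, here is a minimal sketch of a DAG with a single BashOperator task (assuming Airflow 2.x; the dag_id, schedule and command are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A one-task DAG that runs daily, starting from the given start_date
with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2022, 12, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    say_hello = BashOperator(
        task_id="say_hello",
        bash_command="echo 'Hello from Airflow'",
    )
&lt;/code&gt;&lt;/pre&gt;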

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd386av4jc2egp3399jb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd386av4jc2egp3399jb6.png" alt="architecture" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Test the pipeline: Before deploying the pipeline, it is a good idea to test it locally to make sure it is working as expected. You can do this by starting the Apache Airflow web server and running the pipeline in the web UI. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy the pipeline: Once you have tested the pipeline and are satisfied with the results, you can deploy it by committing the DAG Python script to version control and placing it in the DAGs folder of the Apache Airflow environment. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitor and maintain the pipeline: After deploying the pipeline, you need to monitor it to make sure it is running as expected. You can use the Apache Airflow web UI or the command-line interface to view the status of the pipeline and troubleshoot any issues that may arise. You should also regularly maintain the pipeline by updating it as needed and cleaning up old data to keep it running efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>AWS: Difference between a Network Load Balancer and an Application Load Balancer</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Fri, 16 Dec 2022 05:42:38 +0000</pubDate>
      <link>https://dev.to/sngvfx/aws-difference-between-a-network-load-balancer-and-an-application-load-balancer-3ke6</link>
      <guid>https://dev.to/sngvfx/aws-difference-between-a-network-load-balancer-and-an-application-load-balancer-3ke6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--71g1Eznb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zkaokudtmu7qs4f2r2r2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--71g1Eznb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zkaokudtmu7qs4f2r2r2.png" alt="AWS Load Balancer" width="803" height="649"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today we will talk about the difference between a Network Load Balancer and an Application Load Balancer.&lt;/p&gt;

&lt;p&gt;In Amazon Web Services (AWS), an Application Load Balancer (ALB) and a Network Load Balancer (NLB) are both types of load balancers that distribute incoming traffic across multiple targets, such as EC2 instances, in one or more Availability Zones. However, they differ in terms of the traffic they are designed to handle and the level of control they provide over the traffic routing.&lt;/p&gt;

&lt;p&gt;An Application Load Balancer is a load balancing service that operates at the application layer (layer 7 in the OSI model). It is intended for load balancing HTTP and HTTPS traffic and provides advanced, content-based routing rules, such as routing requests by path or host.&lt;/p&gt;
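
&lt;p&gt;For example, a content-based routing rule can be attached to an ALB listener. Here is a minimal boto3 sketch (the ARNs are hypothetical placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs, for illustration only
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxxx/yyyy"
api_targets = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/zzzz"

# Forward requests whose path starts with /api to a dedicated target group;
# everything else falls through to the listener's default action
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": api_targets}],
)
&lt;/code&gt;&lt;/pre&gt;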

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ymd_ElBp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yh1li5mlnnjqnptxpn2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ymd_ElBp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yh1li5mlnnjqnptxpn2p.png" alt="AWS network Load balancer" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In contrast, a Network Load Balancer is a load balancing service that operates at the connection level (layer 4 in the OSI model). It is designed for high performance and can handle millions of requests per second with very low latencies. It is best suited for load balancing of TCP traffic, and can also handle traffic for other protocols such as TLS and UDP.&lt;/p&gt;

&lt;p&gt;In summary, an ALB is best suited for load balancing of HTTP and HTTPS traffic and provides a more feature-rich experience, while an NLB is better suited for load balancing of TCP traffic and is designed for high performance and low latencies.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Simple KAFKA streaming Pipeline</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Thu, 15 Dec 2022 20:52:36 +0000</pubDate>
      <link>https://dev.to/sngvfx/simple-kafka-streaming-pipeline-m1k</link>
      <guid>https://dev.to/sngvfx/simple-kafka-streaming-pipeline-m1k</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FVbwmmIi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1gy66okoc5ow888fzqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FVbwmmIi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1gy66okoc5ow888fzqm.png" alt="Kafka Logo" width="813" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's say you want to explore the relationship between extreme weather and people's conversations about it on Twitter. To do that, you can gather and analyze weather and Twitter event data streams.&lt;/p&gt;

&lt;p&gt;You will need the following (both in JSON format):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a Weather API to obtain real-time &amp;amp; forecasted weather data,&lt;/li&gt;
&lt;li&gt;a Twitter API to also get real-time tweets &amp;amp; mentions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The National Weather Service API can be accessed &lt;a href="https://www.weather.gov/documentation/services-web-api"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Twitter API can be accessed &lt;a href="https://developer.twitter.com/en/docs/twitter-api"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;strong&gt;weather topic&lt;/strong&gt; and &lt;strong&gt;Twitter topic&lt;/strong&gt; in a Kafka cluster, each with some partitions and replicas. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;strong&gt;weather producer&lt;/strong&gt; and &lt;strong&gt;Twitter producer&lt;/strong&gt; to publish the JSON data to the two topics, respectively. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;strong&gt;weather consumer&lt;/strong&gt; and &lt;strong&gt;Twitter consumer&lt;/strong&gt; to read the events from the two topics and deserialize the bytes stored in the Kafka topics back into event JSON data (see the sketch after this list). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a DB writer to parse the JSON events and create database records, which can be written into the database with SQL insert statements. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
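
&lt;p&gt;As a sketch of steps 2 and 3, here is what the weather side could look like with the kafka-python client (assuming a local broker at localhost:9092 and a "weather" topic; the event fields are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json

from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

# Producer: serialize weather events to JSON bytes and publish them
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)
producer.send("weather", {"station": "KJFK", "temp_c": -3.2})
producer.flush()

# Consumer: read the stored bytes back and deserialize them into JSON events
consumer = KafkaConsumer(
    "weather",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # e.g. {'station': 'KJFK', 'temp_c': -3.2}
&lt;/code&gt;&lt;/pre&gt;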

&lt;p&gt;Finally, you can query and visualize the database records in a dashboard to complete the end-to-end pipeline.&lt;/p&gt;

&lt;p&gt;Additional info:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;What is a Kafka DB Writer:&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
A Kafka DB writer is a piece of software that writes data from a Kafka topic into a database. It can be used for data integration, stream processing, or data ingestion. Kafka DB writers are often used in large-scale distributed systems, such as streaming analytics and real-time decision making.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;KAFKA TOOL - ksqlDB: &lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
ksqlDB is an open-source, cloud-native streaming SQL database built on Apache Kafka. It lets you develop streaming applications that process data in real time and respond to events as they occur. ksqlDB supports a wide range of operations, including stream processing, time-based event-driven processing, windowed operations, joins, and more. It also provides a high-performance SQL query engine for accessing and analyzing data from Apache Kafka and other sources. With ksqlDB, developers can quickly build and deploy powerful streaming applications with a few lines of code.&lt;/p&gt;
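
&lt;p&gt;For instance, a couple of ksqlDB statements could turn the weather topic above into a continuously queryable stream (the field names here are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Define a stream over the existing weather topic (JSON events)
CREATE STREAM weather_stream (station VARCHAR, temp_c DOUBLE)
  WITH (KAFKA_TOPIC='weather', VALUE_FORMAT='JSON');

-- Push query: continuously emit stations reporting below -10 C
SELECT station, temp_c
  FROM weather_stream
  WHERE temp_c &amp;lt; -10
  EMIT CHANGES;
&lt;/code&gt;&lt;/pre&gt;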

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jlk9Kv7Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kmyhgaze75bvp6mfxihr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jlk9Kv7Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kmyhgaze75bvp6mfxihr.png" alt="Kafka SLQ DB Pipeline" width="880" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>TIC TAC TOE game in JS</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Sat, 22 Oct 2022 21:03:03 +0000</pubDate>
      <link>https://dev.to/sngvfx/tic-tac-toe-game-in-js-39fi</link>
      <guid>https://dev.to/sngvfx/tic-tac-toe-game-in-js-39fi</guid>
      <description>&lt;p&gt;This game is simple: just hover the mouse over a square to play your turn, and it will randomly place an X or an O.&lt;br&gt;
The first to line up three X's or O's wins the game!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sngvfx/tictactoe"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Simple Json Database</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Sat, 22 Oct 2022 20:49:51 +0000</pubDate>
      <link>https://dev.to/sngvfx/simple-json-database-121</link>
      <guid>https://dev.to/sngvfx/simple-json-database-121</guid>
      <description>&lt;p&gt;ZooDB - a simple database using CSS, JSON and HTML to display a pre-entered list of animals with details.&lt;/p&gt;

&lt;p&gt;Click on the link below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sngvfx/ZooDB"&gt;https://github.com/sngvfx/ZooDB&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sngvfx/ZooDB"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Simple Password Generator</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Sat, 22 Oct 2022 16:30:16 +0000</pubDate>
      <link>https://dev.to/sngvfx/simple-password-generator-591m</link>
      <guid>https://dev.to/sngvfx/simple-password-generator-591m</guid>
      <description>&lt;p&gt;Here is a simple Python generator that uses the random module's random.sample() method. The generated password is a combination of upper- and lowercase letters, numbers, and symbols. To make the password harder to crack, its length is set to 20.&lt;/p&gt;

&lt;p&gt;Give it a try, feel free to improve it and let me know!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lpeO3xhV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1957gs7e9byq9aqr1pg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lpeO3xhV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1957gs7e9byq9aqr1pg1.png" alt="Image description" width="880" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>generator</category>
      <category>python</category>
    </item>
    <item>
      <title>WELCOME TO SNGVFX</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Fri, 21 Oct 2022 19:36:12 +0000</pubDate>
      <link>https://dev.to/sngvfx/welcome-to-my-space-46j8</link>
      <guid>https://dev.to/sngvfx/welcome-to-my-space-46j8</guid>
      <description>&lt;p&gt;Welcome to my coding page. Here you will find a list of my current and past coding projects, created using a combination of the following programming languages and tools: &lt;strong&gt;&lt;em&gt;Python, JavaScript, CSS, PHP, Linux/Bash, and the MySQL, PostgreSQL and MongoDB databases&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;My projects include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Simple Password Manager&lt;/li&gt;
&lt;li&gt;Pro Password Manager V.1&lt;/li&gt;
&lt;li&gt;FolderDeletor V.1&lt;/li&gt;
&lt;li&gt;Video Processor V.2&lt;/li&gt;
&lt;li&gt;Serato Live Track Reader V.1&lt;/li&gt;
&lt;li&gt;Simple Music File &amp;amp; Folder Organizer V.1&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;and more...&lt;br&gt;
I will be sharing some here and on GitHub.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>ETL Process - The ABC of the DATA Engineer</title>
      <dc:creator>sngvfx</dc:creator>
      <pubDate>Sun, 06 Feb 2022 15:24:41 +0000</pubDate>
      <link>https://dev.to/sngvfx/etl-process-the-abc-of-the-data-engineer-2f1e</link>
      <guid>https://dev.to/sngvfx/etl-process-the-abc-of-the-data-engineer-2f1e</guid>
      <description>&lt;p&gt;ETL - the process of extracting, transforming and loading data - is the foundation of data engineering. &lt;/p&gt;

&lt;p&gt;As someone who is switching careers (from web development/front-end), I am learning all the tools, processes and concepts behind .... &lt;/p&gt;

</description>
      <category>python</category>
      <category>etl</category>
      <category>dataengineering</category>
      <category>sql</category>
    </item>
  </channel>
</rss>
