<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: luis zuñiga</title>
    <description>The latest articles on DEV Community by luis zuñiga (@exegol).</description>
    <link>https://dev.to/exegol</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3856624%2F8bf3b32d-33df-457d-a2d7-cb8c2d364f93.jpg</url>
      <title>DEV Community: luis zuñiga</title>
      <link>https://dev.to/exegol</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/exegol"/>
    <language>en</language>
    <item>
      <title>Building AI-Powered Business Solutions for LATAM with Amazon Quick: A Hands-on Technical Guide</title>
      <dc:creator>luis zuñiga</dc:creator>
      <pubDate>Mon, 20 Apr 2026 16:29:59 +0000</pubDate>
      <link>https://dev.to/exegol/building-ai-powered-business-solutions-for-latam-with-amazon-quick-a-hands-on-technical-guide-52ol</link>
      <guid>https://dev.to/exegol/building-ai-powered-business-solutions-for-latam-with-amazon-quick-a-hands-on-technical-guide-52ol</guid>
      <description>&lt;p&gt;Over the past few weeks, I built and deployed five production-ready use cases using Amazon Quick (the agentic evolution of QuickSight), specifically tailored for the Small and Mid-size Business (SMB) market in Latin America.&lt;/p&gt;

&lt;p&gt;What makes Amazon Quick particularly relevant for our region is the current market gap: while enterprise AI deployment grew by 68% in 2025, only 3% of SMBs have achieved full integration (IDC Latin America ICT Spending). Amazon Quick bridges this divide with a simplified pricing model (starting at $20/user) and natural language capabilities that eliminate the need for massive technical departments.&lt;/p&gt;

&lt;p&gt;🗺️ Mental Map: The Amazon Quick Ecosystem&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AMAZON QUICK ARCHITECTURE
│
├── 🧠 INTERFACE (Natural Language)
│   ├── Chat Agents (Contextual conversation per use case)
│   └── Spaces (Environment organization by project/team)
│
├── 🤖 AGENTIC MODULES
│   ├── Quick Research (Deep research: S3 + Web + Premium sources)
│   ├── Quick Flows (Automated workflows for business users)
│   └── Quick Automate (Complex multi-step automation for engineers)
│
├── 📊 ANALYTICS CORE
│   ├── Quick Sight (Visual BI + SPICE In-Memory Engine)
│   └── Topic Q (NLQ with bilingual synonym support)
│
└── 🛠️ INFRASTRUCTURE (IaC)
    ├── S3 Knowledge Base (Data Lake: CSV, JSON, TXT)
    ├── CloudFormation (Modular deployment via Stacks)
    └── IAM (Least Privilege &amp;amp; SourceAccount conditions)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;🏗️ Infrastructure Design &amp;amp; Technical Decisions&lt;/p&gt;

&lt;p&gt;For these deployments, I used an Infrastructure as Code (IaC) approach based on 11 CloudFormation templates. My core design principles were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stack Isolation: one independent stack per use case to simplify maintenance.&lt;/li&gt;
&lt;li&gt;Layered Security: all S3 buckets use AES-256 encryption, full Block Public Access, and DeletionPolicy: Retain.&lt;/li&gt;
&lt;li&gt;Strict Least Privilege: IAM roles scoped specifically to each bucket, strictly avoiding wildcards (*).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;QuickSight Deployment Pattern via CloudFormation&lt;/p&gt;

&lt;p&gt;One of my key takeaways was splitting the QuickSight deployment into five layers, because the QuickSight API requires propagation time and has complex resource dependencies:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Layer 1: S3 + IAM Roles.&lt;/li&gt;
&lt;li&gt;Layer 2: DataSource (S3/JSON connector).&lt;/li&gt;
&lt;li&gt;Layer 3: DataSet (SPICE ingestion with explicit type casting).&lt;/li&gt;
&lt;li&gt;Layer 4: Topic Q (natural language semantic configuration).&lt;/li&gt;
&lt;li&gt;Layer 5: Analysis &amp;amp; Dashboard (36-column grid layout definition).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;🤖 Success Case: Sales Automation with Quick Automate&lt;/p&gt;

&lt;p&gt;The most significant impact was seen in sales, where we eliminated 2 hours of manual work every Monday. We leveraged Quick Automate to generate dynamic reporting pipelines.&lt;/p&gt;

&lt;p&gt;The "Inline Agent": The Heart of Automation&lt;/p&gt;

&lt;p&gt;I used an AI Inline Agent within the flow to transform raw DataTable objects into professional HTML reports with embedded CSS, which are then automatically uploaded to S3.&lt;/p&gt;

&lt;p&gt;Technical Gotcha (caution): the Quick Automate runtime has critical quirks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Double Datetime: you must use datetime.datetime.now(). Using a simple datetime.now() will trigger a NameError.&lt;/li&gt;
&lt;li&gt;No Boto3: external modules cannot be imported. All S3 interactions must be performed via Action Connectors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ Builder's Log: Lessons Learned&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The S3 Typing Challenge: by default, S3/CSV sources import everything as a STRING. If you do not perform a CastColumnTypeOperation within your CloudFormation LogicalTableMap, Topic Q will be unable to perform aggregations.&lt;/li&gt;
&lt;li&gt;Localization for LATAM: to ensure the AI functions effectively in Spanish, I configured bilingual synonyms in the column metadata (e.g., total_usd → [monto, revenue, ingreso, venta]).&lt;/li&gt;
&lt;li&gt;Quick Research &amp;amp; Dual Citation: in the Regulatory Compliance use case, Quick Research's ability to cite internal sources (our PDFs in S3) and external sources simultaneously was the primary trust-builder for stakeholders.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💰 Cost Analysis (Why LATAM ❤️ Quick)&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Component&lt;/th&gt;&lt;th&gt;Estimated Cost (2026)&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Infrastructure Fee&lt;/td&gt;&lt;td&gt;$250/month per account (fixed)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Professional User&lt;/td&gt;&lt;td&gt;$20/month (includes Research &amp;amp; Flows)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Enterprise User&lt;/td&gt;&lt;td&gt;$40/month (includes Automate authoring)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;A deployment for 36 users across 5 critical areas cost approximately $994/month, providing a massive ROI compared to manual reporting hours.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Amazon Quick is no longer just a visualization tool; it is an Agentic AI platform that democratizes technology for LATAM SMBs. As builders, our mission is to shield these systems with robust architectures and IaC.&lt;/p&gt;

&lt;p&gt;⚖️ Technical &amp;amp; Legal Safe Harbor Disclaimer&lt;/p&gt;

&lt;p&gt;AUTHORSHIP AND INDEPENDENT CAPACITY: This publication is authored solely by me in my individual and private capacity. The views, methodologies, and technical workflows expressed herein are my own and do not necessarily reflect the official policy, position, or strategic direction of my current or former employers, clients, or any legal entity I am affiliated with.&lt;/p&gt;

&lt;p&gt;INTELLECTUAL PROPERTY &amp;amp; CONFIDENTIALITY COMPLIANCE:&lt;/p&gt;

&lt;p&gt;Zero Proprietary Disclosure: This content has been developed using publicly available information and personal research. No confidential information or internal proprietary source code belonging to my employer has been disclosed.&lt;/p&gt;

&lt;p&gt;Independent Development: The workflows described are based on general industry best practices and were not developed as a "work for hire".&lt;/p&gt;

&lt;p&gt;LIMITATION OF LIABILITY (NO WARRANTY): All code snippets and architectural patterns are provided "AS IS" without warranty of any kind.&lt;/p&gt;

&lt;p&gt;COMPLIANCE: This contribution is made in good faith under the AWS Builder Terms and the MIT-0 License for any included source code.&lt;/p&gt;
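&lt;p&gt;As a footnote to the "Double Datetime" quirk mentioned above: the same qualification pattern works outside the Quick Automate runtime too. A minimal sketch in plain Python (standard library only; the report key name is a hypothetical example, not part of the original flow):&lt;/p&gt;

```python
import datetime  # import the module, then qualify the class through it

# Quick Automate requires the fully qualified form; per the article, a bare
# datetime.now() triggers a NameError in that runtime.
run_at = datetime.datetime.now()

# Hypothetical S3 object key for a generated HTML sales report.
report_key = "reports/sales-{}.html".format(run_at.strftime("%Y-%m-%d"))
print(report_key)
```

&lt;p&gt;Always writing datetime.datetime.now() keeps a script portable between the restricted runtime and a normal interpreter.&lt;/p&gt;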

</description>
      <category>ai</category>
      <category>analytics</category>
      <category>aws</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Deep Dive: Accelerating Infrastructure as Code (IaC) on GCP using Terraform and Antigravity</title>
      <dc:creator>luis zuñiga</dc:creator>
      <pubDate>Fri, 17 Apr 2026 00:42:09 +0000</pubDate>
      <link>https://dev.to/exegol/deep-dive-accelerating-infrastructure-as-code-iac-on-gcp-using-terraform-and-antigravity-1ne9</link>
      <guid>https://dev.to/exegol/deep-dive-accelerating-infrastructure-as-code-iac-on-gcp-using-terraform-and-antigravity-1ne9</guid>
      <description>&lt;p&gt;Key Stack: Terraform, Google Cloud Platform (GCP), Cloud Armor, Cloud SQL, Antigravity AI&lt;/p&gt;

&lt;p&gt;When designing robust and scalable architectures for production environments, efficiency is non-negotiable. Traditionally, SRE and Infrastructure teams spend significant cycles managing network segregation, variable consistency, and manual security audits. However, the paradigm has shifted: Generative AI applied to Platform Engineering has arrived to eliminate operational toil.&lt;/p&gt;

&lt;p&gt;In this article, we will technically analyze the paquetesaction project. We will explore how to deploy advanced Terraform modules on Google Cloud by operating alongside Antigravity—an AI-powered assistant tailored for infrastructure workflows based on Google DeepMind technology—which acts as an additional software engineer within your terminal.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Multi-Project Modularity Challenge
For this use case, the requirements demanded four distinct architectures designed to coexist within an enterprise ecosystem. The goal was clear: total isolation and automated scalability.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Base VPC (Core Networking): Implementation of custom networks with Private Google Access enabled for internal consumption of Google APIs without internet egress.&lt;/p&gt;

&lt;p&gt;Private Data Workloads: Cloud SQL (MySQL) with restricted access via VPC Peering, eliminating any public IP exposure.&lt;/p&gt;

&lt;p&gt;Resilient L7 Frontend: Global HTTP(S) Load Balancer supported by Managed Instance Groups (MIG) and perimeter protection via Cloud Armor.&lt;/p&gt;

&lt;p&gt;Management Access (Bastion): e2-micro instances for administration, using strict tag-based routing.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Antigravity: Pair Programming "On Steroids"
The true disruption of Antigravity lies not in static code generation, but in its ability to execute an iterative framework within the DevOps lifecycle.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Rather than generating isolated code, the agent operated as a collaborator aware of the repository and the Terraform lifecycle. While orchestrating the environments/dev directory, the agent autonomously structured:&lt;/p&gt;

&lt;p&gt;The file architecture (main.tf, variables.tf, outputs.tf).&lt;/p&gt;

&lt;p&gt;Initialization logic via CLI commands (terraform init and terraform fmt).&lt;/p&gt;

&lt;p&gt;Selection of optimized images (Debian 11) to meet internal compliance policies.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Technical Architecture &amp;amp; Data Flow
Below is a breakdown of the critical infrastructure components designed for this project.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A. Managed Database Isolation (Cloud SQL)&lt;br&gt;
Exposing a database to the internet is an unacceptable risk. We utilized Private Services Access to connect our VPC with the Google Tenant Project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
+----------------------------------------------------+
| Your GCP Project (Consumer VPC)                    |
|                                                    |
|  +----------------------------------------------+  |
|  | VPC: "mysql-vpc-dev"                         |  |
|  |                                              |  |
|  |  [Global IP Range Reservation: /16]          |  |
|  |             |                                |  |
|  +-------------|--------------------------------+  |
|                |                                   |
|                v (Automatic VPC Peering)           |
|                                                    |
|  +----------------------------------------------+  |
|  | Google Managed Services (Tenant VPC)         |  |
|  |                                              |  |
|  |  +----------------------------------------+  |  |
|  |  | Cloud SQL Instance (MySQL 8.0)         |  |  |
|  |  | - IPv4_enabled: OFF                    |  |  |
|  |  | - Private IP (from reserved range)     |  |  |
|  |  +----------------------------------------+  |  |
|  +----------------------------------------------+  |
+----------------------------------------------------+

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
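&lt;p&gt;The peering layout above can be sanity-checked with a short Python sketch using the standard ipaddress module. The 10.20.0.0/16 reservation and the instance address are hypothetical placeholders, not values from the paquetesaction project:&lt;/p&gt;

```python
import ipaddress

# Hypothetical /16 reserved for Private Services Access; Cloud SQL is
# assigned its private address from this block via automatic VPC peering.
reserved_range = ipaddress.ip_network("10.20.0.0/16")

# Hypothetical private IP handed to the Cloud SQL instance.
sql_private_ip = ipaddress.ip_address("10.20.3.12")

# The instance address must come from the reserved range; no public IPv4
# exists because IPv4_enabled is OFF, as shown in the diagram.
print(sql_private_ip in reserved_range)  # prints True
```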



&lt;p&gt;B. Next-Gen WAF Defenses via Cloud Armor&lt;br&gt;
To protect backends, we delegate security to Google's Edge. Cloud Armor acts as a Layer 7 shield, filtering threats before they ever reach our compute instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
              Inbound Web Traffic
                      |
                      v
       +-------------------------------+
       | Global HTTP Load Balancer     | 
       +--------------+----------------+
                      |
       +--------------v----------------+
       | Cloud Armor Security Policy   |  (L7 Filtering)
       | -&amp;gt; Blocks SQLi, XSS, LFI      |
       +--------------+----------------+
                      |
                      v (Sanitized Traffic)
+---------------------+-----------------------------------+
| Principal VPC                                           |
|   +-------------------------------------------------+   |
|   | Subnet                                          |   |
|   | Firewall: Allow ONLY Google LB IPs              |   |
|   |           (130.211.0.0/22 &amp;amp; 35.191.0.0/16)      |   |
|   |                                                 |   |
|   |   +-----------------------------------------+   |   |
|   |   | Managed Instance Group (MIG)            |   |   |
|   |   |  [ Apache Web Server VM - Debian 11 ]   |   |   |
|   |   +-----------------------------------------+   |   |
|   +-------------------------------------------------+   |
+---------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
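&lt;p&gt;The firewall rule in the diagram above ("Allow ONLY Google LB IPs") can be expressed as a quick Python check with the standard ipaddress module; the helper function name is mine, and the two CIDR ranges are the ones shown in the diagram:&lt;/p&gt;

```python
import ipaddress

# Source ranges used by the Global HTTP(S) Load Balancer and its health
# checks, as listed in the diagram.
GOOGLE_LB_RANGES = [
    ipaddress.ip_network("130.211.0.0/22"),
    ipaddress.ip_network("35.191.0.0/16"),
]

def allowed_by_firewall(addr):
    """Return True only when addr falls inside one of the LB source ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in GOOGLE_LB_RANGES)

print(allowed_by_firewall("35.191.10.1"))   # True: LB-originated traffic
print(allowed_by_firewall("203.0.113.7"))   # False: direct internet traffic
```

&lt;p&gt;Anything not proxied by the load balancer, and therefore not filtered by Cloud Armor, never reaches the MIG.&lt;/p&gt;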



&lt;ol start="4"&gt;
&lt;li&gt;Checkov: Closing the Governance Loop
In a production workflow, compliance is vital. When integrating tools like Checkov, it is common to trigger security alerts. Antigravity helped us apply the Principle of Least Privilege, replacing default accounts with dedicated IAM Service Accounts.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For cases where the design required specific exceptions, the agent injected formal suppression syntax:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_instance" "public_instance" {
  # checkov:skip=CKV_GCP_40: Public IP explicitly required for administrative bastion
  # checkov:skip=CKV_GCP_32: OS Login bypass authorized for this specific use-case
  name         = "bastion-dev"
  machine_type = "e2-micro"
  ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Each exception was evaluated during the design phase and documented inline to preserve traceability and facilitate future audits.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;The Future of Infrastructure
The paquetesaction project demonstrates that the future of the cloud is hybrid: human strategic judgment amplified by AI execution speed. Our next steps involve expanding toward Vertex AI and consolidating security operations with Mandiant.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The infrastructure is code, and AI is its most powerful catalyst!&lt;/p&gt;

&lt;p&gt;⚖️ Technical &amp;amp; Legal Safe Harbor Disclaimer&lt;br&gt;
AUTHORSHIP AND INDEPENDENT CAPACITY: This publication is authored solely by me in my individual and private capacity. The views, methodologies, and technical workflows expressed herein are my own and do not necessarily reflect the official policy, position, or strategic direction of my current or former employers, clients, or any legal entity I am affiliated with.&lt;/p&gt;

&lt;p&gt;INTELLECTUAL PROPERTY &amp;amp; CONFIDENTIALITY COMPLIANCE:&lt;/p&gt;

&lt;p&gt;Zero Proprietary Disclosure: This content has been developed using publicly available information and personal research. No confidential information or internal proprietary source code belonging to any specific organization has been disclosed.&lt;/p&gt;

&lt;p&gt;Independent Development: The workflows described are based on general industry best practices and were not developed as a "work for hire".&lt;/p&gt;

&lt;p&gt;LIMITATION OF LIABILITY (NO WARRANTY): All code snippets and architectural patterns are provided "AS IS" without warranty of any kind.&lt;/p&gt;

&lt;p&gt;COMPLIANCE: This contribution is made in good faith under the MIT-0 License for any included source code patterns.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>google</category>
      <category>terraform</category>
    </item>
    <item>
      <title>🚀 Mastering the Hybrid Workflow: AWS Development with Kiro Pro (IDE + CLI)</title>
      <dc:creator>luis zuñiga</dc:creator>
      <pubDate>Wed, 15 Apr 2026 15:48:29 +0000</pubDate>
      <link>https://dev.to/exegol/title-mastering-the-hybrid-workflow-aws-development-with-kiro-pro-ide-cli-1498</link>
      <guid>https://dev.to/exegol/title-mastering-the-hybrid-workflow-aws-development-with-kiro-pro-ide-cli-1498</guid>
      <description>&lt;p&gt;Stop choosing between a GUI and a Terminal. Learn how to leverage Kiro Pro, AWS MCPs, and a hybrid Windows/Linux workflow to build secure, well-architected cloud projects from scratch to deployment.&lt;/p&gt;

&lt;p&gt;Hey fellow builders! 👋&lt;/p&gt;

&lt;p&gt;As a Cloud Engineer, I’m always looking for ways to bridge the gap between "writing code" and "deploying securely." Lately, I’ve been experimenting with Kiro Pro and its AWS Model Context Protocol (MCP) integration.&lt;/p&gt;

&lt;p&gt;I’ve found that the secret sauce isn't just using the tool—it's how you split the work between Windows (IDE) for planning and Linux (CLI) for the heavy lifting. I recently published a deep dive on this methodology, which you can check out here:&lt;/p&gt;

&lt;p&gt;👉 The Art of Hybrid Development: Optimizing Application Lifecycles with Kiro Pro and AWS&lt;/p&gt;

&lt;p&gt;Here’s the breakdown of one possible professional workflow for building high-quality AWS projects.&lt;/p&gt;

&lt;p&gt;🏗️ Phase 1: The "Architect" (Kiro IDE on Windows)&lt;br&gt;
I recommend using the IDE when you are in "creation mode." It’s where the logic is born.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Setup&lt;br&gt;
First, ensure your AWS MCPs are active. This gives the AI the "eyes" it needs to see your AWS environment in real-time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The "Requirements-First" Strategy&lt;br&gt;
Don't just prompt "build an app." Create a requirements.md file. This is your project's source of truth. Include:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;User Stories: Step-by-step behavior.&lt;/p&gt;

&lt;p&gt;Hard Constraints: e.g., "Never hardcode credentials—use AWS Secrets Manager." 🛡️&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Visualizing with ASCII&lt;br&gt;
Ask Kiro to generate an ASCII Flowchart. Seeing the data flow between the Frontend, Backend, and S3/DynamoDB in plain text helps catch logic flaws before you even start coding.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Power Move: Claude 3.5/Opus&lt;br&gt;
When it's time to generate the code, I suggest switching to Claude 3.5 Sonnet or Opus. The reasoning capabilities for Infrastructure as Code (IaC) are top-tier. Once validated, push everything to a private GitHub repo.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;🐧 Phase 2: The "Operator" (Kiro CLI on Linux)&lt;br&gt;
When it's time to get "dirty" with deployment, moving to a Linux VM ensures superior operational control.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Secure Credential Injection&lt;br&gt;
Use a custom script to inject temporary AWS credentials. No long-term keys, no leaks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deep Contextualization&lt;br&gt;
Once you clone the repo, run:&lt;br&gt;
kiro prompt "Deep dive into local repo XXX and build full context."&lt;br&gt;
This ensures the CLI knows exactly what was built in Phase 1.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The "Least Privilege" Audit 🔍&lt;br&gt;
This is the most critical step. Ask Kiro to:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Validate credentials against the repo resources.&lt;/p&gt;

&lt;p&gt;Output a JSON of required permissions. This allows for the creation of a scoped IAM Policy that follows the Principle of Least Privilege before hitting "deploy."&lt;/p&gt;
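&lt;p&gt;The JSON output of that audit maps naturally onto a scoped policy document. A minimal sketch in Python (the action list and bucket ARN are hypothetical examples, not Kiro's actual output format):&lt;/p&gt;

```python
import json

# Hypothetical permissions the audit discovered against the repo resources.
required_actions = ["s3:GetObject", "s3:PutObject", "dynamodb:GetItem"]

def build_scoped_policy(actions, resource_arn):
    """Assemble a least-privilege IAM policy covering only the given actions."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": sorted(actions), "Resource": resource_arn}
        ],
    }

policy = build_scoped_policy(required_actions, "arn:aws:s3:::example-bucket/*")
print(json.dumps(policy, indent=2))
```

&lt;p&gt;Attaching a policy generated this way, rather than a broad managed policy, is what makes the "audit before deploy" step pay off.&lt;/p&gt;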

&lt;ol start="4"&gt;
&lt;li&gt;The "Smart" README &amp;amp; Technical Memory
After a successful deploy, let Kiro handle the documentation. It can update the README.md with the actual execution findings and generate a Technical Memory file for future reference.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Pro-Tips for the Community&lt;br&gt;
Sync your MCPs: If your IDE knows something your CLI doesn't, you're going to have a bad time. Keep them updated.&lt;/p&gt;

&lt;p&gt;Separation of Concerns: Use the IDE for Design and the CLI for Implementation.&lt;/p&gt;

&lt;p&gt;Score your work: Ask Kiro for a "Project Score" at the end to see where you can optimize your AWS architecture.&lt;/p&gt;

&lt;p&gt;What about you? Are you using Kiro or other AI-assisted tools for your AWS deployments? Check out my full article on AWS Builder Center and let’s discuss in the comments! 👇&lt;/p&gt;

&lt;p&gt;#aws #cloudcomputing #devops #kiropro #productivity #architecture #awsbuilders&lt;/p&gt;

&lt;p&gt;⚖️ Technical &amp;amp; Legal Safe Harbor Disclaimer&lt;br&gt;
AUTHORSHIP AND INDEPENDENT CAPACITY: This publication is authored solely by me in my individual and private capacity. The views, methodologies, and technical workflows expressed herein are my own and do not necessarily reflect the official policy, position, or strategic direction of my current or former employers, clients, or any legal entity I am affiliated with.&lt;/p&gt;

&lt;p&gt;INTELLECTUAL PROPERTY &amp;amp; CONFIDENTIALITY COMPLIANCE:&lt;/p&gt;

&lt;p&gt;Zero Proprietary Disclosure: This content has been developed using publicly available information, official documentation, and personal research. No confidential information, trade secrets, internal proprietary source code, or non-public infrastructure schemas belonging to my employer or any third party have been used, referenced, or disclosed in this publication.&lt;/p&gt;

&lt;p&gt;Independent Development: The workflows described (including the Kiro Pro / AWS hybrid methodology) are based on general industry best practices and were not developed as a "work for hire" or as part of specific assigned duties for any organization.&lt;/p&gt;

&lt;p&gt;Standard Industry Tools: References to third-party tools (AWS, Kiro, Anthropic/Claude) are for educational purposes and based on commercially available features.&lt;/p&gt;

&lt;p&gt;LIMITATION OF LIABILITY (NO WARRANTY): All code snippets, scripts, and architectural patterns are provided "AS IS" without warranty of any kind, express or implied, including but not limited to the warranties of merchantability or fitness for a particular purpose. In no event shall the author be liable for any claim, damages, or other liability arising from the use of this technical information.&lt;/p&gt;

&lt;p&gt;COMPLIANCE: This contribution is made in good faith and intended to foster community knowledge under the AWS Builder Terms and the MIT-0 License for any included source code.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cli</category>
      <category>mcp</category>
      <category>tooling</category>
    </item>
    <item>
      <title>From Learning to Practice: Why I Decided to Stop Studying Privately and Start Sharing 🚀</title>
      <dc:creator>luis zuñiga</dc:creator>
      <pubDate>Thu, 02 Apr 2026 02:38:58 +0000</pubDate>
      <link>https://dev.to/exegol/del-aprendizaje-a-la-practica-por-que-decidi-dejar-de-estudiar-en-privado-y-empezar-a-compartir-298g</link>
      <guid>https://dev.to/exegol/del-aprendizaje-a-la-practica-por-que-decidi-dejar-de-estudiar-en-privado-y-empezar-a-compartir-298g</guid>
      <description>&lt;p&gt;Hi everyone! 👋&lt;/p&gt;

&lt;p&gt;I’ve spent a long time immersed in courses, labs, and documentation. For months (and even years), my focus has been on absorbing as much as possible about Cloud Engineering, Data Analysis, and beyond. However, today I’ve made an important decision: to stop keeping my projects in local folders and start sharing them with the community.&lt;/p&gt;

&lt;p&gt;I’ve realized that the best way to grow isn’t just by studying, but by exposing my work to the eyes of other professionals—to receive feedback, improve, and hopefully help someone else on a similar path.&lt;/p&gt;

&lt;p&gt;🛠️ My first contribution: Data Processing&lt;/p&gt;

&lt;p&gt;Today, I’m sharing a repository I’ve been working on. It’s a tool designed to standardize and streamline data processing in Python, which is essential when seeking efficiency in Cloud environments.&lt;/p&gt;

&lt;p&gt;🌟 Why did I decide to publish it today?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validation: I want to know if the solutions I design follow industry best practices.&lt;/li&gt;
&lt;li&gt;Community: I firmly believe the developer ecosystem thrives when we share what we learn.&lt;/li&gt;
&lt;li&gt;Transparency: This marks the beginning of my public portfolio, showcasing my real evolution in technologies like AWS and GCP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💻 Explore the repository&lt;/p&gt;

&lt;p&gt;I invite you to check out the code, clone it, or even critique it (constructively, of course!). Any suggestions on how to optimize it are more than welcome.&lt;/p&gt;

&lt;p&gt;👉 Project URL: &lt;a href="https://github.com/luiszuniga1990/data-processing.git" rel="noopener noreferrer"&gt;https://github.com/luiszuniga1990/data-processing.git&lt;/a&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>datascience</category>
      <category>learning</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
