<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marcelo Magario</title>
    <description>The latest articles on DEV Community by Marcelo Magario (@marcelomagario).</description>
    <link>https://dev.to/marcelomagario</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3228115%2Fcbefda5d-9cfe-4cea-afc8-0cd046ed8008.jpeg</url>
      <title>DEV Community: Marcelo Magario</title>
      <link>https://dev.to/marcelomagario</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/marcelomagario"/>
    <language>en</language>
    <item>
      <title>Another E2E Solution delivered. This time with CI/CD, AWS EventBridge and ECS Fargate</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Tue, 10 Mar 2026 01:17:40 +0000</pubDate>
      <link>https://dev.to/marcelomagario/another-e2e-solution-delivered-this-time-with-cicd-aws-eventbridge-and-ecs-fargate-4515</link>
      <guid>https://dev.to/marcelomagario/another-e2e-solution-delivered-this-time-with-cicd-aws-eventbridge-and-ecs-fargate-4515</guid>
      <description>&lt;p&gt;I recently completed a personal project focused on automating a password rotation process for a third-party system.&lt;/p&gt;

&lt;p&gt;This integration requires authentication, but the system enforces a monthly password rotation. When the password expires, uploads and downloads start failing, which quickly turns into an operational issue.&lt;/p&gt;

&lt;p&gt;To remove the need for manual updates and the risk of someone simply forgetting, I built an automation to handle this end to end.&lt;/p&gt;

&lt;p&gt;The solution is a Python worker using Selenium with headless Chromium, executed on a schedule and backed by a full CI/CD pipeline. On every push to the main branch, GitHub Actions assumes an AWS IAM Role via OIDC, builds the Docker image, and pushes it to Amazon ECR. The workflow then registers a new ECS Task Definition revision, updating only the container image.&lt;/p&gt;

&lt;p&gt;This is the architecture design:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvy0y1nd8mdfhrcqn3lly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvy0y1nd8mdfhrcqn3lly.png" alt=" " width="717" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CI/CD:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ouy3bfovlqdqkp7hk39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ouy3bfovlqdqkp7hk39.png" alt=" " width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Execution is handled by Amazon EventBridge, which triggers the task every 29 days:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wluw0bly54sse386dl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wluw0bly54sse386dl2.png" alt=" " width="800" height="119"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ECS Cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpit26jnrudvq2ys6g0mh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpit26jnrudvq2ys6g0mh.png" alt=" " width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5buf1cepzrry24ew4aye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5buf1cepzrry24ew4aye.png" alt=" " width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The task runs on ECS Fargate in a public subnet, with a public IP and outbound traffic allowed. When triggered, Fargate starts the container, runs automation.py, launches Selenium with Chromium and Chromedriver, logs into the system, performs the password rotation, and exits. On success, the task finishes automatically with exit code 0. If an exception occurs, logs are sent to CloudWatch and the error is reported to a Slack alerts channel.&lt;/p&gt;
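The run-once contract described above (do the job, exit 0, otherwise alert and exit non-zero) can be sketched as follows. The `runJob` helper, the stub task, and the notify callback are all hypothetical stand-ins for the real Selenium worker and Slack webhook:

```javascript
// Minimal sketch of a run-once batch worker: perform the job,
// exit 0 on success, report the failure and return 1 otherwise.
// `notify` stands in for the Slack webhook call (hypothetical).
async function runJob(task, notify) {
  try {
    await task();
    return 0; // ECS sees a clean exit and marks the task as succeeded
  } catch (err) {
    await notify(`Password rotation failed: ${err.message}`);
    return 1; // a non-zero exit surfaces the failure in ECS/CloudWatch
  }
}

// Example wiring with stubs instead of Selenium and Slack:
const alerts = [];
runJob(async () => { throw new Error('login timeout'); },
       async (msg) => { alerts.push(msg); })
  .then((code) => console.log(code, alerts[0]));
```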

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0w7am8s403owc70altsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0w7am8s403owc70altsb.png" alt=" " width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture decisions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using ECS Fargate instead of Lambda was a deliberate decision. Running Selenium with Chromium on Lambda usually requires custom layers and fine-tuning, and it’s easy to hit limits around memory, package size, or execution time. With Fargate, the entire environment is packaged in the Docker image, with predictable runtime behavior and flexible CPU and memory allocation, which makes this kind of workload much easier to operate.&lt;/p&gt;

&lt;p&gt;In the end, this is a simple batch worker. It runs on a schedule, does one job, and exits. For headless browser automation, this approach turned out to be more straightforward and reliable.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>cicd</category>
      <category>python</category>
    </item>
    <item>
      <title>Hotfix Story: Fixing Password Reset Under Pressure</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Thu, 21 Aug 2025 11:58:45 +0000</pubDate>
      <link>https://dev.to/marcelomagario/hotfix-story-fixing-password-reset-under-pressure-2l8b</link>
      <guid>https://dev.to/marcelomagario/hotfix-story-fixing-password-reset-under-pressure-2l8b</guid>
      <description>&lt;p&gt;Sometimes in software development, business priorities demand immediate action. That was the case recently when our password reset flow started failing for users, while at the same time we received feedback that the recovery emails were unclear and hard to identify. Our development environment was full of ongoing changes, so the safest path was to deliver a focused hotfix straight to production.&lt;/p&gt;

&lt;p&gt;Looking at the AWS ECS logs, I saw that the app was crashing with a Redis &lt;code&gt;ReferenceError&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Two main problems had to be solved:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Email recovery template&lt;/strong&gt; – Investors reported that password recovery emails were in English only and didn’t include any reference to the company. This made them hard to find and often overlooked. The fix required introducing support for multiple languages, defaulting to English when no language was provided, and including the company name in the email subject.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redis v4 migration issues&lt;/strong&gt; – At the same time, our password reset functionality was broken due to a Redis upgrade. Moving to Redis v4 exposed a scoping issue: the Redis client wasn’t being referenced properly, causing a &lt;code&gt;ReferenceError&lt;/code&gt; (client not found) that crashed our app, as seen in the ECS logs. Tokens couldn’t be set or validated, which meant the password reset flow failed completely. Fixing this required revisiting the Redis initialization, ensuring proper connection handling, and maintaining compatibility with parts of the codebase still expecting the legacy behavior.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overcoming the Issues&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The hotfix involved:&lt;/p&gt;

&lt;p&gt;Creating a new email template in Portuguese, with dynamic company branding in the subject line. To bring the company's name into the email subject, I had to send a &lt;code&gt;company UUID&lt;/code&gt; to another system and get the company name back.&lt;/p&gt;

&lt;p&gt;Adding support for a language parameter in requests, with a fallback to English if none was provided.&lt;/p&gt;
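A minimal sketch of that fallback logic; the template shape and subject texts below are made up for illustration:

```javascript
// Pick an email template by language, falling back to English when the
// requested language has no template, and brand the subject line with
// the company name. Template texts are hypothetical.
const templates = {
  en: { subject: (company) => `${company}: reset your password` },
  pt: { subject: (company) => `${company}: redefina sua senha` },
};

function buildSubject(lang, companyName) {
  const template = templates[lang] || templates.en; // default to English
  return template.subject(companyName);
}

console.log(buildSubject('pt', 'Acme')); // uses the Portuguese template
console.log(buildSubject('de', 'Acme')); // no German template: falls back to English
```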

&lt;p&gt;Refactoring Redis initialization to ensure the client was accessible across the app.&lt;/p&gt;

&lt;p&gt;Adjusting for Redis v4 changes while keeping legacy expectations working until the rest of the system could be migrated.&lt;/p&gt;
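The Redis part of the fix can be sketched as a shared-module singleton, so every part of the app references the same, initialized client and no code path hits an undefined one. The stub below stands in for node-redis v4's `createClient()`/`connect()` calls; this illustrates the pattern, not the actual hotfix code:

```javascript
// Shared-module singleton: initialize the client once, hand out the
// same instance everywhere. The stub replaces a real Redis connection.
let client = null;

function createStubClient() {
  const store = new Map();
  return {
    set: async (k, v) => { store.set(k, v); },
    get: async (k) => (store.has(k) ? store.get(k) : null),
  };
}

async function getRedis() {
  if (!client) {
    // real version: client = createClient(); await client.connect();
    client = createStubClient();
  }
  return client;
}

// Any module can now do: const redis = await getRedis();
getRedis().then(async (redis) => {
  await redis.set('reset:token', 'abc123');
  console.log(await redis.get('reset:token'));
});
```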

&lt;p&gt;&lt;strong&gt;Business Impact&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These fixes had direct and immediate value:&lt;/p&gt;

&lt;p&gt;Clearer communication: Password recovery emails now include the company name and are localized, reducing confusion for end users.&lt;/p&gt;

&lt;p&gt;Restored critical functionality: Users can once again reset their passwords without encountering errors, eliminating support tickets and frustration.&lt;/p&gt;

&lt;p&gt;Stability under change: Despite ongoing development work, the hotfix restored confidence that critical user flows remain reliable.&lt;/p&gt;

&lt;p&gt;These changes turned a broken, confusing experience into a smooth, branded, and functional process. It was a reminder that sometimes the fastest path to protecting user trust is a focused hotfix, even when the technical challenges involve version upgrades and tricky scope bugs.&lt;/p&gt;

</description>
      <category>redis</category>
      <category>hotfix</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Improving Upload Performance: Why I Chose Node.js Over Python</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Fri, 01 Aug 2025 23:34:14 +0000</pubDate>
      <link>https://dev.to/marcelomagario/improving-upload-performance-why-i-chose-nodejs-over-python-2hki</link>
      <guid>https://dev.to/marcelomagario/improving-upload-performance-why-i-chose-nodejs-over-python-2hki</guid>
      <description>&lt;p&gt;Handling large file (20GB +) can be tough. I'm studying a way to improve performance on our file manager system when uploading a large files. The initial idea is to create a PoC to check performance between Node.JS and Python workers. &lt;/p&gt;

&lt;p&gt;With this feature, users paste a Google Drive link, and in the background the system downloads the file and uploads it to AWS S3 — all without blocking the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What the System Does&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User pastes a Google Drive link&lt;/li&gt;
&lt;li&gt;The system adds a job to a queue&lt;/li&gt;
&lt;li&gt;A background worker downloads the file&lt;/li&gt;
&lt;li&gt;The file is uploaded to AWS S3&lt;/li&gt;
&lt;li&gt;The user gets notified when it’s done&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Main Goals&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handle big files (20GB+)&lt;/li&gt;
&lt;li&gt;Use low memory (no saving to disk)&lt;/li&gt;
&lt;li&gt;Be fast and reliable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Python vs Node.js&lt;/strong&gt;&lt;br&gt;
I tested both. Python (with FastAPI and boto3) works, but from what I've seen in my research, streaming big files needs extra care.&lt;/p&gt;

&lt;p&gt;Node.js is built for streaming. Using &lt;code&gt;stream&lt;/code&gt;, &lt;code&gt;fs&lt;/code&gt;, and &lt;code&gt;@aws-sdk/client-s3&lt;/code&gt;, I could pipe the file directly from Google Drive to S3. No buffering, no saving to disk, and memory stays low.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I Picked Node.js&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streams are native and simple&lt;/li&gt;
&lt;li&gt;Uses less memory&lt;/li&gt;
&lt;li&gt;It seems to be more stable for this kind of task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For moving large files from cloud to cloud, Node.js seems to be a better and more efficient choice.&lt;/p&gt;

&lt;p&gt;I’ll post the solution soon, once it’s done!&lt;/p&gt;

</description>
      <category>performance</category>
      <category>refactoring</category>
      <category>worker</category>
      <category>queues</category>
    </item>
    <item>
      <title>The RabbitMQ queue got stuck - How I solved this bug</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Thu, 17 Jul 2025 17:38:28 +0000</pubDate>
      <link>https://dev.to/marcelomagario/payload-validation-3kh1</link>
      <guid>https://dev.to/marcelomagario/payload-validation-3kh1</guid>
      <description>&lt;p&gt;Recently I faced a problem at work: our mailing system stopped working and a heap of emails weren’t being sent. When we looked at RabbitMQ:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10n7pfjroehi58qjx0vr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10n7pfjroehi58qjx0vr.png" alt=" " width="723" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There was one “Unacked” message stuck on the consumer, and lots of “Ready” messages were piling up.&lt;/p&gt;

&lt;p&gt;After investigating, we found that our mailer supplier’s API broke because the mailing payload was over 30 MB. Digging deeper, we saw that one user had sent a mailing with a 70 MB attachment.&lt;/p&gt;

&lt;p&gt;First, I tried adding attachment-file-size validation and overall payload-size validation (body + attachments) on our backend. But the problem was that the user only saw “the file is too large” once they tried to save or send the mailing. So, after analyzing our process, I ended up putting validation in both the frontend and backend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Validating the attachment size on the frontend lets the user know immediately if the file is too large:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const handleAttachment = () =&amp;gt; {
  if (selectedFiles &amp;amp;&amp;amp; selectedFiles[0]) {
    const sizeInMB = selectedFiles[0].size / (1024 * 1024)
    if (sizeInMB &amp;gt; 9) {
      setShowSizeError(true)
      return
    }
  }
  closeModal()
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To make sure our mailer supplier’s API won’t break again, I added payload validation on our backend. It calculates the size of all attachments plus the email content, then checks whether the total size exceeds our limit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// helper/requestSize.js
module.exports.computeRequestSize = (request) =&amp;gt; {
  let totalSize = 0
  let bodyBytes = 0
  let attachmentsBytes = 0

  if (request.body) {
    const serialized = JSON.stringify(request.body)
    bodyBytes = Buffer.byteLength(serialized, 'utf8')
    totalSize += bodyBytes
  }

  if (request.files) {
    Object.values(request.files).forEach(file =&amp;gt; {
      attachmentsBytes += file.size
    })
    totalSize += attachmentsBytes
  }

  return totalSize
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and on the endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const payloadSize = util.computeRequestSize(request)
if (payloadSize &amp;gt; CONFIG.MAX_REQUEST_SIZE) {
  response.sendError400(CONFIG.ERROR_PAYLOAD_TOO_LARGE)
  return next()
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;For safety reasons, I always change my code a little before posting it here.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now we know this won’t happen again. &lt;br&gt;
What would you do?&lt;/p&gt;

</description>
      <category>mailer</category>
      <category>bug</category>
      <category>fullstack</category>
    </item>
    <item>
      <title>New project deployed on AWS! Wi-Fi Connectivity Notification!</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Wed, 09 Jul 2025 23:15:46 +0000</pubDate>
      <link>https://dev.to/marcelomagario/new-project-deployed-on-aws-wi-fi-connectivity-notification-3fcn</link>
      <guid>https://dev.to/marcelomagario/new-project-deployed-on-aws-wi-fi-connectivity-notification-3fcn</guid>
      <description>&lt;p&gt;So, as usually the project starts to solve a problem this one is not different either. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My family has a beach house, and very often we go there only to find out, once we arrive, that the internet is not working! So annoying! That's really bad because sometimes we have plenty of work to do and can't do much. It's so frustrating to go there and waste the afternoon calling and waiting for the technician to fix it. &lt;/p&gt;

&lt;p&gt;So I decided to solve it for good! &lt;br&gt;
I was thinking: what if we could know whether the internet is working before we even arrive? Even better, what if we could be notified when the internet goes down or comes back? Well, that's when the idea was born. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The solution&lt;/strong&gt;&lt;br&gt;
I took my brother's really old Galaxy A5 (2016), falling apart lol, that was just sitting in a drawer, and turned it into a simple internet monitor. Every 6 hours, it sends a quick GET request to a &lt;code&gt;/heartbeat&lt;/code&gt; endpoint I created on a Node.js backend. This interval can be configured in the &lt;code&gt;.env&lt;/code&gt; file to fit your needs.&lt;/p&gt;

&lt;p&gt;The backend is deployed on an AWS EC2 instance and already running in production.&lt;/p&gt;

&lt;p&gt;If that heartbeat stops coming, the backend notices and sends me an email using AWS SES. When the connection comes back, I get another email letting me know it's back online.&lt;/p&gt;
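The heartbeat logic can be sketched like this; `recordPing`, `isDown`, and the grace window are illustrative — only the 6-hour interval comes from the setup above:

```javascript
// The backend stores the time of the last ping; a periodic checker
// decides whether the connection should be considered down.
const SIX_HOURS_MS = 6 * 60 * 60 * 1000;
const GRACE_MS = 30 * 60 * 1000; // slack for a slightly late ping

let lastPing = Date.now();

// Called by the /heartbeat endpoint handler:
function recordPing(now) { lastPing = now; }

// Called periodically (e.g. from setInterval) to decide on alerts:
function isDown(now) {
  return now - lastPing > SIX_HOURS_MS + GRACE_MS;
}

const now = Date.now();
recordPing(now);
console.log(isDown(now + SIX_HOURS_MS));     // still inside the window: false
console.log(isDown(now + 2 * SIX_HOURS_MS)); // a missed cycle: true
```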

&lt;p&gt;&lt;strong&gt;The Stack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Galaxy A5 (2016): Sends a simple GET request every 6 hours. To send the requests, I'm using MacroDroid.&lt;/p&gt;

&lt;p&gt;AWS EC2 (Node.js): Stores the last ping timestamp and checks for missed heartbeats&lt;/p&gt;

&lt;p&gt;AWS SES: Sends email alerts if the device goes silent or comes back&lt;/p&gt;

&lt;p&gt;Here's what the &lt;strong&gt;architecture&lt;/strong&gt; looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1cvs04trrfl12cn9fad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1cvs04trrfl12cn9fad.png" alt=" " width="630" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s deployed, running, and giving me peace of mind!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrllw5x32vrde2l3zidm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrllw5x32vrde2l3zidm.png" alt=" " width="800" height="690"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag6rt78ydl9qgfpjati0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag6rt78ydl9qgfpjati0.png" alt=" " width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have a look at the code...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Github REPO:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/marcelomagario/wifi-check" rel="noopener noreferrer"&gt;https://github.com/marcelomagario/wifi-check&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Difficulties &amp;amp; Learnings:&lt;/strong&gt;&lt;br&gt;
AWS SES Region: I learned that email verification is region-specific. My EC2 was in Ohio (us-east-2), but my SES was in N. Virginia (us-east-1). I had to verify my email in the correct region to make it work.&lt;br&gt;
Email Verification: SES requires all sender and (in sandbox mode) recipient emails to be verified. This step is easy to miss!&lt;br&gt;
Spam Issues: Emails sent from SES using a Gmail address as the sender often end up in spam. Using a custom domain is the best way to improve deliverability (but I didn't do that because I didn't want to pay! Cheap bastard! heheh).&lt;br&gt;
Cost: By using a t2.micro EC2 instance and SES Free Tier, I can run this project 24/7 at almost zero cost.&lt;/p&gt;

</description>
      <category>iot</category>
      <category>galaxy</category>
      <category>aws</category>
      <category>node</category>
    </item>
    <item>
      <title>... from now on, English only!</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Wed, 09 Jul 2025 22:44:43 +0000</pubDate>
      <link>https://dev.to/marcelomagario/-from-now-on-english-only-4ell</link>
      <guid>https://dev.to/marcelomagario/-from-now-on-english-only-4ell</guid>
      <description>&lt;p&gt;Hey guys,&lt;/p&gt;

&lt;p&gt;I decided that from now on I will stick to English only in my posts. &lt;/p&gt;

&lt;p&gt;That way I might be able to reach more devs around the globe, not only in Portuguese-speaking countries, right?&lt;/p&gt;

&lt;p&gt;Furthermore, I will try to avoid getting help from any AI or translation services, because I also want to measure my English grammar skills. So, sorry in advance for my grammar mistakes heheh!&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>career</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I containerized my full-stack project with Docker/Docker Compose</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Mon, 09 Jun 2025 21:16:21 +0000</pubDate>
      <link>https://dev.to/marcelomagario/conteinerizei-o-meu-projeto-full-stack-com-o-dockerdocker-compose-4k09</link>
      <guid>https://dev.to/marcelomagario/conteinerizei-o-meu-projeto-full-stack-com-o-dockerdocker-compose-4k09</guid>
      <description>&lt;p&gt;Fala pessoal! lembra do meu projeto portfólio que estou desenvolvendo?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/marcelomagario/portfolio" rel="noopener noreferrer"&gt;https://github.com/marcelomagario/portfolio&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today I decided to containerize it using &lt;strong&gt;Docker&lt;/strong&gt; and orchestrate it with &lt;strong&gt;Docker Compose&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;I split it into 2 different containers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Frontend (React/Vite)&lt;/li&gt;
&lt;li&gt;Backend (Node.js/TypeScript)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;and configured them as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Dockerfile&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5173
ENV HOST=0.0.0.0
ENV PORT=5173
CMD ["npm", "run", "dev"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Backend:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Dockerfile&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["npx", "ts-node", "src/app.ts"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the orchestration, I set things up so that the frontend depends on the backend:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  frontend:
    build: 
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "5173:5173"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    depends_on:
      - backend

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    volumes:
      - ./backend:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    env_file:
      - ./backend/.env
    depends_on:
      - postgres

  postgres:
    image: postgres:16-alpine
    ports:
      - "5434:5432"
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./postgres-init:/docker-entrypoint-initdb.d

volumes:
  postgres_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, to run my project, all it takes is &lt;code&gt;docker-compose up&lt;/code&gt;. Setup got easier and the environment is more consistent! &lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I provisioned my PostgreSQL DB on AWS RDS</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Mon, 09 Jun 2025 00:22:15 +0000</pubDate>
      <link>https://dev.to/marcelomagario/como-migrei-meu-bd-postgresql-para-o-aws-rds-5255</link>
      <guid>https://dev.to/marcelomagario/como-migrei-meu-bd-postgresql-para-o-aws-rds-5255</guid>
      <description>&lt;p&gt;Provisionei meu primeiro banco de dados PostgreSQL gerenciado na AWS usando o serviço &lt;code&gt;RDS (Relational Database Service)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Before starting, I researched the benefits of AWS RDS and instantly realized that instead of installing, configuring, and maintaining PostgreSQL myself on a server (EC2), AWS would do that work for me, with the added benefits of version upgrades, security patches, automatic backups, scalability, monitoring, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I Created the Database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The process in the AWS Console was quite intuitive. Since I'm focused on learning and avoiding unnecessary expenses, I used the "Free Tier" option to make sure this development environment would cost me nothing.&lt;/p&gt;

&lt;p&gt;Credentials: I defined a DB instance identifier (my server's name, which I called portfolio-db-instance) and the access credentials (master username and password). RDS used this information to create the database's main user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connectivity: this was the most critical part of the configuration!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Public Access:&lt;/strong&gt; To be able to connect from my local machine, I set the "Public access" option to "Yes". I learned that in a production environment this is bad security practice; the ideal is to keep the database in a private subnet, accessible only from inside AWS. &lt;/p&gt;

&lt;p&gt;VPC Security Group: this is the instance's virtual firewall. I created a new group (portfolio-db-sg) that, by default, blocked all inbound traffic.&lt;/p&gt;

&lt;p&gt;After creation, RDS gave me an Endpoint. This became my database host, a unique address in the format &lt;code&gt;nome-da-instancia.xxxxxxxxxxxx.regiao.rds.amazonaws.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I Configured Access 🛡️&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, even with public access enabled, I found that the firewall (Security Group) still blocked my connection. To fix it, I had to create an allow rule.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I navigated to the Security Group configuration associated with my RDS instance.&lt;/li&gt;
&lt;li&gt;I added an Inbound Rule.&lt;/li&gt;
&lt;li&gt;I configured the rule as follows:
Type: PostgreSQL (which pre-filled port 5432).
Source: My IP (AWS detected and filled in my public IP).
This rule told the firewall to allow connections on port 5432 only from my IP address.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that it's running, I'm going to dig deeper into &lt;code&gt;CloudWatch&lt;/code&gt; and &lt;code&gt;RDS Performance Insights&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbpydvb6t7uyvspkqagj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbpydvb6t7uyvspkqagj.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>awsrds</category>
      <category>postgres</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>My Authentication Implementation with JWT and Bcrypt</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Sat, 07 Jun 2025 21:19:26 +0000</pubDate>
      <link>https://dev.to/marcelomagario/minha-implementacao-de-autenticacao-com-jwt-e-bcrypt-1a53</link>
      <guid>https://dev.to/marcelomagario/minha-implementacao-de-autenticacao-com-jwt-e-bcrypt-1a53</guid>
      <description>&lt;p&gt;Eai devs!&lt;/p&gt;

&lt;p&gt;In this post, I'll detail step by step how I implemented stateless authentication using &lt;code&gt;Node.js&lt;/code&gt; with &lt;code&gt;TypeScript&lt;/code&gt;, &lt;code&gt;Express&lt;/code&gt;, &lt;code&gt;PostgreSQL&lt;/code&gt;, and the duo of &lt;code&gt;bcryptjs&lt;/code&gt; for password hashing and JSON Web Tokens (&lt;code&gt;JWT&lt;/code&gt;) for session management in my personal project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To keep the code organized, I split the logic into three main parts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. authRoutes.ts:&lt;/strong&gt; Defines the &lt;code&gt;/register&lt;/code&gt; and &lt;code&gt;/login&lt;/code&gt; endpoints. Its sole responsibility is to route requests to the right controller.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. authController.ts:&lt;/strong&gt; Holds the business logic. This is where we validate the data, talk to the database, and decide which response to send.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. utils/auth.ts (helpers):&lt;/strong&gt; Supporting functions for hashing passwords and generating tokens, so we don't repeat code. I found it more organized this way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: User Registration and Password Hashing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We should never store passwords in plain text in the database. The first step is to make sure the user's password is turned into an irreversible hash. For this, we use &lt;code&gt;bcryptjs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;authController.ts&lt;/code&gt; controller handles this process in the &lt;code&gt;registerUser&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import bcrypt from 'bcryptjs';

// Função conceitual que criei para hashear uma senha
async function criarHashSenha(senha: string): Promise&amp;lt;string&amp;gt; {
    // O 'salt' adicionou uma camada extra de segurança ao hash
    const salt = await bcrypt.genSalt(10);
    const hash = await bcrypt.hash(senha, salt);
    return hash;
}

// Como eu usei:
// const hashDaSenha = await criarHashSenha(senhaDoUsuario);
// E então, salvei o `hashDaSenha` no meu banco de dados.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;hashPassword&lt;/code&gt; function in &lt;code&gt;utils/auth.ts&lt;/code&gt; would be something simple like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// utils/auth.ts
import bcrypt from 'bcryptjs';

export const hashPassword = async (password: string): Promise&amp;lt;string&amp;gt; =&amp;gt; {
    const salt = await bcrypt.genSalt(10); // Gera um "sal" para fortalecer o hash
    return await bcrypt.hash(password, salt);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Generating the JWT Access Token&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After validating the password at login with &lt;code&gt;bcrypt.compare()&lt;/code&gt;, the next step I implemented was generating the token. One point I was careful about: never leave the secret key in the code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import jwt from 'jsonwebtoken';

// Função conceitual que usei para gerar um token
function gerarToken(idDoUsuario: string): string {

    // 🔑 A chave secreta busquei das minhas variáveis de ambiente!
    const chaveSecreta = process.env.JWT_SECRET;

    if (!chaveSecreta) {
        throw new Error('Chave secreta do JWT não definida!');
    }

    const payload = { id: idDoUsuario };

    // Assinei o token com a chave e defini um tempo de expiração
    return jwt.sign(payload, chaveSecreta, {
        expiresIn: '1h' // Token expira em 1 hora
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Protection Middleware&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the gatekeeper I built for my routes. In this example, I focused only on the token-verification logic, which is the heart of the process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Middleware conceitual que criei para proteger as rotas
function protegerRota(req: Request, res: Response, next: NextFunction) {
    const authHeader = req.headers.authorization;

    if (!authHeader || !authHeader.startsWith('Bearer ')) {
        // Neguei o acesso se o token não foi fornecido
        return res.status(401).json({ message: 'Acesso negado: token não fornecido.' });
    }

    try {
        const token = authHeader.split(' ')[1];
        const chaveSecreta = process.env.JWT_SECRET as string;

        // jwt.verify() validou o token. Se fosse inválido, dispararia um erro.
        const payloadVerificado = jwt.verify(token, chaveSecreta);

        next(); // Token válido, liberei o acesso!
    } catch (error) {
        // Neguei o acesso se o token era inválido
        return res.status(401).json({ message: 'Token inválido ou expirado.' });
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What do you think? Would you have done anything differently? &lt;/p&gt;

</description>
      <category>autenticacao</category>
      <category>jwt</category>
      <category>bcrypt</category>
      <category>api</category>
    </item>
    <item>
      <title>How I fixed an HTML injection vulnerability today...</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Mon, 02 Jun 2025 16:16:59 +0000</pubDate>
      <link>https://dev.to/marcelomagario/blindando-seus-e-mails-contra-invasao-html-na-pratica-4jel</link>
      <guid>https://dev.to/marcelomagario/blindando-seus-e-mails-contra-invasao-html-na-pratica-4jel</guid>
      <description>&lt;p&gt;Hey devs! Today I had to fix some injection vulnerabilities (HTML injection into outgoing emails) flagged in a pentest report. The test simulated tampering with the request payload, inserting HTML and a script into the data in order to rewrite the entire email that was about to be sent.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It rewrote the entire original email thread.&lt;/li&gt;
&lt;li&gt;It hid the parts that actually mattered.&lt;/li&gt;
&lt;li&gt;It inserted fake phishing links that looked like the real ones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To fix this, I built a &lt;code&gt;middleware&lt;/code&gt; in Node.js in our project to validate and sanitize all data on the backend before it is used:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Grab the Request Data:&lt;/strong&gt; First, it pulls the data that arrived with the web request, specifically from its &lt;code&gt;body&lt;/code&gt;. That's where it expects to find information such as a user data package &lt;code&gt;userInputPayload&lt;/code&gt;, a hint for the email subject &lt;code&gt;emailSubjectHint&lt;/code&gt;, the main link &lt;code&gt;mainLink&lt;/code&gt;, and the brand image link &lt;code&gt;brandImageUrl&lt;/code&gt;.&lt;br&gt;
&lt;strong&gt;2. Essential Check:&lt;/strong&gt; It verifies that &lt;code&gt;userInputPayload&lt;/code&gt; (which should hold the client details such as name, email, company, phone, profile category, and data source) actually arrived and is a valid object. If not, it cuts the problem off at the root and reports that something is missing.&lt;br&gt;
&lt;strong&gt;3. Prepare the Inspection Tools (Regex):&lt;/strong&gt; The "gatekeeper" uses a few regular expressions to inspect the data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One regex specializes in sniffing out dangerous characters (such as &lt;code&gt;&amp;lt;, &amp;gt;, /, ', ", ;, =&lt;/code&gt;), the ones that open the door to HTML injection.&lt;/li&gt;
&lt;li&gt;Another regex focuses on making sure the email format (&lt;code&gt;clientEmail&lt;/code&gt;) is correct.&lt;/li&gt;
&lt;li&gt;A third checks that the URLs provided for &lt;code&gt;mainLink&lt;/code&gt; and &lt;code&gt;brandImageUrl&lt;/code&gt; follow the expected pattern (protocol, domain, etc.).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Field-by-Field Inspection:&lt;/strong&gt; With the tools in hand, it goes over each piece of information with a fine-tooth comb:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data such as the client name (&lt;code&gt;clientName&lt;/code&gt;), organization (&lt;code&gt;clientOrg&lt;/code&gt;), profile category (&lt;code&gt;profileCategory&lt;/code&gt;), data source tag (&lt;code&gt;dataSourceTag&lt;/code&gt;), and phone (&lt;code&gt;clientPhone&lt;/code&gt;) are checked against the dangerous-character regex.&lt;/li&gt;
&lt;li&gt;The client email (&lt;code&gt;clientEmail&lt;/code&gt;) has its format validated by the email-specific regex.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;emailSubjectHint&lt;/code&gt;, &lt;code&gt;mainLink&lt;/code&gt;, and &lt;code&gt;brandImageUrl&lt;/code&gt; fields are optional. If they are absent, that's fine. But when present, they are checked too: &lt;code&gt;emailSubjectHint&lt;/code&gt; against dangerous characters, and &lt;code&gt;mainLink&lt;/code&gt; and &lt;code&gt;brandImageUrl&lt;/code&gt; against the URL regex.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Final Decision:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Something wrong? Block it! If any of these fields fails inspection, the "gatekeeper" stops the process. It sends back an error response (a &lt;code&gt;400 Bad Request&lt;/code&gt;), reporting that the data is invalid or contains disallowed characters. This keeps "contaminated" data from moving forward.&lt;/li&gt;
&lt;li&gt;Everything in order? Go ahead! If all the data is clean and valid, the "gatekeeper" gives the green light, and the request continues its normal flow to the next step, which would be, for example, the logic that builds and sends the email.&lt;/li&gt;
&lt;li&gt;This approach helps ensure that only safe, well-formed data is used, drastically reducing the risk of HTML injection.&lt;/li&gt;
&lt;/ul&gt;
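
&lt;p&gt;To make the flow above concrete, here is a minimal, self-contained sketch of the validation core. The regexes and error messages are illustrative assumptions, not the exact production rules; only the field names follow the post:&lt;/p&gt;

```typescript
// Minimal sketch of the validation logic described above.
// The regexes are illustrative, not the exact production rules.

// Matches characters commonly abused for HTML injection
// (angle brackets, slash, quotes, semicolon, equals)
const dangerousChars = /[\u003C\u003E\/'";=]/;
const emailFormat = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
const urlFormat = /^https?:\/\/[^\s]+$/;

type Result = { valid: boolean; errors: string[] };

function validateEmailPayload(body: any): Result {
    const errors: string[] = [];
    const payload = body.userInputPayload;

    // Essential check: the payload must exist and be an object
    if (!payload || typeof payload !== 'object') {
        return { valid: false, errors: ['userInputPayload is missing or invalid.'] };
    }

    // Free-text fields are checked against the dangerous-character regex
    for (const field of ['clientName', 'clientOrg', 'profileCategory', 'dataSourceTag', 'clientPhone']) {
        const value = payload[field];
        if (typeof value === 'string') {
            if (dangerousChars.test(value)) {
                errors.push(field + ' contains disallowed characters.');
            }
        }
    }

    // The client email has its format validated
    if (!emailFormat.test(String(payload.clientEmail))) {
        errors.push('clientEmail has an invalid format.');
    }

    // Optional fields are only checked when present
    if (body.emailSubjectHint !== undefined) {
        if (dangerousChars.test(String(body.emailSubjectHint))) {
            errors.push('emailSubjectHint contains disallowed characters.');
        }
    }
    for (const field of ['mainLink', 'brandImageUrl']) {
        if (body[field] !== undefined) {
            if (!urlFormat.test(String(body[field]))) {
                errors.push(field + ' is not a valid URL.');
            }
        }
    }

    return { valid: errors.length === 0, errors };
}
```

&lt;p&gt;In the real middleware, a non-empty &lt;code&gt;errors&lt;/code&gt; array would translate into the &lt;code&gt;400 Bad Request&lt;/code&gt; response, and a valid result into a call to &lt;code&gt;next()&lt;/code&gt;.&lt;/p&gt;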

</description>
      <category>seguranca</category>
      <category>sqlinjection</category>
      <category>middleware</category>
      <category>node</category>
    </item>
    <item>
      <title>Building My Personal Portfolio Project with Deployment on AWS!</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Sun, 01 Jun 2025 20:40:47 +0000</pubDate>
      <link>https://dev.to/marcelomagario/criando-o-meu-projeto-de-portfolio-pessoal-com-deploy-na-aws-2ei9</link>
      <guid>https://dev.to/marcelomagario/criando-o-meu-projeto-de-portfolio-pessoal-com-deploy-na-aws-2ei9</guid>
      <description>&lt;p&gt;Hey devs! Let me tell you a bit more about the project I'm building, which spans full-stack and cloud development: my new &lt;code&gt;Personal Portfolio&lt;/code&gt; with an integrated &lt;code&gt;Blog&lt;/code&gt; and &lt;code&gt;CMS&lt;/code&gt;, all with the &lt;code&gt;deployment done on AWS&lt;/code&gt;. This experience will cover many areas and certainly many challenges, which I'll post about here on this blog, describing my journey on a project that goes from code to the clouds! &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Project: Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The idea is to build a complete, unified platform to showcase my work and share technical knowledge. The system consists of:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend (API):&lt;/strong&gt; A robust, complete API built with &lt;code&gt;Node.js&lt;/code&gt;, &lt;code&gt;TypeScript&lt;/code&gt;, and &lt;code&gt;Express&lt;/code&gt;. It is already up and running, handling blog posts, tags, &lt;strong&gt;authentication for the CMS (via JWT with bcryptjs)&lt;/strong&gt;, and even &lt;strong&gt;sending the contact form's emails&lt;/strong&gt; through &lt;code&gt;AWS SES&lt;/code&gt;. All of it backed by a &lt;code&gt;PostgreSQL&lt;/code&gt; database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend (React):&lt;/strong&gt; A responsive user interface built with &lt;code&gt;React&lt;/code&gt; and &lt;code&gt;TypeScript&lt;/code&gt;, using Vite for fast development. The frontend consumes the API to display the posts and the contact form, and soon the portfolio section and the &lt;strong&gt;content management (CMS)&lt;/strong&gt; panel.&lt;/p&gt;

&lt;p&gt;The Tech Stack 🛠️&lt;/p&gt;

&lt;p&gt;On the Backend (API):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Language: TypeScript&lt;/li&gt;
&lt;li&gt;Runtime: Node.js&lt;/li&gt;
&lt;li&gt;Framework: Express.js&lt;/li&gt;
&lt;li&gt;Database: PostgreSQL&lt;/li&gt;
&lt;li&gt;Authentication: JWT + bcryptjs&lt;/li&gt;
&lt;li&gt;AWS Services: SES (for emails)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the Frontend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Library: React.js&lt;/li&gt;
&lt;li&gt;Language: TypeScript&lt;/li&gt;
&lt;li&gt;Build Tool: Vite&lt;/li&gt;
&lt;li&gt;Routing: react-router-dom&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project isn't just a portfolio; it's a hands-on learning ground for software architecture, good practices and, crucially, &lt;strong&gt;DevOps operations with a focus on AWS&lt;/strong&gt;. The backend is already fully functional, and the frontend is evolving fast.&lt;/p&gt;

&lt;p&gt;Want to take a look at the code and follow its evolution? The repository is here: &lt;a href="https://github.com/marcelomagario/portfolio" rel="noopener noreferrer"&gt;https://github.com/marcelomagario/portfolio&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next posts, I'll share more details about each task, along with the challenges on the way. Thanks!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kdehjobl7oqjq4d7dc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kdehjobl7oqjq4d7dc4.png" alt=" " width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>deploy</category>
      <category>cicd</category>
      <category>auth</category>
    </item>
    <item>
      <title>Demystifying InvalidClientTokenId in AWS SES with Node.js and TypeScript: a lesson that the “order of operations” really does matter.</title>
      <dc:creator>Marcelo Magario</dc:creator>
      <pubDate>Sat, 31 May 2025 00:22:56 +0000</pubDate>
      <link>https://dev.to/marcelomagario/desvendando-o-invalidclienttokenid-no-aws-ses-com-nodejs-e-typescript-aprendi-uma-licao-que-a-274a</link>
      <guid>https://dev.to/marcelomagario/desvendando-o-invalidclienttokenid-no-aws-ses-com-nodejs-e-typescript-aprendi-uma-licao-que-a-274a</guid>
      <description>&lt;p&gt;Building APIs is a constant learning experience, and sometimes we run into errors that at first glance seem to point one way, while the root cause lies somewhere else.&lt;/p&gt;

&lt;p&gt;Recently, while integrating AWS SES (Simple Email Service) into my personal Node.js backend project with TypeScript, I ran into the frustrating &lt;code&gt;InvalidClientTokenId&lt;/code&gt; error. This post details the problem, the debugging process, and the solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem Scenario&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was building the backend for my personal portfolio's contact form. The idea was that, when the form was submitted on the frontend, the backend would send me an email using &lt;code&gt;AWS SES&lt;/code&gt;. The application was built with &lt;code&gt;Node.js&lt;/code&gt;, &lt;code&gt;TypeScript&lt;/code&gt;, and &lt;code&gt;Express.js&lt;/code&gt;, using &lt;code&gt;@aws-sdk/client-ses&lt;/code&gt; to talk to &lt;code&gt;AWS&lt;/code&gt; and &lt;code&gt;dotenv&lt;/code&gt; to manage environment variables.&lt;br&gt;
The AWS credentials (Access Key ID and Secret Access Key) were properly configured in my .env file, along with the AWS region and the SES source and destination emails. My IAM policies on AWS granted full SES access to the corresponding user, and the source email identity was duly verified in the correct SES region.&lt;/p&gt;

&lt;p&gt;Everything seemed to be in order, but when I tested the email-sending route, the server returned a generic error message. Inspecting the backend's detailed logs, the message was clear: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;InvalidClientTokenId: The security token included in the request is invalid.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;The Persistent Error and the Initial Confusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This AWS error usually means the authentication credentials are invalid or missing. Naturally, my first steps were to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Check the .env:&lt;/strong&gt; I confirmed the keys were there, with no typos or stray spaces.&lt;br&gt;
&lt;strong&gt;2. Revisit IAM on AWS:&lt;/strong&gt; I verified that the user had the right permissions (including AmazonSESFullAccess) and that the access key I was using in the .env was active in the AWS console.&lt;br&gt;
&lt;strong&gt;3. Check AWS SES:&lt;/strong&gt; I confirmed the source email was verified and that the SES region matched the one in my environment variables file.&lt;/p&gt;

&lt;p&gt;Even though every check said "everything is fine," the error persisted. I tried generating new access keys in IAM and clearing any local credential caches that might exist, but the stubborn &lt;code&gt;InvalidClientTokenId&lt;/code&gt; kept coming back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Unexpected Revelation: undefined&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The key to unraveling the mystery came when I decided to inspect what the SES client was actually "seeing" at runtime. I added a strategic log in my controller, exactly where the SES client was initialized, to print the Access Key ID being used.&lt;/p&gt;

&lt;p&gt;The output was shocking and revealing: &lt;code&gt;SES Client Access Key ID being used: undefined.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This meant that, although the &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; variable was in my .env file and I knew it was read at some point, at the moment the SES client was being configured, that variable simply did not exist in the process environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Root Cause: Order Matters!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The problem was not the credentials themselves, nor missing permissions, but &lt;strong&gt;the order&lt;/strong&gt; in which things were loaded in my Node.js application.&lt;/p&gt;

&lt;p&gt;In JavaScript module environments (which is what TypeScript transpiles to), the &lt;code&gt;import&lt;/code&gt; statements at the top of a file are processed, and their modules executed, before the rest of the main file's code runs. In my &lt;code&gt;app.ts&lt;/code&gt;, the call to &lt;code&gt;dotenv.config()&lt;/code&gt; (which loads the .env variables into process.env) was positioned below my import declarations.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;My app.ts imported the contact route.&lt;/li&gt;
&lt;li&gt;The contact route, in turn, imported the contact controller.&lt;/li&gt;
&lt;li&gt;When the contact controller was loaded and the SES client was initialized (something that happens at module import time), the dotenv.config() in app.ts had not yet run.&lt;/li&gt;
&lt;li&gt;Consequently, the essential AWS environment variables were not yet available in process.env.&lt;/li&gt;
&lt;li&gt;The SES client was instantiated with undefined values for the credentials, and AWS naturally rejected them with the InvalidClientTokenId error.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The Definitive Fix&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fix was simple but impactful: moving the &lt;code&gt;dotenv.config();&lt;/code&gt; line to the very top of my &lt;code&gt;app.ts&lt;/code&gt; file, guaranteeing it was the first thing to run.&lt;/p&gt;

&lt;p&gt;With that small but crucial change, the environment variables were loaded into &lt;code&gt;process.env&lt;/code&gt; before any module that depended on them was initialized. The SES client could then read the correct credentials, authenticate successfully with AWS, and the emails started going out without a hitch.&lt;/p&gt;
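
&lt;p&gt;As a sketch, the corrected top of &lt;code&gt;app.ts&lt;/code&gt; looks roughly like this. The route path is a hypothetical placeholder, and the comments spell out the load-order assumption (CommonJS-style transpilation, where import order is preserved):&lt;/p&gt;

```typescript
// app.ts -- illustrative sketch of the corrected bootstrap order
import dotenv from 'dotenv';
dotenv.config(); // populate process.env before anything else loads

// In native ESM, where all imports are hoisted, the side-effect form
// achieves the same guarantee:
//   import 'dotenv/config';

// Only below this point import modules whose top-level code reads
// process.env, such as a controller that creates the SES client at
// import time. The path below is a hypothetical placeholder.
import contactRoutes from './routes/contactRoutes';
```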

&lt;p&gt;&lt;strong&gt;Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This issue reinforced a few important lessons in API development:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Load Order Is Crucial:&lt;/strong&gt; In Node.js and TypeScript, the sequence in which modules execute and initialize can have a significant impact. Environment variables must be loaded before any code that uses them runs.&lt;br&gt;
&lt;strong&gt;2. Debugging with console.log:&lt;/strong&gt; A tool as simple as console.log can be incredibly powerful for inspecting variable state and execution flow at a specific point in the code.&lt;br&gt;
&lt;strong&gt;3. Trust the Logs, but Debug the Context:&lt;/strong&gt; The InvalidClientTokenId error pointed at credentials, and they were correct in the .env. The challenge was understanding why they were not accessible at the right moment. The AWS error was accurate, but the cause was not the key's value; it was its absence from the runtime environment.&lt;/p&gt;

&lt;p&gt;This experience was a great reminder of how subtleties in an application's execution flow can lead to tricky debugging challenges, and also of how a systematic approach and direct inspection of the environment can lead to the solution.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amazonses</category>
      <category>typescript</category>
      <category>node</category>
    </item>
  </channel>
</rss>
