<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jude Wakim</title>
    <description>The latest articles on DEV Community by Jude Wakim (@vvakim).</description>
    <link>https://dev.to/vvakim</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2962128%2F86c3dfe1-3af3-4232-bb1b-5896ffea0a89.jpeg</url>
      <title>DEV Community: Jude Wakim</title>
      <link>https://dev.to/vvakim</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vvakim"/>
    <language>en</language>
    <item>
      <title>Building My S3 Security Scanner: A Solo Dev's Journey to Automate AWS Bucket Safety</title>
      <dc:creator>Jude Wakim</dc:creator>
      <pubDate>Sun, 28 Sep 2025 17:33:59 +0000</pubDate>
      <link>https://dev.to/vvakim/building-my-s3-security-scanner-a-solo-devs-journey-to-automate-aws-bucket-safety-36ge</link>
      <guid>https://dev.to/vvakim/building-my-s3-security-scanner-a-solo-devs-journey-to-automate-aws-bucket-safety-36ge</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:  Why I Built This (And What You’re Getting Into)
&lt;/h2&gt;

&lt;p&gt;Hey folks, Jude here — a DevSecOps engineer who spends way too much time wrangling AWS resources and not enough coffee breaks. If you’ve ever spun up an S3 bucket in a rush during a late-night coding sprint, only to realize it’s wide open to the internet the next day, this one’s for you. &lt;/p&gt;

&lt;p&gt;I created the &lt;strong&gt;S3 Security Scanner&lt;/strong&gt; to tackle that exact headache: a simple, &lt;strong&gt;serverless tool that scans your AWS S3 buckets for common misconfigurations&lt;/strong&gt; (like public access or wildcard policies) and &lt;strong&gt;optionally fixes them automatically&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The goal?&lt;/em&gt; Give solo devs and small teams like mine a “set it and forget it” way to keep buckets secure without constant manual audits. It’s not a full enterprise suite — it’s an MVP I built in a few evenings to boost my skills and maybe sell on the AWS Marketplace someday.&lt;/p&gt;

&lt;p&gt;Tech stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python with Boto3 for the Lambda core,&lt;/li&gt;
&lt;li&gt;CloudFormation for deployment, and&lt;/li&gt;
&lt;li&gt;EventBridge for daily runs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Purpose:&lt;/em&gt; Peace of mind in the cloud, one bucket at a time.&lt;/p&gt;

&lt;p&gt;Let’s dive into how it came together.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Build Process: From Idea to Running Code
&lt;/h2&gt;

&lt;p&gt;I started with the basics: What pains me most about S3? Public buckets leaking data, right? So, the scanner checks four key areas — Public Access Block settings, ACL grants, bucket policies for wildcards, and default encryption — flagging risks with a simple severity score (none, low, medium, high).&lt;/p&gt;

&lt;p&gt;The heart is a Lambda function. I sketched it out in pseudocode first: Parse an event for config (like excluded buckets), scan via Boto3 calls (&lt;code&gt;list_buckets&lt;/code&gt;, &lt;code&gt;get_bucket_acl&lt;/code&gt;, etc.), and output JSON risks. Then, for remediation, an optional step applies fixes like &lt;code&gt;put_bucket_acl(ACL='private')&lt;/code&gt; or enabling Public Access Block — all with safety nets, like dry-run previews.&lt;/p&gt;
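&lt;p&gt;A rough sketch of that flow (the function names, flag labels, and severity table here are my own illustrations, not the project's actual code — the caller hands in a Boto3 S3 client):&lt;/p&gt;

```python
# Hypothetical sketch of the scan loop described above; names are
# illustrative, not the real project's code.
SEVERITY = {
    "public_acl": "high",            # ACL grant to AllUsers and friends
    "wildcard_policy": "high",       # bucket policy with a "*" principal
    "no_public_access_block": "medium",
    "no_default_encryption": "low",
}
ORDER = ["none", "low", "medium", "high"]

def bucket_severity(flags):
    """Collapse one bucket's risk flags into a single score."""
    worst = "none"
    for flag in flags:
        sev = SEVERITY.get(flag, "none")
        if ORDER.index(sev) > ORDER.index(worst):
            worst = sev
    return worst

def check_bucket(s3, name):
    """Probe one bucket; every failed safeguard adds a flag."""
    flags = []
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        if not all(cfg["PublicAccessBlockConfiguration"].values()):
            flags.append("no_public_access_block")
    except Exception:  # real code would catch the specific ClientError
        flags.append("no_public_access_block")
    try:
        s3.get_bucket_encryption(Bucket=name)
    except Exception:
        flags.append("no_default_encryption")
    return flags

def scan(s3, excluded=()):
    """Scan every bucket and return JSON-serializable risk records."""
    results = []
    for bucket in s3.list_buckets()["Buckets"]:  # one call, no paginator
        name = bucket["Name"]
        if name in excluded:
            continue
        flags = check_bucket(s3, name)
        results.append({"bucket": name, "flags": flags,
                        "severity": bucket_severity(flags)})
    return results
```

&lt;p&gt;Keeping the severity logic in a pure function makes it easy to test locally without touching AWS at all.&lt;/p&gt;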


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Judewakim" rel="noopener noreferrer"&gt;
        Judewakim
      &lt;/a&gt; / &lt;a href="https://github.com/Judewakim/s3-misconfig" rel="noopener noreferrer"&gt;
        s3-misconfig
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;S3 Sentry: The Autonomous Data Security Officer&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;S3 Sentry is a high-fidelity, multi-tenant security orchestration platform designed to provide continuous oversight of AWS S3 storage environments. It acts as an automated "Data Security Officer," ensuring that organizational data remains private, encrypted, and compliant without manual intervention.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;The Mission&lt;/h3&gt;
&lt;/div&gt;
&lt;p&gt;In the modern cloud era, a single misconfigured S3 bucket can lead to catastrophic data exposure. S3 Sentry bridges the gap between complex AWS IAM policies and actionable security intelligence by providing a "zero-friction" onboarding experience and automated remediation scanning.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Core Architecture&lt;/h3&gt;
&lt;/div&gt;
&lt;p&gt;S3 Sentry utilizes a Cross-Account Trust Handshake to scan client environments securely.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;b&gt;The Handshake&lt;/b&gt;: Uses a unique ExternalID protocol to prevent "Confused Deputy" attacks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;b&gt;The Engine&lt;/b&gt;: Powered by an orchestrated Prowler CLI integration for industry-standard security checks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;b&gt;The Single-Table Design&lt;/b&gt;: Leverages Amazon DynamoDB to manage thousands of tenants and millions of findings within a high-performance, scalable schema.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Judewakim/s3-misconfig" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Deployment was straightforward: zipped the code, uploaded it to my S3 bucket, and wrapped it in a CloudFormation template. Users deploy the stack, pick a mode (scanning only or auto-remediation), and EventBridge kicks it off daily. &lt;em&gt;Total time?&lt;/em&gt; About five hours spread across a few evenings, thanks to Claude Code for code generation and Grok for local testing.&lt;/p&gt;
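&lt;p&gt;For the daily trigger, the relevant slice of a template looks roughly like this (the logical resource names are my guesses, not the actual template):&lt;/p&gt;

```yaml
# Hypothetical CloudFormation fragment; resource names are illustrative.
DailyScanRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 day)
    Targets:
      - Arn: !GetAtt ScannerFunction.Arn
        Id: s3-security-scanner

AllowEventBridgeInvoke:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref ScannerFunction
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt DailyScanRule.Arn
```

&lt;p&gt;The &lt;code&gt;AWS::Lambda::Permission&lt;/code&gt; resource is the piece people forget — without it, the rule fires but EventBridge isn't allowed to invoke the function.&lt;/p&gt;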




&lt;h2&gt;
  
  
  Obstacles: The Bumps That Made It Better
&lt;/h2&gt;

&lt;p&gt;Nothing’s smooth — first, I hit an &lt;code&gt;OperationNotPageableError&lt;/code&gt; trying to paginate &lt;code&gt;list_buckets&lt;/code&gt; (spoiler: S3 doesn’t need it; one call returns every bucket). Fixed by ditching the paginator. Permissions were, of course, an issue: &lt;code&gt;AccessDenied&lt;/code&gt; on encryption checks until I added &lt;code&gt;s3:GetEncryptionConfiguration&lt;/code&gt; to the IAM role. And exclusions? My test event had a string instead of a list — classic JSON gotcha. Each snag taught me to test locally first, then deploy iteratively.&lt;/p&gt;
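&lt;p&gt;That last gotcha is cheap to guard against — a hypothetical defensive parse (the &lt;code&gt;excluded_buckets&lt;/code&gt; key is my invention, not necessarily the field the scanner actually reads):&lt;/p&gt;

```python
def parse_excluded(event):
    """Return the exclusion list, tolerating a bare string in the event.

    The 'excluded_buckets' key is illustrative, not the project's real field.
    """
    raw = event.get("excluded_buckets", [])
    if isinstance(raw, str):
        return [raw]
    return list(raw)

print(parse_excluded({"excluded_buckets": "my-logs"}))  # a bare string still works
print(parse_excluded({"excluded_buckets": ["a", "b"]}))
```

&lt;p&gt;Two lines of normalization, and the test event that bit me would have just worked.&lt;/p&gt;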




&lt;h2&gt;
  
  
  Wrapping Up: What’s Next?
&lt;/h2&gt;

&lt;p&gt;This project’s my love letter to secure-by-default cloud work — simple, effective, and fun to build. If you’re digging this, &lt;strong&gt;check out my next pieces: one on nailing the scanning logic, another on safe automation, and the final on CloudFormation magic.&lt;/strong&gt;&lt;/p&gt;


&lt;p&gt;Follow along on Medium or Dev.to for more DevSecOps adventures. What’s your biggest S3 headache? Drop a comment — let’s chat!&lt;/p&gt;

&lt;p&gt;I regularly share hands-on cloud builds, automation tricks, and AWS-focused deep dives across the web:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔗 &lt;a href="https://www.linkedin.com/in/jude-wakim" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; — for cloud content, networking, and consulting&lt;/li&gt;
&lt;li&gt;📖 &lt;a href="https://medium.com/@judewakim" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; — where this and other walkthroughs live&lt;/li&gt;
&lt;li&gt;👨‍💻 &lt;a href="https://github.com/Judewakim" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; — for open-source tools and infra templates&lt;/li&gt;
&lt;li&gt;🖥️ &lt;a href="https://dev.to/vvakim"&gt;Dev.to&lt;/a&gt; — cross-posts and project write-ups&lt;/li&gt;
&lt;/ul&gt;


</description>
      <category>aws</category>
      <category>python</category>
      <category>automation</category>
      <category>security</category>
    </item>
    <item>
      <title>How to Use Docker Compose and Python to Automate Your Jenkins Environment</title>
      <dc:creator>Jude Wakim</dc:creator>
      <pubDate>Sat, 24 May 2025 16:18:47 +0000</pubDate>
      <link>https://dev.to/vvakim/how-to-use-docker-compose-and-python-to-automate-your-jenkins-environment-3eg7</link>
      <guid>https://dev.to/vvakim/how-to-use-docker-compose-and-python-to-automate-your-jenkins-environment-3eg7</guid>
      <description>&lt;p&gt;If you’ve ever dipped your toes into DevOps, chances are you’ve heard about Docker, Jenkins, or automation at some point. In this post, I’ll walk you through a fun little project where I used &lt;strong&gt;Docker Compose&lt;/strong&gt; and &lt;strong&gt;Python&lt;/strong&gt; to automate the creation of a Jenkins environment from scratch. Whether you're just getting into CI/CD tools or looking to add more automation into your workflow, this post should give you some hands-on insight.&lt;/p&gt;

&lt;h2&gt;
  
  
  🐳 What is Docker and Why It’s a Game-Changer
&lt;/h2&gt;

&lt;p&gt;Docker is basically your best friend when it comes to creating consistent environments across machines. It lets you package applications and all their dependencies into containers so they can run anywhere — your laptop, a server, a cloud instance — without the &lt;em&gt;"but it works on my machine"&lt;/em&gt; headache.&lt;/p&gt;

&lt;p&gt;It’s lightweight, fast, and makes spinning up complex environments surprisingly easy. Instead of installing Jenkins (or any other tool) directly on your machine, you just run a container with everything preconfigured. Simple, clean, and reversible.&lt;/p&gt;

&lt;h2&gt;
  
  
  📦 Enter Docker Compose
&lt;/h2&gt;

&lt;p&gt;Docker Compose is like Docker’s orchestration tool for your local environment. It allows you to define multi-container applications in a single &lt;code&gt;docker-compose.yml&lt;/code&gt; file and spin them all up with just one command. Perfect for projects that need more than one moving piece (like Jenkins and a custom volume or service).&lt;/p&gt;

&lt;p&gt;It’s kind of like having a remote control for all your Docker containers — and trust me, once you start using it, you’ll wonder how you ever managed without it.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ Why Automation Matters
&lt;/h2&gt;

&lt;p&gt;Manually repeating setup steps is a productivity killer. Automation helps reduce errors, saves time, and makes your projects way more maintainable. For this project, I automated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building a custom Jenkins image&lt;/li&gt;
&lt;li&gt;Launching a Jenkins container&lt;/li&gt;
&lt;li&gt;Mounting persistent data&lt;/li&gt;
&lt;li&gt;Verifying that the setup worked&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using Python, I wrapped all those commands into a simple script so that I (or anyone else) could spin up the environment with one command. Clean and scalable.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ Step-by-Step: Building the Project
&lt;/h2&gt;

&lt;p&gt;Let’s break down what I actually did to bring this to life.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Wrote the &lt;code&gt;docker-compose.yml&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This file defines the Jenkins container, ports, volume, and image. Here’s a simplified version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.8"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;jenkins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-custom-jenkins&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8080:8080"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;jenkins_data_custom:/var/jenkins_home&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;jenkins_data_custom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lets me build a Jenkins container from a custom image and persist the Jenkins home directory with a volume.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Created a Dockerfile
&lt;/h3&gt;

&lt;p&gt;The Dockerfile defines the custom Jenkins image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; jenkins/jenkins:lts&lt;/span&gt;

&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; root&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    curl &lt;span class="se"&gt;\
&lt;/span&gt;    git &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; jenkins&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty straightforward. It switches to the root user just long enough to install the required packages (sudo, curl, git), then drops back to the unprivileged jenkins user.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Wrote the Automation Script (automate.py)
&lt;/h3&gt;

&lt;p&gt;This Python script uses the Docker SDK for Python to run all the container logic automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;  &lt;span class="c1"&gt;# type: ignore
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_env&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;IMAGE_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-custom-jenkins&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;VOLUME_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;jenkins_data_custom&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;CONTAINER_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-custom-jenkins&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;DOCKERFILE_PATH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; 

&lt;span class="c1"&gt;# Step 2: Build the custom Jenkins image
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Building image &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;IMAGE_NAME&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;build_logs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;DOCKERFILE_PATH&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;IMAGE_NAME&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Image built successfully.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 3: Create volume
&lt;/span&gt;&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;volume&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;VOLUME_NAME&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Volume &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;VOLUME_NAME&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; already exists.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NotFound&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;volume&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;VOLUME_NAME&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Volume &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;VOLUME_NAME&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; created.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 4: Run the container
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Running container &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;CONTAINER_NAME&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;IMAGE_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;CONTAINER_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;ports&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8080/tcp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="n"&gt;volumes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;VOLUME_NAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bind&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/var/jenkins_home&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mode&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rw&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
        &lt;span class="n"&gt;detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;APIError&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Trying to remove existing container and retry...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;old_container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CONTAINER_NAME&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;old_container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;old_container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;containers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;IMAGE_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;CONTAINER_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;ports&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8080/tcp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;volumes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;VOLUME_NAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bind&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/var/jenkins_home&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mode&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rw&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
            &lt;span class="n"&gt;detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Failed to recover: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e2&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Container is starting. Waiting 15 seconds for Jenkins to initialize...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Wait for Jenkins to generate password
&lt;/span&gt;
&lt;span class="c1"&gt;# Step 5: Get the initial admin password
&lt;/span&gt;&lt;span class="n"&gt;exec_log&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exec_run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cat /var/jenkins_home/secrets/initialAdminPassword&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;admin_password&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exec_log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;  Jenkins Initial Admin Password:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;admin_password&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just run this script, and your Jenkins container is up and running.&lt;/p&gt;
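&lt;p&gt;One thing I’d clean up on a second pass: the &lt;code&gt;containers.run&lt;/code&gt; call is duplicated between the happy path and the retry. A small helper could absorb both (my own sketch, not the repo’s code — real code would catch &lt;code&gt;docker.errors.NotFound&lt;/code&gt; rather than a bare &lt;code&gt;Exception&lt;/code&gt;):&lt;/p&gt;

```python
def ensure_container(client, image, name, **run_kwargs):
    """Start a fresh container, first removing any stale one with this name.

    `client` is a docker.from_env() client. In real code, catch
    docker.errors.NotFound instead of the broad Exception below.
    """
    try:
        old = client.containers.get(name)
        old.stop()
        old.remove()
    except Exception:
        pass  # no existing container: nothing to clean up
    return client.containers.run(image, name=name, detach=True, **run_kwargs)
```

&lt;p&gt;Then both the first attempt and the recovery path collapse into one &lt;code&gt;ensure_container(client, IMAGE_NAME, CONTAINER_NAME, ...)&lt;/code&gt; call.&lt;/p&gt;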

&lt;h2&gt;
  
  
  🧱 Common Roadblocks I Hit
&lt;/h2&gt;

&lt;p&gt;No project is complete without a few bumps along the way. Here are a couple that tripped me up:&lt;/p&gt;

&lt;h3&gt;
  
  
  ❌ mypy Not Recognized in PowerShell
&lt;/h3&gt;

&lt;p&gt;Even after installing mypy, I kept seeing errors like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;mypy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;term&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;mypy&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="n"&gt;recognized&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Turns out the folder where pip drops its console scripts wasn’t on my PATH — and PowerShell won’t run an executable from the current directory by bare name anyway. The quick workaround is to invoke it explicitly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;.&lt;span class="se"&gt;\m&lt;/span&gt;ypy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Better yet, run it as a module with &lt;code&gt;python -m mypy&lt;/code&gt;, or add pip’s Scripts directory to your PATH.&lt;/p&gt;

&lt;h3&gt;
  
  
  🐍 ModuleNotFoundError: No module named 'docker'
&lt;/h3&gt;

&lt;p&gt;Even though Docker was installed and working, Python couldn’t find the Docker SDK. I had to run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Always make sure you’re using the correct Python environment (virtualenvs help!).&lt;/p&gt;
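&lt;p&gt;A quick diagnostic sketch for checking which interpreter you’re in and whether it can see the SDK:&lt;/p&gt;

```python
import importlib.util
import sys

def module_available(name):
    """True if THIS interpreter can import `name`."""
    return importlib.util.find_spec(name) is not None

# If this prints False for "docker", install into the same interpreter:
#   python -m pip install docker
print(sys.executable)
print(module_available("docker"))
```

&lt;p&gt;Using &lt;code&gt;python -m pip&lt;/code&gt; guarantees the package lands in the same environment that runs your script — the usual culprit behind &lt;code&gt;ModuleNotFoundError&lt;/code&gt; when multiple Pythons are installed.&lt;/p&gt;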




&lt;h2&gt;
  
  
  ✅ Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you’re learning DevOps or just want to get better with Docker and automation, I highly recommend trying something similar. Small projects like these sharpen your skills and make bigger tools like Jenkins feel a lot less intimidating.&lt;/p&gt;

&lt;p&gt;Check out the full repo here:&lt;br&gt;
👉 &lt;a href="https://github.com/Judewakim/CD-with-Docker-and-Jenkins" rel="noopener noreferrer"&gt;https://github.com/Judewakim/CD-with-Docker-and-Jenkins&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you liked this article, feel free to drop a comment or share it with someone diving into DevOps!&lt;/p&gt;




&lt;h2&gt;
  
  
  🧭 Follow My Cloud Journey
&lt;/h2&gt;

&lt;p&gt;I regularly share hands-on cloud builds, automation tricks, and AWS-focused deep dives across the web:&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://linkedin.com/in/jude-wakim" rel="noopener noreferrer"&gt;LinkedIn &lt;/a&gt;— for cloud content, networking, and consulting&lt;br&gt;
📖 &lt;a href="https://medium.com/@judewakim" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; — where this and other walkthroughs live&lt;br&gt;
👨‍💻 &lt;a href="https://github.com/Judewakim" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; — for open-source tools and infra templates&lt;br&gt;
🖥️ &lt;a href="https://dev.to/vvakim"&gt;Dev.to&lt;/a&gt; — cross-posts and project writeups&lt;br&gt;
Until next time — secure your containers and keep building 🔐🐳⚙️&lt;/p&gt;

</description>
      <category>docker</category>
      <category>python</category>
      <category>jenkins</category>
      <category>automation</category>
    </item>
    <item>
      <title>Automating My Docker Apache Server with Python</title>
      <dc:creator>Jude Wakim</dc:creator>
      <pubDate>Thu, 15 May 2025 22:51:10 +0000</pubDate>
      <link>https://dev.to/vvakim/automating-my-docker-apache-server-with-python-2mo3</link>
      <guid>https://dev.to/vvakim/automating-my-docker-apache-server-with-python-2mo3</guid>
      <description>&lt;p&gt;After building my first Docker project, I knew I couldn’t stop there. The Dockerfile worked perfectly, the image deployed, and Apache ran inside its own containerized environment. But there was still one essential piece missing:&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;Automation&lt;/strong&gt; ✨&lt;/p&gt;

&lt;p&gt;So I built it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Project (Recap + Link)
&lt;/h2&gt;

&lt;p&gt;In my previous article, I walked through setting up a full Dockerized Apache server workflow—building a Docker image, pushing it to Docker Hub, and deploying it in a custom Docker network.&lt;/p&gt;

&lt;p&gt;You can check out that full breakdown here:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://dev.to/vvakim/end-to-end-docker-apache-server-build-push-and-networked-deploy-gd3"&gt;End-to-End Docker Apache Server: Build, Push, and Networked Deploy&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Python for Automation?
&lt;/h2&gt;

&lt;p&gt;Docker CLI is great, but typing in the same sequence of commands repeatedly isn’t efficient—especially when you’re building and deploying containers frequently.&lt;/p&gt;

&lt;p&gt;Using Python (and the official Docker SDK for Python) allows us to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate the build → push → deploy cycle&lt;/li&gt;
&lt;li&gt;Control deployments via CLI flags (e.g. &lt;code&gt;--build&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Manage Docker networks and containers programmatically&lt;/li&gt;
&lt;li&gt;Practice Python in a real-world DevOps workflow&lt;/li&gt;
&lt;/ul&gt;
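The CLI-flag control mentioned above can be as small as a single argparse switch. A minimal sketch (only the `--build` flag comes from the article; the rest is illustrative, and the empty list passed to `parse_args` just lets the sketch run standalone where the real script would use `parser.parse_args()`):

```python
import argparse

# Minimal CLI for a deploy script: `--build` forces a fresh image build,
# otherwise the script falls back to pulling the image from Docker Hub.
parser = argparse.ArgumentParser(
    description="Build, push, and deploy the Apache container")
parser.add_argument("--build", action="store_true",
                    help="build and push a fresh image before deploying")

args = parser.parse_args([])  # the real script would call parser.parse_args()

if args.build:
    print("Building a fresh image...")
else:
    print("Skipping build; pulling from Docker Hub...")
```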

&lt;p&gt;Let’s dive into the updated project and see how everything fits together.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Dockerfile 🐳
&lt;/h2&gt;

&lt;p&gt;The Dockerfile sets up an Ubuntu-based container with Apache pre-installed and ready to serve:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:20.04&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DEBIAN_FRONTEND=noninteractive&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get upgrade &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apache2 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get clean &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 80&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["apachectl", "-D", "FOREGROUND"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file is responsible for building a clean, self-contained environment that runs Apache in the foreground—perfect for containerized deployment.&lt;/p&gt;




&lt;h2&gt;
  
  
  The deploy.py Script 🐍
&lt;/h2&gt;

&lt;p&gt;This is the automation magic.&lt;/p&gt;

&lt;p&gt;The Python script does the following:&lt;/p&gt;

&lt;p&gt;✅ Builds the Docker image locally (if the &lt;code&gt;--build&lt;/code&gt; flag is passed)&lt;br&gt;
✅ Tags and pushes the image to Docker Hub&lt;br&gt;
✅ Pulls the image (even if already local, for consistency)&lt;br&gt;
✅ Creates a custom Docker network (if it doesn't exist)&lt;br&gt;
✅ Deploys the container inside that network&lt;br&gt;
✅ Automatically removes any conflicting containers&lt;/p&gt;

&lt;p&gt;Here’s a snippet of the logic behind the &lt;code&gt;--build&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;args.build:
    print&lt;span class="o"&gt;(&lt;/span&gt;f&lt;span class="s2"&gt;"Building image '{image_name}'..."&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    image, logs &lt;span class="o"&gt;=&lt;/span&gt; client.images.build&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"."&lt;/span&gt;, &lt;span class="nv"&gt;tag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;image_name&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="c"&gt;# Push to DockerHub&lt;/span&gt;
    dockerhub_tag &lt;span class="o"&gt;=&lt;/span&gt; f&lt;span class="s2"&gt;"{dockerhub_username}/{image_name}"&lt;/span&gt;
    image.tag&lt;span class="o"&gt;(&lt;/span&gt;dockerhub_tag&lt;span class="o"&gt;)&lt;/span&gt;
    client.images.push&lt;span class="o"&gt;(&lt;/span&gt;dockerhub_tag&lt;span class="o"&gt;)&lt;/span&gt;
    print&lt;span class="o"&gt;(&lt;/span&gt;f&lt;span class="s2"&gt;"Image pushed to {dockerhub_tag}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;:
    &lt;span class="c"&gt;# Pull from DockerHub&lt;/span&gt;
    client.images.pull&lt;span class="o"&gt;(&lt;/span&gt;dockerhub_tag&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full script handles everything from authentication to error catching and cleanup. You can test and run the automation with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 deploy.py --build&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Or just:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 deploy.py&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
To skip the build and pull the latest image from DockerHub.&lt;/p&gt;




&lt;h2&gt;
  
  
  Obstacles + Solutions 🧱
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Docker SDK permissions&lt;/strong&gt; – Needed to make sure Docker was properly installed and the Python SDK had access. Solved by ensuring Docker Desktop was running and permissions were configured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image caching&lt;/strong&gt; – Docker caches builds, so when testing changes, the image might not reflect updates. Resolved by using the &lt;code&gt;--build&lt;/code&gt; flag to force fresh builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom network errors&lt;/strong&gt; – If a network already existed, the script would fail. Handled by checking for the network and reusing it if needed.&lt;/p&gt;
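That check-and-reuse logic can be sketched as a small helper. It assumes a client from the official Docker SDK (`docker.from_env()`) is passed in; the helper name is mine:

```python
def ensure_network(client, name, driver="bridge"):
    """Return the Docker network `name`, creating it only if it doesn't exist.

    `client` is assumed to be a docker.from_env() client from the Docker SDK
    for Python.
    """
    existing = client.networks.list(names=[name])
    if existing:
        # Reuse the existing network instead of failing on a duplicate create
        return existing[0]
    return client.networks.create(name, driver=driver)
```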

&lt;p&gt;&lt;strong&gt;Local environment confusion&lt;/strong&gt; – Mixing Docker CLI and Python SDK sometimes caused conflicts. Staying consistent with the SDK helped.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts 💭
&lt;/h2&gt;

&lt;p&gt;This updated project now covers the full lifecycle of a Docker container—from image build to deployment—with no manual CLI steps required.&lt;/p&gt;

&lt;p&gt;If you're learning Docker, automating your workflow with Python is a great way to cement your understanding while leveling up your DevOps toolkit.&lt;/p&gt;

&lt;p&gt;You can check out the updated GitHub repo here:&lt;br&gt;
👉 GitHub: &lt;a href="https://github.com/Judewakim/apache-server-on-docker" rel="noopener noreferrer"&gt;judeWakim/apache-server-on-docker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me know if you try it or fork it—I'd love to see your improvements.&lt;/p&gt;

&lt;p&gt;Happy coding! 🐳🚀&lt;/p&gt;




&lt;h2&gt;
  
  
  📬 Follow My Work
&lt;/h2&gt;

&lt;p&gt;Want more DevOps and cloud projects?&lt;/p&gt;

&lt;p&gt;📝 &lt;a href="https://medium.com/@judewakim" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💼 &lt;a href="https://www.linkedin.com/in/jude-wakim/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 &lt;a href="https://github.com/Judewakim" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💻 &lt;a href="https://dev.to/vvakim"&gt;Dev.to&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like and comment if you liked this one — more hands-on AWS, Docker, and automation content coming soon. 🛠️💡&lt;/p&gt;

</description>
      <category>docker</category>
      <category>python</category>
      <category>aws</category>
      <category>automation</category>
    </item>
    <item>
      <title>🚀 End-to-End Docker: Apache Server Build, Push, and Networked Deploy</title>
      <dc:creator>Jude Wakim</dc:creator>
      <pubDate>Tue, 13 May 2025 21:06:41 +0000</pubDate>
      <link>https://dev.to/vvakim/end-to-end-docker-apache-server-build-push-and-networked-deploy-gd3</link>
      <guid>https://dev.to/vvakim/end-to-end-docker-apache-server-build-push-and-networked-deploy-gd3</guid>
      <description>&lt;p&gt;So I finally sat down and got hands-on with Docker — and not just the &lt;code&gt;hello world&lt;/code&gt; kind of hands-on. I mean building, customizing, pushing, and deploying an Apache web server inside a Docker container, running it on EC2, and even setting up a custom Docker network to simulate more complex environments.&lt;/p&gt;

&lt;p&gt;It started off simple. Then got frustrating. Then got cool. Then very frustrating. Then… actually awesome.&lt;/p&gt;

&lt;p&gt;If you’re ready to go from &lt;em&gt;“I think I understand Docker…”&lt;/em&gt; to &lt;em&gt;“I just built a full web server from scratch using Docker”&lt;/em&gt;, then this one’s for you.&lt;/p&gt;

&lt;p&gt;Let’s dive in.&lt;/p&gt;




&lt;h2&gt;
  
  
  🐳 What Is Docker and Why Does It Matter?
&lt;/h2&gt;

&lt;p&gt;Docker is a tool that allows developers to &lt;strong&gt;package applications and all their dependencies into lightweight, portable containers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Why should you care? Because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It &lt;strong&gt;works everywhere&lt;/strong&gt; (local, staging, prod).&lt;/li&gt;
&lt;li&gt;You avoid the classic &lt;em&gt;“works on my machine”&lt;/em&gt; headache.&lt;/li&gt;
&lt;li&gt;You can version, share, deploy, and replicate environments in seconds.&lt;/li&gt;
&lt;li&gt;It’s the industry standard for modern DevOps and cloud-native apps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you're developing microservices, testing distributed systems, or just trying to keep your local machine clean, &lt;strong&gt;Docker is the go-to tool&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧰 Prerequisites + What You'll Learn
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✅ Prerequisites:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Basic Linux CLI experience&lt;/li&gt;
&lt;li&gt;EC2 instance running Amazon Linux 2&lt;/li&gt;
&lt;li&gt;Docker installed (we’ll cover that too)&lt;/li&gt;
&lt;li&gt;Public-facing IP and security group rules set&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💡 What You'll Learn:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;How to install and use Docker on EC2&lt;/li&gt;
&lt;li&gt;How to create and run a containerized Apache web server&lt;/li&gt;
&lt;li&gt;How to commit + push your image to Docker Hub&lt;/li&gt;
&lt;li&gt;How to create and attach Docker networks&lt;/li&gt;
&lt;li&gt;How to troubleshoot common Docker issues&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔧 Foundational: Docker + Apache on EC2
&lt;/h2&gt;

&lt;p&gt;First, we get Docker installed and running on our EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Update the package index&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum update &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# 2. Install Docker&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;amazon-linux-extras &lt;span class="nb"&gt;install &lt;/span&gt;docker &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# 3. Start the Docker service&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;service docker start

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s launch a base Ubuntu container and install Apache inside it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run a Docker Ubuntu container (detached on port 80)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker run &lt;span class="nt"&gt;-dit&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="nt"&gt;--name&lt;/span&gt; ubuntu-web ubuntu

&lt;span class="c"&gt;# Go into the container&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; ubuntu-web bash

&lt;span class="c"&gt;# Update everything inside the container&lt;/span&gt;
apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# Install Apache&lt;/span&gt;
apt-get &lt;span class="nb"&gt;install &lt;/span&gt;apache2 &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# Start Apache&lt;/span&gt;
service apache2 start

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now check your EC2 public IP in your browser. If port 80 is open in your EC2 security group, you should see Apache’s default page! 🎉&lt;/p&gt;
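If you'd rather script that check than eyeball the browser, a stdlib-only probe works too. The function name is mine, and the commented host is a placeholder for your EC2 public IP:

```python
import urllib.request

def apache_is_up(host, port=80, timeout=5.0):
    """Return True if an HTTP server answers with status 200 on host:port."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, timeouts, and DNS failures
        return False

# Example (hypothetical EC2 public IP):
# print(apache_is_up("203.0.113.10"))
```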




&lt;h2&gt;
  
  
  🚀 Advanced: Commit, Push, and Pull Like a Pro
&lt;/h2&gt;

&lt;p&gt;Let’s save this container as a reusable image, push it to Docker Hub, and redeploy it from scratch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Exit the container&lt;/span&gt;
&lt;span class="nb"&gt;exit&lt;/span&gt;

&lt;span class="c"&gt;# Commit the container as an image&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker commit ubuntu-web vvakim/ubuntu-webserver:v1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we push it, make sure Docker Hub is properly connected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Log in to Docker&lt;/span&gt;
docker login

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;⚠️ Troubleshooting:&lt;/em&gt; I had some wild issues here. Even though I got “Login Succeeded,” I wasn’t actually logged in — no username showed in &lt;code&gt;docker info&lt;/code&gt;. Here's the fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Add your user to the Docker group&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker &lt;span class="nv"&gt;$USER&lt;/span&gt;
newgrp docker

&lt;span class="c"&gt;# Confirm login&lt;/span&gt;
docker info | &lt;span class="nb"&gt;grep &lt;/span&gt;Username

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now push the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
docker push vvakim/ubuntu-webserver:v1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test it by removing the old container and pulling from Docker Hub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
docker stop ubuntu-web &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; docker &lt;span class="nb"&gt;rm &lt;/span&gt;ubuntu-web

&lt;span class="c"&gt;# Run the pulled image&lt;/span&gt;
docker run &lt;span class="nt"&gt;-dit&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 &lt;span class="nt"&gt;--name&lt;/span&gt; ubuntu-web-alt vvakim/ubuntu-webserver:v1

&lt;span class="c"&gt;# Start Apache inside the container&lt;/span&gt;
docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; ubuntu-web-alt bash
apachectl start

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check your EC2 public IP on port 8080. Apache is back up, this time from your own Docker image!&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 Complex: Networking &amp;amp; Multi-Container Readiness
&lt;/h2&gt;

&lt;p&gt;Let’s simulate a more real-world use case — multiple containers on a shared Docker network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Create a custom Docker network&lt;/span&gt;
docker network create custom-net

&lt;span class="c"&gt;# Run your container using the new network and a different port&lt;/span&gt;
docker run &lt;span class="nt"&gt;-dit&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; ubuntu-web-net &lt;span class="nt"&gt;--network&lt;/span&gt; custom-net &lt;span class="nt"&gt;-p&lt;/span&gt; 8081:80 vvakim/ubuntu-webserver:v1

&lt;span class="c"&gt;# Start Apache inside the new container&lt;/span&gt;
docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; ubuntu-web-net bash
apachectl start

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now check the EC2 public IP on port 8081. You’ve got a second instance running side-by-side!&lt;/p&gt;

&lt;p&gt;You can stop and remove the earlier container on 8080 if you want:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
docker stop ubuntu-web-alt
docker &lt;span class="nb"&gt;rm &lt;/span&gt;ubuntu-web-alt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧱 Roadblocks &amp;amp; Fixes
&lt;/h2&gt;

&lt;p&gt;Let’s be real — it wasn’t smooth sailing the whole time. Here are a few issues I hit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Apache “site not reached” error&lt;/strong&gt; — Apache was installed but not running. Minimal Ubuntu images don’t auto-start services, so I ran &lt;code&gt;service apache2 start&lt;/code&gt; manually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker login issues&lt;/strong&gt; — Even with “Login Succeeded,” I wasn’t really logged in. Fixed it by adding my user to the Docker group and logging in again.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EC2 security group pain&lt;/strong&gt; — Had to manually allow inbound TCP on ports 80, 8080, and 8081 for browser access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;systemctl&lt;/code&gt; not working in container&lt;/strong&gt; — &lt;code&gt;systemctl&lt;/code&gt; isn’t supported in minimal Docker images. Stick to &lt;code&gt;service&lt;/code&gt; or &lt;code&gt;apachectl&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
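The security-group step can also be scripted instead of clicked through the console. This sketch only builds the rule payload for the three ports used in this walkthrough; the commented call and `GroupId` are illustrative of boto3's `authorize_security_group_ingress`:

```python
# Inbound TCP rules for the three ports used in this walkthrough.
# With boto3 you would pass this to:
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
#       IpPermissions=ip_permissions)
ip_permissions = [
    {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Apache demo"}],
    }
    for port in (80, 8080, 8081)
]
print(ip_permissions)
```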




&lt;h2&gt;
  
  
  🎯 Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;So what did we build?&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Apache web server inside Docker ✅&lt;/li&gt;
&lt;li&gt;A versioned Docker image pushed to Docker Hub ✅&lt;/li&gt;
&lt;li&gt;Networked containers running simultaneously ✅&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project helped me internalize how containers work, how Docker networks operate, and how real-world services get packaged and deployed. If you’re learning DevOps, Docker isn’t optional — it’s foundational.&lt;/p&gt;




&lt;h2&gt;
  
  
  📬 Follow My Work
&lt;/h2&gt;

&lt;p&gt;Want more DevOps and cloud projects?&lt;/p&gt;

&lt;p&gt;📝 &lt;a href="https://medium.com/@judewakim" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💼 &lt;a href="https://www.linkedin.com/in/jude-wakim/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 &lt;a href="https://github.com/Judewakim" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💻 &lt;a href="https://dev.to/vvakim"&gt;Dev.to&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drop a follow if you liked this one&lt;/strong&gt; — more hands-on AWS, Docker, and automation content coming soon. 🛠️💡&lt;/p&gt;

</description>
      <category>docker</category>
      <category>aws</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>🎯 Auto Thumbnail Generator: Serverless Image Processing for YouTube Thumbnails</title>
      <dc:creator>Jude Wakim</dc:creator>
      <pubDate>Fri, 25 Apr 2025 17:30:00 +0000</pubDate>
      <link>https://dev.to/vvakim/auto-thumbnail-generator-serverless-image-processing-for-youtube-thumbnails-333d</link>
      <guid>https://dev.to/vvakim/auto-thumbnail-generator-serverless-image-processing-for-youtube-thumbnails-333d</guid>
      <description>&lt;p&gt;&lt;em&gt;The program is a thumbnail generator available via a web browser with full follow-along&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7st6wp44gtms1pt9roa9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7st6wp44gtms1pt9roa9.png" alt="architecture" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📦 Overview: What's this thing do?
&lt;/h2&gt;

&lt;p&gt;This serverless web app lets you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload an image (screenshot, still frame, etc.) via a simple web dashboard.&lt;/li&gt;
&lt;li&gt;A first Lambda, triggered through API Gateway by a POST request, generates a presigned POST policy so the browser can upload the image to S3 with its metadata.&lt;/li&gt;
&lt;li&gt;An S3 event notification automatically triggers a second Lambda function.&lt;/li&gt;
&lt;li&gt;That Lambda processes the image using Pillow to create a properly sized thumbnail (1280x720).&lt;/li&gt;
&lt;li&gt;It adds a text overlay built from the metadata to create the thumbnail text.&lt;/li&gt;
&lt;li&gt;The processed image is stored in a second S3 bucket.&lt;/li&gt;
&lt;li&gt;A presigned URL is returned for download.&lt;/li&gt;
&lt;li&gt;The frontend polls periodically until the corresponding file appears in the processed bucket.&lt;/li&gt;
&lt;li&gt;The second Lambda then generates and returns the presigned URL from the processed bucket.&lt;/li&gt;
&lt;/ol&gt;
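The polling in step 8 can be captured in a small helper. The names here are illustrative, and in the real frontend the check would be the `GET /get-final-url` request:

```python
import time

def poll_until(check, interval=2.0, timeout=60.0):
    """Call check() every `interval` seconds until it returns a truthy value
    (which is returned) or `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("thumbnail was not ready within the timeout")
```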




&lt;h2&gt;
  
  
  🔧 Services: How's this thing do what it does?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt; - store raw uploads and processed images.&lt;br&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt; - perform image processing using Pillow.&lt;br&gt;
&lt;strong&gt;Amazon API Gateway&lt;/strong&gt; - support uploading and presigned URL generation from the web.&lt;br&gt;
&lt;strong&gt;IAM&lt;/strong&gt; - permission control.&lt;br&gt;
&lt;strong&gt;Simple HTML Dashboard&lt;/strong&gt; - static website for uploads and image preview.&lt;/p&gt;

&lt;p&gt;Let's dive in and build this thing! &lt;/p&gt;


&lt;h2&gt;
  
  
  ⚙️Build: How do you get the thing to do what it does?
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Set Up Two S3 Buckets
&lt;/h3&gt;

&lt;p&gt;Just create two S3 buckets. Name them:&lt;br&gt;
&lt;code&gt;something-uploads-related&lt;/code&gt; → stores original uploads&lt;br&gt;
&lt;code&gt;something-processed-related&lt;/code&gt; → stores finalized thumbnails&lt;/p&gt;

&lt;p&gt;Then, on the &lt;code&gt;something-uploads-related&lt;/code&gt; bucket, enable S3 event notifications so that object creation triggers the Lambda function we’re about to write.&lt;/p&gt;
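If you prefer to wire the notification from code rather than the console, this sketch builds the payload that boto3's `put_bucket_notification_configuration` expects; the Lambda ARN is a placeholder:

```python
# Hypothetical Lambda ARN; substitute the ARN of your processing function.
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:thumbnail-processor"

notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "LambdaFunctionArn": LAMBDA_ARN,
            "Events": ["s3:ObjectCreated:*"],  # fire on every new upload
        }
    ]
}

# With boto3 (requires that the Lambda's resource policy already allows
# invocation from S3):
# s3.put_bucket_notification_configuration(
#     Bucket="something-uploads-related",
#     NotificationConfiguration=notification_config)
```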
&lt;h3&gt;
  
  
  2. Write Image Processing Lambda Function with Python &amp;amp; Pillow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3kxyn3oroyxl2xg5g9z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3kxyn3oroyxl2xg5g9z.jpg" alt="Python Pillow" width="224" height="225"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The logic for the function should be as follows: &lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get the image from the &lt;code&gt;something-uploads-related&lt;/code&gt; bucket&lt;/li&gt;
&lt;li&gt;Collect metadata about the image from the user input on the webpage&lt;/li&gt;
&lt;li&gt;Resize it to YouTube's recommended 1280x720 resolution (or maintain aspect ratio + pad) using Pillow&lt;/li&gt;
&lt;li&gt;Generate thumbnail text using the metadata and add text as an overlay &lt;/li&gt;
&lt;li&gt;Save it to &lt;code&gt;something-processed-related&lt;/code&gt; bucket&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;So the flow becomes something like this: &lt;/em&gt;&lt;br&gt;
🡢 The user uploads an image to S3 via the web browser; the webpage provides a field for the thumbnail overlay text. &lt;br&gt;
🡢 The Lambda gets triggered by the upload: Lambda processes the image file and stores it in another S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the basic code for the Lambda function:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from PIL import Image, ImageDraw, ImageFont, UnidentifiedImageError
import boto3
import os
import io

s3 = boto3.client('s3')

# Assume you added "Impact.ttf" font to your deployment package or layer
# Font path and size for text overlay is detailed in the Lambda layer
FONT_PATH = "/opt/Impact.ttf"  # If added via Lambda Layer
FONT_SIZE = 64

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key    = event['Records'][0]['s3']['object']['key']

    try:
        # Get metadata (for custom text)
        head = s3.head_object(Bucket=bucket, Key=key)
        text = head.get("Metadata", {}).get("text", "Your Title Here")

        # Download image
        original_image = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        try:
            image = Image.open(io.BytesIO(original_image)).convert("RGB")
        except UnidentifiedImageError:
            return {
                'statusCode': 400,
                'body': 'The uploaded file is not a valid image.'
            }

    # Resize to 1280x720
    image.thumbnail((1280, 720))

    # Creates a black canvas that is always 1280xx720
    canvas = Image.new('RGB', (1280, 720), (0, 0, 0)) 
    # Paste the resized image on the center of the black canvas
    canvas.paste(image, ((1280 - image.width) // 2, (720 - image.height) // 2))

    # ImageDraw.Draw lets you draw shapes, lines, or text onto a Pillow image
    draw = ImageDraw.Draw(canvas)

    # Apply translucent black overlay for contrast 
    # (later will improve this logic to make it dynamic)
    overlay = Image.new('RGBA', canvas.size, (0, 0, 0, 100))
    canvas = Image.alpha_composite(canvas.convert('RGBA'), overlay)

    # Load font
    font = ImageFont.truetype(FONT_PATH, FONT_SIZE)

    # Text placement
    text_position = (50, 600)
    draw = ImageDraw.Draw(canvas)
    draw.text(text_position, text, font=font, fill='white')

    # Save final output
    output = io.BytesIO()
    canvas.convert("RGB").save(output, format='JPEG')
    output.seek(0)

    # Upload to processed bucket
    processed_key = f"processed-{key}"
    processed_bucket = 'something-processed-related'
    s3.upload_fileobj(output, processed_bucket, processed_key, ExtraArgs={'ContentType': 'image/jpeg'})

    # Generate presigned URL
    url = s3.generate_presigned_url('get_object', Params={
        'Bucket': processed_bucket,
        'Key': processed_key
    }, ExpiresIn=3600)

    return {
        'statusCode': 200,
        'body': url
    }

    except Exception as e:
        return {
            'statusCode': 500,
            'body': f'Error: {str(e)}'
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AWS Lambda does not have Pillow installed by default, so you should create a Lambda layer with Pillow pre-installed and attach it to your function.&lt;/p&gt;

&lt;p&gt;Use a pre-existing layer for Pillow, like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/keithrozario/Klayers/tree/master/deployments/python3.12" rel="noopener noreferrer"&gt;https://github.com/keithrozario/Klayers/tree/master/deployments/python3.12&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the layer is created, just attach it to the function. &lt;/p&gt;

&lt;p&gt;Then, add another Lambda layer for font styles for the thumbnail text. You can find many different fonts here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fontsgeek.com/impactall-font" rel="noopener noreferrer"&gt;https://fontsgeek.com/impactall-font&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once again, when the layer is created, just attach it to the function.&lt;/p&gt;

&lt;p&gt;and…💥&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgagymb6s8oyqvkzyb48s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgagymb6s8oyqvkzyb48s.png" alt="layers" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Write the Lambda to Generate a Presigned POST URL
&lt;/h3&gt;

&lt;p&gt;✅ Use Case Recap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browser sends the file and  text to this Lambda&lt;/li&gt;
&lt;li&gt;Lambda generates a presigned &lt;code&gt;POST&lt;/code&gt; policy&lt;/li&gt;
&lt;li&gt;Browser uploads directly to S3 using the &lt;code&gt;POST&lt;/code&gt; form &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This Lambda code could look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import os
from urllib.parse import parse_qs

s3 = boto3.client('s3')

UPLOAD_BUCKET = 'something-uploads-related'
PROCESSED_BUCKET = 'something-processed-related'

def lambda_handler(event, context):
    method = event.get('httpMethod')
    path = event.get('path')

    if method == 'POST' and path == '/generate-presigned-url':
        return generate_presigned_post(event)
    elif method == 'GET' and path == '/get-final-url':
        return get_processed_image_url(event)
    else:
        return {
            'statusCode': 404,
            'body': json.dumps({'error': 'Route not found'})
        }

def generate_presigned_post(event):
    try:
        body = json.loads(event['body'])
        filename = body.get('filename')
        text = body.get('text', '')  # Optional

        if not filename:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': 'Filename is required'})
            }

        # Create a presigned POST to upload the file with metadata
        presigned_post = s3.generate_presigned_post(
            Bucket=UPLOAD_BUCKET,
            Key=filename,
            Fields={"x-amz-meta-text": text},
            Conditions=[
                {"x-amz-meta-text": text},
                ["starts-with", "$Content-Type", ""]
            ],
            ExpiresIn=300  # 5 minutes
        )

        return {
            'statusCode': 200,
            'body': json.dumps(presigned_post)
        }

    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }

def get_processed_image_url(event):
    try:
        params = event.get('queryStringParameters') or {}
        filename = params.get('filename')

        if not filename:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': 'Missing filename'})
            }

        processed_key = f'processed-{filename}'

        # Check if the file exists
        try:
            s3.head_object(Bucket=PROCESSED_BUCKET, Key=processed_key)
        except s3.exceptions.ClientError as e:
            if e.response['Error']['Code'] == '404':
                return {
                    'statusCode': 404,
                    'body': json.dumps({'message': 'Thumbnail not ready yet'})
                }
            else:
                raise

        # File exists, generate download URL
        presigned_url = s3.generate_presigned_url('get_object', Params={
            'Bucket': PROCESSED_BUCKET,
            'Key': processed_key
        }, ExpiresIn=3600)

        return {
            'statusCode': 200,
            'body': json.dumps({'url': presigned_url})
        }

    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the presigned POST URL Lambda to work, it needs an API Gateway hookup. Create a new HTTP API, add the routes &lt;code&gt;POST /generate-presigned-url&lt;/code&gt; and &lt;code&gt;GET /get-final-url&lt;/code&gt;, and integrate them with the Lambda.&lt;/p&gt;

&lt;p&gt;Then test it from the command line by hitting the API endpoint with a cURL command.&lt;/p&gt;
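&lt;p&gt;If you'd rather script the smoke test than hand-type cURL flags, here is a quick Python sketch using only the standard library. The endpoint URL is a placeholder for your own API Gateway invoke URL, and the actual network call is left commented out so the snippet is safe to run anywhere:&lt;/p&gt;

```python
import json
import urllib.request

# Placeholder: swap in your own API Gateway invoke URL.
API_ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/generate-presigned-url"

# Same JSON body the dashboard will send later.
payload = json.dumps({"filename": "cat.png", "text": "Hello thumbnail"}).encode("utf-8")

request = urllib.request.Request(
    API_ENDPOINT,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the API is deployed:
# with urllib.request.urlopen(request) as resp:
#     print(json.loads(resp.read()))

print(request.method, request.full_url)
```

&lt;p&gt;Either way, the point is the same: the body is JSON carrying &lt;code&gt;filename&lt;/code&gt; and an optional &lt;code&gt;text&lt;/code&gt; field.&lt;/p&gt;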

&lt;h3&gt;
  
  
  4. Create the Web Dashboard
&lt;/h3&gt;

&lt;p&gt;The end user should have a clean, easy dashboard to upload images and receive thumbnails. So we will create a static webpage using S3 that allows the user to: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Upload an image file with text for the thumbnail&lt;br&gt;
🡢Make sure it is an acceptable file type.&lt;br&gt;
🡢The text is passed as metadata.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Receive a thumbnail of the image for download&lt;br&gt;
🡢A link is provided to view and download the image.&lt;br&gt;
🡢The image is stored in an S3 bucket (with a lifecycle policy).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
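&lt;p&gt;The "acceptable file type" check from step 1 can live on both sides of the wire. Here's a hedged sketch of the server-side version in Python (the allow-list here is my assumption, not the repo's):&lt;/p&gt;

```python
from pathlib import Path

# Hypothetical allow-list; adjust to whatever your thumbnailer supports.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif"}

def is_acceptable_image(filename: str) -> bool:
    """Return True when the filename carries an allowed image extension."""
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS

print(is_acceptable_image("cat.PNG"))   # extension check is case-insensitive
print(is_acceptable_image("notes.txt"))
```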

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xkrlvnyv4ljk8kvqzxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xkrlvnyv4ljk8kvqzxu.png" alt="webpage" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How?&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Submit button triggers the Lambda using an API endpoint to generate a presigned URL to upload to S3&lt;br&gt;
🡢API Gateway passes along the image file and thumbnail text stored as metadata&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;JavaScript on the webpage uploads an image file to the S3 bucket with the presigned URL&lt;br&gt;
🡢Uploads to the S3 bucket trigger the Lambda function to process the image and store it in the processed images bucket&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Submit button also triggers another Lambda, via an API endpoint, that polls the processed images S3 bucket for the processed image, creates a presigned URL, and returns it&lt;br&gt;
🡢Same submit button (with a short delay) triggers another Lambda to search the processed images S3 bucket for an object with the proper filename&lt;br&gt;
🡢Poll repeats periodically and when found, generates and returns a presigned URL back to the user for viewing or downloading&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To do so, create another S3 bucket and enable static website hosting, set a bucket policy, and unblock all public access (we can go back and make this more secure later). Then upload an index.html file to the bucket. The JavaScript in my file looked something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script&amp;gt;
    document.getElementById('upload-form').addEventListener('submit', async (e) =&amp;gt; {
      e.preventDefault();

      const text = document.getElementById('text').value;
      const filetype = document.getElementById('filetype').value;
      const fileInput = document.getElementById('image');
      const file = fileInput.files[0];

      if (!file) {
        alert("Please choose a file.");
        return;
      }

      // Step 1: Request a presigned URL
      const presignedRes = await fetch("[API-ENDPOINT]", {
        method: "POST",
        headers: {
          "Content-Type": "application/json"
        },
        body: JSON.stringify({
          text: text,
          filetype: filetype
        })
      });

      const { url, filename } = await presignedRes.json();

      // Step 2: Upload file to S3 using presigned URL
      const uploadRes = await fetch(url, {
        method: "PUT",
        headers: {
          "Content-Type": file.type
        },
        body: file
      });

      if (uploadRes.ok) {
        document.getElementById("result").innerHTML = `
          &amp;lt;strong&amp;gt;✅ Uploaded successfully!&amp;lt;/strong&amp;gt;&amp;lt;br&amp;gt;
          Processed thumbnail will appear shortly:&amp;lt;br&amp;gt;
          &amp;lt;code&amp;gt;${filename}&amp;lt;/code&amp;gt;
        `;

        setTimeout(() =&amp;gt; {
            checkForThumbnail(filename);
        }, 3000);
      } else {
        document.getElementById("result").innerHTML = "❌ Upload failed.";
      }
    });
    async function checkForThumbnail(filename, retries = 5) {
    const pollRes = await fetch(`[API-ENDPOINT]`);

    if (pollRes.ok) {
      const { downloadUrl } = await pollRes.json();
      document.getElementById("result").innerHTML += `
        &amp;lt;br&amp;gt;&amp;lt;strong&amp;gt;✅ Thumbnail Ready!&amp;lt;/strong&amp;gt;&amp;lt;br&amp;gt;
        &amp;lt;a href="${downloadUrl}" target="_blank" download&amp;gt;Click here to view&amp;lt;/a&amp;gt;&amp;lt;br&amp;gt;

      `;
    } else if (retries &amp;gt; 0) {
      setTimeout(() =&amp;gt; {
        checkForThumbnail(filename, retries - 1);
      }, 3000); // Retry after 3 seconds
    } else {
      document.getElementById("result").innerHTML += `&amp;lt;br&amp;gt;⏳ Thumbnail still processing. Try again shortly.`;
    }
  }
  &amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Create the Third Lambda for Polling
&lt;/h3&gt;

&lt;p&gt;At this point in the project, the webpage is up and communicating with the presigned URL generator Lambda. When you hit the "generate thumbnail" button, the API endpoint passes the image filename and thumbnail text to the URL generation Lambda, which creates the presigned URL and returns it to the webpage. Then the webpage uploads the image file to the uploads bucket using the URL. When the image file gets uploaded to the bucket, the event notification triggers the thumbnail generator Lambda, which creates the thumbnail and stores it in the processed images bucket. &lt;/p&gt;

&lt;p&gt;Now we need to set up how we will get the processed image back to the user. We will do this with another Lambda function that polls the processed images bucket for the new file and returns it to the user. So, we will put another API endpoint on the "generate thumbnail" button, which waits a couple of seconds, then triggers the polling Lambda. The code for this Lambda is below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3

s3 = boto3.client('s3')

PROCESSED_BUCKET = 'something-processed-related' 

def lambda_handler(event, context):
    try:
        # If REST API: filename is in queryStringParameters
        # If HTTP API: filename may be in event['queryStringParameters']
        params = event.get("queryStringParameters") or event.get("querystring") or {}
        filename = params.get("filename")
        if not filename:
            return {
                'statusCode': 400,
                'body': json.dumps("Missing filename parameter.")
            }

        processed_key = f"processed-{filename}"

        # Try to get object metadata to confirm it exists
        s3.head_object(Bucket=PROCESSED_BUCKET, Key=processed_key)

        # If it exists, generate presigned URL
        url = s3.generate_presigned_url('get_object', Params={
            'Bucket': PROCESSED_BUCKET,
            'Key': processed_key
        }, ExpiresIn=3600)

        return {
            'statusCode': 200,
            'headers': {'Access-Control-Allow-Origin': '*'},  # CORS
            'body': json.dumps({'downloadUrl': url})
        }

    except s3.exceptions.ClientError as e:
        if e.response['Error']['Code'] == '404':
            return {
                'statusCode': 404,
                'headers': {'Access-Control-Allow-Origin': '*'},  # CORS
                'body': json.dumps("Thumbnail not ready yet.")
            }
        return {
            'statusCode': 500,
            'headers': {'Access-Control-Allow-Origin': '*'},  # CORS
            'body': json.dumps("Error checking file.")
        }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
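&lt;p&gt;The dashboard retries this endpoint on a fixed three-second timer. The same poll-until-ready pattern, factored into a reusable helper so the retry logic is easy to exercise locally (a sketch of the idea, not code from the repo):&lt;/p&gt;

```python
import time
from typing import Callable, Optional

def poll(check: Callable[[], Optional[str]],
         retries: int = 5,
         delay: float = 3.0,
         sleep: Callable[[float], None] = time.sleep) -> Optional[str]:
    """Call `check` until it returns a value or the retries run out."""
    for _ in range(retries):
        result = check()
        if result is not None:
            return result
        sleep(delay)
    return None

# Simulate a thumbnail that shows up on the third poll; inject a no-op
# sleep so the demo finishes instantly.
answers = iter([None, None, "https://example.com/processed-cat.png"])
url = poll(lambda: next(answers), retries=5, sleep=lambda _: None)
print(url)  # https://example.com/processed-cat.png
```

&lt;p&gt;Injecting the sleep function keeps the helper testable; in the browser the equivalent knobs are the retry count and the &lt;code&gt;setTimeout&lt;/code&gt; delay.&lt;/p&gt;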



&lt;h3&gt;
  
  
  6. Security &amp;amp; IAM
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Lambda needs &lt;code&gt;s3:GetObject&lt;/code&gt;, &lt;code&gt;s3:PutObject&lt;/code&gt; for both buckets, but not more.&lt;/li&gt;
&lt;li&gt;Restrict the bucket policy and bucket access.&lt;/li&gt;
&lt;li&gt;If using presigned POST or API Gateway, you'll control access via CORS, so set a CORS configuration on the bucket and the API.&lt;/li&gt;
&lt;/ul&gt;
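&lt;p&gt;To make the first bullet concrete, the policy document might look like the sketch below, written as the Python dict you would attach to the Lambda's execution role (the bucket names are the placeholders used earlier in this post):&lt;/p&gt;

```python
import json

# Placeholder bucket names; substitute your own.
UPLOAD_BUCKET = "something-upload-related"
PROCESSED_BUCKET = "something-process-related"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # The thumbnailer reads uploads...
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{UPLOAD_BUCKET}/*",
        },
        {   # ...and writes processed images. Nothing more.
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{PROCESSED_BUCKET}/*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

&lt;p&gt;Scoping each action to one bucket ARN keeps a compromised function from touching anything else in the account.&lt;/p&gt;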




&lt;h2&gt;
  
  
  Issues: Why isn't this thing doing what it does ❓
&lt;/h2&gt;

&lt;h3&gt;
  
  
1. 🛡️ CORS
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt;&lt;br&gt;
 My frontend was politely trying to talk to my Lambda endpoints… but CORS was like "who are you again?"&lt;br&gt;
&lt;strong&gt;Symptom:&lt;/strong&gt;&lt;br&gt;
 Chrome Dev Tools screamed:&lt;br&gt;
 &lt;code&gt;Access to fetch at '...' from origin '...' has been blocked by CORS policy.&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
 I added the right headers in every Lambda response, especially for 4XX and 5XX errors, which are often forgotten:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'headers': {
  'Access-Control-Allow-Origin': '*'
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
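&lt;p&gt;Rather than pasting that header into every return statement, you could wrap response-building once so even the error paths get it for free. A small sketch (my refactor, not the repo's code):&lt;/p&gt;

```python
import json

def respond(status_code: int, payload) -> dict:
    """Build a Lambda proxy response that always carries the CORS header,
    including on the 4XX/5XX paths where it's easy to forget."""
    return {
        "statusCode": status_code,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps(payload),
    }

print(respond(404, {"error": "Route not found"}))
```

&lt;p&gt;Every branch of the handler then becomes a one-liner like &lt;code&gt;return respond(400, {"error": "Filename is required"})&lt;/code&gt;.&lt;/p&gt;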



&lt;h3&gt;
  
  
2. 🔁 Thumbnail "Still Processing" Forever
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt;&lt;br&gt;
 Even after uploading, the frontend would just loop "Thumbnail still processing. Try again shortly."&lt;br&gt;
&lt;strong&gt;Cause:&lt;/strong&gt;&lt;br&gt;
 The Lambda that checks for the processed thumbnail was using &lt;code&gt;head_object()&lt;/code&gt; to look for the file, but my IAM role didn't have &lt;code&gt;s3:GetObject&lt;/code&gt; permission, which is sneakily required to do a &lt;code&gt;HeadObject&lt;/code&gt;!&lt;br&gt;
&lt;strong&gt;Fix:&lt;/strong&gt;&lt;br&gt;
 I added this permission to the role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::your-processed-bucket-name/*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
3. ✔️❌ Good URL, Bad Upload
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt;&lt;br&gt;
 I'd get a valid presigned URL, but uploading the file would silently fail or return a confusing 403.&lt;br&gt;
&lt;strong&gt;Diagnosis:&lt;/strong&gt;&lt;br&gt;
 The &lt;code&gt;Content-Type&lt;/code&gt; of the file must match the one used when generating the URL. A mismatch breaks the signature.&lt;br&gt;
&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
 I explicitly set the &lt;code&gt;Content-Type&lt;/code&gt; during both the presigned URL generation in Lambda and the &lt;code&gt;fetch()&lt;/code&gt; upload call on the frontend.&lt;/p&gt;
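&lt;p&gt;One way to keep both sides in agreement is to derive the &lt;code&gt;Content-Type&lt;/code&gt; from the filename in one place and reuse that value when signing and when uploading. A standard-library sketch (my addition, not code from the repo):&lt;/p&gt;

```python
import mimetypes

def content_type_for(filename: str) -> str:
    """Guess the MIME type from the filename, with a safe fallback."""
    guessed, _ = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"

# The same value would be used both when generating the presigned URL
# and as the Content-Type header on the upload request.
print(content_type_for("photo.png"))  # image/png
print(content_type_for("clip.gif"))
```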




&lt;h2&gt;
  
  
  V2 Improvements: How to do what it does…better
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Improve Logic -&lt;/strong&gt; Thumbnails generally have text on top of the images and can be added in many different ways, depending on the image itself. Make the text overlay dynamic depending on the colors and contrast of the image itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code (IaC) -&lt;/strong&gt; Make this whole project into one file in Terraform. Automation and easy source control.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  🧪 Want to Try It Yourself?
&lt;/h2&gt;

&lt;p&gt;You can view the full repo and clone it for your own AWS account here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Judewakim/thumbnailgenerator" rel="noopener noreferrer"&gt;https://github.com/Judewakim/thumbnailgenerator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more projects like this, check out my other Medium posts, along with my GitHub and LinkedIn.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🚀 How I Hosted a Static Website on Azure (in Under 15 Minutes)</title>
      <dc:creator>Jude Wakim</dc:creator>
      <pubDate>Tue, 15 Apr 2025 00:31:04 +0000</pubDate>
      <link>https://dev.to/vvakim/how-i-hosted-a-static-website-on-azure-in-under-15-minutes-3ala</link>
      <guid>https://dev.to/vvakim/how-i-hosted-a-static-website-on-azure-in-under-15-minutes-3ala</guid>
      <description>&lt;p&gt;&lt;em&gt;Thinking of using Azure to host a simple static site?&lt;/em&gt; You’re in the right place.&lt;/p&gt;

&lt;p&gt;In this short post, I’ll walk you through how I used Azure Blob Storage to host a static website in minutes — without overcomplicating it. No VMs, no servers, no chaos. Just clean cloud goodness.&lt;/p&gt;

&lt;h2&gt;
  
  
  ☁️ First, Why Azure?
&lt;/h2&gt;

&lt;p&gt;Azure is Microsoft’s cloud platform — same league as AWS and Google Cloud. What makes Azure nice?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrated with everything Microsoft&lt;/li&gt;
&lt;li&gt;Great UI for beginners&lt;/li&gt;
&lt;li&gt;You get $200 free with a trial account (aka: zero dollars to deploy your site)&lt;/li&gt;
&lt;li&gt;And with Azure Blob Storage, you can turn a simple storage container into a static website with public access — perfect for portfolios, documentation, or proof-of-concept apps.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛠️ Step-by-Step: Host a Static Website on Azure Blob Storage
&lt;/h2&gt;

&lt;p&gt;Let’s get into it. No fluff — just what I did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a Storage Account&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Head to the Azure Portal at portal.azure.com and sign in (or sign up).&lt;/li&gt;
&lt;li&gt;Create a new Storage Account.&lt;/li&gt;
&lt;li&gt;Choose a region, name it something unique like 'myawesomestorage123'.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;2. Create a Blob Container&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Inside your new Storage Account, go to Containers.&lt;/li&gt;
&lt;li&gt;Create a new container, name it web or website or something simple.&lt;/li&gt;
&lt;li&gt;Set Public access level to Container (anonymous read access).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;3. Upload Your Site&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload your index.html file to the container.&lt;/li&gt;
&lt;li&gt;If you’re fancy, add an error.html too.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;4. Enable Static Website Hosting&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Still in your storage account, find Static website under settings.&lt;/li&gt;
&lt;li&gt;Turn it on, set index.html as the default.&lt;/li&gt;
&lt;li&gt;Azure will give you a public endpoint like: 'https://&amp;lt;your-storage-account&amp;gt;.z13.web.core.windows.net'&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;5. Test It&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open that URL in incognito mode.&lt;/li&gt;
&lt;li&gt;Boom. Site live. Cloud flex unlocked. 💪&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  ⚠️ Common Issues &amp;amp; Fixes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;❌ Blob Container Access Level is Greyed Out&lt;/strong&gt;&lt;br&gt;
When you try to set your Blob container to Container (anonymous read access) and it’s greyed out, don’t panic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✅ Fix It:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your Storage Account settings.&lt;/li&gt;
&lt;li&gt;Find Configuration.&lt;/li&gt;
&lt;li&gt;Flip the switch for Allow blob anonymous access to Enabled.&lt;/li&gt;
&lt;li&gt;Save it.&lt;/li&gt;
&lt;li&gt;Now go back and set the public access level — good to go.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;❌ CDN Not Registered&lt;/strong&gt;&lt;br&gt;
I tried to add Azure Front Door to boost performance and HTTPS support. But if you’re on a Free Trial, you’ll probably hit this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;gt; Error: Microsoft.Cdn is not registered for the subscription&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✅ How to Fix It (If You Have Access)&lt;/strong&gt;&lt;br&gt;
If you’re not on a Free Trial, here’s the fix using the Azure CLI:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;az provider register --namespace Microsoft.Cdn&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Done! Now, you can use Azure Front Door or Azure CDN. But if you’re on a trial account, just know it won’t work until you upgrade. Not a dealbreaker for simple sites.&lt;/p&gt;
&lt;h2&gt;
  
  
  🧹 Clean-Up: Don’t Forget to Shut It Down
&lt;/h2&gt;

&lt;p&gt;If you’re done with your project and want to avoid unexpected charges (or just like a clean dashboard), here’s how to clean up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete the Blob Container&lt;/li&gt;
&lt;li&gt;Delete the Storage Account&lt;/li&gt;
&lt;li&gt;Double-check Resource Group usage if you created one just for this&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s it — you’re done.&lt;/p&gt;
&lt;h2&gt;
  
  
  🏁 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Hosting a static site on Azure is easy, fast, and free (if you’re on a trial). You don’t need VMs or complicated infra to get a portfolio or simple app online.&lt;/p&gt;

&lt;p&gt;And if you want more performance or HTTPS, Azure Front Door is there when you’re ready (and off the free trial 😅).&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://github.com/Judewakim?tab=repositories" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Favatars.githubusercontent.com%2Fu%2F59939123%3Fv%3D4%3Fs%3D400" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://github.com/Judewakim?tab=repositories" rel="noopener noreferrer" class="c-link"&gt;
            Judewakim (Jude Wakim) / Repositories · GitHub
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Hi, I’m a DevOps Cloud Engineer. I use AWS ☁️, Python 🐍, Terraform🏗️, and Linux 🐧 to build things for fun. - Judewakim
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.githubassets.com%2Ffavicons%2Ffavicon.svg"&gt;
          github.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Want to see more quick cloud wins like this? Follow me — I build fast, clean, cloud-native solutions for real-world projects. Check out my other Medium posts, my GitHub, and my LinkedIn.&lt;/p&gt;

&lt;p&gt;Catch you in the cloud. ☁️&lt;/p&gt;

</description>
      <category>azure</category>
      <category>webdev</category>
      <category>cloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>STOP USING THE AWS CONSOLE…use CloudFormation (IaC) instead</title>
      <dc:creator>Jude Wakim</dc:creator>
      <pubDate>Fri, 21 Mar 2025 17:30:00 +0000</pubDate>
      <link>https://dev.to/vvakim/stop-using-the-aws-consoleuse-cloudformation-iac-instead-a4e</link>
      <guid>https://dev.to/vvakim/stop-using-the-aws-consoleuse-cloudformation-iac-instead-a4e</guid>
      <description>&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;Here’s a real-world scenario…imagine “Aurora Digital, an expanding online retailer specializing in luxury home goods, has decided to transition its digital operations to AWS to leverage cloud computing’s benefits. The company is experiencing significant growth and needs a solution that can handle increasing traffic and scale during major sales events without sacrificing performance or security.”&lt;/p&gt;

&lt;p&gt;Now, your goal is to “help Aurora Digital implement a cloud-based infrastructure using AWS services, specifically focusing on scalability, security, cost-efficiency, and automation.” &lt;em&gt;-Level Up In Tech, CDA03-CloudFormation&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Enhance Scalability:&lt;/strong&gt; Automatically adjust computing resources to meet demand, ensuring the platform remains operational and responsive during peak traffic periods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increase Reliability:&lt;/strong&gt; Improve website uptime and reduce the risk of failures associated with physical hardware and manual interventions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Boost Security:&lt;/strong&gt; Implement advanced security protocols and isolation through AWS to protect sensitive customer data and transactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduce Operational Costs:&lt;/strong&gt; Lower overall infrastructure costs by optimizing resource usage and eliminating the need for upfront hardware investments.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why use CloudFormation (IaC)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Automation:&lt;/strong&gt; Reduce human error and free up developer time to focus on other tasks by automating the provisioning and management of a complex cloud environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency:&lt;/strong&gt; Ensure consistent environments are created every time, thereby eliminating the “it works on my machine” problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version Control:&lt;/strong&gt; Enhance collaboration among team members by allowing infrastructure to be version-controlled and reviewed as part of the application code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reusability:&lt;/strong&gt; Reuse templates across the company or community, speeding up future deployments and ensuring best practices are followed.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Be familiar with IaC&lt;/li&gt;
&lt;li&gt;Be familiar with YAML&lt;/li&gt;
&lt;li&gt;Be familiar with the AWS CLI&lt;/li&gt;
&lt;li&gt;Have a text editor (I use VSC)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Create the Solution Architecture
&lt;/h2&gt;

&lt;p&gt;So the first step when receiving an objective is designing the solution. Before you start building, you have to know what you are trying to build. To do that, it’s always best practice to create a diagram. I use Draw.io.&lt;/p&gt;

&lt;p&gt;For this objective, we need to create a web server for Aurora Digital to host their website. We also need this web server to be automatically scalable in the event of spikes, secure to protect the company’s assets, and resilient in the event of disaster.&lt;/p&gt;

&lt;p&gt;This architecture will have multiple EC2 instances running the Apache web server. An auto scaling group will automatically adjust the number of instances needed to handle the traffic, making the design scalable. The instances will be spread across three public subnets in three availability zones, all within a VPC, making the design resilient to a disaster in any one area. We will use a load balancer to, guess what… balance the load of traffic between the instances. This also helps with resilience: if one instance in one availability zone goes down, the auto scaling group creates a replacement in another availability zone, and if one instance is underperforming or overwhelmed with traffic, the load balancer redistributes the load to optimize the utilization of all the instances. Finally, we will use security groups to restrict access to the web servers so they only accept traffic from the load balancer. There will be two security groups (the load balancer's security group is not displayed in the diagram): one for the web servers that restricts inbound traffic to HTTP from the load balancer only, and one for the load balancer that allows HTTP traffic from anywhere. This routes all traffic through the same path, making it easier to control and secure.&lt;/p&gt;
&lt;h2&gt;
  
  
  Create the YAML Script
&lt;/h2&gt;

&lt;p&gt;Now that you have the diagram and know what you are building, build it. Create the script using YAML or JSON (I will use YAML).&lt;/p&gt;

&lt;p&gt;Below is my complete YAML script that I created in Visual Studio Code (VSC).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cda03-cloudformation 

AWSTemplateFormatVersion: "2010-09-09"
Description: The CloudFormation template in YAML for CDA03-CloudFormation use case.

Resources:
  #vpc
  CDA03VPC:
    Type: AWS::EC2::VPC
    Properties:  
      CidrBlock: 10.10.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: CDA03VPC

  #subnet 1
  CDA03PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref CDA03VPC
      CidrBlock: 10.10.1.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: us-east-1a
      Tags:
          - Key: Name
            Value: CDA03PublicSubnet1

  #subnet 2
  CDA03PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref CDA03VPC
      CidrBlock: 10.10.2.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: us-east-1b
      Tags:
        - Key: Name
          Value: CDA03PublicSubnet2

  #subnet 3
  CDA03PublicSubnet3:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref CDA03VPC
      CidrBlock: 10.10.3.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: us-east-1c
      Tags:
        - Key: Name
          Value: CDA03PublicSubnet3

  #igw
  CDA03InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties: 
      Tags: 
        - Key: Name
          Value: CDA03InternetGateway

  #igw attachment
  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref CDA03VPC
      InternetGatewayId: !Ref CDA03InternetGateway

  #route table
  CDA03RouteTable:
    Type: AWS::EC2::RouteTable
    Properties: 
      VpcId: !Ref CDA03VPC
      Tags: 
        - Key: Name
          Value: CDA03RouteTable

  #route to internet
  PublicRoute:
    Type: AWS::EC2::Route
    Properties: 
      RouteTableId: !Ref CDA03RouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref CDA03InternetGateway


  # Subnet Route Table Associations
  Subnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref CDA03PublicSubnet1
      RouteTableId: !Ref CDA03RouteTable

  Subnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref CDA03PublicSubnet2
      RouteTableId: !Ref CDA03RouteTable

  Subnet3RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref CDA03PublicSubnet3
      RouteTableId: !Ref CDA03RouteTable

  #key pair
  CDA03KeyPair:
    Type: AWS::EC2::KeyPair
    Properties:  
      KeyName: cda03-keypair
      KeyType: rsa

  #launch template
  CDA03LaunchTemplate: 
    Type: AWS::EC2::LaunchTemplate
    Properties: 
      LaunchTemplateData:
        InstanceType: t2.micro
        KeyName: cda03-keypair
        ImageId: ami-0c02fb55956c7d316 # Amazon Linux 2 AMI ID (specific to region)
        SecurityGroupIds:
          - !Ref CDA03WebserverSecurityGroup
        UserData:
          Fn::Base64: |
            #!/bin/bash
            sudo yum update -y &amp;amp;&amp;amp; sudo yum upgrade -y
            sudo yum install httpd -y
            sudo systemctl start httpd
            sudo systemctl enable httpd
            cd /usr/share/httpd/noindex/
            chown apache:apache /usr/share/httpd/noindex/index.html
            chmod 644 /usr/share/httpd/noindex/index.html
            echo "hello world, from $HOSTNAME" | sudo tee /usr/share/httpd/noindex/index.html &amp;gt; /dev/null
        TagSpecifications:
          - ResourceType: instance
            Tags:
              - Key: Name
                Value: CDA03-ApacheInstance
      LaunchTemplateName: CDA03LaunchTemplate

  #webserver sg
  CDA03WebserverSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP access only from the ALB
      VpcId: !Ref CDA03VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          SourceSecurityGroupId: !Ref CDA03ALBSecurityGroup
      Tags:
        - Key: Name
          Value: CDA03WebserverSecurityGroup

  #asg
  CDA03ASG: 
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchTemplate:
        LaunchTemplateId: !Ref CDA03LaunchTemplate
        Version: !GetAtt  CDA03LaunchTemplate.LatestVersionNumber
      MinSize: 2
      DesiredCapacity: 2
      MaxSize: 5
      VPCZoneIdentifier:
        - !Ref CDA03PublicSubnet1
        - !Ref CDA03PublicSubnet2
        - !Ref CDA03PublicSubnet3
      TargetGroupARNs:
        - !Ref CDA03TargetGroup
      Tags: 
        - Key: Name
          Value: CDA03ASG
          PropagateAtLaunch: true

  # Target Group
  CDA03TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: CDA03TargetGroup
      Protocol: HTTP
      Port: 80
      VpcId: !Ref CDA03VPC
      TargetType: instance
      HealthCheckEnabled: true
      HealthCheckPath: /
      HealthCheckIntervalSeconds: 30
      HealthCheckTimeoutSeconds: 5
      HealthyThresholdCount: 3
      UnhealthyThresholdCount: 2
      Matcher:
        HttpCode: 200

  #alb
  CDA03LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties: 
      Name: CDA03ApplicationLoadBalancer
      Scheme: internet-facing
      Subnets:
        - !Ref CDA03PublicSubnet1
        - !Ref CDA03PublicSubnet2
        - !Ref CDA03PublicSubnet3
      SecurityGroups: 
        - !Ref CDA03ALBSecurityGroup
      Tags:
        - Key: Name
          Value: CDA03LoadBalancer

  #alb sg
  CDA03ALBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties: 
      GroupDescription: Allow HTTP traffic from anywhere to ALB
      VpcId: !Ref CDA03VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
      Tags:
        - Key: Name
          Value: CDA03ALBSecurityGroup

  #alb listener
  CDA03ALBListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties: 
      DefaultActions: 
        - Type: forward
          TargetGroupArn: !Ref CDA03TargetGroup
      LoadBalancerArn: !Ref CDA03LoadBalancer
      Port: 80
      Protocol: HTTP

#outputs (dns url)
Outputs:
  ALBDNSName:
    Description: "DNS name of the Application Load Balancer"
    Value: !GetAtt CDA03LoadBalancer.DNSName
    Export:
      Name: "ALBDNSName"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;Once I finished writing the template, I used the AWS CLI to validate it. In the terminal inside VS Code, I ran &lt;code&gt;aws cloudformation validate-template --template-body file://cda03-cloudformation-code.yml&lt;/code&gt; to check that the template was syntactically correct and ready to deploy. &lt;/p&gt;

&lt;p&gt;This was my response:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;An error occurred (ValidationError) when calling the ValidateTemplate operation: Invalid template resource property 'ALBDNSName'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There was an error with the output. I wanted the CloudFormation stack to output the load balancer’s DNS name so I wouldn’t have to hunt for it in the console. After a little troubleshooting, I realized I had nested the &lt;code&gt;Outputs&lt;/code&gt; block inside the &lt;code&gt;Resources&lt;/code&gt; block. &lt;code&gt;Outputs&lt;/code&gt; is not a resource; it is a top-level template section that belongs at the same indentation level as &lt;code&gt;Resources&lt;/code&gt;. After that minor adjustment, I saved the file and ran the validation again.&lt;/p&gt;
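&lt;p&gt;For anyone hitting the same error, the fix is purely structural. A minimal sketch of the wrong and right placement (properties trimmed for brevity):&lt;/p&gt;

```yaml
# Wrong: Outputs nested under Resources -- CloudFormation tries to
# parse it as a resource and validation fails
# Resources:
#   CDA03LoadBalancer:
#     Type: AWS::ElasticLoadBalancingV2::LoadBalancer
#   Outputs: ...

# Right: Outputs is a top-level section, a sibling of Resources
Resources:
  CDA03LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    # ...properties...
Outputs:
  ALBDNSName:
    Value: !GetAtt CDA03LoadBalancer.DNSName
```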

&lt;h2&gt;
  
  
  Success
&lt;/h2&gt;

&lt;p&gt;Now that the template validates, we can create the CloudFormation stack. Run &lt;code&gt;aws cloudformation create-stack --stack-name CDA03Stack --template-body file://cda03-cloudformation-code.yml&lt;/code&gt;, then kick back and relax while CloudFormation creates everything. Give it a few minutes for all the services to finish provisioning.&lt;/p&gt;

&lt;p&gt;If you query the stack for the load balancer’s DNS name before the stack is ready, you will get a result of &lt;code&gt;None&lt;/code&gt;, which just means provisioning isn’t finished yet; wait and try again.&lt;/p&gt;
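&lt;p&gt;One way to poll for that output from the CLI, assuming the stack name &lt;code&gt;CDA03Stack&lt;/code&gt; and the &lt;code&gt;ALBDNSName&lt;/code&gt; output key from the template above (this is a sketch; it requires configured AWS credentials):&lt;/p&gt;

```shell
# Print the ALB DNS name once the stack has finished creating;
# prints "None" until the stack's Outputs exist
aws cloudformation describe-stacks \
  --stack-name CDA03Stack \
  --query "Stacks[0].Outputs[?OutputKey=='ALBDNSName'].OutputValue" \
  --output text
```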

&lt;p&gt;Go to that address to confirm the web server is up and running. Make sure you use HTTP, not HTTPS, because HTTP is all the listener serves in our YAML. Refresh the page a few times to verify that the load balancer is distributing requests across different instances.&lt;/p&gt;

&lt;p&gt;Lastly, let’s confirm that the web server is reachable only through the load balancer and not via the instances’ public IP addresses directly. To find those addresses from the AWS CLI, run &lt;code&gt;aws ec2 describe-instances --region us-east-1&lt;/code&gt; to list all the instances in the us-east-1 region. Copy a public IP address from the output and paste it into a browser (using HTTP) to confirm that you cannot reach the web server.&lt;/p&gt;
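&lt;p&gt;If you want just the addresses instead of the full JSON, a &lt;code&gt;--query&lt;/code&gt; filter narrows the output (a sketch, again assuming configured credentials):&lt;/p&gt;

```shell
# List only the public IPv4 addresses of running instances in us-east-1
aws ec2 describe-instances \
  --region us-east-1 \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text
```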




&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; &lt;br&gt;
If you followed along and reached this point, you should now see why IaC beats manually creating everything in the console. You built a web server that is scalable, resilient, and secure, and you can now automate its creation.&lt;/p&gt;

&lt;p&gt;To delete the stack from the AWS CLI, run &lt;code&gt;aws cloudformation delete-stack --stack-name CDA03Stack&lt;/code&gt;. The command returns no output; wait a few minutes while the entire stack is de-provisioned.&lt;/p&gt;




&lt;p&gt;Originally published on &lt;a href="https://medium.com/aws-tip/stop-using-the-aws-console-use-cloudformation-iac-instead-68fccdb18e75" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Find me on &lt;a href="//www.linkedin.com/in/jude-wakim"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For similar projects, check out my &lt;a href="https://github.com/Judewakim" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudformation</category>
      <category>infrastructureascode</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Master Python Automation: Extract and Display File Info Like a Pro</title>
      <dc:creator>Jude Wakim</dc:creator>
      <pubDate>Fri, 21 Mar 2025 17:03:00 +0000</pubDate>
      <link>https://dev.to/vvakim/master-python-automation-extract-and-display-file-info-like-a-pro-21a2</link>
      <guid>https://dev.to/vvakim/master-python-automation-extract-and-display-file-info-like-a-pro-21a2</guid>
      <description>&lt;h2&gt;
  
  
  &lt;em&gt;Scenario&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;“Your company needs to learn about the files located on various machines. You have been asked to build a script that extracts information such as the name and size about the files in the current working directory and stores it in a list of dictionaries.” -LUIT, Python2&lt;/p&gt;

&lt;p&gt;Automation scripting is a game-changer, and the LUIT-Python2 GitHub repository offers a small, practical example. It contains a Python script that collects details about the files on a machine — the kind of building block you can extend for larger system-administration and reporting tasks.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Automate Infrastructure Management?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Efficiency:&lt;/strong&gt; By automating repetitive tasks, you save time and reduce the risk of human error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency:&lt;/strong&gt; Each infrastructure deployment is identical, ensuring that “it works on my machine” never becomes an issue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; Automated scripts can scale effortlessly, provisioning multiple environments with just a few commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintainability:&lt;/strong&gt; Once an automated script is written, it can be reused, updated, and version-controlled.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Before diving into the repository, make sure you have Python 3.6 or newer installed on your machine — despite the repository’s name, the script uses f-strings, which require Python 3.6+. A basic understanding of Linux-based systems also helps, since the output mimics a Linux command-line listing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clone the Repository:&lt;/strong&gt; Start by cloning the LUIT-Python2 repository from GitHub:&lt;br&gt;
&lt;code&gt;git clone https://github.com/Judewakim/LUIT-Python2.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Dependencies:&lt;/strong&gt; The repository uses a set of Python packages to handle various tasks. You can install them using pip:&lt;br&gt;
&lt;code&gt;cd LUIT-Python2&lt;br&gt;
pip install -r requirements.txt&lt;/code&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Writing Python
&lt;/h2&gt;

&lt;p&gt;To break down the essential components of the data_extraction.py file, let’s start with the imports. The script needs the &lt;code&gt;os&lt;/code&gt; module to read file information from the operating system and the &lt;code&gt;time&lt;/code&gt; module to format timestamps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we define the first function, which collects all the information we will need. Remember, the goal is to “build a script that extracts information such as the name and size about the files in the current working directory,” so we gather details about the working directory (or whatever directory is specified) and later present them similarly to the &lt;code&gt;ls -al&lt;/code&gt; command in the Linux command line. That function looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_files_info(path='.'):  # Function that collects file details, defaulting to current directory
    files_info = []  # List to store file details

    for root, _, files in os.walk(path):  # Recursively traverse directories and files
        for filename in files:
            file_path = os.path.join(root, filename)  # Construct the full file path
            file_stat = os.stat(file_path)  # Get file details/statistics

            files_info.append({  # Store file details in a dictionary
                'name': filename,  # File name
                'path': file_path,  # Full file path
                'size_bytes': file_stat.st_size,  # File size in bytes
                'last_modified': time.ctime(file_stat.st_mtime),  # Last modified time
                'permissions': oct(file_stat.st_mode)[-3:],  # File permissions in octal format (last 3 digits)
            })

    return files_info  # Return the list of file details
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
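&lt;p&gt;To see the shape of the data this returns, here’s a compact self-check. The function body is repeated so the snippet runs on its own, and the temporary directory and file name are just for illustration:&lt;/p&gt;

```python
import os
import time
import tempfile

def get_files_info(path='.'):
    # Same logic as above: walk the tree and collect per-file details
    files_info = []
    for root, _, files in os.walk(path):
        for filename in files:
            file_path = os.path.join(root, filename)
            file_stat = os.stat(file_path)
            files_info.append({
                'name': filename,
                'path': file_path,
                'size_bytes': file_stat.st_size,
                'last_modified': time.ctime(file_stat.st_mtime),
                'permissions': oct(file_stat.st_mode)[-3:],
            })
    return files_info

# Quick check against a temporary directory holding one known file
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, 'hello.txt'), 'w') as f:
        f.write('hi')
    info = get_files_info(d)
    print(info[0]['name'], info[0]['size_bytes'])  # hello.txt 2
```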



&lt;p&gt;Now, we have a function that collects the file name, file path, file size, last modification time, and file permissions of each file in the path. The next thing to do is create another function that will display all this collected data in the way we want it. That second function will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def print_ll_view(files_info):  # Function to print file details in 'll' view format
    for file in files_info:  # Loop through the list of file dictionaries
        print(f"{file['permissions']} {file['size_bytes']} {file['last_modified']} {file['path']}")  # Print file details
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
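&lt;p&gt;A note on the permissions field: &lt;code&gt;oct(file_stat.st_mode)[-3:]&lt;/code&gt; gives the numeric form, such as 644. If you prefer the symbolic rwx string that &lt;code&gt;ls -l&lt;/code&gt; prints, the standard library’s &lt;code&gt;stat.filemode&lt;/code&gt; converts the full mode for you — an optional tweak, not part of the original script:&lt;/p&gt;

```python
import stat

# Convert a raw st_mode value into an ls-style permission string
mode = 0o100644  # a regular file with rw-r--r-- permissions
print(stat.filemode(mode))  # -rw-r--r--
```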



&lt;p&gt;Lastly, we call these functions and add a bit of user interaction to streamline the program for the user. This last bit of code runs only when the script is executed directly, not when it is imported from another file. This is what that looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if __name__ == "__main__":  # Run the script only if executed directly
    path = input("Enter the directory path (press Enter to use current directory): ") or "."  # Prompt user for path, default to current directory
    files_data = get_files_info(path)  # Get file details for the specified path
    print("\nLinux CLI 'll' View:")  # Print header
    print_ll_view(files_data)  # Display file details
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Running the Program
&lt;/h2&gt;

&lt;p&gt;The program is created and ready to be used. Navigate to the directory where it is stored; if you cloned my GitHub repository, that will be LUIT-Python2. Once there, run it with &lt;code&gt;python data_extraction.py&lt;/code&gt; (or &lt;code&gt;python3&lt;/code&gt; on systems where that is the Python 3 binary). &lt;/p&gt;

&lt;p&gt;At this point, you have a Python program that lists the files from whatever location you specify and displays them in a Linux &lt;code&gt;ls -l&lt;/code&gt;-style format for easy readability. You can rerun it with any file path you need.&lt;/p&gt;

&lt;p&gt;You can modify this program to fit your company’s needs by cloning it locally or forking it on GitHub.&lt;/p&gt;




&lt;p&gt;This entire project is available on &lt;a href="https://github.com/Judewakim/LUIT-Python2" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://medium.com/aws-tip/master-python-automation-extract-and-display-file-info-like-a-pro-00b2d1cd9eb5" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Find me on &lt;a href="//www.linkedin.com/in/jude-wakim"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>automation</category>
      <category>linux</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
