<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Habil BOZALİ</title>
    <description>The latest articles on DEV Community by Habil BOZALİ (@habil).</description>
    <link>https://dev.to/habil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F10184%2F948148.jpeg</url>
      <title>DEV Community: Habil BOZALİ</title>
      <link>https://dev.to/habil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/habil"/>
    <language>en</language>
    <item>
      <title>Setting Up SSL on AWS Elastic Beanstalk Single Instances</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Sat, 10 May 2025 18:08:05 +0000</pubDate>
      <link>https://dev.to/habil/setting-up-ssl-on-aws-elastic-beanstalk-single-instances-3hkl</link>
      <guid>https://dev.to/habil/setting-up-ssl-on-aws-elastic-beanstalk-single-instances-3hkl</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AwIxIHoS3bqD0SdO8" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AwIxIHoS3bqD0SdO8" width="1024" height="575"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Taylor Vick on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Deploying applications on AWS Elastic Beanstalk simplifies many aspects of running your web applications, but setting up SSL/TLS on single-instance environments can get a bit tricky. In this article, we’ll explore two practical approaches to implementing SSL for your Elastic Beanstalk single instances.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why SSL on Single Instances Is Challenging
&lt;/h3&gt;

&lt;p&gt;AWS Elastic Beanstalk single instances don’t come with built-in SSL termination like their load-balanced counterparts. When you deploy a web application in a single-instance environment, you’re essentially working with a standalone EC2 instance without the benefits of an Application Load Balancer to handle SSL termination.&lt;/p&gt;

&lt;p&gt;However, there are compelling reasons to use single instances:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost efficiency&lt;/strong&gt;: Avoiding load balancer costs for smaller applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified architecture&lt;/strong&gt;: Reduced complexity for development or test environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct instance access&lt;/strong&gt;: Easier debugging and monitoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lower traffic needs&lt;/strong&gt;: Many applications don’t require the scalability of multiple instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s dive into two effective solutions for enabling SSL on your Elastic Beanstalk single instances.&lt;/p&gt;
&lt;h3&gt;
  
  
  Solution 1: Using .ebextensions with Let’s Encrypt
&lt;/h3&gt;

&lt;p&gt;This approach involves configuring your Elastic Beanstalk environment with custom .ebextensions scripts to automatically set up and renew SSL certificates using Let's Encrypt.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Create the .ebextensions Directory
&lt;/h3&gt;

&lt;p&gt;In your application’s root directory, create a .ebextensions folder if it doesn't already exist:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p .ebextensions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Create the Let’s Encrypt Configuration Script
&lt;/h3&gt;

&lt;p&gt;Create a configuration file that will handle the installation and setup of Let’s Encrypt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .ebextensions/01_ssl_setup.config
packages:
  yum:
    mod24_ssl: []
    epel-release: []

container_commands:
  10_install_certbot:
    command: "sudo yum install -y certbot python-certbot-apache"
  20_get_certificate:
    command: "sudo certbot certonly --standalone --debug --non-interactive --email ${EMAIL} --agree-tos --domains ${DOMAIN} --keep-until-expiring --pre-hook \"service httpd stop\" --post-hook \"service httpd start\""
    env:
      EMAIL: "your-email@example.com"
      DOMAIN: "yourdomain.com"
  30_link_cert:
    command: "ln -sf /etc/letsencrypt/live/${DOMAIN}/cert.pem /etc/pki/tls/certs/server.crt"
    env:
      DOMAIN: "yourdomain.com"
  40_link_key:
    command: "ln -sf /etc/letsencrypt/live/${DOMAIN}/privkey.pem /etc/pki/tls/certs/server.key"
    env:
      DOMAIN: "yourdomain.com"
  50_enable_ssl_conf:
    command: "sed -i 's/#LoadModule ssl_module/LoadModule ssl_module/' /etc/httpd/conf.modules.d/00-ssl.conf"
  60_create_ssl_conf:
    command: |
      cat &amp;gt; /etc/httpd/conf.d/ssl.conf &amp;lt;&amp;lt; EOL
      Listen 443
      &amp;lt;VirtualHost *:443&amp;gt;
        ServerName ${DOMAIN}
        DocumentRoot /var/www/html
        SSLEngine on
        SSLCertificateFile /etc/pki/tls/certs/server.crt
        SSLCertificateKeyFile /etc/pki/tls/certs/server.key
        SSLCertificateChainFile /etc/letsencrypt/live/${DOMAIN}/chain.pem
        &amp;lt;Directory /var/www/html&amp;gt;
          AllowOverride All
          Require all granted
        &amp;lt;/Directory&amp;gt;
      &amp;lt;/VirtualHost&amp;gt;
      EOL
    env:
      DOMAIN: "yourdomain.com"
files:
  "/etc/cron.d/certbot-renew":
    mode: "000644"
    owner: root
    group: root
    content: |
      0 0,12 * * * root certbot renew --standalone --pre-hook "service httpd stop" --post-hook "service httpd start" --quiet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to replace &lt;a href="mailto:your-email@example.com"&gt;your-email@example.com&lt;/a&gt; and yourdomain.com with your actual email and domain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Create a Configuration to Handle HTTP to HTTPS Redirect
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .ebextensions/02_ssl_redirect.config
files:
  "/etc/httpd/conf.d/http_redirect.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      &amp;lt;VirtualHost *:80&amp;gt;
        ServerName yourdomain.com
        Redirect permanent / https://yourdomain.com/
      &amp;lt;/VirtualHost&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, replace yourdomain.com with your actual domain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Deploy Your Application
&lt;/h3&gt;

&lt;p&gt;Deploy your application with these .ebextensions files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eb deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
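&lt;p&gt;After the deployment finishes, it’s worth confirming that the certificate is actually being served (yourdomain.com is a placeholder for your domain):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check that HTTPS responds and inspect the certificate dates
curl -I https://yourdomain.com
openssl s_client -connect yourdomain.com:443 -servername yourdomain.com &amp;lt; /dev/null 2&amp;gt;/dev/null | openssl x509 -noout -dates -subject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
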



&lt;h3&gt;
  
  
  How This Solution Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The configuration installs the necessary SSL packages and Let’s Encrypt certbot&lt;/li&gt;
&lt;li&gt;It requests a certificate from Let’s Encrypt for your domain&lt;/li&gt;
&lt;li&gt;The Apache web server is configured to use the certificate&lt;/li&gt;
&lt;li&gt;A cron job is set up to automatically renew the certificate before expiration&lt;/li&gt;
&lt;li&gt;HTTP traffic is redirected to HTTPS&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Potential Issues and Solutions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Port 80/443 accessibility&lt;/strong&gt;: Ensure your security group allows inbound traffic on ports 80 and 443.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Certificate renewal failures&lt;/strong&gt;: The pre-hook and post-hook in the cron job temporarily stop Apache during renewal to free up port 80. Monitor renewal logs at /var/log/letsencrypt/ if you encounter issues.&lt;/p&gt;
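&lt;p&gt;You can also exercise the renewal path ahead of time with certbot’s dry-run mode, which runs the same hooks without issuing a real certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo certbot renew --dry-run --standalone --pre-hook "service httpd stop" --post-hook "service httpd start"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
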

&lt;p&gt;&lt;strong&gt;DNS configuration&lt;/strong&gt;: Your domain must be correctly pointing to your Elastic Beanstalk instance’s IP address for Let’s Encrypt validation to succeed.&lt;/p&gt;
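&lt;p&gt;Before requesting a certificate, you can verify the record resolves to the instance (yourdomain.com is a placeholder; the metadata call assumes IMDSv1 is enabled):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Compare the DNS answer with the instance public IP
dig +short yourdomain.com
curl -s http://169.254.169.254/latest/meta-data/public-ipv4   # run on the instance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
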

&lt;h3&gt;
  
  
  Solution 2: Using a Load Balancer with CNAME
&lt;/h3&gt;

&lt;p&gt;This approach involves creating a single-instance environment but fronting it with a load balancer specifically for SSL termination.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create a Load Balanced Environment
&lt;/h3&gt;

&lt;p&gt;Instead of creating a single-instance environment, create a load-balanced environment with a minimum and maximum of 1 instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eb create my-environment --elb-type application --single
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or through the AWS Management Console:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new environment&lt;/li&gt;
&lt;li&gt;Choose “Web server environment”&lt;/li&gt;
&lt;li&gt;Under “Platform”, select your platform of choice&lt;/li&gt;
&lt;li&gt;Under “Configuration presets”, select “Custom configuration”&lt;/li&gt;
&lt;li&gt;In the “Capacity” section, select “Load balanced”&lt;/li&gt;
&lt;li&gt;Set both “Min instances” and “Max instances” to 1&lt;/li&gt;
&lt;/ol&gt;
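&lt;p&gt;The same capacity settings can also be pinned in an .ebextensions file so they survive environment rebuilds (the file name is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .ebextensions/00_capacity.config
option_settings:
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
    LoadBalancerType: application
  aws:autoscaling:asg:
    MinSize: 1
    MaxSize: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
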

&lt;h3&gt;
  
  
  Step 2: Configure SSL on the Load Balancer
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the EC2 Console &amp;gt; Load Balancers&lt;/li&gt;
&lt;li&gt;Select your environment’s load balancer&lt;/li&gt;
&lt;li&gt;Add a listener for HTTPS (port 443)&lt;/li&gt;
&lt;li&gt;Add your SSL certificate (you can use AWS Certificate Manager for free public certificates)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Listeners": [
    {
      "Protocol": "HTTPS",
      "Port": 443,
      "DefaultActions": [
        {
          "Type": "forward",
          "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/awseb-AWSEB-ABCDEFGHIJ/0123456789abcdef"
        }
      ],
      "SslPolicy": "ELBSecurityPolicy-2016-08",
      "Certificates": [
        {
          "CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
        }
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
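&lt;p&gt;If you don’t have a certificate yet, you can request a free public one from AWS Certificate Manager and validate it via DNS (the domain is a placeholder; the certificate must be in the same region as the load balancer):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws acm request-certificate \
  --domain-name yourdomain.com \
  --validation-method DNS \
  --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
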



&lt;h3&gt;
  
  
  Step 3: Set Up CNAME Records
&lt;/h3&gt;

&lt;p&gt;In your DNS provider:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a CNAME record pointing your domain to your Elastic Beanstalk environment URL&lt;/li&gt;
&lt;li&gt;For example: www.yourdomain.com → my-environment.elasticbeanstalk.com&lt;/li&gt;
&lt;li&gt;Note that a CNAME isn’t allowed at the zone apex, so for the bare domain (yourdomain.com) use your DNS provider’s ALIAS/ANAME feature or a Route 53 alias record&lt;/li&gt;
&lt;/ol&gt;
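&lt;p&gt;If your DNS is hosted in Route 53, the record can be created from the CLI (the hosted zone ID and names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"www.yourdomain.com","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"my-environment.elasticbeanstalk.com"}]}}]}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
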

&lt;h3&gt;
  
  
  Step 4: Configure HTTP to HTTPS Redirect (Optional)
&lt;/h3&gt;

&lt;p&gt;Create a listener rule for port 80 that redirects to HTTPS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the EC2 Console, go to the Load Balancer&lt;/li&gt;
&lt;li&gt;Add a listener for HTTP (port 80)&lt;/li&gt;
&lt;li&gt;Add a redirect action to HTTPS (port 443)&lt;/li&gt;
&lt;/ol&gt;
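&lt;p&gt;The same redirect can be added with the AWS CLI (the load balancer ARN is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef \
  --protocol HTTP --port 80 \
  --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
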

&lt;h3&gt;
  
  
  How This Solution Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The load balancer handles SSL termination&lt;/li&gt;
&lt;li&gt;Traffic is encrypted between clients and the load balancer&lt;/li&gt;
&lt;li&gt;The load balancer forwards traffic to your single EC2 instance&lt;/li&gt;
&lt;li&gt;You benefit from AWS Certificate Manager’s free certificates and automatic renewals&lt;/li&gt;
&lt;li&gt;The architecture remains simple with just one instance&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Considerations for This Approach
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Cost implications&lt;/strong&gt;: This approach adds the cost of a load balancer (approximately $16–20/month)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantage&lt;/strong&gt;: AWS manages certificate renewal automatically&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplified management&lt;/strong&gt;: No need to manage SSL configuration on the instance itself&lt;/p&gt;

&lt;h3&gt;
  
  
  Comparing the Two Approaches
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8235gw70gq4ayc0x9oo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8235gw70gq4ayc0x9oo.png" width="733" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Which Solution Should You Choose?
&lt;/h3&gt;

&lt;p&gt;Choose &lt;strong&gt;Solution 1&lt;/strong&gt; (.ebextensions with Let’s Encrypt) if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost is a primary concern&lt;/li&gt;
&lt;li&gt;You’re comfortable managing Linux configurations&lt;/li&gt;
&lt;li&gt;You want to maintain a truly single-instance architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choose &lt;strong&gt;Solution 2&lt;/strong&gt; (Load Balancer with CNAME) if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You prefer managed services with less operational overhead&lt;/li&gt;
&lt;li&gt;Budget allows for an additional ~$20/month&lt;/li&gt;
&lt;li&gt;You might need to scale in the future&lt;/li&gt;
&lt;li&gt;You want automatic certificate renewal without custom scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Implementation Best Practices
&lt;/h3&gt;

&lt;p&gt;Regardless of which solution you choose, follow these best practices:&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Keep your Elastic Beanstalk platform updated&lt;/li&gt;
&lt;li&gt;Use strong SSL/TLS protocols (TLS 1.2 or higher)&lt;/li&gt;
&lt;li&gt;Configure security headers in your application&lt;/li&gt;
&lt;li&gt;Set appropriate security groups to limit access&lt;/li&gt;
&lt;/ul&gt;
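&lt;p&gt;For Solution 1, the protocol restriction is a one-line change in the Apache SSL configuration (a minimal sketch; the Header directive assumes mod_headers is loaded):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/httpd/conf.d/ssl.conf
SSLProtocol -all +TLSv1.2 +TLSv1.3
SSLHonorCipherOrder on
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
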

&lt;h3&gt;
  
  
  Monitoring
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Set up CloudWatch alarms for certificate expiration (for Solution 1)&lt;/li&gt;
&lt;li&gt;Monitor HTTP status codes to ensure redirects are working properly&lt;/li&gt;
&lt;li&gt;Set up alerts for SSL/TLS failures&lt;/li&gt;
&lt;/ul&gt;
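&lt;p&gt;For Solution 1, a simple cron-friendly check can warn you before a Let’s Encrypt certificate lapses (paths, threshold, and the mail command are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Exit non-zero if the certificate expires within 14 days (1209600 seconds)
openssl x509 -in /etc/letsencrypt/live/yourdomain.com/cert.pem -noout -checkend 1209600 \
  || echo "Certificate expires soon" | mail -s "SSL expiry warning" you@example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
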

&lt;h3&gt;
  
  
  Documentation
&lt;/h3&gt;

&lt;p&gt;Document your SSL setup thoroughly, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Certificate renewal dates and processes&lt;/li&gt;
&lt;li&gt;DNS configuration&lt;/li&gt;
&lt;li&gt;Load balancer or instance configuration details&lt;/li&gt;
&lt;li&gt;Troubleshooting steps for common issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Setting up SSL on AWS Elastic Beanstalk single instances doesn’t have to be complicated. Both approaches outlined in this article provide secure ways to serve your application over HTTPS.&lt;/p&gt;

&lt;p&gt;The .ebextensions approach with Let’s Encrypt offers a cost-effective solution with more hands-on management, while the load balancer approach provides a more managed experience at a higher cost.&lt;/p&gt;

&lt;p&gt;Choose the approach that best aligns with your technical comfort level, budget constraints, and operational preferences. Either way, you’ll be providing your users with the secure experience they expect and deserve.&lt;/p&gt;

&lt;p&gt;See you in the next article! 👻&lt;/p&gt;

</description>
      <category>letsencrypt</category>
      <category>elasticbeanstalk</category>
      <category>aws</category>
    </item>
    <item>
      <title>The AWS Well-Architected Framework: A Complete Guide</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Sun, 13 Apr 2025 19:18:43 +0000</pubDate>
      <link>https://dev.to/habil/the-aws-well-architected-framework-a-complete-guide-1ddi</link>
      <guid>https://dev.to/habil/the-aws-well-architected-framework-a-complete-guide-1ddi</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AvhN5i1Kw5UMrJcBL" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AvhN5i1Kw5UMrJcBL" width="1024" height="768"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Jess Bailey on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As more companies pivot towards major providers like AWS in the cloud computing landscape, success isn’t merely about using services — it’s about using them correctly. This is precisely where the AWS Well-Architected Framework comes into play. This framework provides a comprehensive guide to help you evaluate and improve your cloud architecture over time.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is the AWS Well-Architected Framework?
&lt;/h3&gt;

&lt;p&gt;The AWS Well-Architected Framework is a set of principles and best practices that AWS has codified over years of experience in designing, implementing, and maintaining cloud-based systems. The framework helps cloud architects and application developers build secure, high-performing, resilient, and efficient systems.&lt;/p&gt;

&lt;p&gt;The framework was originally built upon five interrelated pillars (a sixth pillar, Sustainability, was added in 2021, but it’s outside the scope of this article):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Operational Excellence&lt;/strong&gt;: Running and monitoring systems to deliver business value&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Protecting information and systems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reliability&lt;/strong&gt;: Ensuring a workload performs its intended function correctly and consistently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Efficiency&lt;/strong&gt;: Using computing resources efficiently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Optimization&lt;/strong&gt;: Avoiding unnecessary costs&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  A Deeper Look at the Five Pillars
&lt;/h3&gt;

&lt;p&gt;Each pillar of the Well-Architected Framework focuses on a specific aspect of cloud architecture and provides a set of design principles and best practices:&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Operational Excellence
&lt;/h3&gt;

&lt;p&gt;This pillar focuses on running and monitoring systems to deliver business value and continually improving processes and procedures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Principles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perform operations as code&lt;/li&gt;
&lt;li&gt;Make frequent, small, reversible changes&lt;/li&gt;
&lt;li&gt;Refine operations procedures frequently&lt;/li&gt;
&lt;li&gt;Anticipate failure&lt;/li&gt;
&lt;li&gt;Learn from all operational failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement Infrastructure as Code (IaC) using AWS CloudFormation or Terraform&lt;/li&gt;
&lt;li&gt;Establish CI/CD pipelines for automated deployments&lt;/li&gt;
&lt;li&gt;Implement comprehensive logging and monitoring with services like CloudWatch&lt;/li&gt;
&lt;li&gt;Create runbooks and playbooks for standard procedures and incident response&lt;/li&gt;
&lt;/ul&gt;
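&lt;p&gt;As a small taste of operations as code, even a single resource declared in CloudFormation is reviewable, versionable, and repeatable (names and values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# template.yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AppLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /my-app/production
      RetentionInDays: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
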
&lt;h3&gt;
  
  
  2. Security
&lt;/h3&gt;

&lt;p&gt;The security pillar protects information, systems, and assets while delivering business value through risk assessments and mitigation strategies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Principles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement a strong identity foundation&lt;/li&gt;
&lt;li&gt;Enable traceability&lt;/li&gt;
&lt;li&gt;Apply security at all layers&lt;/li&gt;
&lt;li&gt;Automate security best practices&lt;/li&gt;
&lt;li&gt;Protect data in transit and at rest&lt;/li&gt;
&lt;li&gt;Keep people away from data&lt;/li&gt;
&lt;li&gt;Prepare for security events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use IAM to implement the principle of least privilege&lt;/li&gt;
&lt;li&gt;Enable MFA for all users, especially those with privileged access&lt;/li&gt;
&lt;li&gt;Implement network security using security groups, NACLs, and VPCs&lt;/li&gt;
&lt;li&gt;Encrypt data at rest and in transit&lt;/li&gt;
&lt;li&gt;Use AWS GuardDuty for threat detection&lt;/li&gt;
&lt;li&gt;Perform regular security assessments and penetration tests&lt;/li&gt;
&lt;/ul&gt;
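&lt;p&gt;As an illustration of least privilege, an IAM policy can scope access to exactly the actions and resources a workload needs (the bucket name is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-app-bucket/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
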
&lt;h3&gt;
  
  
  3. Reliability
&lt;/h3&gt;

&lt;p&gt;This pillar emphasizes a system's ability to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Principles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test recovery procedures&lt;/li&gt;
&lt;li&gt;Automatically recover from failure&lt;/li&gt;
&lt;li&gt;Scale horizontally to increase aggregate system availability&lt;/li&gt;
&lt;li&gt;Stop guessing capacity&lt;/li&gt;
&lt;li&gt;Manage change through automation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Design with redundancy and high availability in mind&lt;/li&gt;
&lt;li&gt;Use multiple Availability Zones and regions where appropriate&lt;/li&gt;
&lt;li&gt;Implement auto-scaling to handle load variations&lt;/li&gt;
&lt;li&gt;Create backup and restore strategies&lt;/li&gt;
&lt;li&gt;Use fault isolation to protect the entire system&lt;/li&gt;
&lt;li&gt;Design with graceful degradation in mind&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  4. Performance Efficiency
&lt;/h3&gt;

&lt;p&gt;This pillar focuses on using computing resources efficiently to meet system requirements and maintaining that efficiency as demand changes and technologies evolve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Principles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Democratize advanced technologies&lt;/li&gt;
&lt;li&gt;Go global in minutes&lt;/li&gt;
&lt;li&gt;Use serverless architectures&lt;/li&gt;
&lt;li&gt;Experiment more often&lt;/li&gt;
&lt;li&gt;Mechanical sympathy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the right resource types and sizes based on workload requirements&lt;/li&gt;
&lt;li&gt;Monitor performance and optimize over time&lt;/li&gt;
&lt;li&gt;Use caching to improve performance and reduce database load&lt;/li&gt;
&lt;li&gt;Deploy in multiple regions to provide lower latency to global users&lt;/li&gt;
&lt;li&gt;Leverage managed services to reduce operational overhead&lt;/li&gt;
&lt;li&gt;Experiment with new technologies and approaches&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  5. Cost Optimization
&lt;/h3&gt;

&lt;p&gt;This pillar focuses on avoiding unnecessary costs by understanding spending over time and controlling fund allocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Principles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adopt a consumption model&lt;/li&gt;
&lt;li&gt;Measure overall efficiency&lt;/li&gt;
&lt;li&gt;Stop spending money on undifferentiated heavy lifting&lt;/li&gt;
&lt;li&gt;Analyze and attribute expenditure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement resource tagging strategies for cost allocation&lt;/li&gt;
&lt;li&gt;Use reserved instances and savings plans for predictable workloads&lt;/li&gt;
&lt;li&gt;Right-size resources based on actual usage patterns&lt;/li&gt;
&lt;li&gt;Automate resource optimization with AWS Trusted Advisor&lt;/li&gt;
&lt;li&gt;Implement lifecycle policies for data storage&lt;/li&gt;
&lt;li&gt;Review and adjust your architecture as AWS introduces new services&lt;/li&gt;
&lt;/ul&gt;
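&lt;p&gt;Lifecycle policies are one of the quickest wins here; for example, an S3 lifecycle configuration can transition ageing objects to cheaper storage classes (values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"}
      ],
      "Expiration": {"Days": 365}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
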
&lt;h3&gt;
  
  
  Why is the AWS Well-Architected Framework Important?
&lt;/h3&gt;

&lt;p&gt;In today’s competitive landscape, simply migrating to the cloud isn’t enough. Organizations need to leverage the cloud’s full potential while maintaining security, reliability, and cost-effectiveness. Here’s why the Well-Architected Framework matters:&lt;/p&gt;
&lt;h3&gt;
  
  
  Risk Identification and Mitigation
&lt;/h3&gt;

&lt;p&gt;The framework helps identify potential issues early in the design process, preventing costly remediation later. By asking critical questions across each pillar, you can discover architectural weaknesses before they become operational problems.&lt;/p&gt;

&lt;p&gt;For example, considering the security pillar early might lead you to implement proper encryption and access controls from the start, rather than retrofitting them after a security incident occurs.&lt;/p&gt;
&lt;h3&gt;
  
  
  Consistency Across Teams
&lt;/h3&gt;

&lt;p&gt;As organizations grow, different teams might adopt varying approaches to cloud architecture. The Well-Architected Framework provides a common language and set of standards that ensures consistency across development teams, resulting in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced operational overhead&lt;/li&gt;
&lt;li&gt;Simplified maintenance&lt;/li&gt;
&lt;li&gt;Better knowledge sharing&lt;/li&gt;
&lt;li&gt;Lower risk of configuration drift&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Continuous Improvement Path
&lt;/h3&gt;

&lt;p&gt;Cloud architecture isn’t static — it evolves with business needs and technological advancements. The framework encourages regular assessments of your workloads, helping you identify areas for improvement as your requirements change.&lt;/p&gt;
&lt;h3&gt;
  
  
  How to Leverage the Well-Architected Framework
&lt;/h3&gt;

&lt;p&gt;Adopting the framework doesn’t have to be overwhelming. Here’s a practical approach:&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Conduct a Well-Architected Review
&lt;/h3&gt;

&lt;p&gt;Start by evaluating your existing workloads against the framework’s five pillars. AWS provides a free Well-Architected Tool in the management console that guides you through this process with a series of questions for each pillar.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "WorkloadName": "E-commerce Platform",
    "ReviewDate": "2023-04-13",
    "PillarReviews": {
        "OperationalExcellence": {
            "RiskLevel": "MEDIUM",
            "ImprovementAreas": [
                "Implement infrastructure as code",
                "Enhance monitoring and alerting"
            ]
        },
        "Security": {
            "RiskLevel": "HIGH",
            "ImprovementAreas": [
                "Enable MFA for all IAM users",
                "Implement data encryption at rest"
            ]
        }
        // Other pillars...
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Prioritize Improvements
&lt;/h3&gt;

&lt;p&gt;Once you’ve identified areas for improvement, prioritizing them effectively is crucial for success. Consider the following factors when determining which improvements to tackle first:&lt;/p&gt;

&lt;h4&gt;
  
  
  Business Impact
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Revenue Protection&lt;/strong&gt;: Prioritize issues that could impact revenue generation or customer retention&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regulatory Compliance&lt;/strong&gt;: Address improvements needed to maintain compliance with relevant regulations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brand Protection&lt;/strong&gt;: Focus on issues that could damage your reputation if left unaddressed&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Risk Assessment
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High&lt;/strong&gt;: Critical vulnerabilities or design flaws that could lead to system failure, data breach, or significant financial loss&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium&lt;/strong&gt;: Issues that impact performance, efficiency, or partial functionality&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low&lt;/strong&gt;: Opportunities for optimization that don’t pose immediate threats&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Implementation Factors
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Effort Required&lt;/strong&gt;: Consider the time, resources, and expertise needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disruption Level&lt;/strong&gt;: Assess the potential impact on ongoing operations during implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies&lt;/strong&gt;: Identify improvements that serve as foundations for other changes&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Pillar-Based Priority Framework
&lt;/h4&gt;

&lt;p&gt;When prioritizing across pillars, consider this general hierarchy (though this may vary based on your specific business context):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt; issues (especially HIGH risk) typically deserve immediate attention due to potential regulatory, financial, and reputational impacts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reliability&lt;/strong&gt; improvements that address potential service disruptions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational Excellence&lt;/strong&gt; enhancements that improve your ability to respond to issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Efficiency&lt;/strong&gt; optimizations that affect customer experience&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Optimization&lt;/strong&gt; opportunities (unless your organization is under specific cost pressures)&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Prioritization Matrix Example
&lt;/h4&gt;

&lt;p&gt;Here’s a simple matrix approach to visualize priorities:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljwnew0xog1cmtxetary.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljwnew0xog1cmtxetary.png" width="735" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach helps balance urgency, impact, and practical considerations to create a workable improvement roadmap.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Implement Changes Iteratively
&lt;/h3&gt;

&lt;p&gt;Don’t try to address everything at once. Instead:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a roadmap for improvements&lt;/li&gt;
&lt;li&gt;Implement changes in small, manageable increments&lt;/li&gt;
&lt;li&gt;Measure the impact of each change&lt;/li&gt;
&lt;li&gt;Document lessons learned&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 4: Make Well-Architected Reviews Regular
&lt;/h3&gt;

&lt;p&gt;Schedule regular reviews (quarterly or twice a year) to ensure your architecture continues to align with best practices as your workloads evolve.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Benefits of Following Well-Architected Principles
&lt;/h3&gt;

&lt;p&gt;Organizations that embrace the Well-Architected Framework typically experience several tangible benefits:&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduced Operational Issues
&lt;/h3&gt;

&lt;p&gt;By designing systems according to well-established principles, you’ll encounter fewer unexpected outages, performance bottlenecks, and security incidents.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lower Total Cost of Ownership
&lt;/h3&gt;

&lt;p&gt;The Cost Optimization pillar helps identify resource waste and inefficiencies. Organizations often discover they can achieve the same performance at lower costs after applying Well-Architected recommendations.&lt;/p&gt;

&lt;p&gt;For example, one e-commerce company reduced their monthly AWS bill by 42% after implementing auto-scaling based on usage patterns and switching from on-demand to reserved instances where appropriate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced Security Posture
&lt;/h3&gt;

&lt;p&gt;The Security pillar guides organizations toward comprehensive security practices that protect data, systems, and assets. This proactive approach helps prevent breaches and compliance issues that could otherwise be costly and damage your reputation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Faster Time to Market
&lt;/h3&gt;

&lt;p&gt;Well-architected systems are easier to maintain and extend. Teams spend less time troubleshooting issues and more time delivering new features and capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Risks of Ignoring Well-Architected Principles
&lt;/h3&gt;

&lt;p&gt;Conversely, organizations that neglect architectural best practices often face several challenges:&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical Debt Accumulation
&lt;/h3&gt;

&lt;p&gt;Without a framework for evaluation, architectural shortcuts and compromises can accumulate, making systems increasingly difficult and expensive to maintain over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unpredictable Costs
&lt;/h3&gt;

&lt;p&gt;Poor architectural decisions can lead to resource inefficiencies and unexpected cost spikes. For instance, improperly configured storage or databases might grow unchecked, resulting in escalating expenses.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Vulnerabilities
&lt;/h3&gt;

&lt;p&gt;Neglecting security considerations can expose organizations to data breaches, unauthorized access, and compliance violations. The average cost of a data breach now exceeds $4.5 million, making this risk particularly significant.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reliability Issues
&lt;/h3&gt;

&lt;p&gt;Systems designed without reliability in mind are prone to outages, data loss, and inconsistent performance — all of which can directly impact customer satisfaction and revenue.&lt;/p&gt;

&lt;p&gt;Consider a financial services company that experienced a four-hour outage due to a single point of failure in their architecture. The incident resulted in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$2.3 million in lost transactions&lt;/li&gt;
&lt;li&gt;Customer compensation costs&lt;/li&gt;
&lt;li&gt;Reputational damage&lt;/li&gt;
&lt;li&gt;Regulatory scrutiny&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A Well-Architected review would have identified this vulnerability before it became a costly problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practical Implementation Tips
&lt;/h3&gt;

&lt;p&gt;To make the most of the Well-Architected Framework, consider these practical tips:&lt;/p&gt;

&lt;h3&gt;
  
  
  Start Small
&lt;/h3&gt;

&lt;p&gt;If you’re new to the framework, begin by applying it to a single, important workload rather than attempting to assess everything at once.&lt;/p&gt;

&lt;h3&gt;
  
  
  Involve Cross-Functional Teams
&lt;/h3&gt;

&lt;p&gt;Well-Architected reviews are most effective when they involve perspectives from development, operations, security, and business stakeholders.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automate Compliance Checks
&lt;/h3&gt;

&lt;p&gt;Use tools like AWS Config Rules, CloudFormation Guard, or third-party solutions to automatically check your infrastructure against Well-Architected best practices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example CloudFormation Guard rule to ensure encryption
let s3_buckets = Resources.*[Type == 'AWS::S3::Bucket']
rule s3_buckets_encrypted when %s3_buckets !empty {
  %s3_buckets.Properties.BucketEncryption.ServerSideEncryptionConfiguration[*].ServerSideEncryptionByDefault.SSEAlgorithm == "AES256" or
  %s3_buckets.Properties.BucketEncryption.ServerSideEncryptionConfiguration[*].ServerSideEncryptionByDefault.SSEAlgorithm == "aws:kms"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Document Decisions
&lt;/h3&gt;

&lt;p&gt;When you choose to deviate from Well-Architected best practices, document your reasoning. This helps future team members understand the context behind architectural decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The AWS Well-Architected Framework isn’t just a theoretical construct — it’s a practical tool that translates AWS’s vast experience into actionable guidance. By systematically applying these principles to your cloud workloads, you can build systems that are more secure, reliable, efficient, and cost-effective.&lt;/p&gt;

&lt;p&gt;Remember that Well-Architected is a journey, not a destination. Cloud best practices continue to evolve, and your architecture should evolve alongside them. Regular reviews, incremental improvements, and a commitment to excellence will ensure your cloud infrastructure remains a competitive advantage rather than a liability.&lt;/p&gt;

&lt;p&gt;Whether you’re just starting your cloud journey or looking to optimize existing workloads, the Well-Architected Framework provides a valuable compass to guide your way. The time and resources invested in aligning with these principles will pay dividends in reduced incidents, lower costs, and greater business agility.&lt;/p&gt;

&lt;p&gt;Happy architecting! 👻&lt;/p&gt;

</description>
      <category>wellarchitectedtool</category>
      <category>aws</category>
      <category>wellarchitected</category>
    </item>
    <item>
      <title>Automating Pi-hole Updates with Ansible</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Sun, 16 Mar 2025 23:24:57 +0000</pubDate>
      <link>https://dev.to/habil/automating-pi-hole-updates-with-ansible-29eb</link>
      <guid>https://dev.to/habil/automating-pi-hole-updates-with-ansible-29eb</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ar2v680mkUBhep0hi" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2Ar2v680mkUBhep0hi" width="1024" height="680"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Ant Rozetsky on Unsplash&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Automating Pi-hole Updates with Ansible
&lt;/h3&gt;

&lt;p&gt;Managing multiple Pi-hole instances can become a time-consuming task, especially when it comes to regular updates. In this article, we’ll explore how to use Ansible to automate the process of updating Pi-hole installations across your network. This approach will save you time and ensure consistency across all your Pi-hole servers.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Pi-hole?
&lt;/h3&gt;

&lt;p&gt;Pi-hole is a network-wide ad blocker that acts as a DNS sinkhole. It intercepts DNS requests on your network and blocks requests to known advertising and tracking domains, preventing ads from being downloaded. This not only improves your browsing experience but also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduces bandwidth usage&lt;/li&gt;
&lt;li&gt;Increases browsing speed&lt;/li&gt;
&lt;li&gt;Enhances privacy by blocking tracking domains&lt;/li&gt;
&lt;li&gt;Works on all devices on your network without needing to install software on each device&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pi-hole is typically installed on a Raspberry Pi (hence the name), but it can run on virtually any Linux distribution with minimal resources. It’s an excellent solution for home networks or small businesses looking to reduce ad traffic.&lt;/p&gt;
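The "DNS sinkhole" idea is simple to picture: if a queried name is on the blocklist, answer with an unroutable address instead of forwarding the query upstream. A toy shell sketch of that decision (the blocklist entries here are hypothetical, not Pi-hole's real lists):

```shell
# Toy model of a DNS sinkhole's decision: blocked names resolve to 0.0.0.0,
# everything else would be forwarded to the upstream resolver.
blocklist=$(mktemp)
printf 'ads.example.com\ntracker.example.net\n' > "$blocklist"

resolve() {
  if grep -qxF "$1" "$blocklist"; then
    echo "0.0.0.0"            # sinkholed: the ad request goes nowhere
  else
    echo "forward-upstream"   # normal resolution continues
  fi
}

resolve ads.example.com
resolve dev.to
```

Because this happens at the DNS layer, every device that uses Pi-hole as its resolver benefits without any per-device setup.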
&lt;h3&gt;
  
  
  Why Ansible for Pi-hole Management?
&lt;/h3&gt;

&lt;p&gt;When you’re managing one Pi-hole, manual updates are straightforward. However, as your infrastructure grows or if you maintain Pi-hole instances across different locations, the manual approach becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time-consuming&lt;/li&gt;
&lt;li&gt;Error-prone&lt;/li&gt;
&lt;li&gt;Difficult to track&lt;/li&gt;
&lt;li&gt;Inconsistent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ansible provides a solution with these benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: Execute the same tasks across multiple servers with a single command&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Idempotency&lt;/strong&gt;: Run playbooks multiple times without causing issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: Ensure all systems are updated using the same procedure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: Your playbooks serve as living documentation of your update process&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Easily add new Pi-hole instances to your inventory&lt;/li&gt;
&lt;/ul&gt;
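Idempotency is the property doing the heavy lifting here: a task describes a desired state, so running it again when that state already holds changes nothing. A minimal shell sketch of an idempotent "ensure" operation (the file and line contents are made up for illustration):

```shell
# Idempotent "ensure this line exists" operation: safe to run any number of times.
ensure_line() {
  # Append the line only if an exact match is not already present.
  grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

conf=$(mktemp)
ensure_line "$conf" "server=1.1.1.1"
ensure_line "$conf" "server=1.1.1.1"   # second run detects the line and does nothing
wc -l < "$conf"                        # the file still has exactly one line
```

Ansible modules such as apt and lineinfile implement the same check-before-change pattern internally, which is why re-running a playbook is safe.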
&lt;h3&gt;
  
  
  Setting Up the Environment
&lt;/h3&gt;

&lt;p&gt;Let’s break down the process into clear steps:&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Install Ansible
&lt;/h3&gt;

&lt;p&gt;First, ensure you have Ansible installed on your control node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On Debian/Ubuntu
sudo apt update
sudo apt install ansible

# On macOS with Homebrew
brew install ansible

# Verify installation
ansible --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Create Your Ansible Structure
&lt;/h3&gt;

&lt;p&gt;Create a basic directory structure for your Ansible project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p pihole-ansible/inventory
mkdir -p pihole-ansible/playbooks
cd pihole-ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Configure Your Inventory
&lt;/h3&gt;

&lt;p&gt;Create an inventory file that lists your Pi-hole servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# inventory/hosts
[pizeros]
pihole1 ansible_host=192.168.1.100
pihole2 ansible_host=192.168.1.101
pihole3 ansible_host=192.168.1.102

[pizeros:vars]
ansible_user=pi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Create the Group Variables
&lt;/h3&gt;

&lt;p&gt;Create a group variables file to apply settings to all Pi-hole instances:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# inventory/group_vars/pizeros.yml
ansible_python_interpreter: /usr/bin/python3
ansible_become: yes
ansible_become_method: sudo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Create the Update Playbook
&lt;/h3&gt;

&lt;p&gt;Create a playbook that handles the Pi-hole update process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# playbooks/update_pihole.yml
---
- hosts: pizeros
  become: true
  become_method: sudo
  become_user: root
  tasks:
    - name: Update package lists
      apt:
        update_cache: yes
      changed_when: false
    - name: Upgrade all packages
      apt:
        upgrade: dist
        autoremove: yes
        autoclean: yes
    - name: Update Pi-hole
      command: pihole -up
      register: pihole_update_result
      changed_when: "'Everything is already up to date' not in pihole_update_result.stdout"
    - name: Display Pi-hole update results
      debug:
        var: pihole_update_result.stdout_lines
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Create a Convenience Script
&lt;/h3&gt;

&lt;p&gt;For even easier updates, create a simple shell script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# update.sh
#!/bin/bash
ansible-playbook -i inventory/hosts playbooks/update_pihole.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make it executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x update.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Running the Update Process
&lt;/h3&gt;

&lt;p&gt;Now that everything is set up, you can update all your Pi-hole instances with a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./update.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, if you prefer to run the playbook directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i inventory/hosts playbooks/update_pihole.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding the Playbook in Detail
&lt;/h3&gt;

&lt;p&gt;Let’s break down what our update playbook does:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Package Updates
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Update package lists
  apt:
    update_cache: yes
  changed_when: false
- name: Upgrade all packages
  apt:
    upgrade: dist
    autoremove: yes
    autoclean: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the APT package cache&lt;/li&gt;
&lt;li&gt;Perform a full distribution upgrade&lt;/li&gt;
&lt;li&gt;Remove unnecessary packages&lt;/li&gt;
&lt;li&gt;Clean the APT cache&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Pi-hole Specific Update
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Update Pi-hole
  command: pihole -up
  register: pihole_update_result
  changed_when: "'Everything is already up to date' not in pihole_update_result.stdout"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This task:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the Pi-hole update command (pihole -up)&lt;/li&gt;
&lt;li&gt;Captures the output in a variable&lt;/li&gt;
&lt;li&gt;Only registers as “changed” if an actual update occurred&lt;/li&gt;
&lt;/ul&gt;
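The changed_when expression is just a substring test on the captured stdout; sketched in shell, the logic Ansible evaluates looks like this (the sample output lines are illustrative, not verbatim Pi-hole output):

```shell
# Mirror of the changed_when test: the task counts as "changed" only when the
# update output does NOT contain Pi-hole's already-up-to-date message.
detect_changed() {
  case "$1" in
    *"Everything is already up to date"*) echo "false" ;;
    *)                                    echo "true"  ;;
  esac
}

detect_changed "[i] Everything is already up to date!"
detect_changed "[i] Pi-hole Core update available"
```

Without this, every run would report the task as changed, making your play summaries useless for spotting real updates.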

&lt;h3&gt;
  
  
  3. Result Display
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Display Pi-hole update results
  debug:
    var: pihole_update_result.stdout_lines
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This task displays the full output of the Pi-hole update process, making it easy to review what happened.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advanced Customizations
&lt;/h3&gt;

&lt;p&gt;Once you have the basic update process working, you can enhance your Ansible setup with these additional features:&lt;/p&gt;

&lt;h3&gt;
  
  
  Schedule Regular Updates
&lt;/h3&gt;

&lt;p&gt;Use cron on your control node to schedule regular updates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run updates every Sunday at 3:00 AM
0 3 * * 0 /path/to/pihole-ansible/update.sh &amp;gt; /path/to/logs/pihole-update.log 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add Health Checks
&lt;/h3&gt;

&lt;p&gt;Enhance your playbook with health checks after updates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check Pi-hole status
  command: pihole status
  register: pihole_status
  changed_when: false
- name: Verify DNS resolution is working
  command: dig @localhost google.com
  register: dns_test
  changed_when: false
  failed_when: "'ANSWER SECTION' not in dns_test.stdout"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add Notification System
&lt;/h3&gt;

&lt;p&gt;Add tasks to notify you when updates are complete:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Send update completion notification
  mail:
    host: smtp.gmail.com
    port: 587
    username: your_email@gmail.com
    password: "{{ email_password }}"
    to: admin@example.com
    subject: "Pi-hole update completed"
    body: "Updates have been applied to all Pi-hole instances.\n\n{{ pihole_update_result.stdout }}"
  when: pihole_update_result.changed
  no_log: true
  vars:
    ansible_python_interpreter: /usr/bin/python3
  delegate_to: localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Store sensitive information such as passwords in an encrypted Ansible Vault file rather than in the playbook itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Troubleshooting Common Issues
&lt;/h3&gt;

&lt;p&gt;When using this automation, you might encounter some issues:&lt;/p&gt;

&lt;h3&gt;
  
  
  SSH Connection Problems
&lt;/h3&gt;

&lt;p&gt;If you have SSH connection issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verify your inventory has the correct IP addresses and usernames&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Test the connection manually:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible pizeros -i inventory/hosts -m ping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Ensure SSH key authentication is set up:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-copy-id pi@your_pihole_ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Update Failures
&lt;/h3&gt;

&lt;p&gt;If Pi-hole updates fail:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure your Pi-hole instances have internet connectivity&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Review Pi-hole logs for specific errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check Pi-hole logs   
  command: cat /var/log/pihole.log   
  register: pihole_logs   
  changed_when: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check disk space on your Pi-hole instances:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check available disk space
  shell: df -h /   
  register: disk_space   
  changed_when: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Using Ansible to automate Pi-hole updates is a significant improvement over the manual process, especially when you manage multiple instances. This approach not only saves time but also ensures consistent updates across your entire network.&lt;/p&gt;

&lt;p&gt;The playbooks and configurations in this article provide a solid foundation that you can customize to meet your specific needs. As you become more familiar with Ansible, you can expand your automation to include other aspects of Pi-hole management such as configuration changes, blocklist updates, or even full system backups.&lt;/p&gt;

&lt;p&gt;Remember that automation is an investment that pays dividends over time. The initial setup may take some effort, but the long-term benefits of time savings and consistency are well worth it.&lt;/p&gt;

&lt;p&gt;Happy automating and see you in the next article! 👻&lt;/p&gt;

</description>
      <category>pihole</category>
      <category>ansible</category>
      <category>automation</category>
    </item>
    <item>
      <title>Analyzing CloudFront Logs with Amazon Athena</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Mon, 03 Mar 2025 19:48:34 +0000</pubDate>
      <link>https://dev.to/habil/analyzing-cloudfront-logs-with-amazon-athena-1l8a</link>
      <guid>https://dev.to/habil/analyzing-cloudfront-logs-with-amazon-athena-1l8a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AFZdQLb7YdPEmbmZH" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AFZdQLb7YdPEmbmZH" width="1024" height="576"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Adrien on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Suppose you are actively using CloudFront and notice an increase in 4xx responses in your distribution’s statistics. This is usually caused by misconfiguration, but to find the real culprit you need to open the CloudFront access logs and examine the incoming requests. In this article, we’ll look at the most practical way to do that log review. Let’s start.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Athena for CloudFront Logs?
&lt;/h2&gt;

&lt;p&gt;CloudFront generates detailed access logs that contain valuable information about requests made to your distribution. However, these logs are:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delivered as compressed files to your S3 bucket
&lt;/li&gt;
&lt;li&gt;Generated in large volumes
&lt;/li&gt;
&lt;li&gt;Written in a specific format that’s not immediately queryable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Amazon Athena provides a serverless solution to analyze these logs using standard SQL, without the need to set up complex data processing pipelines.&lt;/p&gt;
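For a quick one-off look you can also download a log file and slice it with shell tools; the sample below uses a simplified, hypothetical subset of the tab-separated fields (real CloudFront logs carry many more columns plus two header lines). Athena becomes worthwhile once the volume makes this approach impractical:

```shell
# Quick-and-dirty 4xx tally over a simplified, made-up log sample.
# Columns here: date, time, edge-location, status (real logs have 30+ fields).
sample=$(mktemp)
printf '%s\t%s\t%s\t%s\n' \
  2023-01-05 10:00:01 FRA50 404 \
  2023-01-05 10:00:02 FRA50 200 \
  2023-01-05 10:00:03 IAD89 403 \
  2023-01-05 10:00:04 IAD89 404 > "$sample"

# Count client errors (4xx) per status code.
awk -F'\t' '$4 >= 400 && $4 <= 499 {c[$4]++} END {for (s in c) print s, c[s]}' "$sample"
```

This is essentially what the Athena queries later in the article do, except serverless, over compressed files in S3, and with full SQL.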
&lt;h2&gt;
  
  
  Setting Up the Environment
&lt;/h2&gt;

&lt;p&gt;Let’s break down the process into clear steps:&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Enable CloudFront Logging
&lt;/h3&gt;

&lt;p&gt;First, ensure that CloudFront logging is enabled for your distribution:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the CloudFront console
&lt;/li&gt;
&lt;li&gt;Select your distribution
&lt;/li&gt;
&lt;li&gt;Open the “General” tab and edit the distribution settings
&lt;/li&gt;
&lt;li&gt;Enable standard logging
&lt;/li&gt;
&lt;li&gt;Specify an S3 bucket for your logs
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;  
 &lt;/span&gt;&lt;span class="err"&gt;“Logging”:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;  
 &lt;/span&gt;&lt;span class="err"&gt;“Enabled”:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;  
 &lt;/span&gt;&lt;span class="err"&gt;“IncludeCookies”:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;  
 &lt;/span&gt;&lt;span class="err"&gt;“Bucket”:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;“your-logs-bucket.s&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="err"&gt;.amazonaws.com”&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;  
 &lt;/span&gt;&lt;span class="err"&gt;“Prefix”:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;“cloudfront-logs/”&lt;/span&gt;&lt;span class="w"&gt;  
 &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;  
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;  
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 2: Create Athena Database and Table
&lt;/h3&gt;

&lt;p&gt;Once logs are being delivered to your S3 bucket, you need to create a database and table in Athena:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Athena console
&lt;/li&gt;
&lt;li&gt;Create a new database:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Create a table that maps to the CloudFront log format:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;EXTERNAL&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;  
 &lt;span class="nv"&gt;`date`&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;location&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;bytes&lt;/span&gt; &lt;span class="nb"&gt;BIGINT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;request_ip&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;method&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;host&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;uri&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;referrer&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;user_agent&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;query_string&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;cookie&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;result_type&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;request_id&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;host_header&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;request_protocol&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;request_bytes&lt;/span&gt; &lt;span class="nb"&gt;BIGINT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;time_taken&lt;/span&gt; &lt;span class="nb"&gt;FLOAT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;xforwarded_for&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;ssl_protocol&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;ssl_cipher&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;response_result_type&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;http_version&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;fle_status&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;fle_encrypted_fields&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;c_port&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;time_to_first_byte&lt;/span&gt; &lt;span class="nb"&gt;FLOAT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;x_edge_detailed_result_type&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;sc_content_type&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;sc_content_len&lt;/span&gt; &lt;span class="nb"&gt;BIGINT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;sc_range_start&lt;/span&gt; &lt;span class="nb"&gt;BIGINT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;sc_range_end&lt;/span&gt; &lt;span class="nb"&gt;BIGINT&lt;/span&gt;  
&lt;span class="p"&gt;)&lt;/span&gt;  
&lt;span class="k"&gt;ROW&lt;/span&gt; &lt;span class="n"&gt;FORMAT&lt;/span&gt; &lt;span class="n"&gt;DELIMITED&lt;/span&gt;   
&lt;span class="n"&gt;FIELDS&lt;/span&gt; &lt;span class="n"&gt;TERMINATED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="err"&gt;‘\&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;LOCATION&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cloudfront&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="n"&gt;TBLPROPERTIES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;header&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;count&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="s1"&gt;');  
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Note: The table structure follows CloudFront’s log format. Adjust the &lt;code&gt;LOCATION&lt;/code&gt; to match your S3 bucket path.&lt;/p&gt;
&lt;h2&gt;
  
  
  Analyzing 4xx Errors
&lt;/h2&gt;

&lt;p&gt;Now that your environment is set up, let’s write some queries to analyze those 4xx errors:&lt;/p&gt;
&lt;h3&gt;
  
  
  Query 1: Count of 4xx Errors by Status Code
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error_count&lt;/span&gt;  
&lt;span class="k"&gt;FROM&lt;/span&gt;   
 &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt;  
&lt;span class="k"&gt;WHERE&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;499&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt;  
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;error_count&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This query will show you the distribution of different 4xx error codes.&lt;/p&gt;
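If you export the query results, or post-process parsed log records elsewhere, the same aggregation is easy to reproduce in plain JavaScript. A minimal sketch, where `rows` is a hypothetical array of parsed records using the table's field names:

```javascript
// A plain-JavaScript analogue of the query above. "rows" stands in for
// parsed log records whose field names mirror the cf_access_logs table.
const rows = [
  { status: 404 }, { status: 404 }, { status: 403 },
  { status: 200 }, { status: 404 },
];

// Keep only 4xx responses, then count occurrences per status code.
const errorCounts = rows
  .filter(r => r.status >= 400 && r.status <= 499)
  .reduce((acc, r) => {
    acc[r.status] = (acc[r.status] || 0) + 1;
    return acc;
  }, {});

console.log(errorCounts); // { '403': 1, '404': 3 }
```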
&lt;h3&gt;
  
  
  Query 2: Top URIs Generating 4xx Errors
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;   
 &lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error_count&lt;/span&gt;  
&lt;span class="k"&gt;FROM&lt;/span&gt;   
 &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt;  
&lt;span class="k"&gt;WHERE&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;499&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;  
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;error_count&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;  
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This helps identify problematic endpoints or resources.&lt;/p&gt;
&lt;h3&gt;
  
  
  Query 3: Error Distribution by Time
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;   
 &lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;HOUR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;from_iso8601_timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;Z&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error_count&lt;/span&gt;  
&lt;span class="k"&gt;FROM&lt;/span&gt;   
 &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt;  
&lt;span class="k"&gt;WHERE&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;499&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;HOUR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;from_iso8601_timestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;Z&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;  
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This query helps identify patterns or spikes in errors by time.&lt;/p&gt;
&lt;h3&gt;
  
  
  Query 4: 4xx Errors by User Agent
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;   
 &lt;span class="n"&gt;REGEXP_EXTRACT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="o"&gt;^/&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error_count&lt;/span&gt;  
&lt;span class="k"&gt;FROM&lt;/span&gt;   
 &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt;  
&lt;span class="k"&gt;WHERE&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;499&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;REGEXP_EXTRACT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="o"&gt;^/&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;error_count&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;  
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This can help identify if errors are related to specific browsers or bots.&lt;/p&gt;
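The pattern `([^/]+)` simply captures everything before the first slash of the user agent, which is typically the leading product token. The same capture in JavaScript, purely for illustration:

```javascript
// Same capture group as the Athena query: everything up to the first "/".
// For typical user agents this yields the leading product token.
const browserToken = (ua) => {
  const match = ua.match(/([^/]+)/);
  return match ? match[1] : null;
};

console.log(browserToken('Mozilla/5.0 (Windows NT 10.0; Win64; x64)')); // Mozilla
console.log(browserToken('curl/8.1.2')); // curl
```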
&lt;h3&gt;
  
  
  Query 5: Geographic Distribution of Errors
&lt;/h3&gt;

&lt;p&gt;If you have enabled location fields in your logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;   
 &lt;span class="k"&gt;location&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error_count&lt;/span&gt;  
&lt;span class="k"&gt;FROM&lt;/span&gt;   
 &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt;  
&lt;span class="k"&gt;WHERE&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;499&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="k"&gt;location&lt;/span&gt;  
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;error_count&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;  
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced Analysis: Finding Root Causes
&lt;/h2&gt;

&lt;p&gt;Let’s dig deeper to find patterns that might explain the increase in 4xx errors:&lt;/p&gt;

&lt;h3&gt;
  
  
  Query 6: Analyze Referrer Patterns
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;   
 &lt;span class="n"&gt;referrer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error_count&lt;/span&gt;  
&lt;span class="k"&gt;FROM&lt;/span&gt;   
 &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt;  
&lt;span class="k"&gt;WHERE&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;499&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;referrer&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;referrer&lt;/span&gt;  
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;error_count&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;  
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This can identify broken links from other websites.&lt;/p&gt;

&lt;h3&gt;
  
  
  Query 7: Identify Common Query Parameters in Failed Requests
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;   
 &lt;span class="n"&gt;query_string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;error_count&lt;/span&gt;  
&lt;span class="k"&gt;FROM&lt;/span&gt;   
 &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt;  
&lt;span class="k"&gt;WHERE&lt;/span&gt;   
 &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;499&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
 &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;query_string&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;query_string&lt;/span&gt;  
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;   
 &lt;span class="n"&gt;error_count&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;  
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This may reveal issues with specific parameters being passed to your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Dashboards with QuickSight
&lt;/h2&gt;

&lt;p&gt;For ongoing monitoring, you can connect Amazon QuickSight to your Athena queries:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the QuickSight console
&lt;/li&gt;
&lt;li&gt;Create a new analysis
&lt;/li&gt;
&lt;li&gt;Select Athena as the data source
&lt;/li&gt;
&lt;li&gt;Choose your cloudfront_logs database and cf_access_logs table
&lt;/li&gt;
&lt;li&gt;Build visualizations based on the queries we’ve explored&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some useful visualizations include:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Line chart of 4xx errors over time
&lt;/li&gt;
&lt;li&gt;Bar chart of top error-generating URIs
&lt;/li&gt;
&lt;li&gt;Pie chart of error status code distribution
&lt;/li&gt;
&lt;li&gt;Heat map of errors by hour and day&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Optimizing Athena Queries
&lt;/h2&gt;

&lt;p&gt;CloudFront logs can grow large, making Athena queries expensive. Here are some optimization tips:&lt;/p&gt;

&lt;h3&gt;
  
  
  Partitioning Your Table
&lt;/h3&gt;

&lt;p&gt;For more efficient queries, consider partitioning your table by date:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;EXTERNAL&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs_partitioned&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;  
 &lt;span class="nv"&gt;`time`&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="k"&gt;location&lt;/span&gt; &lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;bytes&lt;/span&gt; &lt;span class="nb"&gt;BIGINT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
&lt;span class="err"&gt; — &lt;/span&gt;&lt;span class="n"&gt;other&lt;/span&gt; &lt;span class="n"&gt;fields&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;previous&lt;/span&gt; &lt;span class="n"&gt;definition&lt;/span&gt;  
&lt;span class="p"&gt;)&lt;/span&gt;  
&lt;span class="n"&gt;PARTITIONED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;`date`&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  
&lt;span class="k"&gt;ROW&lt;/span&gt; &lt;span class="n"&gt;FORMAT&lt;/span&gt; &lt;span class="n"&gt;DELIMITED&lt;/span&gt;   
&lt;span class="n"&gt;FIELDS&lt;/span&gt; &lt;span class="n"&gt;TERMINATED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="err"&gt;‘\&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="k"&gt;LOCATION&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cloudfront&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="n"&gt;TBLPROPERTIES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;header&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;count&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="s1"&gt;');  
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then load partitions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;MSCK&lt;/span&gt; &lt;span class="n"&gt;REPAIR&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs_partitioned&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or add partitions manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs_partitioned&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt;  
&lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;`date`&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="s1"&gt;') LOCATION ‘s3://your-logs-bucket/cloudfront-logs/2023–01–01/’  
PARTITION (`date`=’2023–01–02'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;LOCATION&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cloudfront&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Convert to Columnar Format
&lt;/h3&gt;

&lt;p&gt;Convert your data to a columnar format like Parquet for better performance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs_parquet&lt;/span&gt;  
&lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;  
 &lt;span class="n"&gt;format&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;PARQUET&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;parquet_compression&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;SNAPPY&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
 &lt;span class="n"&gt;external_location&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cloudfront&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;parquet&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;  
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt;  
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;cloudfront_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cf_access_logs&lt;/span&gt;  
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="err"&gt;–&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Common Troubleshooting Patterns
&lt;/h2&gt;

&lt;p&gt;Based on the analysis of 4xx errors, here are common issues to check:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;404 errors&lt;/strong&gt;: Check for recently removed resources or broken links in your application
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;403 errors&lt;/strong&gt;: Review CloudFront distribution settings, especially origin access identity configuration
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;400 errors&lt;/strong&gt;: Look for malformed requests, possibly from outdated clients or bots
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;401 errors&lt;/strong&gt;: Check authentication mechanisms and token expiration settings&lt;/li&gt;
&lt;/ol&gt;
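If you triage programmatically, for example in an alerting Lambda, the checklist above can be restated as a simple lookup table. An illustrative sketch; `firstCheck` and `triage` are made-up names:

```javascript
// Restates the troubleshooting checklist above as a lookup table.
const firstCheck = {
  400: 'look for malformed requests, possibly from outdated clients or bots',
  401: 'check authentication mechanisms and token expiration settings',
  403: 'review distribution settings and origin access identity configuration',
  404: 'check for recently removed resources or broken links',
};

// Return the first thing worth checking for a given 4xx status code.
const triage = (status) =>
  firstCheck[status] || `no standard checklist entry for ${status}`;

console.log(triage(404)); // check for recently removed resources or broken links
```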

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By using Amazon Athena to analyze CloudFront logs, you can quickly identify the root causes of 4xx errors and take corrective actions. This serverless approach eliminates the need for complex ETL processes while providing powerful SQL-based analysis capabilities.&lt;/p&gt;

&lt;p&gt;Remember to optimize your queries and table structure as your log volume grows to keep costs manageable and queries performant.&lt;/p&gt;

&lt;p&gt;I hope this article helps you diagnose and resolve your CloudFront 4xx errors efficiently. Happy troubleshooting! 👻&lt;/p&gt;

</description>
      <category>clodfrontathena</category>
      <category>awscloudfront</category>
      <category>cloudfrontlogging</category>
      <category>awsathena</category>
    </item>
    <item>
      <title>Automated GitLab Artifact Cleanup with AWS Lambda</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Mon, 24 Feb 2025 08:12:04 +0000</pubDate>
      <link>https://dev.to/habil/automated-gitlab-artifact-cleanup-with-aws-lambda-9gm</link>
      <guid>https://dev.to/habil/automated-gitlab-artifact-cleanup-with-aws-lambda-9gm</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2APe5EUZW-P_hMzghb" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2APe5EUZW-P_hMzghb" width="1024" height="576"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Siarhei Palishchuk on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;GitLab CI/CD pipelines generate artifacts and build logs that accumulate over time, consuming valuable storage space. While these artifacts are crucial for debugging and deployment, keeping them indefinitely is neither practical nor cost-effective. Let’s explore how to automate the cleanup process using AWS Lambda and EventBridge.&lt;/p&gt;

&lt;p&gt;AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Combined with EventBridge for scheduling and SNS for notifications, we can create a robust solution for maintaining our GitLab instance’s hygiene.&lt;/p&gt;

&lt;p&gt;Let’s break down the solution into its core components and understand how they work together.&lt;/p&gt;
&lt;h3&gt;
  
  
  Core Components
&lt;/h3&gt;

&lt;p&gt;Our cleanup solution consists of several key elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda function for artifact cleanup&lt;/li&gt;
&lt;li&gt;GitLab API integration for accessing projects and jobs&lt;/li&gt;
&lt;li&gt;AWS SNS integration for notifications&lt;/li&gt;
&lt;li&gt;Environment variables for secure credential management&lt;/li&gt;
&lt;li&gt;Error handling and logging&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Code Analysis
&lt;/h3&gt;

&lt;p&gt;Let’s analyze the main components of our cleanup function:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configuration and Setup:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const gitlabHost = 'https://gitlab.myinstance.com';
const token = process.env.GITLAB_TOKEN;
const date = new Date();
date.setMonth(date.getMonth() - 3); //Can be change
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This section establishes our GitLab connection details and sets up a three-month threshold for artifact retention.&lt;/p&gt;
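The cutoff computation can be factored into a small helper that makes the retention window explicit. `retentionCutoff` is a hypothetical name; UTC accessors keep the result independent of the function's timezone:

```javascript
// Hypothetical helper extracting the cutoff logic: returns the instant
// "months" months before the reference date. UTC accessors keep the result
// independent of the runtime's timezone setting.
function retentionCutoff(referenceDate, months) {
  const cutoff = new Date(referenceDate.getTime());
  cutoff.setUTCMonth(cutoff.getUTCMonth() - months);
  return cutoff;
}

// Jobs updated before this timestamp would be eligible for cleanup.
const cutoff = retentionCutoff(new Date('2025-04-15T00:00:00Z'), 3);
console.log(cutoff.toISOString()); // 2025-01-15T00:00:00.000Z
```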

&lt;ol start="2"&gt;
&lt;li&gt;Main Cleanup Function:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function deleteOldArtifacts() {
    try {
        let page = 1;
        while (true) {
            const projectsUrl = `${gitlabHost}/api/v4/projects?private_token=${token}&amp;amp;per_page=100&amp;amp;page=${page}`;
            const projectsResponse = await axios.get(projectsUrl);
            const projects = projectsResponse.data;
            if (projects.length === 0) break;
            for (const project of projects) {
                        await deleteArtifactsForJob(project.id);
                        await deleteArtifactsForProject(project.id);
                    }
                    page++;
                }
                sendSnsMessage('Old artifacts from all projects deleted successfully.');
            } catch (error) {
                sendSnsMessage(error.message);
            }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This function iterates through all GitLab projects, handling pagination and processing each project’s artifacts one page at a time.&lt;/p&gt;
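&lt;p&gt;The pagination pattern itself is independent of GitLab. Sketched in Python, with a hypothetical fetch_page callable standing in for the axios request above:&lt;/p&gt;

```python
def collect_paginated(fetch_page, per_page=100):
    """Drain a page-numbered API: request pages until one comes back empty."""
    items, page = [], 1
    while True:
        batch = fetch_page(page, per_page)
        if not batch:  # an empty page signals the end of the listing
            break
        items.extend(batch)
        page += 1
    return items
```

&lt;p&gt;Each batch corresponds to one projects response in the Lambda above; the loop ends when GitLab returns an empty list.&lt;/p&gt;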

&lt;ol start="3"&gt;
&lt;li&gt;SNS Notification:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function sendSnsMessage(message) {
    const sns = new AWS.SNS();
    const topicArn = 'arn:aws:sns:eu-central-1:00000000000:arn'; 

    sns.publish({
        TopicArn: topicArn,
        Message: JSON.stringify(message)
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This component sends notifications about the cleanup process status.&lt;/p&gt;
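&lt;p&gt;For comparison, the equivalent notification helper in Python with boto3 takes the same two inputs. The client is passed in so it can be stubbed in tests; the topic ARN and message shape here are illustrative assumptions, not part of the original function.&lt;/p&gt;

```python
import json

def send_sns_message(sns_client, topic_arn, message):
    # boto3's SNS publish returns a dict containing a MessageId on success
    return sns_client.publish(TopicArn=topic_arn, Message=json.dumps(message))
```

&lt;p&gt;Injecting the client rather than constructing it inside the function keeps the helper trivially testable with a stub object.&lt;/p&gt;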
&lt;h3&gt;
  
  
  Setting Up the Solution
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new Lambda function and set these environment variables:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;GITLAB_TOKEN&lt;/li&gt;
&lt;li&gt;AWS_A_KEY&lt;/li&gt;
&lt;li&gt;AWS_S_KEY&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The custom AWS_A_KEY and AWS_S_KEY names are needed because Lambda reserves AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for the runtime. Alternatively, grant SNS publish permission to the function’s execution role and skip explicit keys entirely.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Configure the EventBridge rule with a cron expression to schedule the cleanup:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0 0 1 * ? * // Runs monthly on the 1st
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt;Set up an SNS topic and subscribe to receive notifications about the cleanup process.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Complete Code
&lt;/h3&gt;

&lt;p&gt;Here’s the complete Lambda function code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const axios = require('axios');
const AWS = require('aws-sdk');
require('dotenv').config();
const gitlabHost = 'https://gitlab.myinstance.com';
const token = process.env.GITLAB_TOKEN;
const date = new Date();
date.setMonth(date.getMonth() - 3);
const formattedThreeMonthsAgo = date.toISOString();
AWS.config.update({
    region: 'eu-central-1', 
    accessKeyId: process.env.AWS_A_KEY,
    secretAccessKey: process.env.AWS_S_KEY
});
async function deleteOldArtifacts() {
    try {
        let page = 1;
        while (true) {
            const projectsUrl = `${gitlabHost}/api/v4/projects?private_token=${token}&amp;amp;per_page=100&amp;amp;page=${page}`;
            const projectsResponse = await axios.get(projectsUrl);
            const projects = projectsResponse.data;
            if (projects.length === 0) break;
            for (const project of projects) {
                console.log(`Processing project ID: ${project.id}`);
                await deleteArtifactsForJob(project.id);
                await deleteArtifactsForProject(project.id);
            }
            page++;
        }
        await sendSnsMessage('Old artifacts from all projects deleted successfully.');
    } catch (error) {
        console.log(error.message);
        await sendSnsMessage(error.message);
    }
}
async function deleteArtifactsForProject(projectId) {
    try {
        const projectArtifactsUrl = `${gitlabHost}/api/v4/projects/${projectId}/artifacts?private_token=${token}`;
        await axios.delete(projectArtifactsUrl);   
    } catch (error) {
        console.error(`Error deleting artifacts for project ${projectId}:`, error.message);
    }
}
async function deleteArtifactsForJob(projectId) {
    try {
        const jobsUrl = `${gitlabHost}/api/v4/projects/${projectId}/jobs?private_token=${token}`;
        const jobsResponse = await axios.get(jobsUrl);
        const jobs = jobsResponse.data;
        for (const job of jobs) {
            try {
                if(typeof job.artifacts !== 'undefined' &amp;amp;&amp;amp; job.artifacts.length &amp;gt; 0 &amp;amp;&amp;amp; job.finished_at &amp;amp;&amp;amp; job.finished_at &amp;lt; formattedThreeMonthsAgo) {
                    const deleteUrl = `${gitlabHost}/api/v4/projects/${projectId}/jobs/${job.id}/artifacts?private_token=${token}`;
                    await axios.delete(deleteUrl);
                    console.log(`Artifact ${job.id} deleted successfully.`);
                }
            } catch (error) {
                console.error(`Error deleting artifacts for job ${job.id} in project ${projectId}:`, error.message);
            }
        }
    } catch (error) {
        console.error(`Error listing jobs for project ${projectId}:`, error.message);
    }
}
async function sendSnsMessage(message) {
    const sns = new AWS.SNS();
    const topicArn = 'arn:aws:sns:eu-central-1:00000000000:arn'; // Replace with your topic ARN
    try {
        // Await the publish so the Lambda does not return before the message is sent
        const data = await sns.publish({
            TopicArn: topicArn,
            Message: JSON.stringify(message)
        }).promise();
        console.log('Message sent successfully:', data);
    } catch (err) {
        console.error('Error sending message:', err);
    }
}
exports.handler = async (event) =&amp;gt; {
    await deleteOldArtifacts();
    const response = {
      statusCode: 200,
      body: JSON.stringify('Done!'),
    };
    return response;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Automated cleanup reduces manual maintenance&lt;/li&gt;
&lt;li&gt;Configurable retention period&lt;/li&gt;
&lt;li&gt;Notification system for monitoring&lt;/li&gt;
&lt;li&gt;Cost-effective storage management&lt;/li&gt;
&lt;li&gt;Scalable solution for growing GitLab instances&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This automated solution helps maintain a clean and efficient GitLab instance by regularly removing old artifacts. The combination of AWS Lambda, EventBridge, and SNS creates a reliable, hands-off maintenance system that can be easily adapted to your specific needs.&lt;/p&gt;

&lt;p&gt;See you in the next article! 👻&lt;/p&gt;

</description>
      <category>eventbridge</category>
      <category>gitlab</category>
      <category>aws</category>
      <category>awslambda</category>
    </item>
    <item>
      <title>How to List AWS S3 Directory Contents Using Python and Boto3</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Mon, 24 Feb 2025 07:53:45 +0000</pubDate>
      <link>https://dev.to/habil/how-to-list-aws-s3-directory-contents-using-python-and-boto3-3i1b</link>
      <guid>https://dev.to/habil/how-to-list-aws-s3-directory-contents-using-python-and-boto3-3i1b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A0z_KBa1i_QmcfFB9" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A0z_KBa1i_QmcfFB9" width="1024" height="683"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Luke Chesser on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When working with AWS S3, you might need to get a list of all files in a specific bucket or directory. This is particularly useful for inventory management, backup operations, or content synchronization. In this article, we’ll explore how to use Python and boto3 to list directory contents in an S3 bucket.&lt;/p&gt;
&lt;h3&gt;
  
  
  Common Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Creating inventory reports of S3 bucket contents&lt;/li&gt;
&lt;li&gt;Verifying uploaded files after bulk transfers&lt;/li&gt;
&lt;li&gt;Monitoring content changes in specific folders&lt;/li&gt;
&lt;li&gt;Synchronizing content between different environments&lt;/li&gt;
&lt;li&gt;Automated file management and cleanup operations&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.x installed&lt;/li&gt;
&lt;li&gt;AWS account with appropriate permissions&lt;/li&gt;
&lt;li&gt;boto3 library installed (pip install boto3)&lt;/li&gt;
&lt;li&gt;AWS credentials configured&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Understanding the Code
&lt;/h3&gt;

&lt;p&gt;Let’s break down a simple yet effective solution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import csv
def list_bucket():
  # Configuration
  bucket = "your-bucket-name"
  folder = "path/to/folder"
  # Initialize S3 client
  s3 = boto3.resource("s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY"
  )
  # Get bucket reference
  s3_bucket = s3.Bucket(bucket)

  # List files and extract relative paths
  files_in_s3 = [
      f.key.split(folder + "/")[1]
      for f in s3_bucket.objects.filter(Prefix=folder).all()
      if f.key != folder + "/"  # skip the zero-byte folder marker object
  ]

  # Write results to file
  with open('bucket-contents.txt', 'w', encoding='UTF8') as file:
      file.write(str(files_in_s3))
if __name__ == '__main__':
  list_bucket()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Code Breakdown
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Client Initialization&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3 = boto3.resource("s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section establishes a connection to AWS S3 using your credentials. For better security, consider using AWS CLI profiles or environment variables instead of hardcoded credentials.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;File Listing&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;files_in_s3 = [
    f.key.split(folder + "/")[1] 
    for f in s3_bucket.objects.filter(Prefix=folder).all()
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This part:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses filter(Prefix=folder) to list only files in the specified folder&lt;/li&gt;
&lt;li&gt;Splits the full path to get relative file paths&lt;/li&gt;
&lt;li&gt;Creates a list using list comprehension&lt;/li&gt;
&lt;/ul&gt;
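&lt;p&gt;The path-stripping step is worth writing defensively: if the bucket contains a zero-byte folder marker object whose key equals the prefix itself, the split has no second element and raises an IndexError. A minimal sketch of the safe version:&lt;/p&gt;

```python
def relative_keys(keys, folder):
    """Strip 'folder/' from each key, ignoring the folder marker itself."""
    prefix = folder + "/"
    return [k[len(prefix):] for k in keys if k.startswith(prefix) and k != prefix]
```

&lt;p&gt;Slicing by the prefix length also avoids surprises when the folder name happens to reappear deeper in the key, which str.split would cut at.&lt;/p&gt;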

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Output Generation&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;with open('bucket-contents.txt', 'w', encoding='UTF8') as file:
    file.write(str(files_in_s3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Writes the results to a text file using proper UTF-8 encoding.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced Version
&lt;/h3&gt;

&lt;p&gt;Here’s an improved version with additional features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import csv
from datetime import datetime
def list_bucket_contents(bucket_name, folder_prefix, output_format='txt'):
    try:
        # Initialize S3 client using AWS CLI profile
        session = boto3.Session(profile_name='default')
        s3 = session.resource('s3')
        bucket = s3.Bucket(bucket_name)

        # Get file listing
        files = []
        for obj in bucket.objects.filter(Prefix=folder_prefix):
            if obj.key == folder_prefix + '/':
                continue  # skip the zero-byte folder marker object
            files.append({
                'name': obj.key.split(folder_prefix + '/')[1],
                'size': obj.size,
                'last_modified': obj.last_modified
            })

        # Generate output filename
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        output_file = f'bucket_contents_{timestamp}.{output_format}'

        # Write output
        if output_format == 'csv':
            with open(output_file, 'w', newline='', encoding='UTF8') as f:
                writer = csv.DictWriter(f, fieldnames=['name', 'size', 'last_modified'])
                writer.writeheader()
                writer.writerows(files)
        else:
            with open(output_file, 'w', encoding='UTF8') as f:
                f.write(str(files))

        return True, output_file

    except Exception as e:
        return False, str(e)
if __name__ == '__main__':
    success, result = list_bucket_contents(
        'your-bucket-name',
        'path/to/folder',
        'csv'
    )
    print(f"Operation {'successful' if success else 'failed'}: {result}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enhanced version includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support for multiple output formats (CSV/TXT)&lt;/li&gt;
&lt;li&gt;Additional file metadata (size, last modified date)&lt;/li&gt;
&lt;li&gt;Error handling&lt;/li&gt;
&lt;li&gt;Timestamp-based output files&lt;/li&gt;
&lt;li&gt;AWS CLI profile support&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Best Practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Never hardcode AWS credentials in your code&lt;/li&gt;
&lt;li&gt;Use error handling to manage potential failures&lt;/li&gt;
&lt;li&gt;Consider pagination for large buckets&lt;/li&gt;
&lt;li&gt;Include relevant metadata in the output&lt;/li&gt;
&lt;li&gt;Use appropriate file encoding (UTF-8)&lt;/li&gt;
&lt;/ol&gt;
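&lt;p&gt;Point 3 matters because S3 list calls return at most 1,000 objects per request. The continuation-token loop can be sketched like this, mirroring the list_objects_v2 response shape; the list_page callable here is a hypothetical stand-in for the real client call:&lt;/p&gt;

```python
def list_all_keys(list_page, prefix):
    """Drain a token-paginated listing shaped like S3's list_objects_v2."""
    keys, token = [], None
    while True:
        page = list_page(prefix, token)
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
        if not page.get("IsTruncated"):
            return keys
        token = page.get("NextContinuationToken")
```

&lt;p&gt;The boto3 resource interface used above paginates transparently; this sketch shows what the low-level client requires when you call it directly.&lt;/p&gt;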

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Directory listing in AWS S3 using Python and boto3 is a powerful tool for managing your cloud storage. Whether you’re doing inventory management, migrations, or routine maintenance, this script provides a solid foundation that you can build upon.&lt;/p&gt;

&lt;p&gt;See you in the next article! 👻&lt;/p&gt;

</description>
      <category>aws</category>
      <category>boto3</category>
      <category>awss3</category>
    </item>
    <item>
      <title>Automated Java Application Deployment to AWS Elastic Beanstalk using GitHub Actions</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Mon, 24 Feb 2025 07:51:14 +0000</pubDate>
      <link>https://dev.to/habil/automated-java-application-deployment-to-aws-elastic-beanstalk-using-github-actions-1fd8</link>
      <guid>https://dev.to/habil/automated-java-application-deployment-to-aws-elastic-beanstalk-using-github-actions-1fd8</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A6DTmsd1ZMU19NHbj" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A6DTmsd1ZMU19NHbj" width="1024" height="705"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Roman Synkevych on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When developing modern applications, having a robust CI/CD pipeline is crucial. In this guide, we’ll walk through setting up automated deployments to AWS Elastic Beanstalk using GitHub Actions for both staging and production environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Java application with Maven&lt;/li&gt;
&lt;li&gt;GitHub repository&lt;/li&gt;
&lt;li&gt;AWS Elastic Beanstalk environment&lt;/li&gt;
&lt;li&gt;AWS IAM credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setting Up GitHub Actions
&lt;/h3&gt;

&lt;p&gt;We’ll create two workflow files: one for staging and one for production. These files should be placed in the .github/workflows/ directory of your repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Staging Deployment Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy staging
on:
  push:
    branches:
    - staging
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout source code
      uses: actions/checkout@v4
    - name: Setup Java
      uses: actions/setup-java@v4
      with:
        distribution: 'temurin'
        java-version: '21'
    - name: Setup Maven
      uses: stCarolas/setup-maven@v5
      with:
        maven-version: 3.8.2
    - name: Build project
      run: mvn clean package -DskipTests
    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v22
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: 'your-app-name-staging'
        environment_name: 'your-environment-name-staging'
        version_label: ${{ github.sha}}
        region: eu-central-1
        existing_bucket_name: 'elasticbeanstalk-eu-central-1-your-bucket'
        deployment_package: target/your-application.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Production Deployment Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy production
on:
  push:
    branches:
    - production
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout source code
      uses: actions/checkout@v4
    - name: Setup Java
      uses: actions/setup-java@v4
      with:
        distribution: 'temurin'
        java-version: '21'
    - name: Setup Maven
      uses: stCarolas/setup-maven@v5
      with:
        maven-version: 3.8.2
    - name: Build project
      run: mvn clean package -DskipTests
    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v22
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: 'your-app-name-prod'
        environment_name: 'your-environment-name-prod'
        version_label: ${{ github.sha}}
        region: eu-central-1
        existing_bucket_name: 'elasticbeanstalk-eu-central-1-your-bucket'
        deployment_package: target/your-application.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting Up GitHub Secrets
&lt;/h3&gt;

&lt;p&gt;To securely manage AWS credentials, you must add them as GitHub secrets. Here’s how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to your GitHub repository&lt;/li&gt;
&lt;li&gt;Go to Settings &amp;gt; Secrets and variables &amp;gt; Actions&lt;/li&gt;
&lt;li&gt;Click “New repository secret”&lt;/li&gt;
&lt;li&gt;Add the following secrets:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID: Your AWS access key&lt;/li&gt;
&lt;li&gt;AWS_SECRET_ACCESS_KEY: Your AWS secret key&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;Let’s break down the key components of our workflow:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Trigger Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
    - main # or production for prod deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflow triggers when code is pushed to the specified branch.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Environment Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Setup Java
  uses: actions/setup-java@v4
  with:
    distribution: 'temurin'
    java-version: '21'
- name: Setup Maven
  uses: stCarolas/setup-maven@v5
  with:
    maven-version: 3.8.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sets up the Java and Maven environment for building the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Build Process
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Build project
  run: mvn clean package -DskipTests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Builds the Java application and creates the JAR file.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Deployment
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Deploy to EB
  uses: einaregilsson/beanstalk-deploy@v22
  with:
    aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    application_name: 'your-app-name'
    environment_name: 'your-environment-name'
    version_label: ${{ github.sha}}
    region: eu-central-1
    existing_bucket_name: 'elasticbeanstalk-eu-central-1-your-bucket'
    deployment_package: target/your-application.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploys the built application to AWS Elastic Beanstalk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Important Notes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt; : Never commit AWS credentials directly in the workflow files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version Label&lt;/strong&gt; : We use github.sha for version tracking, so every deployment gets a unique version label.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skip Tests&lt;/strong&gt; : The current configuration skips tests during the build (-DskipTests). Consider enabling tests for production deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Region&lt;/strong&gt; : Use the correct AWS region where your Elastic Beanstalk environment is hosted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bucket Name&lt;/strong&gt; : The existing_bucket_name should match your Elastic Beanstalk S3 bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Automated deployments triggered by code pushes&lt;/li&gt;
&lt;li&gt;Separate workflows for staging and production environments&lt;/li&gt;
&lt;li&gt;Secure credential management using GitHub secrets&lt;/li&gt;
&lt;li&gt;Consistent build and deployment process&lt;/li&gt;
&lt;li&gt;Version tracking using Git SHA&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This setup provides a robust CI/CD pipeline for Java applications, automating the deployment process to AWS Elastic Beanstalk. By maintaining separate workflows for staging and production, we ensure a proper deployment strategy while keeping our sensitive credentials secure.&lt;/p&gt;

&lt;p&gt;See you in the next article! 👻&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>aws</category>
      <category>github</category>
      <category>elasticbeanstalk</category>
    </item>
    <item>
      <title>How to Clone AWS Cognito User Pools Using Python</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Mon, 03 Feb 2025 15:48:05 +0000</pubDate>
      <link>https://dev.to/habil/how-to-clone-aws-cognito-user-pools-using-python-2h94</link>
      <guid>https://dev.to/habil/how-to-clone-aws-cognito-user-pools-using-python-2h94</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AwYik9CxoRt2dpMQb" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AwYik9CxoRt2dpMQb" width="1024" height="681"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by marc belver colomer on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Amazon Cognito is a powerful user authentication and authorization service provided by AWS. It helps you manage user sign-up, sign-in, and access control for your web and mobile applications. One common requirement when working with Cognito is the need to clone a User Pool, especially when setting up different environments (development, staging, production) or creating backups.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore how to clone an AWS Cognito User Pool programmatically using Python and the boto3 library. We’ll create a script that copies all essential components, including app clients, groups, and schema attributes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.x installed&lt;/li&gt;
&lt;li&gt;AWS account with appropriate permissions&lt;/li&gt;
&lt;li&gt;boto3 library installed (pip install boto3)&lt;/li&gt;
&lt;li&gt;AWS credentials configured&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Understanding the Code Structure
&lt;/h3&gt;

&lt;p&gt;Let’s break down our solution into manageable parts:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Setting Up AWS Client
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from botocore.exceptions import ClientError

def get_client(aws_profile=None, region_name='eu-central-1'):
    if aws_profile:
        boto3.setup_default_session(profile_name=aws_profile)
    return boto3.client('cognito-idp', region_name=region_name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section initializes the AWS client using boto3. It allows you to specify an AWS profile and region, making it flexible for different environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Main Cloning Function
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def copy_user_pool(source_user_pool_id, new_user_pool_name, aws_profile=None):
    client = get_client(aws_profile)

    try:
        response = client.describe_user_pool(UserPoolId=source_user_pool_id)
        user_pool_details = response['UserPool']
        print(f"Source User Pool details for ID {source_user_pool_id} retrieved successfully.")

        new_user_pool_response = client.create_user_pool(
            PoolName=new_user_pool_name,
            Policies=user_pool_details['Policies'],
            LambdaConfig=user_pool_details.get('LambdaConfig', {}),
            AutoVerifiedAttributes=user_pool_details.get('AutoVerifiedAttributes', []),
            # ... other configuration parameters
        )

        new_user_pool_id = new_user_pool_response['UserPool']['Id']
        return new_user_pool_id

    except Exception as e:
        print(f"An error occurred: {e}")
        return None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function handles the main cloning process. It:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Retrieves the source User Pool details&lt;/li&gt;
&lt;li&gt;Creates a new User Pool with the same configuration&lt;/li&gt;
&lt;li&gt;Returns the new User Pool ID if successful&lt;/li&gt;
&lt;/ol&gt;
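&lt;p&gt;One detail worth noting: describe_user_pool returns read-only fields such as Id, Arn, and CreationDate that create_user_pool rejects, so only a whitelist of parameters should be forwarded. A minimal helper for that filtering step (the key list in the usage line is illustrative, not exhaustive):&lt;/p&gt;

```python
def subset_config(details, allowed):
    """Keep only the parameters the create call accepts, skipping absent ones."""
    return {key: details[key] for key in allowed if key in details}

# Illustrative usage against the dict returned by describe_user_pool:
# create_kwargs = subset_config(user_pool_details,
#                               ['Policies', 'LambdaConfig', 'AutoVerifiedAttributes'])
```

&lt;p&gt;Skipping absent keys also keeps the call working across pools that do not configure every optional feature.&lt;/p&gt;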

&lt;h3&gt;
  
  
  3. Copying App Clients
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def copy_app_clients(client, source_user_pool_id, new_user_pool_id, user_pool_details):
    try:
        app_clients_response = client.list_user_pool_clients(UserPoolId=source_user_pool_id)
        app_clients = app_clients_response['UserPoolClients']

        for app_client in app_clients:
            client.create_user_pool_client(
                UserPoolId=new_user_pool_id,
                ClientName=app_client['ClientName'],
                GenerateSecret=True,
                RefreshTokenValidity=86400,
                # ... other client configurations
            )
    except Exception as e:
        print(f"An error occurred while copying app clients: {e}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function copies all app clients from the source User Pool to the new one, maintaining their configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Copying User Pool Groups
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def copy_user_pool_groups(client, source_user_pool_id, new_user_pool_id):
    try:
        groups_response = client.list_groups(UserPoolId=source_user_pool_id)
        groups = groups_response['Groups']

        for group in groups:
            client.create_group(
                UserPoolId=new_user_pool_id,
                GroupName=group['GroupName'],
                Description=group.get('Description', ''),
                Precedence=group.get('Precedence', 0)
            )
    except Exception as e:
        print(f"An error occurred while copying groups: {e}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function replicates all user groups from the source User Pool to the new one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Usage Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if __name__ == " __main__":
    source_user_pool_id = 'eu-central-1_XXXXXXXX' # Replace with your source pool ID
    new_user_pool_name = 'my-new-user-pool'
    aws_profile = 'YOUR_AWS_PROFILE'

new_pool_id = copy_user_pool(source_user_pool_id, new_user_pool_name, aws_profile)
    if new_pool_id:
        print(f"User Pool copied successfully. New Pool ID: {new_pool_id}")
    else:
        print("User Pool copying failed.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Important Notes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The script maintains the same schema attributes, policies, and configurations as the source User Pool&lt;/li&gt;
&lt;li&gt;App clients are created with new client IDs and secrets&lt;/li&gt;
&lt;li&gt;User data is not copied — only the User Pool structure and configuration&lt;/li&gt;
&lt;li&gt;Make sure you have appropriate AWS permissions before running the script&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This Python script provides a convenient way to clone AWS Cognito User Pools, which can be particularly useful when setting up new environments or creating backups. The modular structure makes it easy to modify and extend based on your specific needs.&lt;/p&gt;

&lt;p&gt;Remember to handle sensitive information carefully and never commit AWS credentials to version control systems.&lt;/p&gt;

&lt;p&gt;See you in the next article! 👻&lt;/p&gt;

</description>
      <category>awgcognito</category>
      <category>cognitouserpools</category>
      <category>cognito</category>
      <category>cognitocopy</category>
    </item>
    <item>
      <title>Publish static website with S3 + Github + Cloudfront</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Sun, 27 Aug 2023 18:40:14 +0000</pubDate>
      <link>https://dev.to/habil/publish-static-website-with-s3-github-cloudfront-21bc</link>
      <guid>https://dev.to/habil/publish-static-website-with-s3-github-cloudfront-21bc</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1627752633728-85ebd86dfe4a%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDExfHxwaXBlbGluZXxlbnwwfHx8fDE2OTMxNjE1ODR8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1627752633728-85ebd86dfe4a%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDExfHxwaXBlbGluZXxlbnwwfHx8fDE2OTMxNjE1ODR8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" alt="Publish static website with S3 + Github + Cloudfront" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A static website is a type of website that consists of fixed content and resources that do not change dynamically. This means that the content is pre-built and remains the same for all users who visit the website. Static websites are typically written in HTML, CSS, and JavaScript, and they can be hosted on a variety of platforms, including cloud services like Amazon Web Services (AWS).&lt;/p&gt;

&lt;p&gt;Amazon S3 (Simple Storage Service) is a scalable cloud storage service provided by Amazon Web Services. It allows you to store and retrieve large amounts of data, including files like images, videos, documents, and more. S3 is designed to be highly reliable, secure, and cost-effective. It's commonly used to store a wide range of data, from backups to media files.&lt;/p&gt;

&lt;p&gt;Using AWS S3 to host a static website can provide you with a cost-effective, reliable, and scalable solution that is well-suited for websites with content that doesn't change frequently.&lt;/p&gt;

&lt;p&gt;I assume you have a static website, want to deploy changes to it with GitHub Actions, and want to distribute it through the CloudFront CDN.&lt;/p&gt;

&lt;p&gt;I also assume you already have an S3 bucket and a CloudFront distribution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Secrets
&lt;/h2&gt;

&lt;p&gt;To run the action, you need to create a few repository secrets. Let's create the three secrets the action requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID&lt;/li&gt;
&lt;li&gt;AWS_SECRET_ACCESS_KEY&lt;/li&gt;
&lt;li&gt;DISTRIBUTION_ID&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create GitHub Action
&lt;/h2&gt;

&lt;p&gt;Create a .github directory with a workflows directory inside it in your project root. Note the plural: GitHub Actions only picks up workflow files from .github/workflows.&lt;/p&gt;

&lt;p&gt;After that, create a main.yml file in that directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p .github/workflows
touch .github/workflows/main.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Populate the main.yml file with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Upload Website to S3

on:
  push:
    branches:
    - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v1

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: eu-central-1

    - name: Deploy static site to S3 bucket
      run: aws s3 sync . s3://my-static-website-bucket --delete
    - name: Invalidate CloudFront Distribution
      run: aws cloudfront create-invalidation --distribution-id ${{ secrets.DISTRIBUTION_ID }} --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
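The --delete flag on the sync step removes objects from the bucket that no longer exist locally. The decision it makes can be illustrated without touching AWS at all (a pure-shell sketch with made-up file names, not the actual AWS implementation):

```shell
# Illustration of what `aws s3 sync . s3://bucket --delete` decides:
# files present remotely but missing locally get removed.
mkdir -p local remote
touch local/index.html local/style.css
touch remote/index.html remote/old-page.html
ls local | sort > local.txt
ls remote | sort > remote.txt
# lines only in remote.txt = objects the --delete flag would remove
comm -13 local.txt remote.txt
# prints: old-page.html
```

Without --delete, old-page.html would stay in the bucket (and in the CloudFront cache) forever.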



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We created a GitHub Actions pipeline that uploads our static files to S3 and invalidates the CloudFront distribution, so visitors receive the updated version of the page.&lt;/p&gt;

&lt;p&gt;See you in the next article. 👻&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create virtual machines with MacOS (m1 and m2 included)</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Sat, 08 Jul 2023 17:50:46 +0000</pubDate>
      <link>https://dev.to/habil/create-virtual-machines-with-macos-m1-and-m2-included-22f0</link>
      <guid>https://dev.to/habil/create-virtual-machines-with-macos-m1-and-m2-included-22f0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1627660163617-72878fc413e5%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDQ2fHxtYWNvc3xlbnwwfHx8fDE2ODgzMTc3MzR8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1627660163617-72878fc413e5%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDQ2fHxtYWNvc3xlbnwwfHx8fDE2ODgzMTc3MzR8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" alt="Create virtual machines with MacOS (m1 and m2 included)" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A virtual machine (VM) is a software emulation of a computer system that behaves like a separate physical machine. It enables the creation and execution of multiple operating systems or applications on a single physical computer. VMs provide isolation, allowing different software environments to run independently of each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;You are using a MacBook as your development device and want to run an application that is only available for Windows or Linux.&lt;/p&gt;

&lt;p&gt;My scenario was as follows: I wanted to install the Power BI client on my computer and found that only a Windows version is available. I figured I could solve this problem by setting up a virtual machine. Unfortunately, free solutions are hard to find in the macOS world, but you can solve this problem with open-source software.&lt;/p&gt;

&lt;p&gt;The first solution that comes to mind is Oracle VM VirtualBox, but VirtualBox does not yet support Apple Silicon processors (support is currently in Developer Preview). My second option is UTM. UTM is a full-featured system emulator and virtual machine host for iOS and macOS, based on QEMU. In short, it allows you to run Windows, Linux, and more on your Mac, iPhone, and iPad. Let's start.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Requirements&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://brew.sh/" rel="noopener noreferrer"&gt;brew&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Step 1: You need to install UTM. You can download UTM from the &lt;a href="https://github.com/utmapp/UTM/releases/" rel="noopener noreferrer"&gt;GitHub page.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Install the required libraries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install cabextract wimlib cdrtools minacle/chntpw/chntpw aria2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3: Download the installer creator. To do this, visit one of the following addresses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://uupdump.net/known.php?q=21390" rel="noopener noreferrer"&gt;UUP dump (Windows 10 21H2)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://uupdump.net/known.php?q=22000.1098" rel="noopener noreferrer"&gt;UUP dump (Windows 11 21H2)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://uupdump.net/known.php?q=22621.674" rel="noopener noreferrer"&gt;UUP dump (Windows 11 22H2)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have downloaded and extracted the installer creator, run &lt;code&gt;uup_download_macos.sh&lt;/code&gt; from Terminal to generate the ISO.&lt;/p&gt;

&lt;p&gt;Step 4: Open UTM and start the Windows installation.&lt;/p&gt;

&lt;p&gt;The following steps are required for a successful installation.&lt;/p&gt;

&lt;p&gt;Open UTM and click the “+” button to open the VM creation wizard.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select “Virtualize”.&lt;/li&gt;
&lt;li&gt;Select “Windows”.&lt;/li&gt;
&lt;li&gt;Make sure “Import VHDX Image” is &lt;em&gt;unchecked&lt;/em&gt; and “Install Windows 10 or higher” is &lt;em&gt;checked&lt;/em&gt;. Also, make sure “Install drivers and SPICE tools” is &lt;em&gt;checked&lt;/em&gt;. Press “Browse” and select the ISO you built in Step 3.&lt;/li&gt;
&lt;li&gt;Pick the amount of RAM and CPU cores you wish to give access to the VM. Press “Next” to continue.&lt;/li&gt;
&lt;li&gt;Specify the maximum amount of drive space to allocate. Press “Next” to continue.&lt;/li&gt;
&lt;li&gt;If you have a directory you want to mount in the VM, you can select it here. Alternatively, you can skip this and select the directory later from the VM window’s toolbar. The shared directory will be available after installing SPICE tools (see below). Press “Next” to continue.&lt;/li&gt;
&lt;li&gt;Press “Save” to create the VM. Wait for the guest tools to finish downloading and press the Run button to start the VM.&lt;/li&gt;
&lt;li&gt;Follow the Windows installer. If you have issues with the mouse, press the mouse capture button in the toolbar to send mouse input directly. Press Control+Option together to exit mouse capture mode. If the cursor misbehaves due to driver issues, entering and then exiting capture mode usually makes it work normally again.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we talked about how to install Windows in a virtual machine on macOS. I hope it was useful.&lt;/p&gt;

&lt;p&gt;See you in the next article. 👻&lt;/p&gt;

</description>
      <category>macos</category>
      <category>vm</category>
    </item>
    <item>
      <title>How to use Lens IDE with two different AWS profiles</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Sat, 01 Jul 2023 19:24:24 +0000</pubDate>
      <link>https://dev.to/habil/how-to-use-lens-ide-with-two-different-aws-profiles-7al</link>
      <guid>https://dev.to/habil/how-to-use-lens-ide-with-two-different-aws-profiles-7al</guid>
      <description>&lt;h2&gt;
  
  
  What is Lens IDE?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1515774004412-e3185c2a8217%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDF8fG11bHRpfGVufDB8fHx8MTY4ODIzOTMzNnww%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1515774004412-e3185c2a8217%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDF8fG11bHRpfGVufDB8fHx8MTY4ODIzOTMzNnww%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" alt="How to use Lens IDE with two different AWS profiles" width="2000" height="1277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Lens IDE is a desktop application with a user-friendly interface for managing and interacting with Kubernetes clusters. It offers a centralized dashboard to view and manage multiple clusters, provides real-time metrics and logs, allows for easy deployment and management of applications, and offers various other features to simplify working with Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;With a single profile in the AWS CLI, Lens IDE works without issues, but once you start working with more than one profile, you begin to run into trouble.&lt;/p&gt;

&lt;p&gt;Manually editing the active profile or rotating the access keys gets tiresome after a while.&lt;/p&gt;

&lt;p&gt;I can offer you a solution to overcome this problem. Let's start.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Requirements&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html" rel="noopener noreferrer"&gt;aws-cli&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/johnnyopao/awsp" rel="noopener noreferrer"&gt;awsp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://k8slens.dev/" rel="noopener noreferrer"&gt;open-lens&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Before starting the installation, let's ensure everything works with a single profile. To achieve this, let's complete the AWS CLI, then kubectl and finally Lens IDE installations.&lt;/p&gt;

&lt;p&gt;After the initial single-profile setup, create an additional AWS profile with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure --profile PROFILE_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example AWS CLI Config File:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[profile DEMO_1]
region = eu-central-1
output = json
[profile DEMO_2]
region = eu-central-1
output = json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example AWS CLI Credential File:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[DEMO_1]
aws_access_key_id = XYZ
aws_secret_access_key = 123
[DEMO_2]
aws_access_key_id = ABC
aws_secret_access_key = 321

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install AWSP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g awsp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following to your &lt;code&gt;.bashrc&lt;/code&gt; or &lt;code&gt;.zshrc&lt;/code&gt; config&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alias awsp="source _awsp"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that awsp is installed, executing the &lt;code&gt;awsp&lt;/code&gt; command in your terminal lets you switch between your AWS profiles.&lt;/p&gt;
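Under the hood, switching with awsp amounts to exporting AWS_PROFILE in the current shell, which the AWS CLI then picks up. A minimal sketch of the effect (DEMO_1 is the example profile name from the config above):

```shell
# awsp's effect boils down to setting AWS_PROFILE for the current shell;
# every AWS CLI call made afterwards uses that profile's credentials.
export AWS_PROFILE=DEMO_1
echo "active profile: $AWS_PROFILE"
# prints: active profile: DEMO_1
```

This is why the alias sources `_awsp` instead of running it as a subprocess: an exported variable only survives if it is set in the current shell.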

&lt;p&gt;One more step remains: we need to add some environment variables to the kubectl config file (by default &lt;code&gt;~/.kube/config&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Open the file and add the following environment variables for every profile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users:
- name: demo@demo.eu-central-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - dec14
      command: aws
      env:
      - name: AWS_PROFILE
        value: DEMO_1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
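The same exec pattern is repeated for each additional cluster, with AWS_PROFILE pointing at the matching profile. Continuing the example config above, a second cluster's user entry might look like this (the user name and cluster id are placeholders):

```yaml
users:
- name: demo2@demo.eu-central-1.eksctl.io   # placeholder user/cluster name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - dec14
      command: aws
      env:
      - name: AWS_PROFILE
        value: DEMO_2                        # second profile from the config file
```

With this in place, Lens can authenticate against both clusters without you touching the active profile by hand.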



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we learned how to create and manage multiple AWS profiles and how to use them with Lens IDE. I hope it was useful.&lt;/p&gt;

&lt;p&gt;See you in the next article. 👻&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>k8s</category>
      <category>lens</category>
    </item>
    <item>
      <title>Transfer your Gmail messages to Google Workspace Group</title>
      <dc:creator>Habil BOZALİ</dc:creator>
      <pubDate>Tue, 20 Jun 2023 21:21:33 +0000</pubDate>
      <link>https://dev.to/habil/transfer-your-gmail-messages-to-google-workspace-group-lnf</link>
      <guid>https://dev.to/habil/transfer-your-gmail-messages-to-google-workspace-group-lnf</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1590893384694-5770006aa1bc%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDIwfHxUcmFuc2ZlcnxlbnwwfHx8fDE2ODcyOTYwNDN8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1590893384694-5770006aa1bc%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDIwfHxUcmFuc2ZlcnxlbnwwfHx8fDE2ODcyOTYwNDN8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" alt="Transfer your Gmail messages to Google Workspace Group" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Google Groups is an online service provided by Google that allows people with shared interests or goals to communicate and collaborate in a group setting. It serves as a platform for discussions, email-based communication, file sharing, and organizing events within a group of individuals.&lt;/p&gt;

&lt;p&gt;There are several reasons why you might consider using Google Groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Communication and Collaboration&lt;/li&gt;
&lt;li&gt;Community Building&lt;/li&gt;
&lt;li&gt;Email Distribution&lt;/li&gt;
&lt;li&gt;Access Control and Privacy&lt;/li&gt;
&lt;li&gt;Integration with other Google Services&lt;/li&gt;
&lt;li&gt;Archiving and Searchability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What if you have an old Gmail mailbox and want to transfer all the content into a Google group? Let's solve it together.&lt;/p&gt;

&lt;h2&gt;
  
  
  GYB
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/GAM-team/got-your-back/wiki#Introduction" rel="noopener noreferrer"&gt;Got Your Back (GYB)&lt;/a&gt; is a command line tool that backs up and restores your Gmail account. GYB also works with Google Workspace (formerly G Suite / Google Apps) accounts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Windows Users:&lt;br&gt;&lt;br&gt;
Download Setup.msi from the &lt;a href="https://github.com/GAM-team/got-your-back/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Mac and Linux Users:&lt;br&gt;&lt;br&gt;
Open a terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash &amp;lt;(curl -s -S -L https://git.io/gyb-install)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will download GYB, install it, and start the setup. Follow the instructions to complete it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backup Gmail
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gyb --email myoldgmail@gmail.com --action backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a local backup directory (named GYB-GMail-Backup-myoldgmail@gmail.com by default) and downloads all the messages into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Workspace Setup
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;a href="https://console.developers.google.com/flows/enableapi?apiid=drive,gmail,groupsmigration" rel="noopener noreferrer"&gt;Google Developers Console&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Select Yes and click "Agree and continue". It will take a moment for the project to be created.&lt;/li&gt;
&lt;li&gt;Click "Go to credentials"&lt;/li&gt;
&lt;li&gt;Click "New credentials" and choose "Service account key".&lt;/li&gt;
&lt;li&gt;Click "Select..." and choose "New service account".&lt;/li&gt;
&lt;li&gt;Give your service account a name like "GYB Service account".&lt;/li&gt;
&lt;li&gt;Keep JSON as the key type. Click "Create".&lt;/li&gt;
&lt;li&gt;Agree to create the service account without a role. Copy the client ID.&lt;/li&gt;
&lt;li&gt;Your browser will download a .json file. Save the file with the name oauth2service.json and put it in the same folder as gyb.py or gyb.exe.&lt;/li&gt;
&lt;li&gt;Go to &lt;a href="https://admin.google.com/ac/owl/domainwidedelegation" rel="noopener noreferrer"&gt;Domain-wide Delegation in your Google Workspace Admin console&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click "Add New".&lt;/li&gt;
&lt;li&gt;For Client ID, enter the Client ID from above.&lt;/li&gt;
&lt;li&gt;For API Scopes, enter exactly:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://mail.google.com/,https://www.googleapis.com/auth/apps.groups.migration,https://www.googleapis.com/auth/drive.appdata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Click "Authorize". Now, your service account setup is complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start Migration
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gyb --local-folder GYB-GMail-Backup-myoldgmail@gmail.com --action restore-group --use-admin admin@domain.com --service-account --email yourgroup@domain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command syncs your local backup into a specified Google Group.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we learned how to move the messages in your Gmail account to the Google group using GYB.&lt;/p&gt;

&lt;p&gt;See you in the next article. 👻&lt;/p&gt;

</description>
      <category>googleworkspace</category>
      <category>gmail</category>
      <category>gyb</category>
    </item>
  </channel>
</rss>
