<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harsh Viradia</title>
    <description>The latest articles on DEV Community by Harsh Viradia (@viradiaharsh).</description>
    <link>https://dev.to/viradiaharsh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1105189%2Fd7a7302a-33c4-4f3d-beac-5186815584d3.png</url>
      <title>DEV Community: Harsh Viradia</title>
      <link>https://dev.to/viradiaharsh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/viradiaharsh"/>
    <language>en</language>
    <item>
      <title>Amazon ECR Pull Through Cache (PTC)</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Sat, 18 Apr 2026 06:07:09 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/amazon-ecr-pull-through-cache-ptc-2p22</link>
      <guid>https://dev.to/viradiaharsh/amazon-ecr-pull-through-cache-ptc-2p22</guid>
      <description>&lt;h3&gt;
  
  
  What is ECR Pull Through Cache?
&lt;/h3&gt;

&lt;p&gt;Normally, to use an external image in a private environment, we have to manually download it and push it to ECR. With Pull Through Cache, we can simply pull the image through our ECR URL: AWS ECR automatically fetches it from the upstream registry, caches it in our private registry, and keeps it up to date with the latest version. &lt;/p&gt;

&lt;h3&gt;
  
  
  Which public registries are supported?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Registries that need no auth, like AWS Public ECR, the Kubernetes registry, and Quay. &lt;/li&gt;
&lt;li&gt;Authenticated registries like Docker Hub, Azure ACR, GHCR, GitLab SaaS, and Chainguard. &lt;/li&gt;
&lt;li&gt;Even cross-account AWS ECR, though it requires IAM authentication. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How Does It Work?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a rule defining which upstream registry we want to sync; for example, Docker Hub. &lt;/li&gt;
&lt;li&gt;Pull an image using our private ECR URL:
&lt;code&gt;&amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/docker-hub/library/nginx:latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The rest is automatic: ECR creates the repository for us, stores the image, and checks the upstream every 24 hours for updates. &lt;/li&gt;
&lt;/ol&gt;
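&lt;p&gt;As a rough sketch, here is how the rule and the pull URL might look with boto3. The prefix name, secret ARN, and account details below are placeholders; Docker Hub is an authenticated upstream, so ECR expects the credentials in a Secrets Manager secret.&lt;/p&gt;

```python
def create_docker_hub_rule(prefix, secret_arn, region):
    """Create a pull-through cache rule for Docker Hub (sketch, not invoked here)."""
    import boto3  # imported lazily; a real call needs AWS credentials
    ecr = boto3.client("ecr", region_name=region)
    return ecr.create_pull_through_cache_rule(
        ecrRepositoryPrefix=prefix,                  # e.g. "docker-hub"
        upstreamRegistryUrl="registry-1.docker.io",  # Docker Hub upstream
        credentialArn=secret_arn,                    # Secrets Manager secret with Docker Hub creds
    )

def cached_image_uri(account_id, region, prefix, image):
    """Build the private ECR URI used to pull an upstream image through the cache."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{prefix}/{image}"

# e.g. 123456789012.dkr.ecr.us-east-1.amazonaws.com/docker-hub/library/nginx:latest
print(cached_image_uri("123456789012", "us-east-1", "docker-hub", "library/nginx:latest"))
```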

&lt;h3&gt;
  
  
  Why Is It a Good Feature?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Performance: once an image is cached in ECR, we pull it directly and privately, with no trip over the internet. &lt;/li&gt;
&lt;li&gt;Security: because the image lives in ECR, we can use ECR's built-in scanning tools and apply our own lifecycle policies. &lt;/li&gt;
&lt;li&gt;Reliability: if the upstream registry goes down, our image is still available. Every Kubernetes engineer has felt this pain; when Docker removed many images from its registry, it created huge chaos. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What's Bingo here?
&lt;/h3&gt;

&lt;p&gt;ECR Pull Through Cache eliminates the manual download, re-tag, and push workflow: it gives us the convenience of public registries with the security and speed of AWS's private network and environment. &lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/pull-through-cache.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECR/latest/userguide/pull-through-cache.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2026/04/amazon-ecr-pull-through-cache-referrers/" rel="noopener noreferrer"&gt;https://aws.amazon.com/about-aws/whats-new/2026/04/amazon-ecr-pull-through-cache-referrers/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Visit me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://harshviradia.vercel.app/" rel="noopener noreferrer"&gt;https://harshviradia.vercel.app/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/harsh-viradia/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/harsh-viradia/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>devops</category>
      <category>news</category>
    </item>
    <item>
      <title>How I Reduced AWS Infrastructure Cost by 40% in Two Quarters</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Wed, 11 Mar 2026 13:34:06 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/how-i-reduced-aws-infrastructure-cost-by-40-in-two-quarters-1pfe</link>
      <guid>https://dev.to/viradiaharsh/how-i-reduced-aws-infrastructure-cost-by-40-in-two-quarters-1pfe</guid>
      <description>&lt;p&gt;Over the last two quarters, I focused heavily on optimizing my AWS infrastructure by applying cloud cost optimization best practice. Through systematic analysis, observing cost spend and analyze spend report, I was able to reduce AWS infrastructure cost by nearly 40% without impacting system reliability or performance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmzrfzxjc491og0074hw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmzrfzxjc491og0074hw.png" alt=" " width="448" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, I am going to share the approach I followed and the optimizations that delivered the biggest savings. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;Before October 2025, our monthly AWS cost was around 60K and was spiking by nearly 15% month over month. While the architecture was stable and scalable, there was significant room for cost optimization. &lt;/p&gt;

&lt;p&gt;The goal was clear: reduce infrastructure costs without compromising performance, reliability or scalability. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Gaining Cost Visibility
&lt;/h3&gt;

&lt;p&gt;The very first step was understanding where the money was actually going. &lt;/p&gt;

&lt;p&gt;With the help of the AWS Billing console and Cost Explorer, I found the major culprits behind the high cost: S3 and CloudFront.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Determine The Underlying Issue
&lt;/h3&gt;

&lt;p&gt;After carefully observing the S3 spend, I found a spike in S3 storage as well as an increasing curve in S3 GET requests. &lt;/p&gt;

&lt;p&gt;With these details in hand, I did a deeper analysis of the application and found a bug that was generating duplicate images, and this was the reason the S3 cost spiked. &lt;/p&gt;

&lt;p&gt;In figures, S3 storage went from 100TB to 600TB, and because the buckets were multi-region, the data-transfer cost was at its peak. &lt;/p&gt;

&lt;h4&gt;
  
  
  This shows that a small bug in the application can cost a huge amount!
&lt;/h4&gt;

&lt;h3&gt;
  
  
  Step 3: What I did to bring cost down again?
&lt;/h3&gt;

&lt;p&gt;Since it was an application bug, the first thing was to fix it, which we did with the developers' help. The big challenge then was deleting the duplicate images without increasing cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09z4ugc6iw7q8w388j13.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09z4ugc6iw7q8w388j13.gif" alt=" " width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because of this duplication bug and versioning, the object count was in the &lt;strong&gt;quadrillions&lt;/strong&gt;. Removing that many images can itself cost money: a naive delete call would first have to list every object, and we all know AWS charges for listing objects. So simply deleting everything was not an option; what else could I do 🤔&lt;/p&gt;

&lt;p&gt;The good thing was that the database already recorded which keys were required, i.e., which object was the main image and which were duplicates. I wrote one PSQL query that produced a CSV file containing the keys of all the duplicate images. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshkag44ofy3zan3sbnfq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshkag44ofy3zan3sbnfq.gif" alt=" " width="498" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then I wrote a Python script that deletes every image listed in the CSV file. The objects numbered in the trillions, so I wrote the script to delete roughly 50,000 objects per second. And it worked: the script marked trillions of objects as deleted in just a few days, and S3 lifecycle policies removed them permanently.&lt;/p&gt;
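&lt;p&gt;A minimal sketch of such a delete script, under assumed names: read the keys from the CSV, batch them (delete_objects accepts at most 1,000 keys per call), and fan the batches out over a thread pool. The bucket name, CSV path, and worker count are placeholders.&lt;/p&gt;

```python
import csv
from concurrent.futures import ThreadPoolExecutor

BATCH = 1000  # S3 delete_objects accepts at most 1,000 keys per request

def load_keys(csv_path):
    """Read the duplicate-object keys exported by the PSQL query."""
    with open(csv_path, newline="") as f:
        return [row[0] for row in csv.reader(f) if row]

def batches(keys, size=BATCH):
    """Split the key list into delete_objects-sized chunks."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

def delete_all(bucket, csv_path, workers=32):
    """Fan batched deletes out over a thread pool (sketch, not invoked here)."""
    import boto3  # imported lazily; a real run needs AWS credentials
    s3 = boto3.client("s3")

    def delete_batch(chunk):
        # Quiet=True suppresses the per-key success entries in the response
        s3.delete_objects(
            Bucket=bucket,
            Delete={"Objects": [{"Key": k} for k in chunk], "Quiet": True},
        )

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(delete_batch, batches(load_keys(csv_path))))
```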

&lt;p&gt;There were still some grey areas in the application, which we kept working on and fixing to continuously save cost. After the hard work in November and the cleanup in December, we finally saw a good result in January, when the monthly cost dropped below 45K. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Result
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS Cost reduced by ~40%&lt;/li&gt;
&lt;li&gt;No degradation in performance and reliability&lt;/li&gt;
&lt;li&gt;Fine-tuned the application and infrastructure without increasing cost.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Lessons Learned
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;A monthly cost analysis helps you keep track of spend.&lt;/li&gt;
&lt;li&gt;Maximize visibility into your costs; it is a magic wand for reducing them.&lt;/li&gt;
&lt;li&gt;Small steps compound over time. &lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  For me this journey resulted in a 40% reduction in AWS costs in just 3-4 months, and it reinforced the importance of continuous cloud optimization.
&lt;/h4&gt;

</description>
      <category>finops</category>
      <category>devops</category>
      <category>aws</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Designed a Global Image Optimization System with &lt;1s P95 Latency Across Regions</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Thu, 29 Jan 2026 09:21:51 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/designed-a-global-image-optimization-system-with-1s-p95-latency-across-regions-33hg</link>
      <guid>https://dev.to/viradiaharsh/designed-a-global-image-optimization-system-with-1s-p95-latency-across-regions-33hg</guid>
      <description>&lt;p&gt;In this blog I am going to write how we can reduce the Image latency by using CloudFront and S3 bucket multi region architecture. We are going to use few services of the AWS like AWS CloudFront, S3 bucket, CloudFront Function, AWS Lambda, API Gateway, Route53.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblplnxqywv6yqvwol80a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblplnxqywv6yqvwol80a.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In enterprise architectures, especially e-commerce, images are a main contributor to latency. When end users are spread across the world and we cannot replicate the whole infrastructure everywhere due to cost, we can instead shrink the images by converting them to the WebP format and serve them through a CDN. This can help reduce latency by almost 98%.&lt;/p&gt;

&lt;p&gt;In this architecture we use a handful of AWS services to achieve this goal. In the diagram there are three S3 buckets, one CloudFront distribution with a CloudFront Function, a Lambda function that converts images to WebP and stores them in S3, and API Gateway, because a Lambda function cannot be attached directly as a CloudFront origin.&lt;/p&gt;

&lt;p&gt;About the three S3 buckets: one is the main bucket where all the original images are stored; the other two are "transformed" buckets, hosted in two different regions, where all the transformed images land. A multi-region access point sits in front of the two transformed buckets and serves as one of the CloudFront origins. For simplicity, let's say the main bucket is in North Virginia, one transformed bucket is also in North Virginia, and the second is in the Asia Pacific region.&lt;/p&gt;

&lt;p&gt;When a user requests an image, a CloudFront Function (an edge function) checks whether the user's browser supports the WebP format and rewrites the URL accordingly: if WebP is supported, it appends format=webp to the request; otherwise it appends format=jpeg.&lt;/p&gt;
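&lt;p&gt;The actual edge logic runs as a CloudFront Function written in JavaScript; purely as an illustration, the same format negotiation expressed in Python could look like this (the function names are mine):&lt;/p&gt;

```python
def pick_format(accept_header):
    """Decide which image format the viewer can take, from the Accept header."""
    if "image/webp" in accept_header.lower():
        return "webp"
    return "jpeg"

def rewrite_uri(uri, accept_header):
    """Append the negotiated format as a query string, as the edge function does."""
    return f"{uri}?format={pick_format(accept_header)}"

# A browser advertising WebP support gets the rewritten URI with format=webp
print(rewrite_uri("/img/shoe.jpg", "image/avif,image/webp,*/*"))
```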

&lt;p&gt;Once the URL is rewritten, the request goes to CloudFront, where two origins are configured: the multi-region access point of the transformed buckets, and the Lambda function behind API Gateway as a failover. If the requested image is not available in either transformed bucket, the request fails and routes to the Lambda function, which picks the original from the main bucket, converts it to WebP, stores it in the transformed buckets, and serves it via the CDN.&lt;/p&gt;
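&lt;p&gt;A hedged sketch of such a Lambda handler: the bucket names, event shape, and helper names are all illustrative, and the real function would need Pillow bundled into the deployment package.&lt;/p&gt;

```python
import io

def to_webp(image_bytes, quality=80):
    """Re-encode an image as WebP. Pillow is imported lazily so this module
    loads even where Pillow is absent; the Lambda bundle must ship it."""
    from PIL import Image
    buf = io.BytesIO()
    Image.open(io.BytesIO(image_bytes)).save(buf, format="WEBP", quality=quality)
    return buf.getvalue()

def transformed_key(source_key):
    """Map the original object key to its WebP counterpart."""
    stem = source_key.rsplit(".", 1)[0]
    return f"{stem}.webp"

def handler(event, context):
    """On a CDN miss: fetch from the main bucket, convert, store, redirect (sketch)."""
    import boto3  # imported lazily; a real run needs AWS credentials
    s3 = boto3.client("s3")
    key = event["pathParameters"]["key"]  # assumed API Gateway event shape
    original = s3.get_object(Bucket="main-bucket", Key=key)["Body"].read()
    s3.put_object(
        Bucket="transformed-bucket",
        Key=transformed_key(key),
        Body=to_webp(original),
        ContentType="image/webp",
    )
    return {"statusCode": 302, "headers": {"Location": "/" + transformed_key(key)}}
```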

&lt;p&gt;At first glance this looks like a lengthy process, but it is not: only the very first call has extra latency. After that, every image is served within milliseconds, in some cases 1-2 ms.&lt;/p&gt;

&lt;p&gt;That first call is known as a cold start, and there is a way to remove it: add one more S3 bucket, a replica of the main bucket, hosted in the Asia Pacific region, and create a multi-region access point over those two buckets. Then replicate the Lambda function and API Gateway in the Asia Pacific region as well. With two API Gateways, pick one domain and create a latency-based routing record in Route 53. Requests coming from Asia are then routed to the Asian buckets, which can cut cold-start latency by 30-40%.&lt;/p&gt;

&lt;p&gt;With this approach, image latency comes down considerably. This was the infrastructure-side approach, but better options exist: frameworks like Next.js, Spree, and Ruby on Rails natively support image transformation in code, which is faster. Given the choice, I suggest using the native support, as it yields better latency than this external approach.&lt;/p&gt;

&lt;p&gt;Here is the GitHub link for the Lambda code and the URL-rewrite logic of the CloudFront Function. For any suggestions, feel free to contact me via my LinkedIn profile.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/harsh-viradia/Global-Image-Optimization-System" rel="noopener noreferrer"&gt;https://github.com/harsh-viradia/Global-Image-Optimization-System&lt;/a&gt;&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/harsh-viradia/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/harsh-viradia/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
    <item>
      <title>Optimizing Kubernetes Scaling with KEDA: Balancing Performance and Cost Efficiency</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Tue, 25 Feb 2025 09:52:08 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/optimizing-kubernetes-scaling-with-keda-balancing-performance-and-cost-efficiency-1n3j</link>
      <guid>https://dev.to/viradiaharsh/optimizing-kubernetes-scaling-with-keda-balancing-performance-and-cost-efficiency-1n3j</guid>
      <description>&lt;p&gt;Automation is at the core of our daily responsibilities as a DevOps engineer. In the Kubernetes ecosystem, automation is crucial in optimizing workloads, ensuring scalability, and maintaining cost efficiency. One of the most impactful areas where automation can be leveraged is scaling applications based on real-time demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: Scaling Event-Driven Applications
&lt;/h2&gt;

&lt;p&gt;Many applications operate on an event-driven or request-driven architecture, where computational resources are required only when a request arrives. Ideally, the application should scale up when demand spikes and scale down when inactive to optimize resource consumption. Kubernetes, by default, supports auto-scaling mechanisms such as Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler, but these solutions typically maintain a minimum number of running pods at all times.&lt;/p&gt;

&lt;p&gt;However, in scenarios where applications need to scale down to zero when idle and instantly scale up upon request, traditional scaling mechanisms fall short. This is where KEDA (Kubernetes Event-Driven Autoscaling) provides an elegant solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing KEDA: Event-Driven Scaling for Kubernetes
&lt;/h2&gt;

&lt;p&gt;KEDA enables Kubernetes to scale applications based on external event sources, such as message queues, HTTP requests, or cloud-native messaging systems. Unlike HPA, which scales pods based on CPU and memory usage, KEDA can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain zero replicas during idle periods, reducing costs.&lt;/li&gt;
&lt;li&gt;Scale applications from zero to the required number of replicas when an event is detected.&lt;/li&gt;
&lt;li&gt;Automatically adjust scaling based on real-time demand.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real-World Use Case: Scaling Kubernetes Pods with GCP Pub/Sub
&lt;/h2&gt;

&lt;p&gt;Consider an application that processes messages from Google Cloud Pub/Sub, a robust queuing service. With KEDA, Kubernetes pods can scale dynamically based on the number of messages in the queue. When there are no messages, the system runs with zero replicas, consuming no resources. As messages arrive, KEDA scales up pods in response to the workload, ensuring efficient resource utilization.&lt;/p&gt;
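&lt;p&gt;For the Pub/Sub case, a KEDA ScaledObject might look like the following, shown here as the Python dict you would hand to the Kubernetes client. Every name and threshold is illustrative and must be adapted to your subscription and credential setup.&lt;/p&gt;

```python
# A KEDA ScaledObject for the GCP Pub/Sub case, as a plain Python dict
# (the same structure you would apply as YAML or via the dynamic client).
scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "pubsub-worker", "namespace": "workers"},
    "spec": {
        "scaleTargetRef": {"name": "pubsub-worker"},  # Deployment to scale
        "minReplicaCount": 0,   # scale to zero when the queue is empty
        "maxReplicaCount": 20,
        "triggers": [
            {
                "type": "gcp-pubsub",
                "metadata": {
                    "subscriptionName": "orders-sub",  # your subscription
                    "value": "5",  # target messages per replica
                },
                "authenticationRef": {"name": "gcp-credentials"},
            }
        ],
    },
}
```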

&lt;h3&gt;
  
  
  The Challenge of Cold Start in Heavy Applications
&lt;/h3&gt;

&lt;p&gt;While KEDA provides cost-effective scaling, it introduces a potential latency issue for applications that require immediate responsiveness. For instance, if a machine learning (ML) application must process a request within one minute, relying entirely on KEDA-based scaling from zero replicas might not be viable.&lt;/p&gt;

&lt;p&gt;This is because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes first needs to provision a node (if using a cluster autoscaler).&lt;/li&gt;
&lt;li&gt;Once the node is available, the application pod must be scheduled and started.&lt;/li&gt;
&lt;li&gt;If the application is resource-intensive, the pod initialization could take several minutes.&lt;/li&gt;
&lt;li&gt;This cold start latency can delay responses, making it unsuitable for real-time, latency-sensitive applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Solving the Cold Start Challenge with Hybrid Automation
&lt;/h2&gt;

&lt;p&gt;To mitigate this issue, we can implement a hybrid automation strategy that combines both proactive and reactive scaling. The key is to predict demand patterns and optimize pod availability accordingly.&lt;/p&gt;

&lt;h3&gt;
  
  
  High-Traffic Hours Scaling (8:00 AM EST – 9:00 PM EST)
&lt;/h3&gt;

&lt;p&gt;During peak hours, when consistent traffic is expected, we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain at least one active replica to eliminate cold start delays.&lt;/li&gt;
&lt;li&gt;Utilize Horizontal Pod Autoscaler (HPA) to dynamically scale pods based on CPU and memory utilization.&lt;/li&gt;
&lt;li&gt;Ensure seamless user experience by keeping production services highly responsive.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Off-Peak Hours Scaling (9:00 PM EST – 8:00 AM EST)
&lt;/h3&gt;

&lt;p&gt;During low-traffic hours, we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rely on KEDA event-driven scaling to minimize costs.&lt;/li&gt;
&lt;li&gt;Set the idle replica count to zero, ensuring no resources are consumed when there are no incoming requests.&lt;/li&gt;
&lt;li&gt;Automatically scale up pods when messages arrive in the queue, responding to real-time demand.&lt;/li&gt;
&lt;/ul&gt;
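&lt;p&gt;The time-window policy behind the two schedules above boils down to a tiny helper, sketched here in Python; a scheduled job could use it to patch the floor replica count:&lt;/p&gt;

```python
def min_replicas(hour_est):
    """Floor replica count for a given EST hour (0-23).

    8:00 AM - 9:00 PM EST (peak): keep one warm replica so there is no
    cold start; off-peak: let KEDA scale the workload down to zero.
    """
    return 1 if hour_est in range(8, 21) else 0
```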

&lt;h2&gt;
  
  
  Implementation: Automating Scaling with Terraform, Kubernetes Manifests, and CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;To implement this solution effectively, we can leverage Infrastructure as Code (IaC) tools such as Terraform and Kubernetes manifests, combined with CI/CD pipelines for seamless automation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Terraform for Infrastructure Automation
&lt;/h3&gt;

&lt;p&gt;Terraform can be used to define and provision Kubernetes clusters with autoscaling enabled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define node groups and auto-scaling policies.&lt;/li&gt;
&lt;li&gt;Deploy KEDA and HPA configurations using Terraform modules.&lt;/li&gt;
&lt;li&gt;Automate infrastructure changes based on time-based triggers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using Kubernetes Manifests for Dynamic Scaling
&lt;/h3&gt;

&lt;p&gt;Kubernetes manifests can define the deployment and scaling behavior:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use KEDA ScaledObject to define event-driven autoscaling.&lt;/li&gt;
&lt;li&gt;Configure HPA to manage scaling during peak hours.&lt;/li&gt;
&lt;li&gt;Utilize CronJobs to trigger scaling adjustments based on time windows.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Leveraging CI/CD Pipelines for Automation
&lt;/h3&gt;

&lt;p&gt;A CI/CD pipeline can automate scaling adjustments and deployments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy Terraform infrastructure changes via GitLab CI/CD or GitHub Actions.&lt;/li&gt;
&lt;li&gt;Automate KEDA configurations based on business hours using scheduled pipeline jobs.&lt;/li&gt;
&lt;li&gt;Monitor performance metrics and adjust scaling thresholds dynamically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As DevOps engineers, embracing automation and intelligent scaling strategies is not just an advantage—it’s a necessity in today's dynamic cloud-native landscape. Implementing these solutions empowers organizations to enhance their Kubernetes scalability, reduce cloud costs, and optimize application performance.&lt;/p&gt;

&lt;p&gt;Thank you for reading the blog!&lt;br&gt;
Content Copyright reserved by Author Harsh Viradia.&lt;br&gt;
Contact: &lt;a href="https://www.linkedin.com/in/harsh-viradia/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/harsh-viradia/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>automation</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Streamline Kubernetes Management with Amazon EKS Hybrid Nodes</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Fri, 06 Dec 2024 06:20:56 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/streamline-kubernetes-management-with-amazon-eks-hybrid-nodesamazon-eks-hybrid-nodes-5ee6</link>
      <guid>https://dev.to/viradiaharsh/streamline-kubernetes-management-with-amazon-eks-hybrid-nodesamazon-eks-hybrid-nodes-5ee6</guid>
      <description>&lt;p&gt;Amazon EKS Hybrid Nodes extends the power and flexibility of Amazon Elastic Kubernetes Service (EKS) to your on-premises and edge environments. With this capability, AWS manages the Kubernetes control plane, while you retain control over the infrastructure that powers your hybrid nodes. This enables seamless integration of your on-premises and edge workloads into Amazon EKS clusters, simplifying operations and unifying Kubernetes management across environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of Amazon EKS Hybrid Nodes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Seamless Integration Across Environments
&lt;/h3&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes support a wide range of on-premises hardware or virtual machines, allowing you to bring the scalability and reliability of Amazon EKS to wherever your applications run. You can leverage features such as Amazon EKS add-ons, Pod Identity, cluster access entries, cluster insights, and extended Kubernetes version support, providing a consistent experience regardless of the deployment location.&lt;/p&gt;

&lt;h3&gt;
  
  
  Native AWS Service Integration
&lt;/h3&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes integrate seamlessly with AWS services, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Systems Manager&lt;/strong&gt; for centralized management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon GuardDuty&lt;/strong&gt; for enhanced security monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon CloudWatch&lt;/strong&gt; and Amazon Managed Service for Prometheus for observability and monitoring.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Flexible Pricing
&lt;/h3&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes operate on a pay-as-you-go pricing model, billed hourly for vCPU usage while nodes are connected to an EKS cluster. This flexibility ensures you only pay for what you use, with no upfront commitments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operational Insights and Requirements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Networking and Connectivity
&lt;/h3&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes require a stable connection between your on-premises infrastructure and AWS. Supported networking configurations include AWS Site-to-Site VPN and AWS Direct Connect, ensuring reliable communication with the EKS control plane. However, hybrid nodes are currently limited to IPv4 address families and cannot operate in disconnected, disrupted, or limited environments. For such scenarios, consider Amazon EKS Anywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure Flexibility
&lt;/h2&gt;

&lt;p&gt;The service adopts a "bring-your-own-infrastructure" approach. You can deploy hybrid nodes on physical or virtual machines with x86 or ARM architectures. Supported operating systems include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Linux 2023 (AL2023) (in virtualized environments such as VMware or KVM),&lt;/li&gt;
&lt;li&gt;Ubuntu (20.04, 22.04, 24.04),&lt;/li&gt;
&lt;li&gt;Red Hat Enterprise Linux (RHEL) (versions 8 and 9).
You must manage the provisioning, maintenance, and security of these nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security Enhancements
&lt;/h2&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes offer robust security features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM Roles Anywhere and AWS Systems Manager hybrid activations for authentication.&lt;/li&gt;
&lt;li&gt;Support for OIDC authentication and IAM Roles for Service Accounts (IRSA), enabling fine-grained access control for Pods.&lt;/li&gt;
&lt;li&gt;Integration with Amazon GuardDuty for EKS protection.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Kubernetes Compatibility
&lt;/h2&gt;

&lt;p&gt;Hybrid nodes align with the standard Kubernetes version lifecycle of Amazon EKS. However, they require the creation of new Amazon EKS clusters for deployment and cannot be added to existing clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add-ons and Observability
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Networking and Load Balancing
&lt;/h3&gt;

&lt;p&gt;Hybrid nodes do not support the AWS VPC CNI plugin but are compatible with Cilium and Calico for container networking. For ingress and load balancing, you can use the AWS Load Balancer Controller to set up Application Load Balancers (ALB) or Network Load Balancers (NLB).&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics and Logs
&lt;/h3&gt;

&lt;p&gt;Monitoring is simplified with tools such as Amazon Managed Prometheus, AWS Distro for OpenTelemetry (ADOT), and Amazon CloudWatch Observability Agent. These provide comprehensive visibility into hybrid node and application performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Simplified Management
&lt;/h3&gt;

&lt;p&gt;The nodeadm CLI facilitates hybrid node installation, configuration, and uninstallation. Cluster management remains consistent with existing Amazon EKS tools, including the AWS Management Console, API, SDKs, and popular tools like eksctl, CloudFormation, and Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations and Best Practices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Unsupported Deployments: Hybrid nodes cannot run on AWS infrastructure like AWS Regions, Local Zones, or Outposts. For these scenarios, use Amazon EC2 managed nodes, self-managed nodes, or AWS Fargate.&lt;/li&gt;
&lt;li&gt;Network Reliability: Ensure robust connectivity between hybrid nodes and AWS control planes. Avoid using hybrid nodes in environments prone to disconnections.&lt;/li&gt;
&lt;li&gt;Cluster Endpoint Access: Use "Public" or "Private" cluster endpoint access configurations, but not both, to prevent DNS resolution issues.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  A Comprehensive Kubernetes Solution
&lt;/h2&gt;

&lt;p&gt;Amazon EKS Hybrid Nodes bridge the gap between cloud and on-premises deployments, enabling a unified Kubernetes management experience. Whether you are scaling edge applications or integrating with existing infrastructure, Amazon EKS Hybrid Nodes provide a flexible, secure, and efficient solution for modern Kubernetes workloads.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>news</category>
    </item>
    <item>
      <title>Introducing the AWS CDK L2 Construct: Simplified Security for Amazon CloudFront with Origin Access Control (OAC)</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Thu, 14 Nov 2024 15:07:31 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/introducing-the-aws-cdk-l2-construct-simplified-security-for-amazon-cloudfront-with-origin-access-94h</link>
      <guid>https://dev.to/viradiaharsh/introducing-the-aws-cdk-l2-construct-simplified-security-for-amazon-cloudfront-with-origin-access-94h</guid>
      <description>&lt;p&gt;AWS recently rolled out a new L2 construct in the AWS Cloud Development Kit (CDK) specifically for CloudFront Origin Access Control (OAC). This addition aims to make it easier for developers to secure Amazon S3 origins with CloudFront using modern security practices.&lt;/p&gt;

&lt;p&gt;With the increased focus on secure, scalable architectures, OAC has become the go-to method for securing CloudFront distributions, surpassing the legacy Origin Access Identity (OAI) in both functionality and security. Let's dive into how this new construct works, its benefits, and the migration from OAI to OAC.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS CDK and Why Use Constructs?
&lt;/h2&gt;

&lt;p&gt;The AWS CDK is an open-source framework that allows you to define cloud infrastructure in code, primarily using languages like TypeScript, Python, and Java. CDK applications are based on constructs—modular building blocks that encapsulate resources and their configurations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Level 1: Direct mappings to CloudFormation resources, without added abstraction.&lt;/li&gt;
&lt;li&gt;Level 2: These offer higher-level abstractions, with an intent-based API that simplifies AWS service integration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;L2 constructs streamline the configuration process by embedding best practices and intuitive defaults, and that's precisely what the new OAC construct brings to the table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Origin Access Control (OAC) Matters for CloudFront Security
&lt;/h2&gt;

&lt;p&gt;Amazon CloudFront is a global content delivery network (CDN) designed to reduce latency by caching data closer to users. For a more secure setup, CloudFront can be configured to use only trusted origins like Amazon S3, Lambda function URLs, or custom servers. OAC, introduced in 2022, is the recommended way to secure CloudFront distributions, providing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Security&lt;/strong&gt;: OAC supports AWS Key Management Service (KMS) server-side encryption (SSE-KMS), short-term credentials with frequent rotation, and all AWS Regions, including those launched after December 2022.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More Functionality&lt;/strong&gt;: Unlike OAI, OAC supports dynamic requests like PUT and DELETE, making it adaptable for modern, interactive applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built-in Flexibility&lt;/strong&gt;: OAC seamlessly manages policy configurations, eliminating the need for low-level escape hatches in CDK.&lt;br&gt;
OAC provides the ability to restrict direct access to S3 buckets, ensuring access only through CloudFront, where additional security measures (such as AWS WAF) can be applied.&lt;/p&gt;
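
&lt;p&gt;Concretely, the bucket policy that enforces this looks roughly like the following. This is a minimal sketch of the policy OAC manages on your behalf; the bucket name, account ID, and distribution ID are placeholders.&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-origin-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```

&lt;p&gt;The &lt;code&gt;AWS:SourceArn&lt;/code&gt; condition pins the grant to one specific distribution, so no other CloudFront distribution (or direct caller) can read the bucket.&lt;/p&gt;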

&lt;h2&gt;
  
  
  Enhanced Security with SSE-KMS Encryption
&lt;/h2&gt;

&lt;p&gt;Encrypting S3 objects is a best practice, particularly when sensitive data is involved. The new OAC L2 construct makes it easy to use KMS encryption, automatically updating policies to allow CloudFront access to KMS-encrypted objects.&lt;/p&gt;
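
&lt;p&gt;As a minimal sketch (assuming aws-cdk-lib v2 with the new &lt;code&gt;S3BucketOrigin.withOriginAccessControl&lt;/code&gt; API; stack and resource names are placeholders), wiring a KMS-encrypted bucket to a distribution looks roughly like this. The construct updates both the bucket policy and the key policy for you:&lt;/p&gt;

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as kms from 'aws-cdk-lib/aws-kms';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

export class OacStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // SSE-KMS encrypted origin bucket
    const key = new kms.Key(this, 'BucketKey');
    const bucket = new s3.Bucket(this, 'OriginBucket', {
      encryption: s3.BucketEncryption.KMS,
      encryptionKey: key,
    });

    // withOriginAccessControl creates the OAC and grants CloudFront
    // access in both the bucket policy and the KMS key policy
    new cloudfront.Distribution(this, 'Distribution', {
      defaultBehavior: {
        origin: origins.S3BucketOrigin.withOriginAccessControl(bucket),
      },
    });
  }
}
```

&lt;p&gt;Compare this with the OAI-era setup, which often required escape hatches to patch the generated policies by hand.&lt;/p&gt;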

&lt;h2&gt;
  
  
  Migrating from Origin Access Identity (OAI) to Origin Access Control (OAC)
&lt;/h2&gt;

&lt;p&gt;If you’re currently using OAI, migrating to OAC may seem daunting, but the new construct is designed to minimize downtime. The migration process generally involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First Deployment: Update the S3 bucket policy to allow both OAI and OAC access.&lt;/li&gt;
&lt;li&gt;Second Deployment: Switch to the new OAC-based L2 construct.&lt;/li&gt;
&lt;li&gt;Final Clean-Up: Remove OAI-specific code and bucket policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Benefits of the New OAC L2 Construct
&lt;/h2&gt;

&lt;p&gt;In summary, the new L2 construct for OAC provides several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Setup&lt;/strong&gt;: A high-level interface that minimizes configuration complexity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSE-KMS Support&lt;/strong&gt;: Simplified permissions management for KMS-encrypted S3 buckets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Customization&lt;/strong&gt;: Easily adjust default settings like signing protocols and permissions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smooth Migration Path&lt;/strong&gt;: Built-in tools to transition smoothly from OAI to OAC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The OAC L2 construct currently supports only Amazon S3 as an origin. For other origin types, AWS encourages feedback in the GitHub repository, where you can request additional features, such as support for Lambda@Edge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This new L2 construct is a step forward in making secure, scalable CloudFront-S3 integrations accessible to all. Whether you’re a beginner in CDK or a seasoned DevOps professional, the OAC construct enables robust security with minimal configuration, setting you up for success in your cloud journey.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>security</category>
      <category>devops</category>
      <category>programming</category>
    </item>
    <item>
      <title>AWS Certificate Manager to Shift Trust Anchor, Ending Cross-Signature with Starfield Class 2 Root</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Tue, 29 Oct 2024 04:45:25 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/aws-certificate-manager-to-shift-trust-anchor-ending-cross-signature-with-starfield-class-2-root-1pal</link>
      <guid>https://dev.to/viradiaharsh/aws-certificate-manager-to-shift-trust-anchor-ending-cross-signature-with-starfield-class-2-root-1pal</guid>
      <description>&lt;p&gt;As part of an essential security update, AWS Certificate Manager (ACM) will adjust its public certificate hierarchy, no longer cross-signing with the GoDaddy Starfield Class 2 (C2) root after August 2024. Moving forward, ACM public certificates will directly terminate at the Starfield Services G2 (G2) root with the trust anchor specified as “C=US, ST=Arizona, L=Scottsdale, O=Starfield Technologies, Inc., CN=Starfield Services Root Certificate Authority – G2.” This change is part of a proactive alignment with future compatibility needs and evolving browser trust policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is AWS Certificate Manager (ACM)?
&lt;/h3&gt;

&lt;p&gt;AWS Certificate Manager (ACM) simplifies and manages the process of provisioning, deploying, and maintaining TLS certificates across AWS services, such as Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon API Gateway. Using certificates from Amazon Trust Services, ACM leverages a structured hierarchy to ensure secure connections across AWS-managed environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background: Trust Chain Hierarchy
&lt;/h3&gt;

&lt;p&gt;Since its launch in 2016, ACM has enhanced certificate compatibility through a cross-signed trust chain with the Starfield Class 2 root to broaden device and browser acceptance. AWS certificates are rooted in Amazon Trust Services, structured under Amazon Root CAs 1 to 4, which were cross-signed by Starfield Services G2, further linking to Starfield Class 2. This initial structure aimed to extend trust through Starfield Class 2, which was widely accepted and trusted at the time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Update Starting August 2024
&lt;/h3&gt;

&lt;p&gt;Beginning in August 2024, ACM-issued certificates will anchor to the Starfield Services G2 root and no longer include the Starfield Class 2 root in the trust chain. The last certificate in the chain provided by ACM will be Starfield Services G2, without Starfield Class 2 cross-signature.&lt;/p&gt;
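
&lt;p&gt;You can check which chain an endpoint actually serves with OpenSSL (the hostname below is a placeholder for your ACM-backed domain). After the change, the last certificate printed should chain to the Starfield Services G2 root rather than carrying the Starfield Class 2 cross-signature:&lt;/p&gt;

```shell
# Print the certificate chain presented by a TLS endpoint;
# inspect the issuer (i:) of the final certificate in the output
echo | openssl s_client -connect example.com:443 -showcerts 2>/dev/null
```
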

&lt;h3&gt;
  
  
  Why the Update? Browser and Root Compatibility Evolution
&lt;/h3&gt;

&lt;p&gt;This change aligns with planned updates in browser and trust policies. GoDaddy, which operates Starfield Class 2, plans to withdraw support for this root, and both Chromium and Mozilla browsers have announced that Starfield Class 2 will lose trust status by April 2025. AWS has secured extended support for Starfield C2 through December 31, 2025, to assist with transition, but due to ACM’s 13-month certificate validity period, this change is being phased in now to ensure smooth continuity and compatibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  How This Affects ACM Users?
&lt;/h3&gt;

&lt;p&gt;AWS expects this adjustment to have minimal impact for most ACM users, given the long-standing trust and compatibility of Amazon-owned roots. Devices and browsers widely recognize Amazon-owned trust anchors, including Starfield Services G2. Amazon Root CAs 1 to 4 are also supported by iOS 11 and above and by Android versions from Gingerbread onward. Consequently, ACM-issued certificates anchored to G2 are expected to remain widely trusted across applications, browsers, and devices.&lt;/p&gt;

&lt;p&gt;This transition underscores ACM’s commitment to security and compatibility in alignment with updated standards and device/browser requirements.&lt;/p&gt;

&lt;p&gt;Thank you for reading the blog!&lt;br&gt;
Content Copyright reserved by Author Harsh Viradia.&lt;br&gt;
Contact: &lt;a href="https://www.linkedin.com/in/harsh-viradia/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/harsh-viradia/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>security</category>
      <category>aws</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Unlocking the Power of AWS ElastiCache with Valkey 7.2: Lower Costs, Serverless Flexibility, and Performance Gains</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Wed, 16 Oct 2024 06:19:21 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/unlocking-the-power-of-aws-elasticache-with-valkey-72-lower-costs-serverless-flexibility-and-performance-gains-5765</link>
      <guid>https://dev.to/viradiaharsh/unlocking-the-power-of-aws-elasticache-with-valkey-72-lower-costs-serverless-flexibility-and-performance-gains-5765</guid>
      <description>&lt;p&gt;In a significant move for developers and enterprises alike, Amazon ElastiCache announced on October 8th, 2024, support for Valkey version 7.2. This update introduces new pricing models that promise substantial savings, with serverless configurations priced 33% lower and node-based clusters priced 20% lower than other supported engines. These cost reductions, coupled with the high performance and flexibility of Valkey, make this a game-changer for organizations looking to optimize their caching strategies. Whether you're running large-scale distributed applications or need high-speed caching for real-time operations, AWS ElastiCache for Valkey offers a fully managed, open-source-powered solution with unparalleled flexibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Valkey?
&lt;/h2&gt;

&lt;p&gt;Valkey is an open-source, high-performance, key-value data store, backed by the Linux Foundation and supported by over 40 companies. It serves as a drop-in replacement for Redis OSS, sharing the same API and operational characteristics but with additional features and enhancements driven by the community and contributors. Valkey, which was developed by long-standing Redis OSS contributors, has gained rapid adoption since its inception in March 2024. This accelerated growth, combined with AWS's active contribution to the project, makes Valkey a compelling choice for organizations that prioritize innovation, performance, and community-driven development.&lt;/p&gt;

&lt;p&gt;With this new release, AWS has integrated Valkey into its ElastiCache service, providing customers with a fully managed caching experience that leverages 13+ years of AWS's operational excellence, security, and reliability. By choosing ElastiCache for Valkey, organizations can now scale their applications effortlessly while enjoying significant cost savings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why ElastiCache for Valkey is a Game-Changer
&lt;/h2&gt;

&lt;p&gt;The introduction of Valkey support in ElastiCache is more than just an incremental update—it’s a major leap forward in cost-efficiency, operational simplicity, and scalability. Here’s why:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Lower Pricing: Maximize Value with Cost Efficiency
&lt;/h3&gt;

&lt;p&gt;One of the most attractive benefits of ElastiCache for Valkey is the significant price reduction. AWS now offers serverless deployments of Valkey at 33% lower prices than other engines, with minimum storage requirements reduced to 100MB, allowing customers to get started for as little as $6 per month. This makes serverless caching affordable even for smaller organizations or teams looking to pilot new projects without committing substantial resources.&lt;/p&gt;

&lt;p&gt;Additionally, for those opting for node-based (self-designed) clusters, Valkey brings up to 20% lower costs compared to other engines. This reduction is critical for enterprises that rely on large-scale caching to support millions of operations per second, as it allows them to reduce operational expenses without sacrificing performance.&lt;/p&gt;

&lt;p&gt;AWS also supports size flexibility for reserved nodes within an instance family and AWS Region. If you are an existing user of ElastiCache with reserved nodes, switching to Valkey from Redis OSS enables you to retain your discounted rates across node sizes, providing further value for your long-term investments.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Serverless Flexibility: Deploy in Under a Minute
&lt;/h3&gt;

&lt;p&gt;ElastiCache for Valkey's serverless option takes ease of use to a new level. Customers can now create a fully operational cache in less than a minute. The serverless deployment option automatically scales based on application demands, removing the need to pre-provision capacity and reducing the risk of over-provisioning or under-utilizing resources.&lt;/p&gt;

&lt;p&gt;Serverless caching not only simplifies the deployment process but also enables businesses to react dynamically to changing workloads, ensuring optimal resource utilization and cost management. Whether you're handling bursty traffic or steady-state operations, the ability to scale seamlessly without manual intervention provides a significant operational advantage.&lt;/p&gt;
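
&lt;p&gt;Assuming the AWS CLI v2 with ElastiCache Valkey support, creating a serverless cache is roughly a two-command affair (the cache name and region are placeholders):&lt;/p&gt;

```shell
# Create a serverless Valkey cache (scales automatically, no capacity planning)
aws elasticache create-serverless-cache \
  --serverless-cache-name my-valkey-cache \
  --engine valkey \
  --region us-east-1

# Once available, fetch the connection endpoint
aws elasticache describe-serverless-caches \
  --serverless-cache-name my-valkey-cache \
  --query 'ServerlessCaches[0].Endpoint'
```
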

&lt;h3&gt;
  
  
  3. Operational Excellence and Performance
&lt;/h3&gt;

&lt;p&gt;Building on AWS’s reputation for operational excellence, ElastiCache for Valkey delivers a fully managed experience that is both secure and reliable. With a 99.99% availability SLA and multi-AZ (Availability Zone) deployments, you can ensure your caching solution is always available and resilient to failures.&lt;/p&gt;

&lt;p&gt;Performance is another area where ElastiCache for Valkey excels. Valkey supports microsecond read and write latencies, ensuring that your most time-sensitive applications continue to perform at their peak. The service is capable of scaling to 500 million requests per second (RPS) on a single node-based cluster, making it ideal for real-time applications, such as gaming, financial services, and IoT systems, where every millisecond counts.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Seamless API Compatibility: A Drop-In Replacement for Redis OSS
&lt;/h3&gt;

&lt;p&gt;One of the standout features of Valkey is its API compatibility with Redis OSS. This allows customers to migrate their existing Redis applications to Valkey with zero code changes. The smooth transition makes it an appealing option for developers and teams who want to leverage Valkey's cost and performance advantages without the hassle of re-architecting their systems.&lt;/p&gt;

&lt;p&gt;In addition to easy migrations, ElastiCache for Valkey also supports zero-downtime upgrades for users currently running ElastiCache for Redis OSS. This means businesses can switch to Valkey without disrupting their operations, enabling them to take advantage of the latest technology without the risk of service outages.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Continuous Innovation with Open Source
&lt;/h3&gt;

&lt;p&gt;Valkey, as an open-source project under the Linux Foundation, benefits from continuous community-driven innovation. AWS's active contributions to the Valkey project ensure that customers not only adopt a stable and reliable solution today but also have access to ongoing enhancements and features in the future.&lt;/p&gt;

&lt;p&gt;By choosing an open-source solution like Valkey, customers gain more flexibility and avoid vendor lock-in, all while benefiting from the extensive resources and support that AWS provides. The continuous evolution of the Valkey project, backed by AWS's contributions, ensures that businesses stay at the forefront of technology, positioning themselves for long-term success.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases for ElastiCache with Valkey
&lt;/h2&gt;

&lt;p&gt;The addition of Valkey support to ElastiCache opens up a variety of use cases, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Real-time analytics: With its ability to process millions of requests per second with microsecond latency, Valkey is well-suited for real-time analytics platforms that require fast data processing and retrieval.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;E-commerce personalization: Valkey's low-latency performance enables e-commerce platforms to deliver personalized shopping experiences in real time, enhancing customer satisfaction and engagement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IoT and edge computing: Valkey's scalability and performance make it ideal for IoT and edge computing applications where data needs to be processed and cached close to the source for rapid decision-making.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gaming leaderboards and session management: Valkey can handle the high throughput and low latency demands of gaming applications, ensuring smooth gameplay and real-time leaderboards.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Amazon ElastiCache's support for Valkey version 7.2 is a significant step forward for customers seeking a powerful, cost-effective, and flexible caching solution. With reduced costs, serverless deployments, and seamless API compatibility, Valkey offers the performance and scalability required for modern, data-intensive applications. Whether you're a developer looking to build real-time apps or an enterprise optimizing costs at scale, ElastiCache for Valkey provides the tools and capabilities to meet your needs while positioning your business for future growth.&lt;/p&gt;

&lt;p&gt;By leveraging AWS’s deep expertise in operational excellence, security, and innovation, businesses can now harness the power of open-source technology with the peace of mind that their caching infrastructure is in expert hands. If you’re ready to take advantage of the next evolution in caching, now is the time to explore what ElastiCache for Valkey can do for your organization.&lt;/p&gt;

&lt;p&gt;Thank you for reading the blog!&lt;br&gt;
Content Copyright reserved by Author Harsh Viradia.&lt;br&gt;
Contact: &lt;a href="https://www.linkedin.com/in/harsh-viradia/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/harsh-viradia/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>serverless</category>
      <category>redis</category>
      <category>devops</category>
    </item>
    <item>
      <title>Cybersecurity Erosion: Addressing the Hidden Threat to Long-Term Security</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Fri, 04 Oct 2024 05:58:25 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/cybersecurity-erosion-addressing-the-hidden-threat-to-long-term-security-346c</link>
      <guid>https://dev.to/viradiaharsh/cybersecurity-erosion-addressing-the-hidden-threat-to-long-term-security-346c</guid>
      <description>&lt;p&gt;In cybersecurity, it’s often said that the human factor is the weakest link. This is not just due to susceptibility to social engineering but also how security design influences user behavior over time. When security measures create friction, frustration follows, and users begin to circumvent or ignore policies. This phenomenon is known as Security Drift — the slow degradation of a security system’s effectiveness, not due to technical flaws but because it clashes with the way people work.&lt;/p&gt;

&lt;p&gt;At the heart of this issue is a critical design flaw: security solutions often fail, not because of technical limitations, but because of how users interact with them. Security and usability are not mutually exclusive; in fact, they must work together to create systems that are both robust and user-friendly. When security measures are integrated seamlessly into everyday workflows, the risk of human-induced failures diminishes significantly.&lt;/p&gt;

&lt;p&gt;This is the foundation of Sustainable Security — an approach that addresses not just technical resilience, but also the usability of security measures to ensure their longevity and effectiveness.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Human Element: A Persistent Weakness
&lt;/h3&gt;

&lt;p&gt;Security breaches frequently result from human error rather than system vulnerabilities. Social engineering attacks exploit this weakness, but even more common is the tendency for users to find workarounds when security measures impede productivity. This isn’t just an individual problem; it reflects a systemic issue in how security is designed.&lt;/p&gt;

&lt;p&gt;When security measures create friction in workflows, users will find ways around them — consciously or unconsciously. Over time, even the most robust security architecture can erode if it doesn’t account for human interaction. In essence, security must align with usability to prevent users from becoming the weakest link.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cybersecurity Erosion: A Critical Concern
&lt;/h3&gt;

&lt;p&gt;Cybersecurity erosion refers to the gradual degradation of a security system’s effectiveness, driven by operational inefficiencies and user workarounds rather than technical shortcomings. Unlike common vulnerabilities, cybersecurity erosion stems from the tension between security measures and everyday workflows.&lt;/p&gt;

&lt;p&gt;For security professionals, cybersecurity erosion presents a serious threat, undermining even the strongest architecture if it is not addressed. Two key factors contribute to this degradation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Management Overhead&lt;/strong&gt;&lt;br&gt;
Complex security architectures often require continuous monitoring, updates, and adjustments. As organizations grow, resource constraints or cost-cutting measures can lead to these tasks being de-prioritized. The more effort required to maintain a system, the more likely it is to fall behind, creating potential vulnerabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Security Friction&lt;/strong&gt;&lt;br&gt;
Security measures that create barriers to productivity drive users to seek workarounds. This friction leads to internal tension, where security teams view employees as adversaries, further complicating the organization’s security strategy. Ultimately, this creates a tug-of-war between security and operational efficiency, with security often losing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Re-evaluating Security Design: Key Considerations
&lt;/h3&gt;

&lt;p&gt;At the core of cybersecurity erosion is a fundamental flaw in security design. Security professionals should ask themselves the following critical questions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Why must we choose between security and efficiency?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An ideal security system balances security with minimal operational overhead. A well-designed system should not drain resources but scale efficiently as the organization grows while maintaining its security posture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. How can we implement security without compromising workflow?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reducing friction between users and security controls is essential for long-term sustainability. A security architecture that integrates seamlessly into daily operations minimizes frustration and ensures compliance, reducing the risk of users bypassing controls.&lt;/p&gt;

&lt;h3&gt;
  
  
  Designing for Sustainability: Avoiding Cybersecurity Erosion
&lt;/h3&gt;

&lt;p&gt;To prevent the effects of cybersecurity erosion, forward-thinking security teams must adopt a holistic approach to security design. The following considerations can help create a sustainable, effective security architecture:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Reduce User Frustration&lt;/strong&gt;&lt;br&gt;
The security system should align with user workflows rather than obstruct them. Any friction, no matter how small, can lead users to circumvent controls. Just as water gradually erodes stone, frustrated users will dismantle even the most secure system over time. Usability is not a luxury; it is a core requirement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Simplicity in Implementation&lt;/strong&gt;&lt;br&gt;
Security controls should be easy to deploy and adapt as the organization evolves. Access controls, for instance, should not be so rigid or complex that practitioners need to invest excessive time and effort to integrate them with new environments or applications. A flexible, easy-to-implement system ensures that security remains effective as the organization scales.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Ease of Maintenance&lt;/strong&gt;&lt;br&gt;
No system is immune to the need for maintenance, but it should be designed to minimize the burden. A complex security system that requires constant upkeep becomes a liability. Overworked security teams are more prone to errors, which can leave gaps in the system. A streamlined maintenance process helps ensure that the security architecture remains intact without overwhelming staff tasked with its upkeep.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion: Long-Term Resilience Through Thoughtful Design&lt;/strong&gt;&lt;br&gt;
Cybersecurity erosion is a pervasive issue that arises not from technical deficiencies but from a lack of foresight in architectural design. By placing equal emphasis on security and usability, security professionals can prevent the gradual erosion of their systems and ensure that they remain resilient over time.&lt;/p&gt;

&lt;p&gt;The key to long-term security is designing systems that integrate seamlessly into workflows, minimize user friction, and require minimal maintenance. Ultimately, a sustainable security posture is one that reduces the risk of human-induced failures and ensures the system remains effective, no matter how the organization evolves.&lt;/p&gt;

&lt;p&gt;Thank you for reading the blog!&lt;br&gt;
Content Copyright reserved by Author Harsh Viradia.&lt;br&gt;
Contact: &lt;a href="https://www.linkedin.com/in/harsh-viradia/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/harsh-viradia/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>learning</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Hosting Self-Hosted GitHub Runners on Kubernetes</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Wed, 28 Aug 2024 09:23:36 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/hosting-self-hosted-github-runners-on-kubernetes-o2d</link>
      <guid>https://dev.to/viradiaharsh/hosting-self-hosted-github-runners-on-kubernetes-o2d</guid>
      <description>&lt;p&gt;In the world of Continuous Integration and Continuous Deployment (CI/CD), GitHub Actions has emerged as a powerful tool, enabling developers to automate their workflows and streamline their software development process. GitHub Actions offers a range of features that help in automating tasks such as building, testing, and deploying code. While GitHub provides hosted runners to execute these workflows, there are scenarios where using a self-hosted runner might be more advantageous.&lt;/p&gt;

&lt;p&gt;Self-hosted runners give you the flexibility to configure your build environment exactly as you need it. Whether you require specific hardware, custom software, or a particular environment configuration, self-hosted runners allow you to tailor your CI/CD pipeline to meet these needs. Hosting a self-hosted GitHub Runner on Kubernetes can further enhance this setup by leveraging the scalability, reliability, and resource management features of Kubernetes.&lt;/p&gt;

&lt;p&gt;In this blog post, we'll walk you through the process of setting up a self-hosted GitHub Runner on a Kubernetes cluster. By the end of this guide, you’ll have a fully operational GitHub Runner running within your Kubernetes environment, ready to execute your CI/CD workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before diving into the setup, make sure you have the following prerequisites in place:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Cluster&lt;/strong&gt;: You’ll need access to a Kubernetes cluster. This can be a local cluster (like Minikube) or a cloud-based Kubernetes service (such as Google Kubernetes Engine, Azure Kubernetes Service, or Amazon EKS).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository&lt;/strong&gt;: Ensure you have a GitHub repository where you want to set up Actions. If you don’t have one, you can create a new repository on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Helm&lt;/strong&gt;: Helm is a package manager for Kubernetes that simplifies deploying applications. We’ll use Helm to manage the GitHub Runner deployment.&lt;/p&gt;
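
&lt;p&gt;One common approach for running GitHub runners on Kubernetes is GitHub's Actions Runner Controller (ARC). As a sketch, assuming Helm 3.8+ (required for OCI charts), installing the controller looks like this:&lt;/p&gt;

```shell
# Install the ARC controller into its own namespace
helm install arc \
  --namespace arc-systems \
  --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
```
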

&lt;h3&gt;
  
  
  Configure Self-Hosted Runner:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Open Developer Settings from your GitHub profile settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y97urf7o13ricjzqn80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y97urf7o13ricjzqn80.png" alt="Image description" width="550" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new GitHub App&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft209soybv7rbkehlkwgd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft209soybv7rbkehlkwgd.png" alt="Image description" width="800" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide the GitHub App Name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F036rqn3u4by2oxc5chpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F036rqn3u4by2oxc5chpf.png" alt="Image description" width="655" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide the Website URL for the GitHub App&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m24qaoack9ga0vmiy00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m24qaoack9ga0vmiy00.png" alt="Image description" width="694" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uncheck the Webhook option; we are not going to expose a webhook endpoint over the internet, in line with common practice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuok736t6cc3iyjvo996a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuok736t6cc3iyjvo996a.png" alt="Image description" width="614" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expand the Repository permissions and grant Read access to Actions and Read and Write access to Administration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felgqxfkpmpoxvkg4jlcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felgqxfkpmpoxvkg4jlcf.png" alt="Image description" width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide the account in which the GitHub App will be installed and click on Create GitHub App.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqts8rwdp0mls1n1kfr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqts8rwdp0mls1n1kfr9.png" alt="Image description" width="753" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the App ID and Client ID and save them for later use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rndadicuyjf3kedo1d9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rndadicuyjf3kedo1d9.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll down, generate a private key, and save it locally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7a2mk78v8rergee5ypd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7a2mk78v8rergee5ypd.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the tab called Install App and install the app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc248flndsatjrd4o5ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc248flndsatjrd4o5ti.png" alt="Image description" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can choose specific repositories or all repositories and install the app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5l7bzkrtultna3tehfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5l7bzkrtultna3tehfw.png" alt="Image description" width="652" height="717"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After installation, there will be a unique ID in the URL; copy the ID and save it for further use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb66k58rsnpradwycimzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb66k58rsnpradwycimzc.png" alt="Image description" width="657" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Kubernetes cluster CLI and run the commands below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add jetstack https://charts.jetstack.io
helm repo update
helm search repo cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Use the latest version of cert-manager in the command below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install \
cert-manager jetstack/cert-manager \
--namespace=NAMESPACE-NAME \
--create-namespace \
--version=LATEST-VERSION \
--set prometheus.enabled=false \
--set installCRDs=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check that the cert-manager pods are up and running.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n NAMESPACE-NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a Kubernetes secret for the runner.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic controller-manager \
-n actions \
--from-literal=github_app_id=APP-ID \
--from-literal=github_app_installation_id=UNIQUE-ID \
--from-file=github_app_private_key=PRIVATE-KEY-FILE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add the actions-runner-controller helm repo.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller

helm search repo actions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install the chart with the latest version.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install runner \
actions-runner-controller/actions-runner-controller \
--namespace actions \
--version LATEST-VERSION \
--set syncPeriod=1m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check that the actions pods are up and running with the command below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n actions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Apply the Kubernetes YAML file below to deploy the runner.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: arc-runner
  namespace: default
spec:
  template:
    spec:
      repository: # specify name of the repository
      labels:
        - # runner label
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f runnerdeployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For autoscaling of the runners, apply the Kubernetes YAML file below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: k8s-runner-autoscaler
spec:
  scaleTargetRef:
    kind: RunnerDeployment
    name: arc-runner
  scaleDownDelaySecondsAfterScaleOut: 300
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
    repositoryNames:
    - # specify name of the repository
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f hpa.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After following all the above steps, edit the workflow file in GitHub and change the runs-on tag to self-hosted.&lt;/li&gt;
&lt;/ul&gt;
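The change in the workflow file looks like this (the job name and steps are illustrative; if you set a runner label in the RunnerDeployment, you can use that label instead of self-hosted):

```yaml
jobs:
  build:
    runs-on: self-hosted   # was: ubuntu-latest
    steps:
      - uses: actions/checkout@v4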

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn37gs1cdyz1zujovbmbz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn37gs1cdyz1zujovbmbz.png" alt="Image description" width="222" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how you can configure self-hosted runners for GitHub.&lt;/p&gt;

&lt;p&gt;Thank you for reading the blog!&lt;br&gt;
Content Copyright reserved by Author Harsh Viradia.&lt;br&gt;
Contact: &lt;a href="https://www.linkedin.com/in/harsh-viradia/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/harsh-viradia/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>github</category>
      <category>kubernetes</category>
      <category>git</category>
    </item>
    <item>
      <title>Integrate Cloud Secrets with Kubernetes Secrets using External Secrets Through Terraform</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Sun, 19 May 2024 05:52:48 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/integrate-cloud-secrets-with-kubernetes-secrets-using-external-secrets-through-terraform-56o2</link>
      <guid>https://dev.to/viradiaharsh/integrate-cloud-secrets-with-kubernetes-secrets-using-external-secrets-through-terraform-56o2</guid>
      <description>&lt;p&gt;In the dynamic landscape of cloud-native applications, managing secrets securely is paramount. Secrets such as API keys, database credentials, and other sensitive configuration details need to be handled with care to prevent unauthorized access. Kubernetes, being a leading orchestration platform, offers mechanisms to manage these secrets internally. However, leveraging external secret managers adds an extra layer of security and flexibility, enabling centralized management and seamless integration across multiple environments.&lt;/p&gt;

&lt;p&gt;This blog will guide you through the process of fetching cloud secrets to Kubernetes secrets using an External Secret Manager (ESM) with Terraform. External Secret Managers like AWS Secrets Manager, HashiCorp Vault, and Google Cloud Secret Manager provide robust solutions for storing and managing secrets securely outside of your Kubernetes cluster. By using Terraform, an infrastructure-as-code (IaC) tool, we can automate the provisioning and management of these secrets, ensuring consistency and reproducibility.&lt;/p&gt;

&lt;p&gt;By the end of this tutorial, you'll have a clear understanding of how to securely manage your secrets using external secret managers and Terraform, enhancing the security and maintainability of your Kubernetes-based applications.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install External Secret Manager via helm.&lt;/li&gt;
&lt;li&gt;Authenticate K8s(Kubernetes) with CSP(Cloud Service Provider).&lt;/li&gt;
&lt;li&gt;Store the Secrets in the CSP Secrets Service like AWS Secret Manager, GCP Secret Store, and Azure Key Vault.&lt;/li&gt;
&lt;li&gt;Create an External Secret Store that syncs with Cloud Secrets.&lt;/li&gt;
&lt;li&gt;Create a K8s Secret which refers to the value from the External Secrets Manager.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12qjipq7jwnysq0e3p80.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12qjipq7jwnysq0e3p80.jpg" alt="Image description" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Install External Secret Manager via helm.
&lt;/h2&gt;

&lt;p&gt;First we will install an External Secret Manager in the K8s cluster using Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "helm_release" "external_secret_operator" {
  name = "external-secret-operator"
  repository = "https://charts.external-secrets.io"
  chart = "external-secrets"
  namespace = "default"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Authenticate K8s(Kubernetes) with CSP(Cloud Service Provider).
&lt;/h2&gt;

&lt;p&gt;For this example, let's take GCP as the CSP and create a Service Account in GCP that has the Secret Accessor permission.&lt;br&gt;
Now create a Terraform script that will authenticate K8s with the CSP, in our case GCP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_manifest" "service-account-secret-authenticator" {
  computed_fields = [ "stringData" ]
  manifest = {
    "apiVersion" = "v1"
    "kind"       = "Secret"
    "metadata" = {
      "name"      = ""   # K8s secret to authenticate gsm with k8s
      "namespace" = "default"
      "labels"    = {
        "type" = "gcpsm"
      }
    }
    "type" = "Opaque"
    "stringData" = {
      "secret-access-credentials" = "" #Service Account Token value.
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
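The Service Account mentioned above can itself also be provisioned with Terraform; a minimal sketch (account ID and project variable are illustrative):

```hcl
# Service Account that the External Secrets Operator will use to read secrets.
resource "google_service_account" "secret_accessor" {
  account_id   = "external-secrets-sa"
  display_name = "External Secrets accessor"
}

# Grant the Secret Accessor role at the project level.
resource "google_project_iam_member" "secret_accessor_binding" {
  project = var.project_id
  role    = "roles/secretmanager.secretAccessor"
  member  = "serviceAccount:${google_service_account.secret_accessor.email}"
}

# Key whose JSON becomes the token value stored in the K8s secret above.
resource "google_service_account_key" "secret_accessor_key" {
  service_account_id = google_service_account.secret_accessor.name
}
```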



&lt;h2&gt;
  
  
  3. Store the Secrets in the CSP Secrets Service like AWS Secret Manager, GCP Secret Store, and Azure Key Vault.
&lt;/h2&gt;

&lt;p&gt;Now create a secret on the CSP which we are going to use in the K8s Secrets.&lt;/p&gt;
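For GCP, this step can also be expressed in Terraform; a minimal sketch (the secret name and value are placeholders):

```hcl
# Secret container in GCP Secret Manager; this is the name the
# ExternalSecret's remoteRef key will point at.
resource "google_secret_manager_secret" "app_secret" {
  secret_id = "app-secret"
  replication {
    auto {}
  }
}

# The actual secret value, stored as a version.
resource "google_secret_manager_secret_version" "app_secret_value" {
  secret      = google_secret_manager_secret.app_secret.id
  secret_data = "SECRET-VALUE"
}
```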

&lt;h2&gt;
  
  
  4. Create an External Secret Store that syncs with Cloud Secrets.
&lt;/h2&gt;

&lt;p&gt;To create a secret in K8s, first we have to create an External Secret Store which will fetch the secret from the CSP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_manifest" "clustersecretstore" {
  depends_on = [ kubernetes_manifest.service-account-secret-authenticator ]
  manifest = {
    "apiVersion" = "external-secrets.io/v1beta1"
    "kind"       = "ClusterSecretStore"
    "metadata" = {
      "name"      = "gcp-store"
    }
    "spec" = {
      "provider" = {
        "gcpsm" = {
            "projectID" ="" # Based on GCP we have to pass projectID
            "auth" = {
                "secretRef" = {
                    "secretAccessKeySecretRef" = {
                        "name" = "" #k8s secret service account name
                        "key"  = "secret-access-credentials"   # K8s secret service acount token
                    }
                }
            }
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we have one field named "depends_on", which indicates that first we have to authenticate the K8s cluster with the CSP, and only then can we create the External Secret Store.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Create K8s Secret which refers to the value from External Secrets Manager.
&lt;/h2&gt;

&lt;p&gt;Now we will create an External Secret, which will create a K8s secret so we can access our value in the K8s cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_manifest" "external-secrets" {
  depends_on = [ kubernetes_manifest.clustersecretstore ]

  manifest = {
    "apiVersion" = "external-secrets.io/v1beta1"
    "kind"       = "ExternalSecret"
    "metadata" = {
      "name"      = ""   # external secret name
      "namespace" = "default"
    }
    "spec" = {
      "refreshInterval" = "5m"
      "secretStoreRef" = {
        "kind" = "ClusterSecretStore"
        "name" = "gcp-store"
      }
      "target" = {
        "name" = ""    # K8s secret name
        "creationPolicy" = "Owner"
      }
      "data" = [
        {
            "secretKey" = ""  # K8s secrent file name
            "remoteRef" = {
                "key" = ""   # GSM secret name
            }
        }
      ]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is one more field here, "refreshInterval", which indicates that the External Secret Store will refresh every 5 minutes to stay in sync with the Cloud Secrets.&lt;/p&gt;

&lt;p&gt;Through this, we can securely fetch our Cloud Secrets in the K8s Secrets.&lt;/p&gt;

&lt;p&gt;Thank you for reading the blog!&lt;br&gt;
Content Copyright reserved by Author Harsh Viradia.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS Lambda Function for Dynamic ECS Tasks</title>
      <dc:creator>Harsh Viradia</dc:creator>
      <pubDate>Sat, 20 Apr 2024 06:25:10 +0000</pubDate>
      <link>https://dev.to/viradiaharsh/aws-lambda-function-for-dynamic-ecs-tasks-3957</link>
      <guid>https://dev.to/viradiaharsh/aws-lambda-function-for-dynamic-ecs-tasks-3957</guid>
      <description>&lt;p&gt;In the ever-evolving landscape of cloud computing, agility and scalability are paramount. As organizations strive to optimize their infrastructure, the demand for flexible and dynamic solutions has surged. Among the myriad of services offered by Amazon Web Services (AWS), Elastic Container Service (ECS) stands out as a robust container management platform, facilitating the deployment and scaling of containerized applications with ease.&lt;/p&gt;

&lt;p&gt;However, managing ECS tasks manually or statically poses significant challenges, especially in environments characterized by fluctuating workloads or diverse application requirements. Enter AWS Lambda and Simple Queue Service (SQS), two powerful services that, when combined, offer a seamless solution for creating dynamic ECS tasks on the fly.&lt;/p&gt;

&lt;p&gt;In this blog, we embark on a journey to explore the synergy between Lambda, SQS, and ECS, unveiling the methodology behind orchestrating dynamic container deployments effortlessly. By harnessing the event-driven architecture of Lambda and the reliable message queuing of SQS, we pave the way for automated, efficient, and responsive ECS task management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create a Standard SQS queue.&lt;/li&gt;
&lt;li&gt;Create 5 different ECS task definitions with different vCPU and Memory configurations.&lt;/li&gt;
&lt;li&gt;Create a Lambda Function with python3.x runtime.&lt;/li&gt;
&lt;li&gt;Add the SQS queue as a trigger and update the Lambda function's role to grant it SQS access.&lt;/li&gt;
&lt;li&gt;Create an ECS cluster.&lt;/li&gt;
&lt;li&gt;Add the Python code in the lambda function and add ECS details in the code.&lt;/li&gt;
&lt;/ol&gt;
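For reference, each SQS message body the Lambda consumes is a JSON document carrying scale, mediaKey, mediaId and pixel keys, as read in the code later in this post (the values below are illustrative):

```python
import json

# Illustrative SQS message body; the keys match what the Lambda reads.
message_body = json.dumps({
    "scale": "2",                     # picks the scale-2 task definition
    "mediaKey": "uploads/video.mp4",  # object key of the media file
    "mediaId": "12345",               # internal media identifier
    "pixel": "720"                    # source resolution
})

data = json.loads(message_body)
print(data["scale"], data["pixel"])
```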

&lt;h2&gt;
  
  
  Create SQS Queue
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a Standard SQS queue with a 150-second visibility timeout.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kd6py00a7t21amp7d2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kd6py00a7t21amp7d2i.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an ECS Task Definition.
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;In this, we are going to create 5 different ECS task definitions with the same Docker image; we will name these definitions scale-1, scale-2, ..., scale-5.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;scale-1 : 0.5 vCPU and 1 GB RAM&lt;/li&gt;
&lt;li&gt;scale-2 : 1   vCPU and 2 GB RAM&lt;/li&gt;
&lt;li&gt;scale-3 : 2   vCPU and 4 GB RAM&lt;/li&gt;
&lt;li&gt;scale-4 : 4   vCPU and 8 GB RAM&lt;/li&gt;
&lt;li&gt;scale-5 : 5   vCPU and 9 GB RAM&lt;/li&gt;
&lt;/ol&gt;
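The selection logic the Lambda implements can be sketched as two pure functions (a sketch of the logic only, not the Lambda code itself): a 720p source also gets a 480p task, a 1080p source gets 720p and 480p tasks, and out-of-range scale values fall back to scale-3.

```python
def renditions_for(pixel: str) -> list:
    """Output resolutions to transcode for a given source resolution."""
    if pixel == "480":
        return ["480"]
    if pixel == "720":
        return ["720", "480"]
    return ["1080", "720", "480"]  # anything else is treated as a 1080p source


def task_definition_for(scale: str) -> str:
    """Map the scale value from the message to a task definition name."""
    if scale.isdigit() and 1 <= int(scale) <= 5:
        return f"scale-{scale}"
    return "scale-3"  # fallback, as in the Lambda's else branch
```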

&lt;h2&gt;
  
  
  Create a Lambda Function with Python3.x Runtime.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2f4x2l7ye0qyola0omv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2f4x2l7ye0qyola0omv.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add the SQS queue as a trigger of the Lambda function and provide the required permissions.
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Provide SQS permissions to the Lambda first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwivwqxdmdtp36zpqt6fz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwivwqxdmdtp36zpqt6fz.png" alt="Image description" width="532" height="736"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add SQS as a trigger of Lambda function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jfor91gb9t8slpox9bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jfor91gb9t8slpox9bg.png" alt="Image description" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add Python code in the lambda function.
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
from datetime import datetime

##############################################################LAMBDA FUNCTION####################################################################
def lambda_handler(event, context):
    records = event['Records']

    # ECS cluster and service information
    ecs_cluster = ''

    # Subnet IDs and Security Group ID
    subnet_ids = ['', '']
    security_group_id = '' # mds-stag-video-cluster-arm-sg

    # Create ECS client
    ecs_client = boto3.client('ecs')

    for record in records:
        body = str(record['body'])


        data = json.loads(body)
        scale = data["scale"]
        mediakey_1 = data["mediaKey"]
        mediaid_1 = data["mediaId"]
        mediaSize_1 = data["pixel"]

        print(mediakey_1)
        print(mediaid_1)
        print(mediaSize_1)

        # Container names
        container_name = ""
        audio_container_name = ""
        create_ecs_task_audio(ecs_client, ecs_cluster, audio_container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1)
        print("-----------------------------------------------audio task created-------------------------------------------------------------")


        if 1 &amp;lt;= int(scale) &amp;lt;= 5:
            if mediaSize_1 == '480':
                print("---------------------------------------------------------Pixel size taken to 480p---------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, scale, container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1)
                print("---------------------------------------------------------480p task created----------------------------------------------------------------")

            elif mediaSize_1 == '720':
                print("---------------------------------------------------------Pixel size taken 720p------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, scale, container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1)
                print("---------------------------------------------------------720 task cretaed-----------------------------------------------------------------")

                print("---------------------------------------------------------Pixel size taken 480p------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, scale, container_name, subnet_ids, security_group_id, "480", mediaid_1, mediakey_1)
                print("---------------------------------------------------------480 task created-----------------------------------------------------------------")                

            else:
                print("---------------------------------------------------------Pixel size taken 1080------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, scale, container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1)
                print("---------------------------------------------------------1080 task created----------------------------------------------------------------")

                print("---------------------------------------------------------Pixel size taken 480p------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, scale, container_name, subnet_ids, security_group_id, "480", mediaid_1, mediakey_1)
                print("---------------------------------------------------------480p task created----------------------------------------------------------------")

                print("---------------------------------------------------------Pixel size taken 720p------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, scale, container_name, subnet_ids, security_group_id, "720", mediaid_1, mediakey_1)
                print("---------------------------------------------------------720 task created-----------------------------------------------------------------")


        else:
            if mediaSize_1 == '480':
                print("---------------------------------------------------------scale =3 Pixel size taken 480p------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, "3", container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1)
                print("---------------------------------------------------------scale =3 480p task created----------------------------------------------------------------")

            elif mediaSize_1 == '720':
                print("---------------------------------------------------------scale =3 Pixel size taken 720p------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, "3", container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1)
                print("---------------------------------------------------------scale =3 720p task created----------------------------------------------------------------")

                print("---------------------------------------------------------scale =3 Pixel size taken 480p------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, "3", container_name, subnet_ids, security_group_id, "480", mediaid_1, mediakey_1)
                print("---------------------------------------------------------scale =3 480p task created----------------------------------------------------------------")

            else:
                print("---------------------------------------------------------scale =3 Pixel size taken 1080p-----------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, "3", container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1)
                print("---------------------------------------------------------scale =3 1080p task created---------------------------------------------------------------")

                print("---------------------------------------------------------scale =3 Pixel size taken 720p------------------------------------------------------------")                
                create_ecs_task(ecs_client, ecs_cluster, "3", container_name, subnet_ids, security_group_id, "480", mediaid_1, mediakey_1)
                print("---------------------------------------------------------scale =3 720p task created----------------------------------------------------------------")

                print("---------------------------------------------------------scale =3 Pixel size taken 720p------------------------------------------------------------")
                create_ecs_task(ecs_client, ecs_cluster, "3", container_name, subnet_ids, security_group_id, '720', mediaid_1, mediakey_1)
                print("---------------------------------------------------------scale =3 720 task created-----------------------------------------------------------------")





####################################################### AUDIO Function ################################################################

def create_ecs_task_audio(ecs_client, ecs_cluster, audio_container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1):

    # Environment variables
    environment_variables = {
            'mediaKey': mediakey_1,
            'mediaId': mediaid_1,
            'mediaSize': mediaSize_1
    }

    try:
        create_task_response = ecs_client.run_task(
            cluster=ecs_cluster,
            launchType='FARGATE',
            taskDefinition='audio-converter-task',
            networkConfiguration={
                'awsvpcConfiguration': {
                    'subnets': subnet_ids,
                    'securityGroups': [security_group_id],
                    'assignPublicIp': 'ENABLED'
                }
            },
            overrides={
                'containerOverrides': [
                    {
                        'name': audio_container_name,
                        'environment': [
                            {'name': key, 'value': value} for key, value in environment_variables.items()
                        ]
                    }
                ]
            }
        )

        for task in create_task_response.get('tasks', []):
            task['createdAt'] = task.get('createdAt').isoformat() if task.get('createdAt') else None
            task['updatedAt'] = task.get('updatedAt').isoformat() if task.get('updatedAt') else None

        if create_task_response['tasks']:
            print(f"Created ECS task: {json.dumps(create_task_response, indent=2)}")
        else:
            print(f"Failed to create ECS task. Response: {json.dumps(create_task_response, indent=2)}")

    except Exception as e:
        print(f"Error creating ECS task: {str(e)}")

##################################################################### Video Function ###################################################

def create_ecs_task(ecs_client, ecs_cluster, scale, container_name, subnet_ids, security_group_id, mediaSize_1, mediaid_1, mediakey_1):

    # Environment variables
    environment_variables = {
            'mediaKey': mediakey_1,
            'mediaId': mediaid_1,
            'mediaSize': mediaSize_1
    }



    try:
        create_task_response = ecs_client.run_task(
            cluster=ecs_cluster,
            launchType='FARGATE',
            taskDefinition=scale,  # the task definition family is named after the scale value ('1', '2', '3', ...)
            networkConfiguration={
                'awsvpcConfiguration': {
                    'subnets': subnet_ids,
                    'securityGroups': [security_group_id],
                    'assignPublicIp': 'ENABLED'
                }
            },
            overrides={
                'containerOverrides': [
                    {
                        'name': container_name,
                        'environment': [
                            {'name': key, 'value': value} for key, value in environment_variables.items()
                        ]
                    }
                ]
            }
        )

        # Convert datetime fields to ISO strings so the response can be JSON-serialized below
        for task in create_task_response.get('tasks', []):
            task['createdAt'] = task.get('createdAt').isoformat() if task.get('createdAt') else None
            task['updatedAt'] = task.get('updatedAt').isoformat() if task.get('updatedAt') else None

        if create_task_response['tasks']:
            print(f"Created ECS task: {json.dumps(create_task_response, indent=2)}")
        else:
            print(f"Failed to create ECS task. Response: {json.dumps(create_task_response, indent=2)}")

    except Exception as e:
        print(f"Error creating ECS task: {str(e)}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Replace the ECS details (cluster name, task definitions, container names, subnets, and security group) with your own!&lt;/p&gt;
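&lt;p&gt;As a quick illustration of the override logic inside both helpers, here is a minimal sketch of how the environment dictionary is translated into the name/value list that &lt;code&gt;ecs_client.run_task&lt;/code&gt; expects in &lt;code&gt;containerOverrides&lt;/code&gt; (the media values below are placeholders, not real resources):&lt;/p&gt;

```python
# Placeholder inputs, standing in for mediakey_1, mediaid_1, mediaSize_1
environment_variables = {
    'mediaKey': 'uploads/input.mp4',
    'mediaId': 'abc123',
    'mediaSize': '720',
}

# Same comprehension used in the helpers: each dict entry becomes
# an ECS-style {'name': ..., 'value': ...} pair
environment = [
    {'name': key, 'value': value}
    for key, value in environment_variables.items()
]

print(environment)
```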

&lt;p&gt;Thank you for reading the blog!&lt;br&gt;
Content copyright reserved by the author, Harsh Viradia.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>lambda</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
