<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ibrahim Cisse</title>
    <description>The latest articles on DEV Community by Ibrahim Cisse (@ibraheemcisse).</description>
    <link>https://dev.to/ibraheemcisse</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2695068%2Ff519cf16-e11f-4316-9801-6ec106a3ecf3.jpeg</url>
      <title>DEV Community: Ibrahim Cisse</title>
      <link>https://dev.to/ibraheemcisse</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ibraheemcisse"/>
    <language>en</language>
    <item>
      <title>Enabling HTTP-based Autoscaling in GKE with KEDA HTTP Add-on</title>
      <dc:creator>Ibrahim Cisse</dc:creator>
      <pubDate>Mon, 10 Feb 2025 08:17:05 +0000</pubDate>
      <link>https://dev.to/ibraheemcisse/enabling-http-based-autoscaling-in-gke-with-keda-http-add-on-36pf</link>
      <guid>https://dev.to/ibraheemcisse/enabling-http-based-autoscaling-in-gke-with-keda-http-add-on-36pf</guid>
      <description>&lt;p&gt;In this article, I will walk through the journey of setting up KEDA (Kubernetes Event-Driven Autoscaler) with the HTTP Add-on in Google Kubernetes Engine (GKE). The goal was to scale a Python-based Rock-Paper-Scissors application based on incoming HTTP traffic. Along the way, I faced several challenges with cluster resource limits, scaling issues, and the limitations of the HTTP Add-on in KEDA, which I tackled through various strategies. This article details the steps, challenges, solutions, and configurations used during the process.&lt;/p&gt;

&lt;p&gt;Steps Taken:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Creating the GKE Cluster with Two Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first step was to create a Kubernetes cluster in GKE. This cluster would host the Rock-Paper-Scissors application and later be configured for autoscaling using KEDA.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This created a GKE cluster with 2 nodes, providing enough resources for the application. After the cluster was created, I connected to the cluster using kubectl.&lt;/p&gt;
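
&lt;p&gt;To point kubectl at the new cluster, the credentials can be fetched into the local kubeconfig first (cluster name and zone match the create command above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters get-credentials my-cluster \
    --zone us-central1-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;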

&lt;p&gt;&lt;strong&gt;2. Installing KEDA with Server-Side Apply&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I installed KEDA in the cluster using server-side apply, because the KEDA CRD manifests are large enough that a client-side apply pushes the request past etcd’s size limit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.4.0/keda-operator.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Server-side apply performs the merge on the API server and tracks field ownership through managedFields instead of storing the whole manifest in the last-applied-configuration annotation, which keeps large CRDs under etcd’s request size limit. The KEDA operator was successfully installed, and I proceeded to create the ScaledObject for my application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Installing and Confirming ScaledObject&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once KEDA was installed, the next step was to deploy a ScaledObject. A ScaledObject defines the scaling rules for the application based on event sources. Here’s the initial YAML configuration for ScaledObject:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rock-paper-scissors-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    kind: Deployment
    name: rock-paper-scissors
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: http-add-on
      metadata:
        endpoint: "/"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I confirmed that the ScaledObject was running correctly using kubectl get scaledobjects.&lt;/p&gt;
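
&lt;p&gt;For reference, the checks look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get scaledobjects -n default
kubectl describe scaledobject rock-paper-scissors-scaledobject -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;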

&lt;p&gt;&lt;strong&gt;4. Sending HTTP Load and Scaling Issues&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, I tried sending HTTP load to the application using curl, but the autoscaling did not trigger as expected. The HPA (Horizontal Pod Autoscaler) was not scaling the application, and the CRDs for scaling were stuck in a Pending state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbina0i11bo1p6mjinr6z.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbina0i11bo1p6mjinr6z.PNG" alt="Image description" width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This led me to investigate the node status to check the cluster’s resource utilization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprv2bqjimf5e6xsxwlqf.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprv2bqjimf5e6xsxwlqf.PNG" alt="Image description" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Resource Constraints and Node Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I ran kubectl top nodes and discovered that the cluster was at 90% CPU utilization. I had already exhausted my free tier, and billing wasn’t working because I was using a prepaid card, so adding nodes wasn’t an option. Instead, I opted for a different strategy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhtdno6wwc00tidelsn3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhtdno6wwc00tidelsn3.PNG" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Making the Application Lightweight&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To reduce CPU consumption, I made the Python application more lightweight by introducing a small delay in the response, which simulates a realistic request time while the handler itself does almost no work. Here’s the modified app.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask
import time

app = Flask(__name__)

@app.route('/')
def index():
    time.sleep(0.2)  # 200 ms delay
    return "Welcome to Rock-Paper-Scissors!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This change significantly reduced the CPU usage and allowed the cluster to function under the free-tier constraints.&lt;/p&gt;
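
&lt;p&gt;On a cluster this constrained, it can also help to set explicit CPU requests and limits on the Deployment so the scheduler and the autoscaler have accurate numbers to work with. A minimal sketch for the container spec (the values are illustrative assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# container-level resources for the rock-paper-scissors Deployment (illustrative values)
resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    cpu: 200m
    memory: 128Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;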

&lt;p&gt;&lt;strong&gt;7. Incorrect ScaledObject Configuration (Using HTTPScaledObject)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After making the app lighter, I realized that the ScaledObject I had initially used was not the correct resource for HTTP-based scaling. Instead, I needed to use the HTTPScaledObject.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. KEDA Slack Community: HTTPScaledObject Recommendation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I reached out to the KEDA community via Slack for clarification on the issue. One of the maintainers suggested that I should use HTTPScaledObject instead of the regular ScaledObject for HTTP-based scaling. This recommendation led me to revise the YAML configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. HTTPScaledObject Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After understanding that HTTPScaledObject was needed, I updated my YAML to reflect the correct resource type. Below is the updated HTTPScaledObject.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: rock-paper-scissors-http
  namespace: default
spec:
  hosts:
    - rock-paper-scissors.local
  scaleTargetRef:
    kind: Deployment
    apiVersion: apps/v1
    name: rock-paper-scissors
    service: rock-paper-scissors-service
    port: 80
  replicas:
    min: 1
    max: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below are images of the HTTP Add-on scaler and the other CRDs running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsq5yx5sxsq9yr4of1zh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsq5yx5sxsq9yr4of1zh.PNG" alt="Image description" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gf4yva9641yybsvojyz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gf4yva9641yybsvojyz.PNG" alt="Image description" width="571" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Running Load Tests and Confirming Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After configuring the HTTPScaledObject, I used hey to simulate 1000 requests to the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hey -n 1000 -c 50 http://rock-paper-scissors.local

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
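
&lt;p&gt;One detail worth noting here: the HTTP Add-on only counts traffic that passes through its interceptor, and routing is keyed on the Host header. If DNS for rock-paper-scissors.local isn’t set up, the same load can be pointed straight at the interceptor proxy using hey’s -host flag (a sketch; the service name and port are the Helm chart defaults and may differ per install):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# in one terminal (service name/port assume a default Helm install)
kubectl port-forward svc/keda-add-ons-http-interceptor-proxy 8080:8080 -n keda

# in another terminal
hey -n 1000 -c 50 -host "rock-paper-scissors.local" http://localhost:8080/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;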



&lt;p&gt;This successfully triggered the Horizontal Pod Autoscaler (HPA) to scale up the pods, but KEDA’s HTTP Add-on itself did not scale the application when traffic arrived through the ingress. The issue persisted despite what looked like a correct configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflbb1p8hihwo2x4hlp85.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflbb1p8hihwo2x4hlp85.PNG" alt="Image description" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4avi2autx6z45gobpwp.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4avi2autx6z45gobpwp.PNG" alt="Image description" width="680" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. Reaching Out to KEDA Maintainer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I reached out again to the KEDA Slack channel, asking for help with the HTTP Add-on not scaling. The response from the KEDA maintainer made it clear that the HTTP Add-on was still in beta and not fully stable, so some rough edges were to be expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. Hypothesis on Light Application Load&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My hypothesis was that the app was too lightweight, causing it to process requests too quickly and not generate enough load for KEDA’s HTTP Add-on to scale. To test this, I increased the request rate to 1500 requests per second. However, even with this change, the scaling did not trigger.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>autoscaling</category>
      <category>gke</category>
    </item>
    <item>
      <title>How KEDA HTTP Add-on for Autoscaling HTTP request on Kubernetes works</title>
      <dc:creator>Ibrahim Cisse</dc:creator>
      <pubDate>Wed, 05 Feb 2025 11:47:16 +0000</pubDate>
      <link>https://dev.to/ibraheemcisse/how-keda-http-add-on-for-autoscaling-http-request-on-kubernetes-works-4j5k</link>
      <guid>https://dev.to/ibraheemcisse/how-keda-http-add-on-for-autoscaling-http-request-on-kubernetes-works-4j5k</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6em4lbrnj885kjbnbep.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6em4lbrnj885kjbnbep.jpg" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When it comes to autoscaling in Kubernetes, there are several methods available, but one that stands out for handling HTTP-based workloads is the KEDA HTTP Add-on. Unlike other event sources, such as queues or message brokers, HTTP-based autoscaling comes with unique challenges. In this article, we’ll dive deep into the KEDA HTTP Add-on, its architecture, and how it differs from traditional autoscaling methods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Makes KEDA HTTP Add-on Different?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The KEDA HTTP Add-on is not as conventional as other event sources in Kubernetes. Here’s why:&lt;/p&gt;

&lt;p&gt;Unpredictable Traffic: Unlike Kafka or other event-based sources, where we can easily monitor the length of a queue or message rate, HTTP requests are unpredictable. We cannot simply call an API to determine how much traffic will be coming. In other words, the scaling criteria aren’t always clear in advance.&lt;/p&gt;

&lt;p&gt;Synchronous Nature: HTTP traffic is synchronous by nature — this means requests need to be handled in real-time. As a result, scaling to zero (where no pods are running) is more complex. To address this, an intermediate routing layer is required to temporarily hold incoming HTTP requests until new pods are scaled up and ready to serve those requests.&lt;/p&gt;

&lt;p&gt;The core Kubernetes autoscalers, such as the Horizontal Pod Autoscaler (HPA), rely on easily trackable metrics like CPU, memory, or custom metrics. However, for HTTP-based autoscaling, the KEDA HTTP Add-on introduces a different approach to meet the needs of scale-to-zero HTTP applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture of the KEDA HTTP Add-on&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbt2julpgjb9aqmdcjia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbt2julpgjb9aqmdcjia.png" alt="Image description" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;The Key Components of KEDA HTTP Add-on Architecture&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;The KEDA HTTP Add-on architecture involves several components that work together to ensure your HTTP service can scale efficiently. Here’s a breakdown of the key components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interceptor:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Interceptor is the first line of defense when handling incoming HTTP requests. It accepts the requests and places them into a pending request queue while checking if the backend service is scaled up to handle the load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling to Zero&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the service is scaled down to zero replicas, the Interceptor will hold the incoming HTTP requests until new instances of the backend service are ready.&lt;/p&gt;

&lt;p&gt;Once the backend service scales up, the Interceptor forwards the requests to the appropriate service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;External Scaler:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The External Scaler is a component that constantly pings the Interceptor to retrieve metrics about the number of pending HTTP requests.&lt;br&gt;
This data is then sent to KEDA, which processes the information and triggers the autoscaling actions. Essentially, the External Scaler acts as the bridge that informs KEDA about the scaling requirements based on the HTTP traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operator:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The HTTP Operator is responsible for managing HTTPScaledObject CRDs (Custom Resource Definitions). It listens for the creation of these CRDs and takes action by configuring the necessary resources (such as the Interceptor and External Scaler) to allow autoscaling based on HTTP request traffic.&lt;/p&gt;

&lt;p&gt;The Operator makes the whole autoscaling process easier for the user by automating the setup and configuration of the necessary components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Flow of HTTP Requests in KEDA HTTP Add-on&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s a simplified flow of how the components interact with each other:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load Balancer:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Load Balancer makes the application available to the outside world by routing incoming HTTP requests to the appropriate service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Request Handling:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Load Balancer sends the incoming requests to the Kubernetes service that hosts the HTTP service. From here, the request is passed through the Interceptor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interceptor:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Interceptor temporarily holds the requests in case there are no backend pods running to handle them. It also sends metrics about pending HTTP requests to the External Scaler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling Decision:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The External Scaler pings the Interceptor for pending-request queue metrics and passes the data to KEDA, which then evaluates whether to scale the backend service up or down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling Action:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the traffic is high, KEDA triggers the creation of new pods for the service to handle the load, and once scaled up, the Interceptor forwards the requests to the new pods.&lt;/p&gt;

&lt;p&gt;If the traffic reduces, KEDA scales the service down accordingly, even scaling it to zero if necessary.&lt;/p&gt;
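
&lt;p&gt;When debugging this flow, it helps to look at the interceptor’s pending-request counts directly. A sketch, assuming a default Helm install where the interceptor exposes an admin endpoint (service name, port, and path can differ between versions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# forward the interceptor admin port locally (names assume the default Helm install)
kubectl port-forward svc/keda-add-ons-http-interceptor-admin 9090:9090 -n keda

# in another terminal: query the pending request queue counts
curl http://localhost:9090/queue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;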

&lt;p&gt;&lt;strong&gt;Key Benefits of the KEDA HTTP Add-on&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scale-to-Zero Support: One of the biggest benefits of using the KEDA HTTP Add-on is the ability to scale applications to zero. This means that during periods of low or no traffic, you can save resources by having no running pods while still being able to handle new traffic as soon as it arrives.&lt;/p&gt;

&lt;p&gt;Autoscaling Based on HTTP Requests: With the KEDA HTTP Add-on, scaling decisions are based on HTTP request traffic, ensuring that your backend service is dynamically scaled to meet real-time demand without any manual intervention.&lt;/p&gt;

&lt;p&gt;Efficient Traffic Handling: The Interceptor ensures that no HTTP requests are lost during the scaling process, providing a smooth experience for users without request drops.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges and Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While the KEDA HTTP Add-on is a powerful tool, there are a few challenges you might encounter:&lt;/p&gt;

&lt;p&gt;Complex Setup: Setting up the entire KEDA HTTP Add-on system can be more complex than standard Kubernetes autoscaling. It requires configuring multiple components like the Interceptor, External Scaler, and HTTPScaledObject, which might be tricky, especially for beginners.&lt;/p&gt;

&lt;p&gt;Latency at Scale: While the system can scale effectively, you may encounter delays when handling very high traffic spikes, especially when scaling from zero. Proper tuning of scaling parameters and careful management of the request queue are crucial.&lt;/p&gt;

&lt;p&gt;Compatibility: The KEDA HTTP Add-on works best when combined with KEDA’s existing autoscaling functionality, but when used alongside other scaling mechanisms (e.g., HPA), it may require extra configuration to avoid conflicts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The KEDA HTTP Add-on provides a smart solution for scaling HTTP-based applications in Kubernetes, addressing challenges like unpredictable traffic and scale-to-zero requirements. By introducing components like the Interceptor, External Scaler, and Operator, KEDA makes it easier to autoscale services dynamically based on HTTP traffic, ensuring that your application is always responsive to user demand.&lt;/p&gt;

&lt;p&gt;By leveraging KEDA’s unique autoscaling capabilities, you can efficiently manage your Kubernetes workloads, reduce resource wastage during idle periods, and scale your services seamlessly without manual intervention.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>scaling</category>
      <category>http</category>
    </item>
    <item>
      <title>Resolving CRD Size Limit Issues with KEDA on Kubernetes</title>
      <dc:creator>Ibrahim Cisse</dc:creator>
      <pubDate>Sat, 01 Feb 2025 11:04:29 +0000</pubDate>
      <link>https://dev.to/ibraheemcisse/resolving-crd-size-limit-issues-with-keda-on-kubernetes-2930</link>
      <guid>https://dev.to/ibraheemcisse/resolving-crd-size-limit-issues-with-keda-on-kubernetes-2930</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoijtmt07tluws7vcoi0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoijtmt07tluws7vcoi0.PNG" alt="Image description" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While deploying KEDA (Kubernetes-based Event Driven Autoscaler) on our Kubernetes cluster, we encountered an unexpected hurdle related to Custom Resource Definitions (CRDs) being too large. This post details the problem, the strategies we attempted to resolve it, and the ultimate solution that worked. We hope this helps others facing similar challenges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Upon attempting to deploy KEDA using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.12.1/keda-2.12.1.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I received the error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error from server (Request Entity Too Large): error when creating "https://github.com/kedacore/keda/releases/download/v2.12.1/keda-2.12.1.yaml": etcdserver: request is too large

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicated that the CRD definitions in the YAML file were too large for etcd to process, a known issue when handling extensive Kubernetes configurations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strategies I Tried&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Splitting the YAML File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I attempted to split the keda-2.12.1.yaml into smaller chunks, applying each separately. Unfortunately, this led to dependency issues where certain resources required others to be present beforehand, causing the deployment to fail.&lt;/p&gt;
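
&lt;p&gt;For anyone retrying this route, the dependency problem can be softened by applying the CRDs first and waiting for them to be established before the rest. A hypothetical sketch (file names are assumptions, yq v4 syntax):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# split the manifest by kind (hypothetical file names)
yq 'select(.kind == "CustomResourceDefinition")' keda-2.12.1.yaml &gt; crds.yaml
yq 'select(.kind != "CustomResourceDefinition")' keda-2.12.1.yaml &gt; rest.yaml

kubectl apply -f crds.yaml
kubectl wait --for=condition=established crd --all
kubectl apply -f rest.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;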

&lt;p&gt;&lt;strong&gt;2. Editing the YAML File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I opened the massive YAML file (over 7,000 lines) and tried to reduce its size by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removing redundant annotations.&lt;/li&gt;
&lt;li&gt;Stripping down verbose descriptions in the CRDs.&lt;/li&gt;
&lt;li&gt;Simplifying resource definitions wherever possible.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While this approach reduced the file size, it didn't sufficiently address the size limitation for etcd, and applying the modified file still resulted in the same error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Increasing the etcd Size Limit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I considered raising the etcd request size limit by modifying the --max-request-bytes parameter. However, that requires changing control-plane flags, which isn’t possible in GKE since Google manages the control plane.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seeking Help from the Community&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After exhausting the above strategies, we turned to the GitHub repository for KEDA, specifically issue #6447 related to the v2.16.1 release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The GitHub Issue:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;KEDA v2.16.1 Release - Discussion #6447: &lt;a href="https://github.com/kedacore/keda/discussions/6447" rel="noopener noreferrer"&gt;https://github.com/kedacore/keda/discussions/6447&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this thread, the KEDA maintainer, @JorTurFer, announced the new release and included fixes for CRD handling. While our issue wasn't directly addressed, sharing our experience in the comments helped us connect with others facing similar problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comment on GitHub:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I detailed our attempts and frustrations, specifically mentioning the CRD size limitation and the strategies we tried. The community feedback was invaluable, leading us to the solution below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Server-side apply turned out to be the fix we needed. Instead of the traditional kubectl apply, we used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply --server-side -f https://github.com/kedacore/keda/releases/download/v2.12.1/keda-2.12.1-core.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach leverages Kubernetes' server-side apply feature, which handles larger resource definitions more efficiently and bypasses some of the etcd limitations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Server-side Apply Works:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Efficient Resource Handling: Server-side apply performs the merge on the API server and tracks ownership through managedFields, instead of storing the entire manifest in the kubectl.kubernetes.io/last-applied-configuration annotation; it is that annotation that pushes large CRDs past the request size limit under client-side apply.&lt;/p&gt;

&lt;p&gt;Conflict Resolution: It provides better conflict management for CRDs and other complex resources.&lt;/p&gt;

&lt;p&gt;After applying the above command, KEDA was successfully deployed without errors. We were then able to proceed with creating and managing our ScaledObjects.&lt;/p&gt;
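
&lt;p&gt;A quick way to confirm the deployment succeeded (the release manifest installs its components into the keda namespace):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get crds | grep keda.sh
kubectl get pods -n keda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;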

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Facing CRD size limitations when deploying KEDA was a challenging experience, but it provided valuable insights into Kubernetes resource management. By experimenting with various strategies, seeking help from the community, and leveraging Kubernetes' server-side apply feature, we overcame the issue.&lt;/p&gt;

&lt;p&gt;If you're dealing with similar CRD size issues, we highly recommend trying the server-side apply approach. Also, don't hesitate to engage with the open-source community—the support and shared experiences can be incredibly helpful.&lt;/p&gt;

</description>
      <category>keda</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Unlocking Kubernetes Troubleshooting with Komodor: A Journey of Exploration</title>
      <dc:creator>Ibrahim Cisse</dc:creator>
      <pubDate>Tue, 28 Jan 2025 12:03:24 +0000</pubDate>
      <link>https://dev.to/ibraheemcisse/unlocking-kubernetes-troubleshooting-with-komodor-a-journey-of-exploration-3a7o</link>
      <guid>https://dev.to/ibraheemcisse/unlocking-kubernetes-troubleshooting-with-komodor-a-journey-of-exploration-3a7o</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49f4scgc4f5qdaorxb1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49f4scgc4f5qdaorxb1m.png" alt="Image description" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an avid learner of Kubernetes, I’ve always been fascinated by the sheer power and complexity of container orchestration. Kubernetes is an incredible tool, but let’s be honest—navigating its intricacies can feel overwhelming, especially when troubleshooting issues. This is where Komodor, a cutting-edge Kubernetes troubleshooting tool, enters the picture, making life easier for DevOps engineers, platform teams, and curious learners like me.&lt;/p&gt;

&lt;p&gt;In this article, I’ll share my exploration of Komodor, its role in streamlining Kubernetes troubleshooting, and why it’s an essential tool for understanding Kubernetes in real-world scenarios. If you’re on a journey to master Kubernetes, this is a tool worth keeping on your radar.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What is Komodor?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At its core, Komodor is a Kubernetes-native troubleshooting platform designed to simplify the debugging process. By providing a unified view of changes, deployments, and incidents across your cluster, Komodor helps you understand &lt;strong&gt;what happened&lt;/strong&gt;, &lt;strong&gt;why it happened&lt;/strong&gt;, and &lt;strong&gt;how to fix it&lt;/strong&gt;—all in one place.&lt;/p&gt;

&lt;p&gt;Whether you're dealing with misconfigurations, failed deployments, or pod crashes, Komodor provides clear insights into the root cause, enabling you to resolve issues faster and with greater confidence.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Troubleshooting Kubernetes is Challenging&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Kubernetes is a powerful system, but its distributed nature adds complexity when things go wrong. Issues often span across multiple components like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Misconfigured &lt;strong&gt;Deployments&lt;/strong&gt; or &lt;strong&gt;ReplicaSets&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pod crashes&lt;/strong&gt; due to resource limits&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Networking issues&lt;/strong&gt; within services or Ingress&lt;/li&gt;
&lt;li&gt;Errors stemming from &lt;strong&gt;ConfigMaps&lt;/strong&gt;, &lt;strong&gt;Secrets&lt;/strong&gt;, or environment variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These challenges demand not just technical know-how but also the ability to correlate logs, changes, and events across a cluster—something that can be time-consuming and frustrating without the right tools.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How Komodor Simplifies Kubernetes Troubleshooting&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Komodor stands out because it focuses on reducing this complexity. Here’s what I discovered during my exploration:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;A Centralized Dashboard for Cluster Insights&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Komodor’s dashboard provides a bird’s-eye view of your cluster, displaying relevant data about pods, nodes, deployments, and more. It integrates seamlessly with your Kubernetes cluster and pulls real-time information, saving you the trouble of hunting for clues across multiple tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Tracking Changes and Deployments&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;One of Komodor’s standout features is its ability to trace changes. Every configuration update or deployment is logged, providing a timeline of events. This historical context is invaluable for pinpointing when and why something went wrong.&lt;/p&gt;

&lt;p&gt;For example, if a deployment crashes after an update, Komodor shows you exactly what changed in the YAML, making it easier to roll back or fix the issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Context-Aware Alerts&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Unlike generic alerts that bombard you with irrelevant information, Komodor provides context-aware alerts. These notifications highlight the &lt;strong&gt;root cause&lt;/strong&gt; of the problem and suggest potential fixes, helping you focus on solving the issue instead of deciphering logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Integration with Popular Tools&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Komodor integrates with tools like &lt;strong&gt;Helm&lt;/strong&gt;, &lt;strong&gt;Istio&lt;/strong&gt;, &lt;strong&gt;KEDA&lt;/strong&gt;, and &lt;strong&gt;ArgoCD&lt;/strong&gt;, making it an excellent addition to any Kubernetes environment. For learners like me, this also offers a chance to explore these integrations and understand how they contribute to a robust Kubernetes ecosystem.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Komodor in Action: Real-Life Applications&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Imagine this scenario: You’re deploying a new microservice to your Kubernetes cluster. Everything seems fine, but suddenly, you notice that the deployment isn’t scaling properly. Is it a resource constraint? A misconfigured Horizontal Pod Autoscaler (HPA)? Or an issue with KEDA’s scaling triggers?&lt;/p&gt;

&lt;p&gt;Using Komodor, you can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Instantly see the deployment’s history and identify recent changes.&lt;/li&gt;
&lt;li&gt;Correlate logs and events to spot misconfigurations.&lt;/li&gt;
&lt;li&gt;Use its intuitive interface to narrow down the root cause and fix the issue efficiently.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In production environments, where downtime is costly, tools like Komodor become game-changers by enabling faster recovery and ensuring smoother operations.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Komodor Resonates with Me as a Learner&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For someone like me, still exploring the vast landscape of Kubernetes, Komodor is more than just a tool—it’s a teacher. It helps me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand how different components in Kubernetes interact.&lt;/li&gt;
&lt;li&gt;Learn the best practices for managing and debugging clusters.&lt;/li&gt;
&lt;li&gt;Gain hands-on experience with real-world troubleshooting scenarios.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each issue I troubleshoot using Komodor feels like a step forward in my Kubernetes journey. The tool empowers me to connect theory with practice, deepening my understanding of this powerful technology.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Kubernetes is a dynamic, ever-evolving platform, and the path to mastering it is filled with both challenges and opportunities. Tools like Komodor not only make troubleshooting more manageable but also accelerate the learning process for enthusiasts like me.&lt;/p&gt;

&lt;p&gt;If you’re navigating the complexities of Kubernetes or just curious about its real-world applications, I encourage you to explore Komodor. It’s not just a tool for solving problems—it’s a resource for growing your expertise and confidence in Kubernetes.&lt;/p&gt;

&lt;p&gt;Let’s embrace the challenges, learn from the experience, and build better, more resilient systems together. What’s your favorite Kubernetes troubleshooting tool? Let me know in the comments—I’d love to learn from you too!&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Hashtags&lt;/strong&gt;: #Kubernetes #Komodor #DevOps #Troubleshooting #CloudNative&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>komodor</category>
      <category>devops</category>
    </item>
    <item>
      <title>Secure Kubernetes Persistent Storage with RBAC and Pod Security Standard (PSA)</title>
      <dc:creator>Ibrahim Cisse</dc:creator>
      <pubDate>Tue, 28 Jan 2025 11:34:31 +0000</pubDate>
      <link>https://dev.to/ibraheemcisse/secure-kubernetes-persistent-storage-with-rbac-and-pod-security-standard-psa-2l4b</link>
      <guid>https://dev.to/ibraheemcisse/secure-kubernetes-persistent-storage-with-rbac-and-pod-security-standard-psa-2l4b</guid>
      <description>&lt;h1&gt;
  
  
  Securing Stateful Apps in Kubernetes: MySQL with Persistent Volumes &amp;amp; RBAC
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwhte3et6ka2e0phj5rx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwhte3et6ka2e0phj5rx.jpg" alt="Kubernetes Security" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learn how to deploy stateful applications securely in Kubernetes while enforcing storage security and Pod hardening. We'll deploy MySQL with &lt;strong&gt;Persistent Volumes (PVs)&lt;/strong&gt; while implementing &lt;strong&gt;RBAC&lt;/strong&gt;, &lt;strong&gt;Pod Security Admission (PSA)&lt;/strong&gt;, and Secret management.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔐 Why This Matters
&lt;/h2&gt;

&lt;p&gt;Stateful applications like databases require special attention in Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data persistence&lt;/strong&gt;: Storage must survive Pod restarts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Sensitive credentials and storage access need protection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance&lt;/strong&gt;: Pods should follow security best practices by default&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛠️ Key Security Practices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;RBAC&lt;/strong&gt;: Limit access to storage resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pod Security Admission&lt;/strong&gt;: Enforce security contexts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Management&lt;/strong&gt;: Encrypt sensitive credentials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Storage&lt;/strong&gt;: Use PVs/PVCs for data persistence&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  🚀 Deployment Walkthrough
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Minikube (with Docker driver)&lt;/li&gt;
&lt;li&gt;kubectl
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube start &lt;span class="nt"&gt;--driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;docker

&lt;span class="k"&gt;**&lt;/span&gt;1. Create Dedicated Namespace&lt;span class="k"&gt;**&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br&gt;
bash&lt;br&gt;
kubectl create namespace secure-storage&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. RBAC Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Role Definition (rbac-role.yaml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: secure-storage
  name: storage-manager-role
rules:
- apiGroups: [""]
  resources: ["pods", "persistentvolumeclaims"]
  verbs: ["get", "list", "create", "delete"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Role Binding (rbac-binding.yaml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: storage-manager-binding
  namespace: secure-storage
subjects:
- kind: ServiceAccount
  name: default
  namespace: secure-storage
roleRef:
  kind: Role
  name: storage-manager-role
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply RBAC rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f rbac-role.yaml
kubectl apply -f rbac-binding.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
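
&lt;p&gt;To sanity-check the binding, kubectl auth can-i can impersonate the service account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# should print "yes": the role allows creating PVCs in secure-storage
kubectl auth can-i create persistentvolumeclaims \
  --namespace secure-storage \
  --as=system:serviceaccount:secure-storage:default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;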



&lt;p&gt;&lt;strong&gt;3. Persistent Storage Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Persistent Volume (pv.yaml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: secure-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Persistent Volume Claim (pvc.yaml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: secure-pvc
  namespace: secure-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create storage resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
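
&lt;p&gt;The claim should reach Bound status shortly after creation. One caveat worth knowing: on clusters with a default StorageClass (minikube ships one named standard), a PVC with no storageClassName may bind to a dynamically provisioned volume rather than secure-pv; setting storageClassName: "" on both objects forces the static match. A quick check:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pv secure-pv
kubectl get pvc secure-pvc -n secure-storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;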



&lt;p&gt;&lt;strong&gt;4. MySQL Deployment with Secrets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create Database Credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic mysql-secret \
  --from-literal=mysql-root-password=root_password \
  --from-literal=mysql-user=user \
  --from-literal=mysql-password=user_password \
  --namespace secure-storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;StatefulSet Configuration (mysql-statefulset.yaml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: secure-mysql
  namespace: secure-storage
spec:
  serviceName: "mysql-service"
  replicas: 1
  selector:
    matchLabels:
      app: secure-mysql   # required field; must match the template labels below
  template:
    metadata:
      labels:
        app: secure-mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-root-password
        # ... full config in GitHub repo
  volumeClaimTemplates:
  - metadata:
      name: mysql-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
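
&lt;p&gt;The StatefulSet references serviceName "mysql-service", which needs a headless Service to exist; it isn’t shown in the walkthrough above, so here is a minimal sketch of what it could look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: secure-storage
spec:
  clusterIP: None   # headless; gives StatefulSet pods stable DNS names
  selector:
    app: secure-mysql
  ports:
  - name: mysql
    port: 3306
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;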



&lt;p&gt;&lt;strong&gt;5. Enforce Pod Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apply restricted PSA policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label namespace secure-storage \
  pod-security.kubernetes.io/enforce=restricted \
  --overwrite
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧪 Security Validation
&lt;/h2&gt;

&lt;p&gt;Test PSA Enforcement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f non-compliant-pod.yaml -n secure-storage
# Expected error: violates PodSecurity "restricted:latest"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
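
&lt;p&gt;The non-compliant pod isn’t shown in this post; any spec that violates the restricted profile will do. A hypothetical example PSA should reject:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# non-compliant-pod.yaml (hypothetical): runs as root and allows privilege
# escalation, both of which the restricted profile forbids
apiVersion: v1
kind: Pod
metadata:
  name: non-compliant-pod
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      runAsUser: 0
      allowPrivilegeEscalation: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;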



&lt;p&gt;Verify RBAC Restrictions (here via kubectl auth can-i, impersonating the namespace’s default service account):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl auth can-i delete pods --namespace kube-system \
  --as=system:serviceaccount:secure-storage:default
# Expected output: no (the role only grants access inside secure-storage)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ✅ Key Outcomes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Secure Storage&lt;/strong&gt;: PVs/PVCs with RBAC-controlled access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret Protection&lt;/strong&gt;: Credentials kept in Secrets rather than Pod specs (note that Secrets are only base64-encoded by default; enable encryption at rest on the cluster for stronger protection)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pod Hardening&lt;/strong&gt;: PSA enforces security contexts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditability&lt;/strong&gt;: Clear access boundaries through roles&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📚 Full Code &amp;amp; Contribution
&lt;/h2&gt;

&lt;p&gt;Explore the complete implementation and contribute:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/your-repo-link" rel="noopener noreferrer"&gt;Github repository&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  💬 Discussion Points
&lt;/h2&gt;

&lt;p&gt;How do you handle storage security in your Kubernetes clusters?&lt;/p&gt;

&lt;p&gt;What additional security measures do you implement for stateful workloads?&lt;/p&gt;

&lt;p&gt;Have you tried the new PSA policies in production?&lt;/p&gt;

&lt;p&gt;Let's discuss in the comments! 👇&lt;/p&gt;

</description>
      <category>rbac</category>
      <category>kubernetes</category>
      <category>security</category>
    </item>
    <item>
      <title>Hello Dev.to Community! My Journey into DevOps and Beyond</title>
      <dc:creator>Ibrahim Cisse</dc:creator>
      <pubDate>Fri, 24 Jan 2025 06:34:55 +0000</pubDate>
      <link>https://dev.to/ibraheemcisse/hello-devto-community-my-journey-into-devops-and-beyond-o5m</link>
      <guid>https://dev.to/ibraheemcisse/hello-devto-community-my-journey-into-devops-and-beyond-o5m</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl5twfhbnhhpn0qkp3wn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl5twfhbnhhpn0qkp3wn.jpg" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hello, Dev.to! 👋&lt;/p&gt;

&lt;p&gt;I’m Ibraheem Cisse, a passionate tech enthusiast, aspiring DevOps/Site Reliability Engineer (SRE), and an AWS Community Builder. With a background in support engineering and extensive experience working with cloud technologies, my journey has been one of constant learning, hands-on experimentation, and sharing knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Bit About Me&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Current Role:&lt;/strong&gt; Support Engineer with a knack for troubleshooting complex technical issues and optimizing workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interests:&lt;/strong&gt; Automation, cloud infrastructure, containerization, and security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tech Stack&lt;/strong&gt;: AWS, Azure, Kubernetes, Terraform, GitLab, Docker, and Linux systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I’m Here&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I joined Dev.to to connect with like-minded individuals, share my learnings, and document my journey into mastering DevOps and SRE practices. My aim is to break into a Cloud Engineer or Junior SRE role and grow towards becoming a seasoned SRE.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some of my Projects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three-Tier Python Application on AWS&lt;/strong&gt;: Building a scalable, multi-tier application on AWS using EKS, RDS, and ALB with Terraform automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-Driven Autoscaling with KEDA&lt;/strong&gt;: Exploring custom metrics to dynamically scale Kubernetes workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;: Automating multi-region deployments using Terraform to ensure reliability and scalability.&lt;/p&gt;

&lt;p&gt;My content is aimed at being engaging, technically sound, and actionable. &lt;/p&gt;

&lt;p&gt;Topics I’m particularly excited about include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes security best practices.&lt;/li&gt;
&lt;li&gt;Infrastructure as code &lt;/li&gt;
&lt;li&gt;Configuration Management &lt;/li&gt;
&lt;li&gt;Automation and Scripting&lt;/li&gt;
&lt;li&gt;Implementing GitOps workflows with ArgoCD and Flux.&lt;/li&gt;
&lt;li&gt;Building CI/CD pipelines for containerized applications.&lt;/li&gt;
&lt;li&gt;Monitoring and Observability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Let’s Connect!&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://linkedin.com/in/YourLinkedInProfile" rel="noopener noreferrer"&gt;Linkedin &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="//github.com/ibraheemcisse"&gt;Github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/@info_37956" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://community.aws/@ibraheemcisse" rel="noopener noreferrer"&gt;AWS Commnuity Builder Blog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feel free to connect with me here, on GitHub, or on Medium for more technical deep dives.&lt;/p&gt;

&lt;p&gt;Looking forward to learning, collaborating, and growing together in this amazing community! 😊&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>careerdevelopment</category>
    </item>
  </channel>
</rss>
