<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: olamide odufuwa</title>
    <description>The latest articles on DEV Community by olamide odufuwa (@olamyde).</description>
    <link>https://dev.to/olamyde</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1282146%2Ff28c86c0-4799-4aa1-b9cb-f4476f03ae3f.png</url>
      <title>DEV Community: olamide odufuwa</title>
      <link>https://dev.to/olamyde</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/olamyde"/>
    <language>en</language>
    <item>
      <title>Automating AWS Lambda Deployment using GitHub Actions</title>
      <dc:creator>olamide odufuwa</dc:creator>
      <pubDate>Mon, 29 Sep 2025 22:33:31 +0000</pubDate>
      <link>https://dev.to/olamyde/automating-aws-lambda-deployment-using-github-actions-1f43</link>
      <guid>https://dev.to/olamyde/automating-aws-lambda-deployment-using-github-actions-1f43</guid>
      <description>&lt;p&gt;Why Teams Are Automating Lambda Deployments&lt;br&gt;
With the growing adoption of serverless architectures, AWS Lambda has become a core compute solution for running event-driven workloads. However, manually deploying Lambda functions introduces the risk of inconsistency, downtime, and human error. DevOps teams are increasingly automating these processes using CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;GitHub Actions provides a powerful platform to integrate automation directly into the version control system. This empowers developers to trigger deployments automatically on code pushes, PR merges, or manually through workflow dispatches.&lt;/p&gt;

&lt;p&gt;Core Components of the Automation Workflow&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Preparing Your Lambda Function Code&lt;br&gt;
Organize your function code in a directory structure that’s easy to zip and upload. Make sure to include only necessary dependencies. If your code relies on external Python packages, use a requirements.txt and deploy with dependencies zipped in a package directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Actions Workflow File (.github/workflows/deploy.yml)&lt;br&gt;
Create a GitHub Actions workflow YAML file to define your deployment pipeline. A basic Python example looks like this:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy Lambda

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install dependencies and package
        run: |
          pip install -r requirements.txt -t package
          cd package
          zip -r ../function.zip .
          cd ..
          zip -g function.zip lambda_function.py

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Deploy to Lambda
        run: |
          aws lambda update-function-code \
            --function-name myLambdaFunction \
            --zip-file fileb://function.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;AWS IAM Permissions and Secrets Configuration
Create an IAM user with permission to update Lambda functions via lambda:UpdateFunctionCode and store its credentials in GitHub Secrets. Avoid credential leakage: never echo secrets in workflow steps, and note that GitHub automatically masks registered secrets in workflow logs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Secrets to configure:&lt;br&gt;
AWS_ACCESS_KEY_ID&lt;br&gt;
AWS_SECRET_ACCESS_KEY&lt;/p&gt;

&lt;p&gt;Advanced Features for Production Pipelines&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Branch Filtering: Deploy only on specific branches like main or release/*.&lt;/li&gt;
&lt;li&gt;Workflow Dispatch: Trigger manual deploys using workflow_dispatch:.&lt;/li&gt;
&lt;li&gt;Environment Promotion: Deploy to dev, staging, and prod using environment protection rules and matrix builds.&lt;/li&gt;
&lt;li&gt;Monitoring: Integrate Slack, Datadog, or Amazon CloudWatch for post-deployment notifications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Automating AWS Lambda deployments using GitHub Actions leads to faster delivery cycles, reproducible builds, and minimized manual tasks. By defining a clear release workflow, setting up the right permissions, and using environment configurations, engineering teams can streamline serverless development at scale.&lt;/p&gt;
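&lt;p&gt;The branch-filtering and manual-trigger options mentioned above can be sketched as a trigger block (the input name and environment values are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
      - main
      - 'release/*'
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        required: true
        default: 'dev'
        type: choice
        options:
          - dev
          - staging
          - prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;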

&lt;p&gt;Explore the GitHub Actions Marketplace for more Lambda integration tools.&lt;/p&gt;

</description>
      <category>technology</category>
    </item>
    <item>
      <title>Mastering Pod Affinity and Anti-affinity in Kubernetes</title>
      <dc:creator>olamide odufuwa</dc:creator>
      <pubDate>Sat, 22 Feb 2025 07:24:19 +0000</pubDate>
      <link>https://dev.to/olamyde/mastering-pod-affinity-and-anti-affinity-in-kubernetes-5ec8</link>
      <guid>https://dev.to/olamyde/mastering-pod-affinity-and-anti-affinity-in-kubernetes-5ec8</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the world of Kubernetes, efficient resource management is a cornerstone of maintaining a robust and scalable infrastructure. Two often discussed strategies that help achieve this are pod affinity and pod anti-affinity. These concepts assist in configuring the placement of pods in a Kubernetes cluster to either congregate or disperse across nodes. But what exactly do these terms mean, and how do they affect workloads in practice? This blog post will explore the distinct functionalities and applications of pod affinity and pod anti-affinity, delving into the intricacies of how they contribute to the overall orchestration ecosystem in Kubernetes. Understanding these concepts is vital for any DevOps engineer aiming to optimize resource usage and application performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Pod Affinity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pod affinity primarily focuses on ensuring that certain pods run on specific nodes or alongside other pods that match specified criteria. This can be particularly beneficial when certain workloads require low-latency communication or share a dependency. For instance, if two pods frequently communicate, having them reside on the same node minimizes network overhead and increases data throughput.&lt;/p&gt;

&lt;p&gt;Pod affinity is configured through affinity rules in the pod specification: a label selector that matches other pods, and a topologyKey (a node label such as kubernetes.io/hostname) that defines the placement domain. These rules can be declared as &lt;em&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/em&gt; or &lt;em&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/em&gt;: the former is a hard constraint the scheduler must satisfy, whereas the latter is a preference it tries to honor.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values: ["web"]
        topologyKey: "kubernetes.io/hostname"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Diving into Pod Anti-affinity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Conversely, pod anti-affinity is a strategy to prevent specific pods from being scheduled on the same node, promoting dispersion. This can be crucial for maintaining availability and reducing single points of failure. For example, if two replicas of a database service run on the same hardware, a node failure could be catastrophic. Anti-affinity rules ensure these replicas are spread across different nodes.&lt;/p&gt;

&lt;p&gt;Like pod affinity, anti-affinity is also defined through labels and rules within the pod specification. The key difference lies in the intention to separate rather than combine certain pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: "app"
                operator: In
                values: ["db"]
          topologyKey: "kubernetes.io/hostname"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Balancing Affinity and Anti-affinity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The choice between using pod affinity and anti-affinity is not always straightforward and often depends on specific application requirements and constraints. Both strategies can be utilized simultaneously to achieve a more fine-tuned pod distribution. Administrators must carefully evaluate the trade-offs, such as potential scheduling delays associated with hard constraints versus the benefits of improved latency or fault tolerance.&lt;/p&gt;

&lt;p&gt;Scheduling behavior can be tuned further by assigning weights (1-100) to preferred rules, so the scheduler scores candidate nodes in a way that aligns with operational goals.&lt;/p&gt;
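&lt;p&gt;A single pod spec can combine both rule types, for instance preferring co-location with a cache while strictly spreading away from its own replicas (the labels here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: "app"
                operator: In
                values: ["cache"]
          topologyKey: "kubernetes.io/hostname"
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values: ["web"]
        topologyKey: "kubernetes.io/hostname"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;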

&lt;p&gt;&lt;strong&gt;Conclusions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pod affinity and anti-affinity are critical components in optimizing Kubernetes cluster management. By understanding and leveraging these configurations, administrators can craft tailored environments that suit unique application demands. Pod affinity ensures co-location for efficiency, whereas anti-affinity focuses on separation for resilience. Both strategies, when applied judiciously, can enhance application performance, scalability, and fault tolerance in distributed systems. In adopting these practices, organizations can achieve a more resilient architecture, paving the way for a more efficient and effective Kubernetes environment. Therefore, mastering these techniques is crucial to enhancing operational strategies within Kubernetes clusters.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes Probes: Enhance Application Health and Reliability</title>
      <dc:creator>olamide odufuwa</dc:creator>
      <pubDate>Sat, 22 Feb 2025 07:18:26 +0000</pubDate>
      <link>https://dev.to/olamyde/kubernetes-probes-enhance-application-health-and-reliability-5518</link>
      <guid>https://dev.to/olamyde/kubernetes-probes-enhance-application-health-and-reliability-5518</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In today’s fast-paced digital environment, maintaining the seamless operation of services is crucial. Kubernetes, the leading orchestration platform for containerized applications, employs a range of mechanisms to ensure that applications run optimally. Among these mechanisms, readiness and liveness probes are pivotal in maintaining the health and efficiency of applications. This blog post delves into the critical roles that readiness and liveness probes play in Kubernetes. We’ll explore their functionality, differences, and how they can be effectively used to enhance application reliability. Understanding these probes is vital for anyone managing containerized applications, as they prevent downtime and ensure that resources are directed only to healthy instances of your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Readiness Probes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Readiness probes determine if a container is ready to start accepting traffic. Before traffic is routed to a pod, the readiness probe checks if it can handle user requests. This might involve checking if the necessary dependencies and configurations are in place. For instance, if your application relies on an external database, the readiness probe might ensure that database connections are established before your application starts receiving requests. If the readiness probe fails, the pod remains in an unready state, ensuring no traffic is directed to it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example of a readiness probe configuration:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  readinessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example configures a readiness probe to call the /health endpoint every 10 seconds after an initial delay of 5 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring Liveness Probes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While readiness probes focus on the application’s ability to handle traffic, liveness probes monitor its ongoing health. If an application becomes unresponsive or crashes, a failing liveness probe triggers corrective action: the kubelet restarts the affected container to restore functionality. Liveness probes are crucial for self-healing and resilience, ensuring applications recover automatically from transient errors without manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example of a liveness probe configuration:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  livenessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 15
    timeoutSeconds: 5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This liveness probe checks the application’s TCP socket on port 8080, with an initial delay of 15 seconds and a timeout of 5 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Differentiating Readiness and Liveness Probes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While both probes serve to maintain application health, their roles are distinct. Readiness probes govern when a pod can start receiving traffic, ensuring that all dependencies are ready and that the application is fully initialized. In contrast, liveness probes are concerned with ongoing health: they trigger a restart of a container that is running but has become unhealthy over time. Utilizing both probes ensures end-to-end application health, from deployment to uptime.&lt;/p&gt;
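&lt;p&gt;In practice, a container spec commonly declares both probes side by side, combining the two configurations shown earlier (the image name, path, and port are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containers:
  - name: web
    image: example/web:1.0
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      timeoutSeconds: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;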

&lt;p&gt;&lt;strong&gt;Conclusions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the dynamic world of Kubernetes, readiness and liveness probes are essential for maintaining robust application deployments. Readiness probes ensure that pods receive traffic only when they are fully prepared, while liveness probes guarantee ongoing health and automatic recovery. Together, they form a comprehensive health strategy, enabling applications to adapt and thrive amidst challenges. Leveraging these tools effectively maximizes application uptime and reliability, providing a seamless and resilient user experience. For developers and operators, understanding and implementing readiness and liveness probes is an investment in the robustness and reliability of their application landscape.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>ConfigMaps vs Secrets: Secure Configuration Management in Kubernetes</title>
      <dc:creator>olamide odufuwa</dc:creator>
      <pubDate>Mon, 17 Feb 2025 02:21:17 +0000</pubDate>
      <link>https://dev.to/olamyde/configmaps-vs-secrets-secure-configuration-management-in-kubernetes-4jj5</link>
      <guid>https://dev.to/olamyde/configmaps-vs-secrets-secure-configuration-management-in-kubernetes-4jj5</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In modern software development, Kubernetes has become a cornerstone for container orchestration. With its rise, proper configuration management in Kubernetes environments has become a critical task for developers and DevOps practitioners. ConfigMaps and Secrets are two essential components in Kubernetes used for storing configuration data. However, a common pitfall is using ConfigMaps to hold sensitive information, like passwords or API keys, which exposes systems to security vulnerabilities. This article will explore why ConfigMaps are unsuitable for sensitive data, how Kubernetes Secrets provide a more secure alternative, and the best practices for managing sensitive information in Kubernetes. By understanding these elements, teams can safeguard their applications and infrastructures, reducing the risk of data breaches and unauthorized access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding ConfigMaps and Their Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ConfigMaps in Kubernetes are used to decouple configuration artifacts from image content, providing a way to easily change configurations that do not impact application code. They store non-sensitive data, such as environment configurations, feature flags, or external resource URLs. While useful, ConfigMaps store data in plaintext. This means any data within a ConfigMap can be accessed and read without the need for decryption, posing a significant security risk if sensitive data is stored. Given the volume of configurations that need managing, it may be tempting to use ConfigMaps for everything. However, storing sensitive information here is a practice that should be avoided to prevent vulnerabilities.&lt;/p&gt;
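&lt;p&gt;For illustration, a ConfigMap holding only non-sensitive settings might look like this (the names and values are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  FEATURE_FLAG_BETA: "false"
  API_BASE_URL: "https://api.example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;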

&lt;p&gt;&lt;strong&gt;Kubernetes Secrets: A Secure Alternative&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For handling sensitive data, Kubernetes Secrets are the purpose-built alternative. Secrets are designed to manage small amounts of sensitive data, such as passwords, tokens, or keys. Unlike ConfigMaps, the API treats Secrets as sensitive: values are base64-encoded, they can be encrypted at rest in etcd when an encryption provider is configured, and they travel over TLS between cluster components. Note that base64 encoding is not encryption, so enabling encryption at rest is an important hardening step. Kubernetes Secrets also support tighter access controls, including role-based access control (RBAC), ensuring that only authorized pods and users can access them. By utilizing Secrets, organizations can significantly reduce the risk of accidental exposure and adhere to security best practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Injecting Secrets into Pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes provides flexible mechanisms to inject Secrets into running pods, making it straightforward to use sensitive data in applications. Secrets can be mounted as files within a pod’s filesystem or injected as environment variables. When mounted as files, each key in the Secret becomes a file in the mounted directory whose content is the corresponding value. This is particularly useful for applications that expect configuration to be provided via files. Alternatively, when injected as environment variables, Secrets are read through the application’s runtime environment without touching the filesystem. These methods offer developers multiple ways of integrating Secrets depending on their application’s architecture and requirements.&lt;/p&gt;
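&lt;p&gt;As a sketch, a Secret and a pod consuming it both as an environment variable and as a mounted file might look like this (names and values are hypothetical; values given under stringData are stored base64-encoded by the API server):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: "changeme"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:1.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;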

&lt;p&gt;&lt;strong&gt;Best Practices for Managing Kubernetes Secrets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Effective management of Kubernetes Secrets involves implementing best practices to enhance security. First, regularly audit permissions to ensure that only essential services and users have access to Secrets. Employ automated tools to monitor and routinely rotate the credentials stored in Secrets. Additionally, consider encrypting the data in Secrets before placing it in Kubernetes, using tools like HashiCorp Vault for an added encryption layer. Enforce strict network policies to limit exposure, and integrate Secret lifecycle management into your CI/CD pipeline for seamless updates. By following these practices, the security and integrity of sensitive data within Kubernetes environments can be significantly enhanced.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, while ConfigMaps serve as a convenient tool for managing non-sensitive configurations in Kubernetes, they are ill-suited for handling sensitive data because their contents are stored in plaintext and exposed to anyone with read access. Kubernetes Secrets provide a more secure alternative, offering optional encryption at rest, TLS in transit, and robust access controls. By injecting Secrets into pods via environment variables or mounted files, developers can securely manage sensitive information without compromising application security. Adopting best practices in managing Secrets, such as regular auditing and credential rotation, further enhances the protection of sensitive data, helping to fortify an organization’s Kubernetes deployment.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>configmaps</category>
      <category>secrets</category>
      <category>security</category>
    </item>
    <item>
      <title>Terraform Tree Structure: Boost Cloud Efficiency and Management</title>
      <dc:creator>olamide odufuwa</dc:creator>
      <pubDate>Mon, 17 Feb 2025 02:06:05 +0000</pubDate>
      <link>https://dev.to/olamyde/terraform-tree-structure-boost-cloud-efficiency-and-management-4il0</link>
      <guid>https://dev.to/olamyde/terraform-tree-structure-boost-cloud-efficiency-and-management-4il0</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The complexity of managing cloud infrastructure has grown significantly, prompting the need for efficient tools that simplify this process. Whether it’s deploying applications or configuring resources, the way we handle infrastructure has drastically evolved. One of the remarkable tools in this space is Terraform, known for enabling Infrastructure as Code (IaC). Within Terraform, the concept of tree structure plays a critical role in organizing configuration files and modules effectively. This article delves into the essential elements of Terraform tree structure, exploring its significance and how it contributes to streamlined and scalable infrastructure management. By understanding its framework, developers and IT professionals can optimize their cloud environments, ensuring better performance, reduced errors, and enhanced collaboration across teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Terraform Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At its core, Terraform is an infrastructure-as-code tool developed by HashiCorp that automates infrastructure management across different service providers. At the heart of Terraform lies &lt;em&gt;declarative coding&lt;/em&gt;, allowing users to specify what their infrastructure should look like without detailing the steps to reach that state. With a robust plugin ecosystem, Terraform’s modular and cohesive architecture enables the development of reusable code. This proves invaluable in setting up cloud environments, from low-level components like VPCs and subnets to high-level features such as application deployments. Understanding these elements sets the scene for leveraging the tree structure to its full potential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Importance of Tree Structure in Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The tree structure within Terraform plays a pivotal role in organizing and managing configuration files systematically. This structured approach allows teams to split environments into distinct and easily manageable segments. By adopting a tree-like representation of configurations, developers can effortlessly navigate through complex setups, tracing resource dependencies and shared modules. Such a setup not only simplifies debugging but also enhances &lt;em&gt;code readability&lt;/em&gt; and reusability. By structuring directories correctly, teams can implement consistent workflows and automate deployments, driven by clear and logical directory hierarchies, facilitating seamless collaboration.&lt;/p&gt;
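&lt;p&gt;One common layout that follows this idea separates reusable modules from environment-specific roots (the directory names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform/
├── modules/
│   ├── network/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── compute/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   └── backend.tf
│   ├── staging/
│   └── prod/
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;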

&lt;p&gt;&lt;strong&gt;Optimizing Resource Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Achieving optimal resource management in Terraform relies heavily on configuring an efficient tree structure. The integration of modules allows for reusable, independent blocks of configurations that can be easily integrated into various projects. This reusability is crucial when managing multi-account cloud deployments, where managing state becomes more complex. By adhering to a well-planned tree structure, organizations can manage multiple states across diverse environments effectively, automating the provisioning and management of resources. Moreover, standardized structures aid in instilling best practices, fostering an environment where resources and budgets are optimally utilized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices for Implementing Tree Structures&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To leverage the full potential of Terraform’s tree structure, it’s essential to adhere to certain best practices. Start by defining clear directory namespaces, separating configurations logically using environment-specific paths, and incorporating state files pertinent to specific environments. Ensuring consistency in module development without duplicating code is another key practice. Build a library of reusable modules for frequently used infrastructure patterns. It’s also beneficial to document each part of the structure, enabling easier onboarding of team members and simplifying maintenance tasks. Consistently revisiting and refining your tree structure can provide adaptability and long-term operational efficiency.&lt;/p&gt;
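&lt;p&gt;A reusable module from such a library is then consumed per environment with a short module block (the source path and variable names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "network" {
  source = "../../modules/network"

  environment = "dev"
  cidr_block  = "10.0.0.0/16"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;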

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, the tree structure in Terraform is more than an organizational tool; it’s a blueprint for managing infrastructure with precision and efficiency. As detailed in this article, the adoption of a tree structure facilitates clear organization, modular code development, and optimal resource management, all of which are crucial in modern infrastructure management. By adhering to best practices, professionals can significantly reduce errors, enhance scalability, and foster a cohesive development environment. As cloud deployments grow in complexity, understanding and implementing a well-defined tree structure becomes indispensable for maintaining control and achieving successful outcomes in diverse IT landscapes.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>iac</category>
      <category>infrastructureascode</category>
    </item>
  </channel>
</rss>
