<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ige Adetokunbo Temitayo</title>
    <description>The latest articles on DEV Community by Ige Adetokunbo Temitayo (@igeadetokunbo).</description>
    <link>https://dev.to/igeadetokunbo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F640078%2F5a38a78b-00ef-407c-ac6b-a3d1f96458be.jpeg</url>
      <title>DEV Community: Ige Adetokunbo Temitayo</title>
      <link>https://dev.to/igeadetokunbo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/igeadetokunbo"/>
    <language>en</language>
    <item>
      <title>A Practical Guide to Kubernetes Stateful Backup and Recovery</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Thu, 13 Nov 2025 20:02:50 +0000</pubDate>
      <link>https://dev.to/aws-builders/a-practical-guide-to-kubernetes-stateful-backup-and-recovery-5e9e</link>
      <guid>https://dev.to/aws-builders/a-practical-guide-to-kubernetes-stateful-backup-and-recovery-5e9e</guid>
      <description>&lt;p&gt;Explore methods, tools and best practices for protecting data in databases, memory caches, storage systems and other stateful applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F732rbqwk32ilupvbbw1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F732rbqwk32ilupvbbw1m.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is a powerful, robust platform for orchestrating containerized applications, and it can manage both stateful and stateless workloads. Stateful applications are the harder case, however, because of the need to maintain data consistency, integrity and availability.&lt;/p&gt;

&lt;p&gt;Proper, well-documented and tested backup and recovery strategies are essential so that when there is a disaster, you can easily restore service without any data loss. There are different methods for achieving backup and restore in a Kubernetes environment, so you must ensure the strategy you use aligns with your use case.&lt;/p&gt;

&lt;p&gt;I will walk you through the essential strategies and tools you can adopt to perform backup and recovery in a Kubernetes environment and maintain business continuity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Is Backup and Recovery Essential for Stateful Applications?
&lt;/h2&gt;

&lt;p&gt;Losing data in a stateful application can be catastrophic. Unlike stateless applications, which do not require persistent storage and can be scaled and replaced at any time, stateful applications carry data that must survive pod restarts, so a tested and reliable backup and recovery strategy is essential. Examples of stateful applications include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Databases: MySQL, MongoDB and PostgreSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Caches and message brokers: Redis, RabbitMQ&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search and storage systems: Elasticsearch, Cassandra&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding Kubernetes Stateful Workloads
&lt;/h2&gt;

&lt;p&gt;A StatefulSet in Kubernetes is the workload API object used to manage stateful applications. A StatefulSet provides the capability for the pod to maintain a sticky identity, unique network identity, persistent volumes (PVs) and persistent volume claims (PVCs). A StatefulSet makes it easier to get each pod’s identity, which in turn makes it easier to perform database backup and restore.&lt;/p&gt;
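
&lt;p&gt;As a minimal sketch, a StatefulSet with a volumeClaimTemplates section gives each replica its own stable PVC (the names, image and sizes below are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16.2   #Pin an immutable tag
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        #Each pod gets its own PVC: data-postgres-0, data-postgres-1
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;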

&lt;h2&gt;
  
  
  Backup Strategies for Kubernetes Stateful Applications
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Volume Snapshots&lt;/strong&gt;&lt;br&gt;
Kubernetes provides a standardized way of copying a volume’s contents at a specific time without creating an entirely new volume. Volume snapshots are handy and powerful when database administrators want to quickly restore a previous state. This can also be useful when a maintenance activity needs to be performed on the Kubernetes cluster. The backup will be performed before the activity, and the administrator will perform the restore after the activity.&lt;/p&gt;

&lt;p&gt;How to use volume snapshots: Kubernetes has built-in support for managing volume snapshots through the Container Storage Interface (CSI) Snapshot API, which integrates seamlessly with storage in cloud environments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In AWS, use Amazon EBS snapshots.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Azure, use Azure Managed Disks snapshots.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Google Cloud Platform (GCP), use persistent disk snapshots.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
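
&lt;p&gt;For example, with a CSI driver and the snapshot controller installed, a point-in-time snapshot of a PVC can be requested declaratively (the class and claim names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass       #Must match a VolumeSnapshotClass for your CSI driver
  source:
    persistentVolumeClaimName: data-postgres-0 #The PVC to snapshot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;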

&lt;p&gt;&lt;strong&gt;Volume snapshot tools to consider:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Velero is a popular open source tool used to perform backup, restore and migration of Kubernetes resources such as PVCs and PVs. It also performs scheduled backups and integrates with major cloud providers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also set up a Kubernetes cron job. Create a Kubernetes cron job resource that schedules regular tasks to execute rsync commands. The rsync tool will synchronize or back up data from a PV to a backup location, such as external storage, cloud storage or another PV.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
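
&lt;p&gt;As a rough sketch of the cron job approach, the following schedules a nightly rsync from one PVC to a backup PVC (the image, schedule and claim names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: pv-rsync-backup
spec:
  schedule: "0 2 * * *"          #Daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: rsync
            image: instrumentisto/rsync-ssh   #Any image that ships rsync
            command: ["rsync", "-a", "--delete", "/data/", "/backup/"]
            volumeMounts:
            - name: data
              mountPath: /data
              readOnly: true
            - name: backup
              mountPath: /backup
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: data-postgres-0
          - name: backup
            persistentVolumeClaim:
              claimName: backup-target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;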

&lt;p&gt;&lt;strong&gt;2. Application-Level Backups&lt;/strong&gt;&lt;br&gt;
Stateful applications (particularly databases) require consistent, routine backups. Simply copying the data files can capture an inconsistent, corrupt state, so it is better to use the database’s built-in tooling to perform the backup.&lt;/p&gt;

&lt;p&gt;Database backup tools to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;PostgreSQL: Use pg_dump.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MySQL: Use mysqldump.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Velero for regular backups.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
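
&lt;p&gt;For instance, a consistent logical backup can be taken by running the database’s dump tool inside the pod (pod names and credentials are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#PostgreSQL: logical backup from a StatefulSet pod
kubectl exec postgres-0 -- pg_dump -U postgres mydb &amp;gt; mydb.sql

#MySQL: logical backup
kubectl exec mysql-0 -- mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" mydb &amp;gt; mydb.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;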

&lt;p&gt;&lt;strong&gt;3. Incremental and Differential Backups&lt;/strong&gt;&lt;br&gt;
In cases where the database is very large, performing incremental and differential backups will come in handy. Incremental and differential backups will back up only data that’s changed, saving time, bandwidth and storage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Incremental backups: Capture changes since the last backup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Differential backups: Capture changes since the last full backup.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Incremental and differential backup tools to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Restic supports efficient and encrypted incremental backups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;BorgBackup can be used to back up Kubernetes volumes on a node.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
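
&lt;p&gt;As an illustration, Restic pushes encrypted, deduplicated backups to object storage, and each run after the first uploads only changed data (the bucket name and path are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Initialize the repository once
restic -r s3:s3.amazonaws.com/my-backup-bucket init

#Subsequent runs are incremental
restic -r s3:s3.amazonaws.com/my-backup-bucket backup /data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;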

&lt;p&gt;&lt;strong&gt;4. Offsite and Multiregion Backups&lt;/strong&gt;&lt;br&gt;
To avoid a single point of failure, store backups offsite or in multiple regions so that a failure at the primary location does not take your backups with it. To store backups in the cloud, you can try:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: S3 is a reliable and scalable object storage service that can replicate data to other regions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Google Cloud Storage&lt;/strong&gt;: GCP Cloud Storage integrates with GCP services, stores any amount of data and retrieves it as often as you like.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure Blob Storage&lt;/strong&gt;: It integrates with Microsoft Azure services and provides scalable, cost-efficient object storage in the cloud.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Recovery Strategies for Kubernetes Stateful Applications
&lt;/h2&gt;

&lt;p&gt;Several strategies can be adopted to restore data for a stateful application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Restore From Volume Snapshots&lt;/strong&gt;&lt;br&gt;
The following methods can be used to restore volume snapshots in a Kubernetes environment. Validate the integrity of the snapshot before attempting a restore.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use Velero to perform volume restore.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Kubernetes resources to restore VolumeSnapshot into a new PV.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
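
&lt;p&gt;As a sketch of the second option, a new PVC can be provisioned from an existing VolumeSnapshot via its dataSource field (names, class and size are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data-restored
spec:
  storageClassName: csi-sc              #Must be a CSI-backed storage class
  dataSource:
    name: postgres-data-snapshot        #The VolumeSnapshot to restore from
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi                     #At least the snapshot's size
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;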

&lt;p&gt;&lt;strong&gt;2. Application-Level Restore&lt;/strong&gt;&lt;br&gt;
Use the built-in tool provided by the database to perform a restore. These tools can only restore backups created in a compatible format (e.g., a dump created with mysqldump is replayed with the mysql client).&lt;/p&gt;

&lt;p&gt;Database-specific restore tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;PostgreSQL: Use pg_restore (or psql for plain-text dumps).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MySQL: Use the mysql client to replay a mysqldump file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use Velero to restore from its scheduled backups.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
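
&lt;p&gt;For example, a logical dump can be replayed into a running pod (pod names and credentials are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#PostgreSQL: replay a plain-text dump with psql
kubectl exec -i postgres-0 -- psql -U postgres mydb &amp;lt; mydb.sql

#MySQL: replay a mysqldump file with the mysql client
kubectl exec -i mysql-0 -- mysql -u root -p"$MYSQL_ROOT_PASSWORD" mydb &amp;lt; mydb.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;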

&lt;p&gt;&lt;strong&gt;3. Full Kubernetes Restore With Velero&lt;/strong&gt;&lt;br&gt;
Velero can both back up and restore Kubernetes resources. You can use Velero to restore Kubernetes resources such as StatefulSets, ConfigMaps, Kubernetes secrets, PVs and PVCs.&lt;/p&gt;

&lt;p&gt;Once all the resources have been successfully restored, you can reattach the PV.&lt;/p&gt;
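
&lt;p&gt;With the Velero CLI, a full namespace backup and a later restore might look like this (the backup name and namespace are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Back up everything in the database namespace, including PV data
velero backup create db-backup --include-namespaces database

#Restore the namespace from that backup
velero restore create --from-backup db-backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;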

&lt;h2&gt;
  
  
  Recommended Best Practices for Backup and Recovery
&lt;/h2&gt;

&lt;p&gt;When establishing a backup and recovery strategy, make sure it includes the following best practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Perform regular, scheduled backups and retention policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Organize periodic testing (backup restores) of the backups to validate their integrity and authenticity. This can be automated and generate reports for analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitor and send alerts for backup failures. You can use tools like Nagios or Datadog to perform backup monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Document your recovery procedures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use encryption on backups for security.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
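
&lt;p&gt;As one way to combine scheduled backups with a retention policy, Velero accepts a cron expression and a time-to-live (the schedule, namespace and TTL are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Daily backup at 01:00, automatically expiring after 30 days
velero schedule create daily-db-backup \
  --schedule="0 1 * * *" \
  --include-namespaces database \
  --ttl 720h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;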

&lt;h2&gt;
  
  
  Disaster Recovery Automation Tools
&lt;/h2&gt;

&lt;p&gt;The following tools can be used for automating backup and restore in a Kubernetes environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Velero is an open source tool for backing up and restoring Kubernetes workloads. It also has support for cloud storage and snapshots.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stash is a native Kubernetes disaster recovery solution for backing up and restoring volumes and databases in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ark, created by Heptio and since renamed Velero, is an open source tool for backing up and restoring Kubernetes clusters and PVs. It allows you to back up all or part of the resources in your Kubernetes cluster, including PVs, deployments, tags and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It is very important to perform regular disaster recovery (DR) drills to ensure business continuity in a disaster situation. You can also run regular exercises such as chaos engineering on your Kubernetes cluster to simulate failures and validate your infrastructure’s recovery process.&lt;/p&gt;

&lt;p&gt;By implementing a strategy for the backup and recovery process that aligns with your use case, leveraging StatefulSets, PV snapshots and PVCs; using backup solutions such as Velero; and maintaining a backup and restore policy, you can ensure that your stateful applications remain resilient to data loss or corruption.&lt;/p&gt;

&lt;p&gt;A well-architected backup and recovery strategy not only mitigates the risk associated with data loss but also enhances the overall reliability and trustworthiness of your Kubernetes-managed applications. It is an investment in your infrastructure, ensuring that operations continue running smoothly even when faced with unexpected disruptions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was first published on &lt;a href="https://thenewstack.io/a-practical-guide-to-kubernetes-stateful-backup-and-recovery/" rel="noopener noreferrer"&gt;https://thenewstack.io/a-practical-guide-to-kubernetes-stateful-backup-and-recovery/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>database</category>
      <category>tutorial</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Simplify Kubernetes Security With Kyverno and OPA Gatekeeper</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Sat, 21 Jun 2025 05:08:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/simplify-kubernetes-security-with-kyverno-and-opa-gatekeeper-11o2</link>
      <guid>https://dev.to/aws-builders/simplify-kubernetes-security-with-kyverno-and-opa-gatekeeper-11o2</guid>
      <description>&lt;p&gt;Here’s how these tools can make Kubernetes security easier and help you avoid common pitfalls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjat7pktjvxn7kcwivy4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjat7pktjvxn7kcwivy4k.png" alt="Image from haalkab on Pixabay." width="720" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is hands-down the go-to tool for managing containerized applications, yet it comes with one specific challenge: security! With its complexity, ensuring your Kubernetes deployment is secure and aligned with best practices can be overwhelming.&lt;/p&gt;

&lt;p&gt;But there’s good news. Tools like Kyverno and OPA Gatekeeper are here to help you protect your clusters. These policy enforcement engines make sure your Kubernetes resources are safe and compliant before they even enter your cluster. Sounds like a game-changer, right?&lt;/p&gt;

&lt;p&gt;Here’s how these tools can simplify your Kubernetes security setup and help you avoid common pitfalls, like running containers as root or using images from dubious sources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Kubernetes Security Matters&lt;/strong&gt;&lt;br&gt;
Kubernetes is a powerhouse for orchestration, but without the right controls, you’re leaving the door open to potential security risks. From untrusted images to excessive resource allocation, the risks can pile up fast. That’s where policy engines come in. They act as guardrails, creating a balance between security and developer autonomy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter Kyverno and OPA Gatekeeper&lt;/strong&gt;&lt;br&gt;
Both Kyverno and OPA Gatekeeper are designed to lock down your Kubernetes environment without adding unnecessary complexity. Think of them as your Kubernetes security bouncers. They validate your configurations, ensure compliance and stop vulnerabilities in their tracks before they get into your system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spotlight on Kyverno&lt;/strong&gt;&lt;br&gt;
Kyverno is built specifically for Kubernetes, and it’s simple to use. Policies are written in YAML, a human-friendly data serialization language, with no extra programming language required. Whether you’re enforcing namespaces, applying cluster-wide rules or testing policies with the CLI tool before deployment, Kyverno has you covered. And the bonus? You get reports on compliance right out of the box. Some key highlights of Kyverno include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Easy-to-write YAML policies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Native integration with Kubernetes tooling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A CLI tool to preview policies before rolling them out&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Policy enforcement across namespaces and clusters&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
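
&lt;p&gt;For example, the Kyverno CLI can evaluate a policy against a manifest locally, before anything reaches the cluster (the file names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Dry-run a policy against a resource manifest
kyverno apply disallow-latest-tag.yaml --resource nginx-latest.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;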

&lt;p&gt;&lt;strong&gt;Built-In Compliance Reporting&lt;/strong&gt;&lt;br&gt;
Kyverno doesn’t just enforce security; it empowers organizations to understand and adapt their policies with clarity and precision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How To Install Kyverno in Your Kubernetes Cluster&lt;/strong&gt;&lt;br&gt;
You will need Helm installed on your workstation; you will use it to install Kyverno.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Started With Kyverno&lt;/strong&gt;&lt;br&gt;
Why use Helm to install Kyverno? It’s:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Better-suited for production&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easier to install and upgrade packages or software in your cluster&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Helm (if not already installed)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#To install brew on macOS (with Homebrew)
brew install helm

#To install brew on Linux 
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

#To install brew on Windows (with Chocolatey)
choco install kubernetes-helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Add the Kyverno Helm Repo&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7cy15ecu5wddtth72z0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7cy15ecu5wddtth72z0.png" alt="Image description" width="720" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Install Kyverno&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhaokcvt35q2dagdd0clz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhaokcvt35q2dagdd0clz.png" alt="Image description" width="720" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Verify the Kyverno Installation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n kyverno
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffa15hty15x6ok3a7pe68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffa15hty15x6ok3a7pe68.png" alt="Image description" width="634" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example of Kyverno Policy&lt;/strong&gt;&lt;br&gt;
Use case 1: Prevent users from deploying containers that use the :latest tag.&lt;/p&gt;

&lt;p&gt;If an issue surfaces in that image, it is very difficult to track down or roll back, because you cannot be sure every instance is running the same version. The image might also pull in dependencies that are difficult to trace or fix.&lt;/p&gt;

&lt;p&gt;Copy the policy below into a file named disallow-latest-tag.yaml and apply it to your cluster with the following command. The policy prevents users from deploying images tagged :latest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f disallow-latest-tag.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6okdmrg20cx3l1wb9b7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6okdmrg20cx3l1wb9b7.png" alt="Image description" width="634" height="193"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
  annotations:
    policies.kyverno.io/title: Disallow Latest Tag
    policies.kyverno.io/category: Best Practices
    policies.kyverno.io/minversion: 1.6.0
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: &amp;gt;-
      The ':latest' tag is mutable and can lead to unexpected errors if the
      image changes. A best practice is to use an immutable tag that maps to
      a specific version of an application Pod. This policy validates that the image
      specifies a tag and that it is not called `latest`.
spec:
  validationFailureAction: Enforce   #Enforce blocks noncompliant resources; Audit only reports them
  background: true
  rules:
  - name: require-image-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "An image tag is required."
      foreach:
        - list: "request.object.spec.containers"
          pattern:
            image: "*:*"
        - list: "request.object.spec.initContainers"
          pattern:
            image: "*:*"
        - list: "request.object.spec.ephemeralContainers"
          pattern:
            image: "*:*"
  - name: validate-image-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Using a mutable image tag e.g. 'latest' is not allowed."
      foreach:
        - list: "request.object.spec.containers"
          pattern:
            image: "!*:latest"
        - list: "request.object.spec.initContainers"
          pattern:
            image: "!*:latest"
        - list: "request.object.spec.ephemeralContainers"
          pattern:
            image: "!*:latest"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Applying nginx with the latest tag&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copy the manifest below into a file named nginx-latest.yaml and apply it to your cluster with the following command. The manifest uses the nginx:latest image, so the Kyverno policy should prevent you from applying it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nginx-latest-tag.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-latest
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest  #This will trigger the policy to block
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see below that we are unable to create the NGINX pod with the :latest image tag. This is the essence of using a policy engine like Kyverno to enforce security best practices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqqzq5x2l2ia6teiwtws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqqzq5x2l2ia6teiwtws.png" alt="Image description" width="720" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By enforcing policies such as disallowing mutable image tags (latest), teams can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Prevent unintentional deployments of unversioned or unstable images&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improve traceability and reproducibility&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Strengthen the overall security posture of the cluster&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What Is OPA Gatekeeper?&lt;/strong&gt;&lt;br&gt;
Open Policy Agent (OPA) Gatekeeper is a policy enforcement tool tailored to work with Kubernetes. Policies are written in Rego, OPA’s declarative query language, to define rules and enforce security policies dynamically. It allows you to write policies that check whether something in your Kubernetes setup breaks a defined rule.&lt;/p&gt;

&lt;p&gt;OPA Gatekeeper acts as a Kubernetes admission controller, evaluating policies before the resources are deployed and helping to ensure compliance from the beginning.&lt;/p&gt;

&lt;p&gt;Below is an example of a simple Rego rule to ensure that all namespaces in your Kubernetes cluster have a team label:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package kubernetes.admission

violation[{"msg": "Namespace must have a 'team' label"}] {
    input.request.object.kind == "Namespace"
    not has_label(input.request.object.metadata.labels, "team")
}

has_label(labels, label) {
    labels[label]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Features of OPA Gatekeeper&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The policy logic is kept separate from the constraints, making it reusable across different policies. The policy logic, written in Rego, defines what should be checked (for example: “Namespace must have a team label”), while constraints tell Gatekeeper where and when to apply that logic (for example: “Apply this rule to all namespaces in this cluster”).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can scan existing resources for violations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Comparing Kyverno and OPA Gatekeeper&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;| Feature                   | Kyverno     | OPA Gatekeeper      |
|---------------------------|-------------|---------------------|
| Policy Language           | YAML        | Rego                |
| Complexity                | Simple      | Complex             |
| Mutation Support          | Yes         | No                  |
| Custom Resource Support   | Yes         | Limited             |
| Flexibility               | Moderate    | High                |
| Learning Curve            | Low         | High                |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Choosing Between Kyverno and OPA Gatekeeper&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The choice between Kyverno and OPA Gatekeeper depends on your specific needs and technical expertise:&lt;/p&gt;

&lt;p&gt;Choose Kyverno if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You prefer a Kubernetes-native approach with policies defined as CRDs using YAML.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You and your team are familiar with Kubernetes concepts and YAML.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need a simpler and more intuitive way to define common Kubernetes security policies.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choose OPA Gatekeeper if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You and your organization have an existing expertise in Rego or are willing to invest in learning it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need to express highly complex and custom policy logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need to use a more mature and widely adopted policy engine with broader community support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You require a general-purpose policy engine that can be used across multiple systems. The underlying Open Policy Agent can enforce policies not only in Kubernetes environments but also across systems such as microservices, CI/CD pipelines, cloud platforms and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both Kyverno and OPA Gatekeeper, when implemented, can enforce security best practices such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enforcing namespace-based resource quotas.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Restricting privileged container execution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Requiring specific labels and annotations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
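
&lt;p&gt;As a Kyverno-flavored sketch of the second practice, the following policy blocks privileged containers; the conditional =() anchors mean the check applies only when securityContext.privileged is actually set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
  - name: no-privileged
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          containers:
          - =(securityContext):
              =(privileged): "false"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;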

&lt;p&gt;&lt;strong&gt;Install OPA Gatekeeper in your Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You will need Helm installed on your workstation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Add Gatekeeper Helm repo:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19p5cpev6ulosgxkvgp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19p5cpev6ulosgxkvgp9.png" alt="Image description" width="720" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Install Gatekeeper&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system \
  --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco33cj7fwlnyst3tl7cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco33cj7fwlnyst3tl7cb.png" alt="Image description" width="634" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh67xpy0n1equa8839bk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh67xpy0n1equa8839bk.png" alt="Image description" width="634" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example of an OPA Gatekeeper Policy&lt;/strong&gt;&lt;br&gt;
Use case 1: Prevent users from deploying containers that use the :latest tag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a Constraint Template (It Defines the Logic)&lt;/strong&gt;&lt;br&gt;
Copy the ConstraintTemplate manifest shown after the command into a file named disallow-latest-tag-constraint-template.yaml, then apply it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f disallow-latest-tag-constraint-template.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdisallowlatesttag
spec:
  crd:
    spec:
      names:
        kind: K8sDisallowLatestTag
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdisallowlatesttag

        # Function to check containers for "latest" tag
        check_container(container) = msg {
          endswith(container.image, ":latest")
          msg := sprintf("Container '%s' is using a disallowed image tag 'latest'.", [container.name])
        }

        # Violations for regular containers in Pods
        violation[{"msg": msg}] {
          input.review.object.kind == "Pod"
          container := input.review.object.spec.containers[_]
          msg := check_container(container)
        }

        # Violations for init containers in Pods
        violation[{"msg": msg}] {
          input.review.object.kind == "Pod"
          container := input.review.object.spec.initContainers[_]
          msg := check_container(container)
        }

        # Violations for containers in Deployments
        violation[{"msg": msg}] {
          input.review.object.kind == "Deployment"
          container := input.review.object.spec.template.spec.containers[_]
          msg := check_container(container)
        }

        # Violations for init containers in Deployments
        violation[{"msg": msg}] {
          input.review.object.kind == "Deployment"
          container := input.review.object.spec.template.spec.initContainers[_]
          msg := check_container(container)
        }

        # Violations for containers in StatefulSets
        violation[{"msg": msg}] {
          input.review.object.kind == "StatefulSet"
          container := input.review.object.spec.template.spec.containers[_]
          msg := check_container(container)
        }

        # Violations for init containers in StatefulSets
        violation[{"msg": msg}] {
          input.review.object.kind == "StatefulSet"
          container := input.review.object.spec.template.spec.initContainers[_]
          msg := check_container(container)
        }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
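&lt;p&gt;The heart of the Rego above is a single string check on the image reference. As a quick sanity test outside the cluster, the same logic can be mirrored in plain Python (a hypothetical helper for local experimentation, not part of Gatekeeper). Note that an image with no tag at all also resolves to latest at pull time; like the Rego rule, this simple endswith check does not catch that case:&lt;/p&gt;

```python
def check_container(container):
    """Mirror of the Rego check_container rule: flag images using the 'latest' tag."""
    if container["image"].endswith(":latest"):
        return f"Container '{container['name']}' is using a disallowed image tag 'latest'."
    return None

def violations(pod_spec):
    """Collect violation messages for regular and init containers, as the Rego rules do."""
    msgs = []
    for field in ("containers", "initContainers"):
        for container in pod_spec.get(field, []):
            msg = check_container(container)
            if msg is not None:
                msgs.append(msg)
    return msgs
```

For example, a spec containing nginx:latest produces one violation message, while a pinned tag such as nginx:1.27 produces none.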



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gslk1bycdnsec842g75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gslk1bycdnsec842g75.png" alt="Image description" width="720" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a Constraint (It Activates and Applies the Template)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copy the Constraint manifest shown after the command into a file named disallow-latest-tag-gatekeeper.yaml, then apply it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f disallow-latest-tag-gatekeeper.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowLatestTag
metadata:
  name: disallow-latest-tag
spec:
  enforcementAction: deny
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet", "DaemonSet"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
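&lt;p&gt;The match.kinds section determines which admission requests the constraint inspects at all. The selection amounts to a group/kind lookup, sketched here in Python purely for illustration (this is not Gatekeeper code):&lt;/p&gt;

```python
# Mirrors spec.match.kinds from the constraint above.
MATCH_KINDS = [
    {"apiGroups": [""], "kinds": ["Pod"]},
    {"apiGroups": ["apps"], "kinds": ["Deployment", "StatefulSet", "DaemonSet"]},
]

def constraint_applies(api_group, kind):
    """Return True when the object's API group and kind are covered by the constraint."""
    return any(
        api_group in entry["apiGroups"] and kind in entry["kinds"]
        for entry in MATCH_KINDS
    )
```

Pods live in the core (empty) API group, which is why the first entry uses an empty string while Deployments, StatefulSets and DaemonSets sit under apps.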



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbat1s3q6uscdz69i0pq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbat1s3q6uscdz69i0pq5.png" alt="Image description" width="710" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Apply NGINX with the latest Tag&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copy the Deployment manifest shown after the command into a file named nginx-latest-tag.yaml, then apply it to your cluster. Because the manifest uses the image nginx:latest, the Gatekeeper Rego policy should block it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nginx-latest-tag.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-latest
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest  #This will trigger the policy to block
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the screenshot below shows, the Rego policy prevents you from deploying the NGINX container with the latest tag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fom7accd4u46ok00ni9ug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fom7accd4u46ok00ni9ug.png" alt="Image description" width="720" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Kyverno and OPA Gatekeeper are useful tools for keeping your Kubernetes workloads secure. Kyverno stands out with its simple, YAML-based policies and Kubernetes-native design, making it easy to use. On the other hand, OPA Gatekeeper brings serious flexibility with its Rego language, which is adept at handling complex setups or working across multiple platforms. Picking the right one really comes down to what your team needs, your experience level and your security goals.&lt;/p&gt;

&lt;p&gt;Both tools help developers move quickly and confidently while staying within the rules, making sure security, compliance and best practices are baked into everything without slowing anyone down.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was first published on &lt;a href="https://thenewstack.io/simplify-kubernetes-security-with-kyverno-and-opa-gatekeeper/" rel="noopener noreferrer"&gt;https://thenewstack.io/simplify-kubernetes-security-with-kyverno-and-opa-gatekeeper/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Secret management using Pulumi ESC SDK and Azure Key Vault</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Mon, 07 Apr 2025 03:04:57 +0000</pubDate>
      <link>https://dev.to/igeadetokunbo/secret-management-using-pulumi-esc-sdk-and-azure-key-vault-1glg</link>
      <guid>https://dev.to/igeadetokunbo/secret-management-using-pulumi-esc-sdk-and-azure-key-vault-1glg</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/pulumi"&gt;Pulumi Deploy and Document Challenge&lt;/a&gt;: Shhh, It's a Secret!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built an automated infrastructure provisioning solution using Pulumi and Azure. The project leverages Pulumi's Infrastructure as code (IaC) to deploy Azure resources such as a virtual machine (VM) and network and manage secrets with Pulumi ESC. The infrastructure provisions a virtual network, a subnet, a public IP, and a Linux-based VM and integrates with Azure Key Vault to handle credentials securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Live Demo Link
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lxlq6dgk50ehb79h3so.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lxlq6dgk50ehb79h3so.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Repo
&lt;/h2&gt;

&lt;p&gt;This project can be replicated in your own Azure environment by following the steps in the project repository.&lt;/p&gt;

&lt;p&gt;The full code can be found &lt;a href="https://github.com/ExitoLab/azure_key_vault_pulumi_esc_sdk_example" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created a VM, stored the secrets in Azure Key Vault, replicated them into Pulumi ESC via the ESC SDK and used the credentials in the provisioning script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjleyhhp5ybvyap4yy1wu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjleyhhp5ybvyap4yy1wu.png" alt="Image description" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey
&lt;/h2&gt;

&lt;p&gt;The process of building this solution was an enlightening experience. Initially, I faced some challenges with managing Azure resources using Pulumi, especially when dealing with multiple secret management systems. One of the major hurdles was working with Pulumi ESC for secure secret handling and ensuring that the right credentials were passed into the VM during provisioning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Here’s how I overcame them:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Challenge with Azure Key Vault:&lt;/strong&gt; Fetching secrets securely from Azure Key Vault required the use of Pulumi's integration with the azure_native package and keyvault.get_secret. The complexity came from ensuring that these secrets were properly passed into the virtual machine for secure authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ESC Integration:&lt;/strong&gt; I struggled with integrating Pulumi ESC effectively, but after reading the documentation and experimenting, I successfully automated secret management with Pulumi ESC, creating or updating environments as needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What I learned:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;How to work with Pulumi to provision Infrastructure as Code (IaC) in Azure.&lt;/li&gt;
&lt;li&gt;How to use Pulumi ESC for secure secret management, and how it integrates with other tools.&lt;/li&gt;
&lt;li&gt;The importance of correctly structuring and testing infrastructure code to ensure repeatability and security.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using Pulumi ESC
&lt;/h2&gt;

&lt;p&gt;I utilized Pulumi ESC (Environments, Secrets and Configuration) to securely manage sensitive data such as admin credentials. By storing sensitive information in Azure Key Vault, I was able to pull these secrets into the Pulumi project and inject them into the virtual machine during provisioning.&lt;/p&gt;

&lt;p&gt;This is the snippet of code that retrieves credentials from &lt;code&gt;Azure Key Vault&lt;/code&gt; and inserts them into &lt;code&gt;Pulumi ESC&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# === Get secrets from Azure Key Vault ===
key_vault_id = f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.KeyVault/vaults/{key_vault_name}"

admin_username_secret = azure.keyvault.get_secret(name="adminUsername", key_vault_id=key_vault_id)
admin_password_secret = azure.keyvault.get_secret(name="adminPassword", key_vault_id=key_vault_id)

admin_username = admin_username_secret.value
admin_password = admin_password_secret.value

# === Upload secrets to ESC ===
env_def.values.additional_properties = {
    "adminUsername": {"fn::secret": admin_username},
    "adminPassword": {"fn::secret": admin_password},
}

client.update_environment(org_name, project_name, esc_env_name, env_def)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
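&lt;p&gt;The fn::secret wrapping shown above can be factored into a small helper so that every value pushed to ESC is consistently marked as secret. The function below is a hypothetical convenience, not part of the ESC SDK:&lt;/p&gt;

```python
def as_esc_secrets(values):
    """Wrap plain key/value pairs in the fn::secret form expected by Pulumi ESC."""
    return {key: {"fn::secret": value} for key, value in values.items()}
```

With it, the upload step becomes env_def.values.additional_properties = as_esc_secrets({"adminUsername": admin_username, "adminPassword": admin_password}).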



&lt;h3&gt;
  
  
  Here’s how Pulumi ESC helped:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Secret Management: Instead of manually entering sensitive information into my configuration files, Pulumi ESC provided a secure and automated way to fetch secrets.&lt;/li&gt;
&lt;li&gt;Environment Definitions: I used ESC to define an environment where these secrets could be securely updated. This is essential for managing different environments (e.g., development, staging, production) securely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3deic6t2k3dv7ue8g3rs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3deic6t2k3dv7ue8g3rs.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Special Thanks:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To the Pulumi community for their ESC SDK examples and the ESC team for their exceptional documentation 💡&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>pulumichallenge</category>
      <category>webdev</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Build a Serverless Todo App With AWS, Pulumi, and Python</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Mon, 10 Feb 2025 06:32:10 +0000</pubDate>
      <link>https://dev.to/igeadetokunbo/build-a-serverless-todo-app-with-aws-pulumi-and-python-2gp6</link>
      <guid>https://dev.to/igeadetokunbo/build-a-serverless-todo-app-with-aws-pulumi-and-python-2gp6</guid>
      <description>&lt;p&gt;Try this step-by-step guide to build and deploy a scalable serverless app that’s accessible through a RESTful API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9znnw896pqrd1f5y0kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9znnw896pqrd1f5y0kt.png" alt="Image description" width="720" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers charged with building modern, scalable applications often face the burden of having to learn new skills, but there are alternatives that can speed and simplify their work. This tutorial provides a practical, hands-on guide to deploying a serverless app that’s accessible through a RESTful API. Following along will give you valuable skills in serverless architecture, Infrastructure as Code (IaC) and API development, empowering you to create efficient and cost-effective solutions.&lt;/p&gt;

&lt;p&gt;In this tutorial, I’ll walk through a step-by-step process for creating a serverless application using Amazon Web Services (AWS) Lambda, Docker and AWS API Gateway, all orchestrated with Pulumi using Python. By the end of this guide, you’ll have a deployed serverless application that can be accessed via a RESTful API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Pulumi and Serverless for This Project?
&lt;/h2&gt;

&lt;p&gt;Pulumi is an open-source Infrastructure as Code (IaC) tool that allows developers to define and manage infrastructure using their favorite programming languages, such as TypeScript, JavaScript, Python, Go or C#.&lt;/p&gt;

&lt;p&gt;By using Pulumi to create AWS Lambda, Docker, and API Gateway services, developers can leverage their existing knowledge to build and deploy a highly scalable serverless solution that can handle traffic without needing additional infrastructure-creating tools.&lt;/p&gt;

&lt;p&gt;Serverless computing allows developers to manage and run application code without the need to provision or manage servers. By using this model, developers can focus mainly on their application code without worrying about the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;AWS API Gateway is a fully managed service that helps developers secure APIs. It also handles rate limiting, routing and scaling API requests. AWS Lambda is a serverless computing service that allows developers to run code without the need to provision or manage servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The todo app will have the following features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Todo: This action adds a new todo item.&lt;/li&gt;
&lt;li&gt;Read a Todo: This action reads existing todo items.&lt;/li&gt;
&lt;li&gt;Update a Todo: This action updates a todo item.&lt;/li&gt;
&lt;li&gt;Delete a Todo: This action deletes a todo item.&lt;/li&gt;
&lt;/ol&gt;
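&lt;p&gt;With an API Gateway proxy integration, those four actions typically map to HTTP methods inside the Lambda handler. The routing skeleton below is a simplified, hypothetical sketch of that mapping; the full DynamoDB-backed handler lives in the project repository:&lt;/p&gt;

```python
def route(event):
    """Map an API Gateway proxy event's HTTP method to a todo action."""
    actions = {
        "POST": "create",    # Create a todo
        "GET": "read",       # Read todos
        "PUT": "update",     # Update a todo
        "DELETE": "delete",  # Delete a todo
    }
    return actions.get(event.get("httpMethod", ""), "unsupported")
```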

&lt;p&gt;Now that you know what the project will do, follow this step-by-step guide to create a serverless todo application with Docker, API Gateway, AWS Lambda and Pulumi, using Python.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Started&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To begin, ensure you have done the following on your development machine.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the Pulumi command-line interface (CLI)&lt;/li&gt;
&lt;li&gt;Install Python 3.7 or later.&lt;/li&gt;
&lt;li&gt;Install the AWS CLI.&lt;/li&gt;
&lt;li&gt;If you don’t already have an AWS account, set one up.&lt;/li&gt;
&lt;li&gt;Configure the AWS CLI with your credentials to manage your AWS services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Pulumi&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, ensure you have Pulumi installed in your development environment. Pulumi can be installed on Linux, macOS or Windows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On Linux: curl -fsSL &lt;a href="https://get.pulumi.com" rel="noopener noreferrer"&gt;https://get.pulumi.com&lt;/a&gt; | sh&lt;/li&gt;
&lt;li&gt;On macOS (using Brew): brew install pulumi/tap/pulumi&lt;/li&gt;
&lt;li&gt;On Windows: Download and run the Pulumi installer (or try one of the other methods on that page).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Set Up Your Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, set up your environment and install the required Python dependencies:&lt;/p&gt;

&lt;p&gt;Create a Pulumi account to store your stack state, if you want to use Pulumi for state management.&lt;br&gt;
Install dependencies: Install Python and pip on your workstation, since you will use Python to provision infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create a New Pulumi Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a folder called todo_pulumi_docker_aws_lambda_api_gateway, and inside it create a todo-app folder for the Lambda project.&lt;/p&gt;

&lt;p&gt;Initialize a new Pulumi project by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd todo_pulumi_docker_aws_lambda_api_gateway/todo-app
pulumi new aws-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow the prompts to set up your project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Install Dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a requirements.txt file in the project root todo-app folder with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi&amp;gt;=3.0.0,&amp;lt;4.0.0
pulumi-aws&amp;gt;=6.0.2,&amp;lt;7.0.0
pulumi_docker==3.4.0
setuptools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then install the dependencies using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5: Create Your Lambda Function&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a folder called lambda_function; it will hold the Lambda handler code in a file named lambda.py. The Pulumi program below provisions the supporting infrastructure: a DynamoDB table, an IAM role, the Lambda function built from a Docker image and an API Gateway REST API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pulumi, json
import pulumi_aws as aws
from pulumi_docker import Image, DockerBuild
import pulumi_docker as docker

from pulumi import Config

# Create a config object to access configuration values
config = pulumi.Config()

docker_image = config.get("docker_image")
environment = config.get("environment")
region = config.get("region")

aws.config.region = region

# First, create the DynamoDB table with just `id` as the primary key
dynamodb_table = aws.dynamodb.Table(
    f"todo-{environment}",
    name=f"todo-{environment}",
    hash_key="id",  # Only `id` as the partition key
    attributes=[
        aws.dynamodb.TableAttributeArgs(
            name="id",
            type="S"  # `S` for string type (use appropriate type for `id`)
        ),
    ],
    billing_mode="PAY_PER_REQUEST",  # On-demand billing mode
    tags={
        "Environment": environment,
        "Created_By": "Pulumi"
    }
)

# Create an IAM Role for the Lambda function
# Create Lambda execution role
lambda_role = aws.iam.Role(
    "lambdaExecutionRole",
    assume_role_policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
        }]
    })
)

# Create inline policy for the role
dynamodb_policy = aws.iam.RolePolicy(
    f"lambdaRolePolicy-{environment}",
    role=lambda_role.id,
    policy=pulumi.Output.json_dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "dynamodb:Scan",
                    "dynamodb:PutItem",
                    "dynamodb:GetItem",
                    "dynamodb:UpdateItem",
                    "dynamodb:DeleteItem",
                    "dynamodb:Query"
                ],
                "Resource": [
                    dynamodb_table.arn,
                    pulumi.Output.concat(dynamodb_table.arn, "/*")
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "arn:aws:logs:*:*:*"
            }
        ]
    })
)

# Create a Lambda function using the Docker image
lambda_function = aws.lambda_.Function(
    f"my-serverless-function-{environment}",
    role=lambda_role.arn,
    package_type="Image",
    image_uri=docker_image,
    memory_size=512,
    timeout=30,
    opts=pulumi.ResourceOptions(depends_on=[lambda_role])
)

# Create an API Gateway REST API
api = aws.apigateway.RestApi(f"my-api-{environment}",
    description="My serverless API")

# Create a catch-all resource for the API
proxy_resource = aws.apigateway.Resource(f"proxy-resource-{environment}",
    rest_api=api.id,
    parent_id=api.root_resource_id,
    path_part="{proxy+}")

# Create a method for the proxy resource that allows any method
method = aws.apigateway.Method(f"proxy-method-{environment}",
    rest_api=api.id,
    resource_id=proxy_resource.id,
    http_method="ANY",
    authorization="NONE")

# Integration of Lambda with API Gateway using AWS_PROXY
integration = aws.apigateway.Integration(f"proxy-integration-{environment}",
    rest_api=api.id,
    resource_id=proxy_resource.id,
    http_method=method.http_method,
    integration_http_method="POST",
    type="AWS_PROXY",
    uri=lambda_function.invoke_arn)  # Ensure lambda_function is defined

lambda_permission = aws.lambda_.Permission(f"api-gateway-lambda-permission-{environment}",
    action="lambda:InvokeFunction",
    function=lambda_function.name,
    principal="apigateway.amazonaws.com",
    source_arn=pulumi.Output.concat(api.execution_arn, "/*/*")
)

# Deployment of the API, explicitly depends on method and integration to avoid timing issues
deployment = aws.apigateway.Deployment(f"api-deployment-{environment}",
    rest_api=api.id,
    stage_name="dev",
    opts=pulumi.ResourceOptions(
        depends_on=[method, integration, lambda_permission]  # Ensures these are created before deployment
    )
)

# Output the API Gateway stage URL
api_invoke_url = pulumi.Output.concat(
    "https://", api.id, ".execute-api.", region, ".amazonaws.com/", deployment.stage_name
)

pulumi.export("api_invoke_url", api_invoke_url)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
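&lt;p&gt;The exported api_invoke_url simply concatenates the REST API ID, region and stage into the standard API Gateway URL shape. In plain Python the same construction looks like this (illustrative only; the Pulumi program uses Output.concat because the values resolve asynchronously):&lt;/p&gt;

```python
def invoke_url(api_id, region, stage):
    """Build an API Gateway invoke URL, matching the pulumi.Output.concat export."""
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}"
```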



&lt;p&gt;&lt;strong&gt;Step 6: Create a Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file named Dockerfile inside the lambda_function directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage 1: Build dependencies on Ubuntu
FROM ubuntu:22.04 as builder

WORKDIR /app

# Install Python and pip
RUN apt-get update &amp;amp;&amp;amp; \
    apt-get install -y python3 python3-pip &amp;amp;&amp;amp; \
    apt-get clean &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*

# Copy and install dependencies into a local directory
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt -t /app/python

# Stage 2: Lambda-compatible final image
FROM public.ecr.aws/lambda/python:3.10

# Copy dependencies from the builder stage
COPY --from=builder /app/python ${LAMBDA_TASK_ROOT}

# Copy Lambda function code
COPY lambda.py ${LAMBDA_TASK_ROOT}/lambda.py

# Set the Lambda handler
CMD ["lambda.handler"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7: Create a GitHub Action to push Docker Image to ECR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file named docker-publish.yml in the folder .github/workflows. This file will contain the GitHub Actions code to publish and push the Docker image to the AWS Elastic Container Registry (ECR).&lt;/p&gt;

&lt;p&gt;Add the following secrets to the repository.&lt;/p&gt;

&lt;p&gt;See the screenshot below for an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3ns5gcwa9kxdoxkxwfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3ns5gcwa9kxdoxkxwfo.png" alt="Image description" width="720" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the workflow to deploy the Docker Image to AWS ECR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Docker Push

on:
  push:
    paths:
      - 'todo-app/lambda_function/**'
    branches:
      - main

jobs:
  push-app-ecr:
    name: Deploy to ECR
    runs-on: ubuntu-latest
    env:
      AWS_REGION: ${{ secrets.AWS_REGION }}
      TARGET_ENVIRONMENT: dev
      REGISTRIES: ${{ secrets.REGISTRIES }}
      ECR_REGISTRY: ${{ secrets.ECR_REGISTRY }}

    permissions:
      id-token: write
      contents: read
      pull-requests: write
      repository-projects: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set ENV variables to get the repo name
        run: echo "REPO_NAME=${GITHUB_REPOSITORY#$GITHUB_REPOSITORY_OWNER/}" &amp;gt;&amp;gt; $GITHUB_ENV

      - name: Use the custom ENV variable
        run: echo $REPO_NAME
        env:
          REPO_NAME: $REPO_NAME

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4 # More information on this action can be found below in the 'AWS Credentials' section
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}
          role-session-name: GithubActionsSession

      - name: Install AWS CLI
        run: |
          sudo apt-get update
          sudo apt-get install -y awscli

      - name: Check if ECR repository exists
        id: check_ecr_repo
        run: |
          aws ecr describe-repositories --repository-names ${{ env.REPO_NAME }} --region ${{ env.AWS_REGION }} &amp;gt; /dev/null || echo "exists=false" &amp;gt;&amp;gt; $GITHUB_OUTPUT

      - name: Create ECR repository if it doesn't exist
        if: steps.check_ecr_repo.outputs.exists == 'false'
        run: |
          aws ecr create-repository --repository-name ${{ env.REPO_NAME }} --region ${{ env.AWS_REGION }}

      - name: Show ECR repository details
        run: |
          aws ecr describe-repositories --repository-names ${{ env.REPO_NAME }} --region ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
        with:
          registries: "${{ env.REGISTRIES }}"

      - name: Set short sha
        id: sha_short
        run: echo "sha_short=$(git rev-parse --short HEAD)" &amp;gt;&amp;gt; $GITHUB_OUTPUT

      - name: Build and push
        uses: docker/build-push-action@v5
        id: build-push-to-ecr
        with:
          context: todo-app/lambda_function
          file: todo-app/lambda_function/Dockerfile
          push: true
          tags: ${{ env.ECR_REGISTRY }}/${{ env.REPO_NAME }}:${{ steps.sha_short.outputs.sha_short }}
          platforms: linux/amd64
          provenance: false
        continue-on-error: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 8: Create a GitHub Action to Run and Deploy the Pulumi Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create an access token at pulumi.com; GitHub Actions will use it so Pulumi can manage your stack state. Store the token as a GitHub repository secret named PULUMI_ACCESS_TOKEN.&lt;/p&gt;

&lt;p&gt;Create a file named pulumi-deploy.yml in the folder .github/workflows. It will contain the GitHub Actions code to deploy the infrastructure code on AWS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Pulumi Deploy

on:
  push:
    paths:
      - 'todo-app/**'
    branches:
      - main # Trigger on push to the main branch

jobs:
  pulumi-deploy:
    runs-on: ubuntu-latest

    permissions:
      id-token: write
      contents: read
      pull-requests: write
      repository-projects: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4 # More information on this action can be found below in the 'AWS Credentials' section
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}
          role-session-name: GithubActionsSession

      - name: Install Dependencies
        working-directory: todo-app
        run: |
          pip install -r requirements.txt

      - name: Configure Pulumi
        working-directory: todo-app
        run: |
          pulumi stack select ExitoLab/todo-app/dev --non-interactive || pulumi stack init ExitoLab/todo-app/dev
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

      - name: Pulumi Preview
        working-directory: todo-app
        run: |
          pulumi preview --stack ExitoLab/todo-app/dev
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

      - name: Pulumi Up
        working-directory: todo-app
        run: |
          pulumi up --stack ExitoLab/todo-app/dev --yes
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}


      # Comment out this block if you don't want to destroy the infra
      - name: Pulumi Destroy
        working-directory: todo-app
        run: |
          pulumi destroy --stack ExitoLab/todo-app/dev --yes
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find the Lambda function code in todo-app/lambda_function. It contains the Lambda function code in Python, which exposes the following resource endpoints and uses DynamoDB to keep track of the todo list.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GET Endpoint uses the resource /todos with the method GET.&lt;/li&gt;
&lt;li&gt;POST Endpoint uses the resource /todos with the method POST.&lt;/li&gt;
&lt;li&gt;DELETE Endpoint uses the resource /todos/{id} with the method DELETE.&lt;/li&gt;
&lt;li&gt;PATCH Endpoint uses the resource /todos/{id} with the method PATCH.&lt;/li&gt;
&lt;/ul&gt;
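&lt;p&gt;Because the API Gateway is wired up with a catch-all proxy integration (shown in Step 9), the handler receives the method and path and routes them itself. The following is an illustrative sketch of that routing shape, not the repo's actual code:&lt;/p&gt;

```python
import json

# Hedged sketch of a Lambda proxy handler for the /todos resource;
# function shape and responses are illustrative, not the article's code.
def route(event):
    method = event.get("httpMethod", "")
    path = event.get("path", "")
    if method == "GET" and path == "/todos":
        return {"statusCode": 200, "body": json.dumps({"todos": []})}
    if method == "POST" and path == "/todos":
        return {"statusCode": 201, "body": json.dumps({"created": True})}
    if method in ("PATCH", "DELETE") and path.startswith("/todos/"):
        todo_id = path.rsplit("/", 1)[-1]  # id comes from the path segment
        return {"statusCode": 200, "body": json.dumps({"id": todo_id})}
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
```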

&lt;p&gt;&lt;strong&gt;Step 9: Create the Pulumi Code to Spin Up the Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file called __main__.py in the todo-app folder; it will contain the code for spinning up the infrastructure. The Pulumi code will create the following resources on AWS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway: Defines the API Gateway and its associated root resource, linking it to the Lambda function.&lt;/li&gt;
&lt;li&gt;Lambda function: A Dockerized Lambda function created from a Docker image (image_uri).&lt;/li&gt;
&lt;li&gt;IAM roles: The identity and access management (IAM) role attached to the Lambda function. It allows the Lambda service to assume the role and contains the permissions the function needs to access DynamoDB.&lt;/li&gt;
&lt;li&gt;Deployment: Deploys the API Gateway to the dev stage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Pulumi code is designed to deploy into different environments, such as production and development. In this tutorial, you will be deploying to dev, and the config file for dev is in Pulumi.dev.yaml.&lt;/p&gt;
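&lt;p&gt;As a hedged sketch, Pulumi.dev.yaml might look like the following, with keys matching the config.get calls in the code (keys are namespaced by the project name, and the account ID and image tag are placeholders):&lt;/p&gt;

```yaml
# Illustrative Pulumi.dev.yaml; values are examples, not the repo's actual config
config:
  todo-app:environment: dev
  todo-app:region: us-east-1
  todo-app:docker_image: <account-id>.dkr.ecr.us-east-1.amazonaws.com/todo-app:<short-sha>
```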

&lt;p&gt;The code for the Pulumi infrastructure resource is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pulumi, json
import pulumi_aws as aws
from pulumi_docker import Image, DockerBuild
import pulumi_docker as docker

from pulumi import Config

# Create a config object to access configuration values
config = pulumi.Config()

docker_image = config.get("docker_image")
environment = config.get("environment")
region = config.get("region")

aws.config.region = region

# First, create the DynamoDB table with just `id` as the primary key
dynamodb_table = aws.dynamodb.Table(
    f"todo-{environment}",
    name=f"todo-{environment}",
    hash_key="id",  # Only `id` as the partition key
    attributes=[
        aws.dynamodb.TableAttributeArgs(
            name="id",
            type="S"  # `S` for string type (use appropriate type for `id`)
        ),
    ],
    billing_mode="PAY_PER_REQUEST",  # On-demand billing mode
    tags={
        "Environment": environment,
        "Created_By": "Pulumi"
    }
)

# Create an IAM Role for the Lambda function
# Create Lambda execution role
lambda_role = aws.iam.Role(
    "lambdaExecutionRole",
    assume_role_policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
        }]
    })
)

# Create inline policy for the role
dynamodb_policy = aws.iam.RolePolicy(
    f"lambdaRolePolicy-{environment}",
    role=lambda_role.id,
    policy=pulumi.Output.json_dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "dynamodb:Scan",
                    "dynamodb:PutItem",
                    "dynamodb:GetItem",
                    "dynamodb:UpdateItem",
                    "dynamodb:DeleteItem",
                    "dynamodb:Query"
                ],
                "Resource": [
                    dynamodb_table.arn,
                    pulumi.Output.concat(dynamodb_table.arn, "/*")
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": "arn:aws:logs:*:*:*"
            }
        ]
    })
)

# Create a Lambda function using the Docker image
lambda_function = aws.lambda_.Function(
    f"my-serverless-function-{environment}",
    role=lambda_role.arn,
    package_type="Image",
    image_uri=docker_image,
    memory_size=512,
    timeout=30,
    opts=pulumi.ResourceOptions(depends_on=[lambda_role])
)

# Create an API Gateway REST API
api = aws.apigateway.RestApi(f"my-api-{environment}",
    description="My serverless API")

# Create a catch-all resource for the API
proxy_resource = aws.apigateway.Resource(f"proxy-resource-{environment}",
    rest_api=api.id,
    parent_id=api.root_resource_id,
    path_part="{proxy+}")

# Create a method for the proxy resource that allows any method
method = aws.apigateway.Method(f"proxy-method-{environment}",
    rest_api=api.id,
    resource_id=proxy_resource.id,
    http_method="ANY",
    authorization="NONE")

# Integration of Lambda with API Gateway using AWS_PROXY
integration = aws.apigateway.Integration(f"proxy-integration-{environment}",
    rest_api=api.id,
    resource_id=proxy_resource.id,
    http_method=method.http_method,
    integration_http_method="POST",
    type="AWS_PROXY",
    uri=lambda_function.invoke_arn)  # Ensure lambda_function is defined

lambda_permission = aws.lambda_.Permission(f"api-gateway-lambda-permission-{environment}",
    action="lambda:InvokeFunction",
    function=lambda_function.name,
    principal="apigateway.amazonaws.com",
    source_arn=pulumi.Output.concat(api.execution_arn, "/*/*")
)

# Deployment of the API, explicitly depends on method and integration to avoid timing issues
deployment = aws.apigateway.Deployment(f"api-deployment-{environment}",
    rest_api=api.id,
    stage_name="dev",
    opts=pulumi.ResourceOptions(
        depends_on=[method, integration, lambda_permission]  # Ensures these are created before deployment
    )
)

# Output the API Gateway stage URL
api_invoke_url = pulumi.Output.concat(
    "https://", api.id, ".execute-api.", region, ".amazonaws.com/", deployment.stage_name
)

pulumi.export("api_invoke_url", api_invoke_url)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 10: Test the Serverless Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The API gateway connects to the Lambda function that contains the Python code in Lambda. To test the endpoint, you need to get its URL from the AWS console. Log in to AWS and navigate to the API Gateway. You will see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2sx6r6re4dmk0p3kwfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2sx6r6re4dmk0p3kwfi.png" alt="Image description" width="720" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get the stage URL, which is used to access the serverless application from Postman, click on my-api-dev. You can find it under “Invoke URL” in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv2bw91dgse8xtek29ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv2bw91dgse8xtek29ei.png" alt="Image description" width="720" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Health endpoint: The health endpoint checks if the app is up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c46g07fjipw8k8daiu0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c46g07fjipw8k8daiu0.png" alt="Image description" width="720" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GET endpoint: The GET endpoint retrieves the list of todos from DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyy9p98xmu592kkxpbxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyy9p98xmu592kkxpbxy.png" alt="Image description" width="720" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;POST endpoint: The POST endpoint creates a todo item in DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngxfkbb2q3b3btr6hw57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngxfkbb2q3b3btr6hw57.png" alt="Image description" width="720" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PATCH endpoint: The PATCH endpoint updates a todo item in DynamoDB, given its ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf91k3whirkhi4v45gax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf91k3whirkhi4v45gax.png" alt="Image description" width="720" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DELETE endpoint: The DELETE endpoint deletes a todo item from DynamoDB, given its ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhom85602vg5pqosbxd9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhom85602vg5pqosbxd9v.png" alt="Image description" width="720" height="446"&gt;&lt;/a&gt;&lt;/p&gt;
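&lt;p&gt;Once you have the invoke URL, each endpoint call has a predictable shape. A small sketch of the client side (the base URL below is a placeholder, not a real deployment):&lt;/p&gt;

```python
# Hedged helper showing the method/URL shape of each endpoint call;
# the base URL is a placeholder for your own API Gateway invoke URL.
def build_request(base_url, action, todo_id=None):
    routes = {
        "list":   ("GET",    "/todos"),
        "create": ("POST",   "/todos"),
        "update": ("PATCH",  f"/todos/{todo_id}"),
        "delete": ("DELETE", f"/todos/{todo_id}"),
    }
    method, path = routes[action]
    return method, base_url.rstrip("/") + path

base = "https://abc123.execute-api.us-east-1.amazonaws.com/dev"  # placeholder
```

You can then feed the resulting method and URL to Postman, curl, or any HTTP client.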

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You have successfully built and deployed a scalable, serverless todo app on AWS using API Gateway, Lambda, DynamoDB, Docker, GitHub Actions and Pulumi. Pulumi makes it easier to manage Infrastructure as Code (IaC), so deployments are efficient, maintainable and fast. GitHub Actions automates the CI/CD pipeline for seamless, reliable updates, while Docker in Lambda provides the flexibility to package your application and its dependencies into a container image. You can find the complete code for this project in my GitHub repo.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was first published on &lt;a href="https://thenewstack.io/build-a-serverless-todo-app-with-aws-pulumi-and-python/" rel="noopener noreferrer"&gt;https://thenewstack.io/build-a-serverless-todo-app-with-aws-pulumi-and-python/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Run Databases on Kubernetes: An 8-Step Guide</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Sat, 23 Nov 2024 08:54:18 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-run-databases-on-kubernetes-an-8-step-guide-4339</link>
      <guid>https://dev.to/aws-builders/how-to-run-databases-on-kubernetes-an-8-step-guide-4339</guid>
      <description>&lt;p&gt;In this step-by-step tutorial, learn how to run MySQL, PostgreSQL, MongoDB, and other stateful applications on Kubernetes.&lt;/p&gt;

&lt;p&gt;Even though almost no one questions using Kubernetes (K8s) to manage containerized applications today, many engineers (including me) remain skeptical about running databases on it. Databases are typically stateful applications: they require persistent storage and data consistency, while Kubernetes built its reputation on stateless workloads. To run databases on Kubernetes, you must therefore ensure it can provide persistent storage, backup and restore, and high availability and failover.&lt;/p&gt;

&lt;p&gt;In this tutorial, I’ll use the example of creating and running a MySQL database on Kubernetes to demonstrate how to manage stateful applications in Kubernetes. I will dive into key concepts such as StatefulSets, PersistentVolumes (PVs), PersistentVolumeClaims (PVCs) and StorageClasses. I’ll assume that you already have an understanding of both databases and Kubernetes.&lt;/p&gt;

&lt;p&gt;Before I begin, it is vital to understand the difference between a stateless and a stateful application. Stateless applications do not keep data between requests; each request is processed independently, with no data shared across requests. Stateful applications do keep data between requests and share it across sessions or pods. Workloads like databases need their data to be persistent.&lt;/p&gt;
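&lt;p&gt;A toy illustration of the difference (my own example, not from any real workload):&lt;/p&gt;

```python
# A stateless handler derives its answer purely from the request,
# while a stateful one keeps data that must survive between requests.
def stateless_handler(request):
    return request["value"] * 2  # no memory between calls

class StatefulCounter:
    def __init__(self):
        self.count = 0  # state that must persist across requests

    def handle(self, request):
        self.count += 1
        return self.count
```

Kill and restart the stateless handler and nothing is lost; kill the counter and its count is gone unless it was written to durable storage, which is exactly the problem persistent volumes solve for databases.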

&lt;p&gt;&lt;strong&gt;Key Concepts for Running Databases on Kubernetes&lt;/strong&gt;&lt;br&gt;
Running databases such as MySQL, PostgreSQL, and MongoDB on Kubernetes requires careful planning around persistent storage, stable network identities, and scaling strategies. The following details need to be considered when running a database in Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database Storage&lt;/strong&gt;&lt;br&gt;
Each database pod needs its own PV to ensure that the data is persistent. This means that even if the pod is deleted or restarted, the data still remains intact. Each database pod is assigned a dedicated PVC and PV.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling Databases&lt;/strong&gt;&lt;br&gt;
When scaling databases, it is very important to ensure data consistency. StatefulSets support running a leader-follower (primary-secondary) database architecture, or a primary with read-only replicas, as in PostgreSQL or MySQL. The primary database handles updates and writes, while the secondary databases replicate and synchronize from it, ensuring both consistency and redundancy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Consistency and Backups&lt;/strong&gt;&lt;br&gt;
It is crucial to have a strategy to ensure data consistency across all database replicas and validate the integrity of the data. Regular backups and disaster recovery plans should be incorporated into your Kubernetes workflows. This must include routine (weekly or monthly) disaster recovery tests to validate the integrity of the database backup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;StatefulSets&lt;/strong&gt;&lt;br&gt;
A StatefulSet is a Kubernetes resource designed for managing stateful applications such as databases. It ensures that pods possess persistent storage and that data remains intact even when the pods get restarted. Key features of StatefulSets include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Persistent storage:&lt;/strong&gt; StatefulSets utilize PVs, which ensure that each pod has dedicated, stable storage that remains intact even after a pod restarts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stable network identifiers:&lt;/strong&gt; Every pod in a StatefulSet receives a unique, consistent name that remains unchanged even when the pod is rescheduled; for example: mypod-0, mypod-1, mypod-2.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tutorial: Create a Database on Kubernetes&lt;/strong&gt;&lt;br&gt;
To create a StatefulSet application (such as a database) on Kubernetes, follow this step-by-step guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a StorageClass (if You Don’t Have One)&lt;/strong&gt;&lt;br&gt;
A StorageClass in Kubernetes acts like a storage profile: it defines the storage type (for example, gp2 or gp3 on Amazon EBS) and the parameters for your PVs. You can mark one StorageClass as the default for dynamic volume provisioning; it will be used by any PVC that does not name a specific storage class.&lt;/p&gt;

&lt;p&gt;Here is an example of a storage class created for Amazon EKS.&lt;/p&gt;

&lt;p&gt;Create a new file called storage-class.yaml and copy this code into the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs # Use the correct provisioner for your cloud provider (AWS, GCP, Azure, etc.)
parameters:
  type: gp3
reclaimPolicy: Retain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the storage class by running:&lt;/p&gt;

&lt;p&gt;kubectl apply -f storage-class.yaml&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a PersistentVolume (PV)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /mnt/data # Specify a path in the host for storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A PV is storage allocated in your Kubernetes cluster. If dynamic provisioning is enabled, Kubernetes will create a PV automatically. Otherwise, you can create one manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create a Persistent Volume Claim (PVC)&lt;/strong&gt;&lt;br&gt;
A PVC serves as an interface between your application and requested storage. A PVC allows your application to request storage from the available PV.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Deploy a MySQL StatefulSet&lt;/strong&gt;&lt;br&gt;
This code snippet creates a StatefulSet for MySQL. It ensures each MySQL pod (instance) gets its own unique identifier, persistent storage and stable network identity. Please note: in practice you should pass the password in from a secret store or vault, not as clear text.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
          name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "your_password"
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: standard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
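&lt;p&gt;As noted above, the root password should come from a secret store rather than clear text. A minimal sketch using a Kubernetes Secret (the Secret name and key are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials # illustrative name
type: Opaque
stringData:
  root-password: your_password
```

&lt;p&gt;Then, in the StatefulSet container spec, replace the literal value of MYSQL_ROOT_PASSWORD with a valueFrom.secretKeyRef pointing at the mysql-credentials Secret and its root-password key.&lt;/p&gt;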



&lt;p&gt;&lt;strong&gt;Step 5: Create a Headless Service for MySQL&lt;/strong&gt;&lt;br&gt;
Create a headless service for the MySQL StatefulSet to enable the pods to communicate with each other in the Kubernetes cluster. The headless service in the example below is named mysql. Each MySQL pod will then be accessible within the cluster at the DNS name &amp;lt;pod-name&amp;gt;.mysql from any pod in the same Kubernetes namespace and cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Headless service
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  clusterIP: None # Makes this a headless service
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6: Pipe MySQL Logs to Monitoring Tools&lt;/strong&gt;&lt;br&gt;
Monitoring MySQL is very important for identifying performance bottlenecks and errors and for ensuring database health. The logs from the MySQL StatefulSet can be routed to monitoring tools such as Datadog, Grafana, Prometheus and Elasticsearch (the ELK Stack) to get full visibility into the performance and health of the database.&lt;/p&gt;

&lt;p&gt;You need to configure MySQL to pipe logs to your monitoring tools. Commonly monitored logs include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Slow query logs&lt;/strong&gt; identify slow-running queries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error logs&lt;/strong&gt; track errors and warnings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;General query logs&lt;/strong&gt; track all MySQL queries.&lt;/li&gt;
&lt;/ul&gt;
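&lt;p&gt;To actually produce these logs, MySQL must be configured to write them. A hedged my.cnf sketch (file paths and the slow-query threshold are illustrative):&lt;/p&gt;

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2   ; seconds before a query counts as slow
log_error           = /var/log/mysql/error.log
general_log         = 1   ; very verbose; enable with care in production
general_log_file    = /var/log/mysql/general.log
```

&lt;p&gt;A log-shipping sidecar or agent (for example, Filebeat or the Datadog agent) can then tail these files and forward them to your monitoring stack.&lt;/p&gt;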

&lt;p&gt;&lt;strong&gt;Step 7: Perform Regular Backups and Routine Restore&lt;/strong&gt;&lt;br&gt;
It is very important to perform regular backups to ensure the availability of your Kubernetes workloads and routine restore to validate the integrity of the database.&lt;/p&gt;

&lt;p&gt;Velero is an open-source tool designed to safely back up and restore resources on Kubernetes clusters and PVs. It is an excellent solution for ensuring that your applications or databases do not experience any data loss. Velero offers essential functionalities such as Kubernetes cluster backup, restore, disaster recovery and scheduled backups. For more information, check out Velero’s documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Configure Database Alerts&lt;/strong&gt;&lt;br&gt;
In a Kubernetes environment where databases and other StatefulSet applications run, it is crucial to set up alert notifications to continuously monitor and avoid performance degradation, service disruption, downtime or data corruption.&lt;/p&gt;

&lt;p&gt;Monitoring tools such as Datadog, Nagios, Prometheus and Grafana can be used to monitor and check database health. They can be integrated with alert notification platforms such as Slack and PagerDuty, so an engineer will receive a notification (often a phone call) whenever there is a degradation in service or another issue with the database.&lt;/p&gt;
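&lt;p&gt;As an illustration, if Prometheus scrapes a mysqld_exporter sidecar, an alerting rule like the following (the thresholds and labels are my own, not from any particular deployment) would page an engineer when an instance becomes unreachable:&lt;/p&gt;

```yaml
groups:
- name: mysql.rules
  rules:
  - alert: MySQLDown
    expr: mysql_up == 0   # mysql_up is exposed by mysqld_exporter
    for: 2m
    labels:
      severity: page
    annotations:
      summary: "MySQL instance {{ $labels.instance }} is unreachable"
```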

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Running databases in Kubernetes creates unique challenges, including state management, persistent storage and network stability. By leveraging Kubernetes tools like StorageClasses, PersistentVolumes, PersistentVolumeClaims and StatefulSets, administrators can now comfortably manage database workloads in Kubernetes while ensuring database integrity and availability.&lt;/p&gt;

&lt;p&gt;As Kubernetes continues to evolve, its support for StatefulSets will increase, making running databases in Kubernetes a powerful solution for modern infrastructures.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was first published on &lt;a href="https://thenewstack.io/how-to-run-databases-on-kubernetes-an-8-step-guide/" rel="noopener noreferrer"&gt;https://thenewstack.io/how-to-run-databases-on-kubernetes-an-8-step-guide/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>pvc</category>
      <category>pv</category>
      <category>aws</category>
    </item>
    <item>
      <title>How important is having great soft skills as a Software Engineer for personal growth and career success</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Sun, 26 May 2024 04:16:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-important-is-having-great-soft-skills-as-a-software-engineer-for-personal-growth-and-career-success-46o1</link>
      <guid>https://dev.to/aws-builders/how-important-is-having-great-soft-skills-as-a-software-engineer-for-personal-growth-and-career-success-46o1</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;As an Engineer, I would like to understand what I need to do to ensure that my soft skills are equally as good as my technical skills. Technical skills are the fundamental requirements for engineers; however, soft skills also play a vital role in career success, advancement, and effectiveness in a working environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the modern workplace, strong soft skills are becoming vital for career success and personal growth, and they are now a distinguishing factor among otherwise great engineers. These include skills such as clear and effective communication with colleagues, teamwork, problem-solving, and adaptability.&lt;/p&gt;

&lt;p&gt;I will walk you through some practical tips and suggestions on how you can enhance your soft skills as a software engineer. These suggestions come from my own experience and from studying the success stories of some great software engineers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Active Listening&lt;/strong&gt;&lt;br&gt;
It is very important to cultivate the habit of active listening and being fully present in a conversation. Avoid interrupting, and focus on understanding the message the speaker is trying to pass across. One use case is fully understanding a client's requirements before starting to execute the assignment.&lt;/p&gt;

&lt;p&gt;Additionally, it is not only listening to the speaker but also confirming you understand the message the speaker is trying to pass across by summarizing what the speaker has said and asking clarifying questions where necessary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Effective Communication&lt;/strong&gt;&lt;br&gt;
When delivering your message, make it clear and concise: avoid jargon, be direct, and cut anything irrelevant. It is also very important to pay attention to body language and to make eye contact with your audience.&lt;/p&gt;

&lt;p&gt;Also, practice engaging in conversation with non-technical stakeholders and getting feedback to ensure that your stakeholders can understand your message.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Teamwork and Collaboration&lt;/strong&gt;&lt;br&gt;
It is essential to understand your team’s goals and objectives; this will help you see how you can add value and be an effective member of your team. Develop the habit of working toward a shared purpose, and carry every member of the team along when you are working on an important feature. Regularly showcase your work and contribute positively to the growth and development of the team. Additionally, during conflict resolution it is vital to use empathy to understand the different options and find an agreed resolution that is signed off by every member of the team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Developing Problem-Solving and Critical Thinking Skills&lt;/strong&gt;&lt;br&gt;
Cultivate the skills of breaking down problems into manageable and smaller parts or deliverables; this will help you to achieve your tasks faster and boost your confidence toward working on more complex tasks. Use frameworks like SWOT analysis to evaluate your situation.&lt;/p&gt;

&lt;p&gt;Critical thinking is a crucial soft skill; you should regularly engage in brainstorming sessions and be open to unconventional solutions to solving problems.&lt;/p&gt;

&lt;p&gt;You should also stay open to continuous learning as technologies and methodologies change and emerge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Adaptability and Flexibility&lt;/strong&gt;&lt;br&gt;
Always be on the lookout for opportunities to embrace change whenever it presents itself, and see change as an opportunity for growth and development. As an engineer, embrace every opportunity to learn new technologies, and subscribe to learning platforms such as Coursera, Udemy, Udacity, and LinkedIn Learning, to mention but a few, to acquire new technical and soft skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Leadership and Management Skills&lt;/strong&gt;&lt;br&gt;
Taking ownership of a project and seeing it through to successful completion is a critical soft skill; it shows that you are responsible and reliable. Whenever you have an opportunity to lead projects and teams, work on this effectively and see it as a chance to showcase your skills and be at the top of your game.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Engineers need to invest time and effort in developing their soft skills to build a successful career and stay ahead of their peers. Improving your soft skills requires conscious, intentional effort and continuous practice. By focusing on active listening, effective communication, teamwork and collaboration, problem-solving and critical thinking, adaptability and flexibility, and leadership and management skills, you can significantly enhance your effectiveness and succeed at your workplace. Embrace the challenge by investing in your personal growth, and watch how your soft skills pave the way for new opportunities and achievements.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I use Ansible to automate routine tasks by running an ad hoc script</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Tue, 16 Apr 2024 19:13:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-use-ansible-to-automate-routine-tasks-by-running-an-adhoc-script-4174</link>
      <guid>https://dev.to/aws-builders/how-i-use-ansible-to-automate-routine-tasks-by-running-an-adhoc-script-4174</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;As a Platform Engineer, I would like to run a script on over 1000 servers and I do not want to spend the whole day running the script manually. There are times when Engineers are given a task that involves running the script on numerous servers; this process can be automated using Ansible.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;What is Ansible? Ansible is an open-source automation tool used for application deployment, orchestration, and configuration management. It is designed to simplify complex tasks such as infrastructure provisioning, configuration management, and software deployment across large-scale environments. Ansible uses YAML (YAML Ain’t Markup Language) syntax to describe configuration and automation tasks.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You are given a task to run a command that detects the timezone on 1000 servers: one of the Quality Engineers noticed that some transactions are dated in the future, and the engineers suspect that some servers have incorrect timezones. You are given a file that contains the IP addresses of the 1000 servers.&lt;/p&gt;

&lt;p&gt;I will walk you through achieving this task using Ansible, assuming you already have a working understanding of Python, Ansible, and Bash.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;&lt;br&gt;
Establish a passwordless SSH connection with the target hosts; this enables Ansible to log in to each host securely without being prompted for a password.&lt;/p&gt;
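&lt;p&gt;&lt;em&gt;A minimal sketch of Step 1 (the key path is an illustrative choice, and ec2-user matches the remote user in the playbook below; adjust both to your environment):&lt;/em&gt;&lt;/p&gt;

```shell
# Generate a dedicated key pair on the control node (no passphrase, for automation);
# the key path here is illustrative, not prescribed by the article
ssh-keygen -t ed25519 -f ./ansible_key -N ''

# Push the public key to each target host (repeat per host, or loop over servers.txt):
# ssh-copy-id -i ./ansible_key.pub ec2-user@SERVER_IP   # SERVER_IP is a placeholder

# Confirm login now works without a password prompt:
# ssh -i ./ansible_key ec2-user@SERVER_IP 'true'
```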

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;&lt;br&gt;
Add the server IP addresses to the file servers.txt, ensuring each address is valid and follows the existing format in servers.txt&lt;/p&gt;
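&lt;p&gt;&lt;em&gt;As an illustration (the exact layout of servers.txt is an assumption inferred from the Step 3 script below, which looks for ip_address= entries), each line carries one ip_address= key:&lt;/em&gt;&lt;/p&gt;

```shell
# Create a sample servers.txt (hostnames and IPs are placeholders)
printf 'server-01 ip_address=192.168.1.10\nserver-02 ip_address=192.168.1.11\n' > servers.txt

# Show the entries the Step 3 script will pick up
grep -o 'ip_address=[0-9.]*' servers.txt
```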

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;&lt;br&gt;
Use Python to extract the server IP addresses and dynamically generate the inventory file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3

import re
import json
import os

# Get the current directory
current_directory = os.getcwd()

# Concatenate the current directory with the file name
server_file = os.path.join(current_directory, 'servers.txt')

def read_servers_file(server_file):
    """Reads the server file and extracts the IP addresses."""
    ips = []
    with open(server_file, 'r') as f:
        lines = f.readlines()
        for line in lines:
            if 'ip_address=' in line:
                match = re.search(r'ip_address=([\d\.]+)', line)
                if match:
                    ips.append(match.group(1))
    return ips

def generate_inventory(ips):
    """Generates the inventory in JSON format."""
    inventory = {
        '_meta': {
            'hostvars': {}
        },
        'all': {
            'hosts': ips
        }
    }

    return inventory

def main():
    """Main function."""
    ips = read_servers_file(server_file)
    inventory = generate_inventory(ips)
    print(json.dumps(inventory))

if __name__ == '__main__':
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;&lt;br&gt;
The Ansible playbook below runs the &lt;code&gt;date&lt;/code&gt; command on the target servers and writes each host's output to a file called report.txt&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Extract server IP addresses and run command on servers
  hosts: all
  gather_facts: yes
  become: yes
  remote_user: ec2-user #change this to the remote user
  tasks:
    - name: Run command on servers and save output locally
      ansible.builtin.shell: "date"   # runs on every host so each server reports its own time
      register: command_output

    - name: Debug command output
      ansible.builtin.debug:
        msg: "{{ command_output.stdout }}"

    - name: Create the report file on the control node
      ansible.builtin.file:
        path: "report.txt"
        state: touch
        mode: '0644'
      delegate_to: localhost
      run_once: yes
      become: no

    - name: Append one line per host to report.txt
      ansible.builtin.lineinfile:
        path: "report.txt"
        line: "{{ item }} - {{ hostvars[item].command_output.stdout }}"
      loop: "{{ ansible_play_batch }}"
      delegate_to: localhost
      run_once: yes
      become: no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;&lt;br&gt;
Make the inventory script executable with &lt;code&gt;chmod +x dynamic_inventory.py&lt;/code&gt; (Ansible runs inventory scripts directly), then run the playbook with &lt;code&gt;ansible-playbook -i dynamic_inventory.py ansible-playbook.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt;&lt;br&gt;
You should see output similar to the screenshot below&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq0tm06tt4s0vh20mjmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq0tm06tt4s0vh20mjmk.png" alt="Image description" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, I hope you find this article useful and interesting. It demonstrates how to run an ad hoc script with Ansible by dynamically generating the inventory file. Follow the link below for the complete code on GitHub: &lt;a href="https://github.com/ExitoLab/example_ansible_playbook_timezone"&gt;https://github.com/ExitoLab/example_ansible_playbook_timezone&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>python</category>
      <category>automation</category>
    </item>
    <item>
      <title>Why I use a smaller Docker image</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Mon, 27 Mar 2023 07:44:00 +0000</pubDate>
      <link>https://dev.to/igeadetokunbo/why-use-a-smaller-docker-image-2olb</link>
      <guid>https://dev.to/igeadetokunbo/why-use-a-smaller-docker-image-2olb</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GK7tfpEc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwsrw0z5uyzm2mxwlrek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GK7tfpEc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwsrw0z5uyzm2mxwlrek.png" alt="Image description" width="336" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Docker?&lt;/strong&gt; Docker is a popular platform for building, shipping, and running containerized applications. What are containers? Containers are a way to package software in a portable, isolated environment; this allows developers to bundle an application with all the dependencies it needs to run, regardless of the host operating system.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Using Docker, developers can create container images containing everything needed to run an application, including the operating system, libraries, and runtime environment. These container images can be easily shared and deployed across different environments with docker software installed without requiring any modification.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Developers often end up using large base images, which bloat builds and widen the attack surface with unnecessary packages and vulnerabilities. Containers make it easy to start from a minimal OS and install only the dependencies required.&lt;/p&gt;

&lt;p&gt;There are several ways to keep Docker images as small as possible, and the payoff is significant: imagine running an application serving billions of users from an image measured in kilobytes or megabytes.&lt;/p&gt;

&lt;p&gt;The following are suggestions for writing a Dockerfile: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Take advantage of Docker layer caching and multi-stage builds&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://hub.docker.com/_/alpine"&gt;alpine&lt;/a&gt; or &lt;a href="https://github.com/GoogleContainerTools/distroless"&gt;distroless&lt;/a&gt; or any other smaller OS as your base image or build your own OS&lt;/li&gt;
&lt;li&gt;Use a &lt;code&gt;.dockerignore&lt;/code&gt; file, which is similar to &lt;code&gt;.gitignore&lt;/code&gt;, to exclude files and directories from the Docker build context. This can significantly reduce the size of the build context and the resulting image.&lt;/li&gt;
&lt;/ol&gt;
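&lt;p&gt;&lt;em&gt;As a quick preview of suggestions 1 and 2, here is a minimal multi-stage Dockerfile sketch for a Go program; the image tags, paths, and build flags are illustrative assumptions, not taken from the article:&lt;/em&gt;&lt;/p&gt;

```dockerfile
# Build stage: full Go toolchain, used only to compile the binary
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a static binary that runs without libc
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: distroless base containing nothing but the binary
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

&lt;p&gt;The final image contains only the compiled binary, so it is typically a few megabytes and has a much smaller attack surface than a full OS base image.&lt;/p&gt;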

&lt;p&gt;In my next article, I will show an example of a Dockerfile in Go that applies the suggestions above, so we can compare the size of the Docker image with and without optimizing the Dockerfile. We will also use tools like &lt;a href="https://github.com/aquasecurity/trivy"&gt;Trivy&lt;/a&gt; or &lt;a href="https://snyk.io/"&gt;Snyk&lt;/a&gt; to check whether there are vulnerabilities or exploits in the image with and without Dockerfile optimization.&lt;/p&gt;

</description>
      <category>dockerfile</category>
      <category>docker</category>
      <category>trivy</category>
      <category>snyk</category>
    </item>
    <item>
      <title>My personal experience using kubecost in the Kubernetes environment</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Tue, 21 Mar 2023 19:57:27 +0000</pubDate>
      <link>https://dev.to/igeadetokunbo/my-personal-experience-using-kubecost-in-the-kubernetes-environment-5c6p</link>
      <guid>https://dev.to/igeadetokunbo/my-personal-experience-using-kubecost-in-the-kubernetes-environment-5c6p</guid>
      <description>&lt;h2&gt;
  
  
  What is Kubernetes?
&lt;/h2&gt;

&lt;p&gt;Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;K8s is currently the de facto standard for container orchestration and is widely adopted across the industry. Kubernetes provides a platform-agnostic framework that lets developers run containers on different cloud providers or on on-premises infrastructure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Kubernetes architecture is based on a control plane/worker model (historically described as master-slave): the control plane manages the overall state of the cluster, while the worker nodes run the application workloads.&lt;/p&gt;

&lt;p&gt;Four major managed Kubernetes offerings are Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and DigitalOcean Kubernetes (DOKS).&lt;/p&gt;

&lt;p&gt;We also need to ensure we are not spending above our budget while running a Kubernetes environment. Cloud providers give us the flexibility to take nodes offline, and autoscaling groups ensure nodes are not left running when they are not in use. Beyond that, tools like Kubecost help create cost visibility in our Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Kubecost?&lt;/strong&gt; Kubecost is a platform that provides cost optimization and visibility for Kubernetes clusters. It can be used in a Kubernetes cluster for the following:&lt;/p&gt;

&lt;p&gt;Cost Management: It helps identify the cost of running the Kubernetes cluster, including the cost of infrastructure resources and other services running in the cluster.&lt;/p&gt;

&lt;p&gt;Resource Allocation: Kubecost can help us to identify which resources in the cluster are being overutilized or underutilized.&lt;/p&gt;

&lt;p&gt;Capacity Planning: Kubecost can also forecast resource needs for your Kubernetes cluster based on historical usage patterns.&lt;/p&gt;
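&lt;p&gt;&lt;em&gt;As a minimal sketch (assuming Helm and kubectl are installed and your kubeconfig points at the cluster; the release name and namespace follow the Kubecost documentation, so verify against the current install instructions), installing Kubecost and opening its dashboard looks like this:&lt;/em&gt;&lt;/p&gt;

```shell
# Install the Kubecost cost-analyzer chart into its own namespace
helm install kubecost cost-analyzer \
  --repo https://kubecost.github.io/cost-analyzer/ \
  --namespace kubecost --create-namespace

# Forward the dashboard to http://localhost:9090
kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090
```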

&lt;p&gt;&lt;em&gt;You can also use Kubecost as an add-on when using EKS. For more information, see &lt;a href="https://aws-ia.github.io/terraform-aws-eks-blueprints/v4.12.1/add-ons/kubecost/"&gt;https://aws-ia.github.io/terraform-aws-eks-blueprints/v4.12.1/add-ons/kubecost/&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/cost-monitoring.html"&gt;https://docs.aws.amazon.com/eks/latest/userguide/cost-monitoring.html&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For more general information on Kubecost, kindly follow this link: &lt;a href="https://www.kubecost.com/"&gt;https://www.kubecost.com/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubecost</category>
      <category>kubernetes</category>
      <category>costmonitoring</category>
    </item>
    <item>
      <title>The production database just went down! What do I do?</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Mon, 04 Oct 2021 15:12:02 +0000</pubDate>
      <link>https://dev.to/aws-builders/the-production-database-just-went-down-what-do-i-do-1734</link>
      <guid>https://dev.to/aws-builders/the-production-database-just-went-down-what-do-i-do-1734</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5onkzkud8757h9lxkr2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5onkzkud8757h9lxkr2c.png" alt="mssql-server"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A few of my colleagues can relate to the title of this article; they may have experienced an issue with a database in production. Dear colleague, you are not alone. Panicking during a production disaster has happened to the best of us.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I can recall asking a friend about his experience during a disaster in production. He encountered a very serious incident and did not know what to do; he felt like grabbing his bag and running away from the office. This colleague of mine is very good at what he does, so he decided to relax and take a second look at the issue. A few moments later, he was able to find the solution.&lt;/p&gt;

&lt;p&gt;As an ISO 22301 lead implementer, my major responsibility is to ensure there is business continuity in place in an organization. There must be a business continuity plan in place to guard against any disaster whenever it strikes and to ensure that the business continues running accordingly.&lt;/p&gt;

&lt;p&gt;Production environments are very sensitive, especially database environments. It is very important for engineers managing these environments to undergo training in disaster recovery, business continuity, and disaster recovery planning, so that whenever a disaster strikes, everybody knows what to do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database administrators need to ensure the following are executed regularly to prevent any surprises before they occur.&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Regular backup and restore test
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;Some database administrators rely heavily on restoring a backup in case of disaster but do not test backups beforehand to know whether they are valid. The rule of thumb is to automate periodic restore tests and confirm that the most recent restore is valid.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This restore should be automated periodically, at least twice a month is fine. Once the restore is done, scripts to check the database integrity will be executed against the database. If there are issues, they should be addressed immediately and the restored database dropped afterward.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Database Backup
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;Automated database backups, in the form of jobs, need to be set up and configured for full, differential, and transaction log backups, with email notifications to database administrators on backup status (failure or success).&lt;/em&gt;&lt;/p&gt;
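&lt;p&gt;&lt;em&gt;As an illustrative T-SQL sketch of those three backup types (the database name and file paths are placeholders; in practice, the scheduling and email notification would live in SQL Server Agent jobs):&lt;/em&gt;&lt;/p&gt;

```sql
-- Full backup: the baseline every other backup depends on
BACKUP DATABASE [SalesDB] TO DISK = N'D:\Backups\SalesDB_full.bak' WITH INIT;

-- Differential backup: changes since the last full backup
BACKUP DATABASE [SalesDB] TO DISK = N'D:\Backups\SalesDB_diff.bak' WITH DIFFERENTIAL;

-- Transaction log backup: enables point-in-time recovery (FULL recovery model)
BACKUP LOG [SalesDB] TO DISK = N'D:\Backups\SalesDB_log.trn';
```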

&lt;h4&gt;
  
  
  Database High Availability
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;It is essential for every production database to run in a high-availability configuration; this helps guarantee near-continuous uptime (availability targets of 99.999% and above).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Configuring database high availability simply means there will be more than one instance of the database server running; once an instance fails, there is an automatic failover to another instance. Likewise, during any disaster we can easily fail over to one of the other instances.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;During any disaster or emergency, I will recommend the following&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Stay positive and extremely calm to prevent more future damage. This also prevents mistakes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Raise an incident. It is very important to inform our customers and colleagues about the current situation; it shows that we are aware of the issue and are working on a resolution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check if there is a recent backup which must include at least a recent full backup (and recent differential backup and a transactional log backup where applicable). If there is a recent backup and you can accommodate data loss, you can restore this database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check if the affected database can be made available by running scripts such as &lt;code&gt;DBCC CHECKDB&lt;/code&gt; or other related checks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fail over the database to the DR instance (alternative instance) if the database is on high availability.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
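&lt;p&gt;&lt;em&gt;For step 4 above, the integrity check is typically run as follows (the database name is a placeholder):&lt;/em&gt;&lt;/p&gt;

```sql
-- Check logical and physical integrity; report every error, suppress informational noise
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```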

&lt;p&gt;In conclusion, please do let me know if you find this article interesting.&lt;/p&gt;

</description>
      <category>database</category>
      <category>production</category>
      <category>mssqlserver</category>
      <category>awscommunitybuilders</category>
    </item>
    <item>
      <title>My personal experience with a split-brain scenario in MSSQL Server</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Mon, 13 Sep 2021 16:59:12 +0000</pubDate>
      <link>https://dev.to/aws-builders/my-personal-experience-with-a-split-brain-scenario-in-mssql-server-1ae5</link>
      <guid>https://dev.to/aws-builders/my-personal-experience-with-a-split-brain-scenario-in-mssql-server-1ae5</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5onkzkud8757h9lxkr2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5onkzkud8757h9lxkr2c.png" alt="mssql-server"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A split-brain scenario in a database environment is a situation in which the communication link between two sites is broken; as a result, the production database and the standby database become writeable at the same time.&lt;/p&gt;

&lt;p&gt;The primary database in the production site remains active, while the secondary server (standby database) in disaster recovery, which was in read-only mode, becomes writeable and active because it assumes the primary database is offline.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The main reason both database servers become active in the two sites is that the quorum does not have visibility into the two environments (production and disaster recovery); it assumes both are reachable and cannot decide which database should receive the vote.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In most cases, the application server will write into both databases simultaneously (production and disaster recovery) because both are active. If this issue is not identified in time, it causes serious problems, leading to database integrity issues. Customers' historical transactions will appear inconsistent because the transactions are split across two databases until they are reconciled.&lt;/p&gt;

&lt;p&gt;Once a split-brain scenario has been identified, it is preferable to stop the SQL Server service on the standby database to prevent it from being active together with the primary database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://exchange.nagios.org/" rel="noopener noreferrer"&gt;Nagios&lt;/a&gt; is a monitoring and alerting tool that can be used in monitoring database servers. There are &lt;a href="https://exchange.nagios.org/directory/Plugins/Databases/SQLServer" rel="noopener noreferrer"&gt;Nagios plugins&lt;/a&gt; that can detect if there are issues with database replications, High Availability (HA) clustering, or any database-related issues.&lt;/p&gt;

&lt;p&gt;Furthermore, as soon as the communication link between the primary and disaster recovery sites is restored, the primary database becomes writeable again and the secondary database returns to read-only; the two databases resume the roles they had before the split-brain occurred. The primary database also resumes replication with the standby, but by then there are already data and schema inconsistencies, which can mislead customers because only transactions routed to the primary database are visible to users.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The next challenge will be to reconcile the data in the primary database with the secondary database. The SQL Server service that was stopped on the secondary database will now be started. The following techniques were used in reconciling the data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Remove the replication between the primary database and secondary database. This is very important because the primary database contains the most recent records.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Take a full backup of the primary and standby database in case we want to revisit the records.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once replication has been disabled or destroyed, the standby database becomes writeable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a tool such as &lt;a href="https://www.red-gate.com/products/sql-development/sql-data-compare/trial/" rel="noopener noreferrer"&gt;SQL Data Compare&lt;/a&gt; or &lt;a href="https://docs.microsoft.com/en-us/sql/ssdt/how-to-compare-and-synchronize-the-data-of-two-databases?view=sql-server-ver15" rel="noopener noreferrer"&gt;Visual Studio&lt;/a&gt; to compare and synchronize the data in the two databases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that the two databases have been fully compared and synchronized.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The next step will be to visualize the record from the application to confirm that the historical records are showing properly.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Managing crises during split-brain scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is very important to configure file share witnesses. File Share Witness is a file share that is available to all nodes in a high availability (HA) cluster. The job of the Witness is to provide an additional quorum vote when necessary in order to ensure that a cluster continues to run in the event of a site outage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The file share witness should be hosted with a public cloud provider such as AWS, Azure, or Google Cloud; the quorum uses it when deciding which node should win the vote and become the primary database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure a maintenance script on both the primary and standby databases. This script sends a notification any time the communication link is broken; once the notification is received, another script stops the SQL Server service on the standby database if it has become active.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In conclusion, please do let me know if you find this article interesting, and also share your experience with managing a split-brain scenario.&lt;/p&gt;

</description>
      <category>mssqlserver</category>
      <category>splitbrain</category>
      <category>database</category>
    </item>
    <item>
      <title>Tackling security vulnerability at an early stage in SDLC</title>
      <dc:creator>Ige Adetokunbo Temitayo</dc:creator>
      <pubDate>Wed, 25 Aug 2021 12:49:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/tackling-security-vulnerability-at-an-early-stage-in-sdlc-1kl</link>
      <guid>https://dev.to/aws-builders/tackling-security-vulnerability-at-an-early-stage-in-sdlc-1kl</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphgisyjpu3130i7dfgar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphgisyjpu3130i7dfgar.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a Software Engineer, I would like to detect security vulnerabilities in my codebase early, before committing my code.&lt;/p&gt;

&lt;p&gt;Detecting security vulnerabilities early is very important in the SDLC (Software Development Life Cycle); it allows developers to fix security-related issues before raising a change request, or even before the security team flags the vulnerability.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In tackling these security vulnerabilities, Engineers can integrate the following techniques into their current workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Engineers can integrate their favorite IDEs with security scanning and detection plugins such as &lt;a href="https://snyk.io/ide-plugins/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; and &lt;a href="https://www.sonarlint.org/" rel="noopener noreferrer"&gt;SonarLint&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Snyk IDE plugin helps engineers secure their code as they develop; it scans the code in real time for vulnerabilities and provides advice on how to fix them.&lt;/p&gt;

&lt;p&gt;The SonarLint IDE plugin helps identify and fix quality and security issues as engineers write code. Together, these two plugins flag security vulnerabilities and advise on fixes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Software Engineers should cultivate the habit of implementing pre-commit hooks that carry a workflow for catching security vulnerabilities. A pre-commit hook runs before you even type a commit message.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The workflow will contain the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check whether any secrets (passwords, API keys) appear as plain text in the codebase&lt;/li&gt;
&lt;li&gt;Check if there is a private key in the codebase&lt;/li&gt;
&lt;li&gt;Strip trailing whitespace&lt;/li&gt;
&lt;li&gt;Check added large files to confirm if we have the right files in the codebase.&lt;/li&gt;
&lt;li&gt;Integrate automated security testing that detects Cross-Site Scripting (XSS) vulnerabilities and tests input validation against injection attacks.&lt;/li&gt;
&lt;/ul&gt;
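&lt;p&gt;&lt;em&gt;The checks above map naturally onto the pre-commit framework. Here is a minimal sketch of a &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt; using standard hooks from the pre-commit/pre-commit-hooks repository (the &lt;code&gt;rev&lt;/code&gt; shown is an example; pin the version you actually use):&lt;/em&gt;&lt;/p&gt;

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: detect-private-key        # blocks commits that add private keys
      - id: detect-aws-credentials    # flags AWS secrets committed as plain text
      - id: trailing-whitespace       # strips trailing whitespace
      - id: check-added-large-files   # rejects unexpectedly large files
```

&lt;p&gt;Install the framework with &lt;code&gt;pip install pre-commit&lt;/code&gt; and activate it in the repository with &lt;code&gt;pre-commit install&lt;/code&gt;; the hooks then run automatically on every commit.&lt;/p&gt;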

&lt;p&gt;In conclusion, please do let me know if you find this article interesting. Suggestions for more ways of tackling security are welcome. &lt;/p&gt;

</description>
      <category>awscommunitybuilders</category>
      <category>synk</category>
      <category>security</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
