<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: CICube</title>
    <description>The latest articles on DEV Community by CICube (@cicube).</description>
    <link>https://dev.to/cicube</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F8576%2F9780120e-85a9-4772-9d84-2d091045c2e3.png</url>
      <title>DEV Community: CICube</title>
      <link>https://dev.to/cicube</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cicube"/>
    <language>en</language>
    <item>
      <title>🗽 Top 5 DevOps AI Tools for 2025</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Fri, 24 Jan 2025 09:01:43 +0000</pubDate>
      <link>https://dev.to/cicube/top-5-devops-ai-tools-for-2025-146l</link>
      <guid>https://dev.to/cicube/top-5-devops-ai-tools-for-2025-146l</guid>
      <description>&lt;h2&gt;
  
  
  The Role of AI in Modern DevOps
&lt;/h2&gt;

&lt;p&gt;DevOps teams are solving more complex problems than ever. Managing cloud infrastructure increasingly means maintaining CI/CD pipelines and keeping security consistent across multiple environments. That's where AI steps in.&lt;/p&gt;

&lt;p&gt;AI tools are no longer just fancy add-ons but are fast becoming an integral part of modern DevOps practices. They help teams automate repetitive tasks, detect issues before they become problems, and make better decisions based on data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As a DevOps engineer who has watched this field transform quickly over the years, I can say one thing with certainty: AI is not just a buzzword; it has become a significant part of the daily toolkit.&lt;/p&gt;

&lt;p&gt;I tried a lot of AI tools in 2024, discarded most of them immediately, and whittled the list down to the top 5 that genuinely transformed the way our company handles DevOps tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3c7nueom4ikec3r47as.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3c7nueom4ikec3r47as.png" alt="Image description" width="566" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. CICube - AI-Powered CI/CD Analytics
&lt;/h2&gt;

&lt;p&gt;As one of the founders of &lt;a href="https://cicube.io/" rel="noopener noreferrer"&gt;CICube&lt;/a&gt;, I've had a front-row seat to how AI can transform CI/CD workflows. The tool grew out of our own struggles debugging CI/CD issues; we knew something better needed to exist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cicube.io" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd0ytgi7atpx6hviyffj.png" alt="CICube AI Demo" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a build breaks, CICube's AI instantly tells you and your team exactly what failed and how to fix it. No more digging through logs, no guessing. The AI agent sends its findings directly to Slack or email so teams can fix problems before they affect anyone else.&lt;/p&gt;

&lt;p&gt;The most useful capabilities teams get with CICube:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Flaky test detection&lt;/strong&gt; before those tests get a chance to hurt productivity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anomaly detection&lt;/strong&gt; of unusual build-duration spikes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analysis&lt;/strong&gt; of consistently failing tests&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pipeline bottleneck detection&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes CICube stand out is that it provides &lt;strong&gt;CI-Focused DORA metrics monitoring&lt;/strong&gt;. Instead of tracking these by hand, it automatically observes and monitors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Success Rate&lt;/strong&gt;: This will tell you how often your pipelines complete without failing. A high success rate means fewer disruptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MTTR (Mean Time to Recovery)&lt;/strong&gt;: This gives you insight into how quickly you can fix a failed pipeline. The shorter this time is, the better your team is at moving forward.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration&lt;/strong&gt;: This measures the lead time to completion. Elite teams keep it short for faster feedback and more iterations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throughput&lt;/strong&gt;: This is the number of successful pipeline completions in a given time period. The higher the throughput, the better.&lt;/li&gt;
&lt;/ul&gt;
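To make these concrete, here's a rough sketch of how success rate and throughput fall out of raw pipeline-run data. The CSV layout and the numbers are invented for illustration; CICube computes these for you automatically:

```shell
# Invented sample of pipeline runs: status,duration_minutes
cat > runs.csv <<'EOF'
success,12
failure,30
success,10
success,11
EOF

# Success rate: share of runs that completed without failing
awk -F, '{ n++; if ($1 == "success") ok++ } END { printf "success_rate=%.0f%%\n", 100 * ok / n }' runs.csv

# Throughput: successful completions in the window covered by the file
awk -F, '$1 == "success" { t++ } END { print "throughput=" t }' runs.csv
```

The sketch only shows what each number means; in practice these are tracked continuously over rolling time windows.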

&lt;p&gt;Weekly reports for the engineering team have become routine. They show clear trends in pipeline performance and automatically roll up action items for team members, work that would otherwise take a few hours of manual analysis.&lt;/p&gt;

&lt;p&gt;Real results from teams we have seen use CICube include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced debugging time from &lt;strong&gt;30 minutes to 5 minutes&lt;/strong&gt; per issue&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;40%&lt;/strong&gt; reduction in the cost of CI once the superfluous steps are identified&lt;/li&gt;
&lt;li&gt;DORA metrics improved from "medium" to &lt;strong&gt;"elite"&lt;/strong&gt; in 3 months&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your team spends more than 10 minutes debugging CI issues, or has no visibility into DORA metrics, give CICube a try.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. GitHub Copilot - Your AI Pair Programmer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnbkuv59hlcr9bqxdgmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnbkuv59hlcr9bqxdgmg.png" alt="Image description" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My team has been using &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt; since the very beginning. It's really good at writing infrastructure code: last week it got me through a complicated Terraform config in half the time it would otherwise have taken.&lt;/p&gt;

&lt;p&gt;One of my colleagues is a big fan of it for Kubernetes manifests. I've watched him generate complete deployment configurations just by describing what he needs. The suggestions are impressively accurate, at least when you're working with standard patterns in your codebase.&lt;/p&gt;

&lt;p&gt;The things that impressed me most:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It generates boilerplate code much faster than I can type&lt;/li&gt;
&lt;li&gt;Suggests relevant error handling that I might have missed&lt;/li&gt;
&lt;li&gt;Helps with those annoying YAML indentations in K8s configurations&lt;/li&gt;
&lt;li&gt;Actually understands your code context&lt;/li&gt;
&lt;/ul&gt;
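To give a sense of the boilerplate this saves, here's a minimal Kubernetes Deployment manifest of the kind Copilot can scaffold from a short description (hand-written here for illustration, not actual Copilot output):

```shell
# A minimal Deployment manifest of the kind Copilot scaffolds
# (hand-written example for illustration, not actual Copilot output)
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
EOF
```

The indentation under `template.spec.containers` is exactly the fiddly YAML detail mentioned above.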

&lt;h2&gt;
  
  
  3. Datadog Watchdog - AI-Driven Monitoring
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g04ohv8f48az1j8pq9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g04ohv8f48az1j8pq9t.png" alt="Image description" width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've heard nothing but good things about &lt;a href="https://www.datadoghq.com/product/watchdog/" rel="noopener noreferrer"&gt;Datadog Watchdog&lt;/a&gt; from people at other companies. A former coworker now uses it at a very large e-commerce company and shared some interesting observations with me.&lt;/p&gt;

&lt;p&gt;It's really strong at anomaly detection. Instead of configuring thresholds by hand (which we all hate), it learns what's normal for your system and alerts on real issues. My colleague said it caught a memory leak that their traditional monitoring had missed for weeks.&lt;/p&gt;

&lt;p&gt;Key benefits they realized:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Spots problems well before users report them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Greatly reduces alert fatigue&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Aids in tracing normally elusive infrastructure issues&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Really valuable alert correlations&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. Snyk - AI-Enhanced Security
&lt;/h2&gt;

&lt;p&gt;Though I haven't used &lt;a href="https://snyk.io" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; myself yet, our security team has been using it for the last six months, and their feedback has been illuminating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21o12hgz10qzr2z4vs1i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21o12hgz10qzr2z4vs1i.png" alt="Image description" width="619" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our security lead says it has revolutionized how they handle vulnerability management: instead of drowning in security alerts, they get actionable results. The AI helps them prioritize what's most critical for our specific codebase.&lt;/p&gt;

&lt;p&gt;What they've found valuable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Catches security vulnerabilities early in the pipeline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides clear fix recommendations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrates easily with existing workflows&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helps meet compliance requirements&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Cortex - AI Infrastructure Management
&lt;/h2&gt;

&lt;p&gt;A friend working at a FinTech startup pointed me to &lt;a href="https://www.cortex.io" rel="noopener noreferrer"&gt;Cortex&lt;/a&gt;. They use it for their microservice architecture, and the results speak for themselves.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo26zvj163idlqh6ns0vq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo26zvj163idlqh6ns0vq.png" alt="Image description" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Where it really shines is in complex environments with many services. The tool automatically maps dependencies, giving teams a much clearer picture of their infrastructure. My friend showed me how it exposed several reliability issues they didn't even know existed, and helped them fix them.&lt;/p&gt;

&lt;p&gt;Actual benefits they have realized:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enhanced understanding of the dependencies among services&lt;/li&gt;
&lt;li&gt;Faster problem resolution&lt;/li&gt;
&lt;li&gt;Improved use of resources&lt;/li&gt;
&lt;li&gt;Automated documentation that is actually useful&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With productivity demands escalating and competition intensifying, integrating AI into DevOps is no longer optional; it is fast becoming a necessity.&lt;/p&gt;

&lt;p&gt;Remember, this is not about replacing human expertise but augmenting it. These AI tools automate and simplify routine processes so we can focus on more strategic work.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ai</category>
      <category>webdev</category>
      <category>docker</category>
    </item>
    <item>
      <title>Mastering kubectl logs - A DevOps Engineer's Guide</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Fri, 03 Jan 2025 06:40:51 +0000</pubDate>
      <link>https://dev.to/cicube/mastering-kubectl-logs-a-devops-engineers-guide-4f6h</link>
      <guid>https://dev.to/cicube/mastering-kubectl-logs-a-devops-engineers-guide-4f6h</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What is kubectl logs?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
kubectl logs fetches logs from containers in Kubernetes pods so you can debug and monitor applications. The streams come directly from a container's stdout and stderr, which makes it an essential troubleshooting tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to use kubectl logs to debug Kubernetes pods?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
kubectl logs retrieves logs from a pod in Kubernetes. If the pod contains multiple containers, specify the container with &lt;code&gt;-c &amp;lt;container-name&amp;gt;&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
 &lt;code&gt;kubectl logs -f &amp;lt;pod-name&amp;gt;&lt;/code&gt; streams the logs in real time; use &lt;code&gt;kubectl logs --since=1h&lt;/code&gt; or &lt;code&gt;kubectl logs --since-time=&amp;lt;timestamp&amp;gt;&lt;/code&gt; to filter logs by time. It is a must-have tool for monitoring and debugging.&lt;/p&gt;

&lt;p&gt;Having debugged numerous Kubernetes clusters, I can confirm that kubectl logs is the first command I reach for every day. Whether I'm debugging a failed pod, tracking application behavior, or simply trying to understand why a deployment didn't work as expected, it has saved me hours of sleep on more than one occasion.&lt;/p&gt;

&lt;p&gt;Now, let me explain why this is such an important command: When running applications in Kubernetes, you don't have direct access to your containers like you do with Docker on your local machine. The &lt;code&gt;kubectl logs&lt;/code&gt; command is your window into what's happening inside those containers. I use it dozens of times daily for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debugging application crashes
&lt;/li&gt;
&lt;li&gt;Application start-up monitoring
&lt;/li&gt;
&lt;li&gt;Investigating performance issues
&lt;/li&gt;
&lt;li&gt;Verification of configuration changes
&lt;/li&gt;
&lt;li&gt;Troubleshooting network issues
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understanding kubectl logs&lt;/li&gt;
&lt;li&gt;Getting Started with Basic kubectl Commands&lt;/li&gt;
&lt;li&gt;Working with Log Output&lt;/li&gt;
&lt;li&gt;When Things Go Wrong: A Debugging Guide&lt;/li&gt;
&lt;li&gt;Lessons I've Learned the Hard Way&lt;/li&gt;
&lt;li&gt;Common kubectl logs Problems and How I Solve Them&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Understanding kubectl logs
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/" rel="noopener noreferrer"&gt;kubectl logs&lt;/a&gt;: This is one command that would be in my tool belt for looking at the container logs in Kubernetes. Like running docker logs, it has additional features to make it perfect for a distributed environment.&lt;/p&gt;

&lt;p&gt;Here is the basic syntax I use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &amp;lt;pod-name&amp;gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;-c&lt;/span&gt; container-name] &lt;span class="o"&gt;[&lt;/span&gt;flags]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command fetches logs from the container runtime (such as Docker or containerd) and streams them into my terminal. The logs come directly from the container's stdout and stderr streams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started with Basic kubectl Commands
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Single Container Logs
&lt;/h3&gt;

&lt;p&gt;For simple pods with just one container, I use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs nginx-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command is quite straightforward, but let's see what happens behind the scenes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kubernetes identifies the Pod
&lt;/li&gt;
&lt;li&gt;Since there is only one container, it automatically selects that container.
&lt;/li&gt;
&lt;li&gt;Streams the container's stdout/stderr logs
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Working with Multiple Containers
&lt;/h3&gt;

&lt;p&gt;When working with pods that have multiple containers, as is common in production environments, you need to specify the container name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs web-pod &lt;span class="nt"&gt;-c&lt;/span&gt; nginx-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A mistake I made early in my career, and still see others make, is forgetting to specify the container name in a multi-container pod. You will get an error like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Error from server &lt;span class="o"&gt;(&lt;/span&gt;BadRequest&lt;span class="o"&gt;)&lt;/span&gt;: a container name must be specified &lt;span class="k"&gt;for &lt;/span&gt;pod web-pod, choose one of: &lt;span class="o"&gt;[&lt;/span&gt;nginx-container sidecar-container]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Live Log Streaming
&lt;/h3&gt;

&lt;p&gt;One of my favorite features is streaming logs in real-time with &lt;code&gt;-f&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; api-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I use this constantly during deployments to watch for startup issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Checking Previous Container Logs
&lt;/h3&gt;

&lt;p&gt;If a container crashed and restarted, I see the previous container's logs with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;--previous&lt;/span&gt; nginx-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This has saved me many times when debugging crash loops.&lt;/p&gt;

&lt;h1&gt;
  
  
  Working with Log Output
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Time-Based Filtering
&lt;/h2&gt;

&lt;p&gt;In incident investigations, I often need logs from specific time windows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Logs of the last hour&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;--since&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1h nginx-pod

&lt;span class="c"&gt;# Logs since a specific timestamp&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;--since-time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2024-01-01T10:00:00Z nginx-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing Large Log Outputs
&lt;/h2&gt;

&lt;p&gt;For chatty applications, I usually limit the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Show only the last 100 lines&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;--tail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;100 nginx-pod

&lt;span class="c"&gt;# Only show recent logs with timestamps&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;--timestamps&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--tail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;50 nginx-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Handling Multiple Pod Logs
&lt;/h2&gt;

&lt;p&gt;When working with distributed applications, I frequently need to collect logs from several pods at once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Logs from all pods with label app=nginx&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--all-containers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Logs from all containers in a pod&lt;/span&gt;
kubectl logs nginx-pod &lt;span class="nt"&gt;--all-containers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  When Things Go Wrong: A Debugging Guide
&lt;/h2&gt;

&lt;h2&gt;
  
  
  What to Check When Pods Won't Start
&lt;/h2&gt;

&lt;p&gt;When a pod isn't starting up correctly, here's my standard operating procedure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# First check current logs&lt;/span&gt;
kubectl logs app-pod

&lt;span class="c"&gt;# If pod is crash-looping, check previous container logs&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;--previous&lt;/span&gt; app-pod

&lt;span class="c"&gt;# Follow logs during restart&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; app-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Finding Issues in Production Environments
&lt;/h3&gt;

&lt;p&gt;In production, I frequently have to look at several containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check application logs&lt;/span&gt;
kubectl logs app-pod &lt;span class="nt"&gt;-c&lt;/span&gt; application-container

&lt;span class="c"&gt;# Check sidecar logs&lt;/span&gt;
kubectl logs app-pod &lt;span class="nt"&gt;-c&lt;/span&gt; istio-proxy

&lt;span class="c"&gt;# Save logs for later analysis&lt;/span&gt;
kubectl logs app-pod &lt;span class="nt"&gt;--all-containers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; debug_logs.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Lessons I've Learned the Hard Way
&lt;/h1&gt;

&lt;p&gt;I have, over the years, collected quite a few tips that have helped make my life easier working with kubectl logs. These aren't things you'll find in the official documentation but lessons learned after hours and hours of debugging production environments. Here are some of my favorite techniques that I wish someone had told me when I was starting out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Smart Use of Labels&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
One of Kubernetes' biggest strengths is its label system, and it changes how you manage logs. Being able to quickly pull logs for specific components makes a big difference. Instead of maintaining a long list of pod names or writing complex scripts, I use labels:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; &lt;span class="c"&gt;# Instead of pod names, use labels&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backend,environment&lt;span class="o"&gt;=&lt;/span&gt;prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Command Chaining&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Sometimes you need the logs from a pod in a deployment without hunting down its exact name - during rolling updates, for example. Here's a trick I use (note that this grabs the first matching pod; add &lt;code&gt;--sort-by=.metadata.creationTimestamp&lt;/code&gt; to the inner &lt;code&gt;kubectl get&lt;/code&gt; if you need the newest):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get logs from the first pod matching the label&lt;/span&gt;
kubectl logs &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pod &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[0].metadata.name}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Save Time with Aliases&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
When you are typing these commands hundreds of times a day, every keystroke counts. These aliases probably saved me days of typing over the years:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;kl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'kubectl logs'&lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;klf&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'kubectl logs -f'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Common kubectl logs Problems and How I Solve Them
&lt;/h2&gt;

&lt;p&gt;After years of working with Kubernetes in various environments, I have encountered a few common issues that keep cropping up. Here's how I handle each of them; these solutions have become my go-to fixes for the most frustrating kubectl logs problems:&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Do With Massive Log Files
&lt;/h2&gt;

&lt;p&gt;One of the most common issues I deal with is containers generating gigabytes of logs. When your application is chatty or has been running for a while, fetching all of its logs can overwhelm your terminal or even crash the session. Here's how I cap the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;--limit-bytes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;100000 large-log-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may also see this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error from server (BadRequest): previous terminated container not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This error means there is no terminated previous container to read from: &lt;code&gt;--previous&lt;/code&gt; only works when the container has actually restarted and its old instance is still tracked. When a restart has just happened, I grab the previous instance's logs with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;--previous&lt;/span&gt; large-log-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Can't Find Your Container?
&lt;/h3&gt;

&lt;p&gt;This one used to drive me crazy-you know the container is there, but kubectl logs can't seem to find it. Usually this happens in pods with multiple containers or when container names don't match what you expect. Here's my debugging approach:&lt;/p&gt;

&lt;p&gt;First I make sure the pod exists:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pod nginx-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I check the names of the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pod nginx-pod &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.containers[*].name}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Common errors you might see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error from server (NotFound): pod "nginx-pod" not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This usually means you're in the wrong namespace. I check with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config get-contexts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>What is AWS WAF? A DevOps Engineer's Perspective</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Thu, 02 Jan 2025 14:13:15 +0000</pubDate>
      <link>https://dev.to/cicube/what-is-aws-waf-a-devops-engineers-perspective-5db6</link>
      <guid>https://dev.to/cicube/what-is-aws-waf-a-devops-engineers-perspective-5db6</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS WAF?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS WAF (Web Application Firewall)&lt;/strong&gt; is a security service that protects your web applications from common threats like SQL injection, cross-site scripting (XSS), and bots. It works by inspecting incoming requests, blocking malicious traffic, and ensuring legitimate users can access your application securely.&lt;/p&gt;

&lt;p&gt;As an AWS DevOps engineer with several years of experience securing web applications, let me explain AWS WAF in simple terms. Think of AWS WAF as a security guard at the gate: it lets real visitors into your web application and turns away anyone carrying something unwanted.&lt;/p&gt;

&lt;p&gt;This need has never been more crucial. In the modern digital world, web applications are always under attack by automated bots, hackers, and malicious scripts. A WAF is your first line of defense against these threats.&lt;/p&gt;

&lt;p&gt;So why do you need it? Let me illustrate with a simple example:&lt;/p&gt;

&lt;p&gt;Imagine that you run an online store. Every day, thousands of customers enter your site to view and purchase goods. But among the real customers, there are also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bots trying to scrape your prices&lt;/li&gt;
&lt;li&gt;Attackers trying to inject malicious code&lt;/li&gt;
&lt;li&gt;Bad actors attempting to steal customer information&lt;/li&gt;
&lt;li&gt;Scripts trying to overload your servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS WAF acts as your security checkpoint, examining each request before it reaches your application. It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Block suspicious IP addresses&lt;/li&gt;
&lt;li&gt;Block malicious requests&lt;/li&gt;
&lt;li&gt;Deter data theft attempts&lt;/li&gt;
&lt;li&gt;Prevent automated attacks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps we'll cover: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is AWS WAF?&lt;/li&gt;
&lt;li&gt;How AWS WAF Works: An Easy Explanation&lt;/li&gt;
&lt;li&gt;
Key Features of AWS WAF

&lt;ul&gt;
&lt;li&gt;Traffic Control&lt;/li&gt;
&lt;li&gt;Rate Limiting&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Understanding Your Options: AWS WAF vs Alternatives

&lt;ul&gt;
&lt;li&gt;AWS WAF vs. Alternatives&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Find Your Best WAF Solution&lt;/li&gt;

&lt;li&gt;When Should You Choose AWS WAF?&lt;/li&gt;

&lt;li&gt;Cost Breakdown: What You'll Actually Pay&lt;/li&gt;

&lt;li&gt;Calculate AWS WAF Costs for Your Use Case&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  How AWS WAF Works: An Easy Explanation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fwaf-arc-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fwaf-arc-2.png" alt="AWS WAF Architecture" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The process is similar to airport security.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Inspection Point&lt;/strong&gt;: Every request to your application passes through AWS WAF&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rule Checking&lt;/strong&gt;: The WAF checks the request against your security rules &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decision Making&lt;/strong&gt;: Based on the rule set, WAF either:

&lt;ul&gt;
&lt;li&gt;Allows legitimate traffic through&lt;/li&gt;
&lt;li&gt;Blocks suspicious requests&lt;/li&gt;
&lt;li&gt;Counts requests for monitoring&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
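&lt;p&gt;To make the allow/block/count flow concrete, here is a minimal Python sketch of that decision loop. This is an illustrative model only, not the AWS WAF API; the rule names and shapes are hypothetical.&lt;/p&gt;

```python
# Hypothetical sketch of the WAF decision flow: each rule has a matcher
# and an action; the first ALLOW/BLOCK match decides, COUNT rules only
# increment a counter and evaluation continues.

def evaluate(request, rules, counters):
    """Return 'ALLOW' or 'BLOCK' for a request dict."""
    for rule in rules:
        if rule["matches"](request):
            if rule["action"] == "COUNT":
                counters[rule["name"]] = counters.get(rule["name"], 0) + 1
                continue  # count mode never blocks, keep evaluating
            return rule["action"]  # 'ALLOW' or 'BLOCK'
    return "ALLOW"  # default action

rules = [
    {"name": "bad-ips", "action": "BLOCK",
     "matches": lambda r: r["ip"] in {"203.0.113.9"}},
    {"name": "sqli-probe", "action": "COUNT",
     "matches": lambda r: "' OR 1=1" in r["uri"]},
]

counters = {}
print(evaluate({"ip": "203.0.113.9", "uri": "/"}, rules, counters))  # BLOCK
print(evaluate({"ip": "198.51.100.1", "uri": "/?q=' OR 1=1"}, rules, counters))  # ALLOW
print(counters)  # {'sqli-probe': 1}
```

&lt;p&gt;Notice how a rule in count mode observes traffic without affecting it; that is exactly why Count mode is the safe way to trial a new rule.&lt;/p&gt;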

&lt;h3&gt;
  
  
  Key Features of AWS WAF
&lt;/h3&gt;

&lt;p&gt;Having implemented AWS WAF over the years, I have picked out the most important features every user should know about:&lt;/p&gt;

&lt;h3&gt;
  
  
  Protection Against Common Attacks
&lt;/h3&gt;

&lt;p&gt;Think of the online store example from a bit earlier. AWS WAF protects this kind of resource against several common attacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL Injection: prevents attackers from reading or tampering with your database&lt;/li&gt;
&lt;li&gt;XSS (Cross-Site Scripting): blocks attempts to inject malicious scripts into your pages&lt;/li&gt;
&lt;li&gt;Data Theft: blocks attempts to exfiltrate customer information&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Traffic Control
&lt;/h3&gt;

&lt;p&gt;You can control who accesses your application based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Geographic location (useful for region-specific services)&lt;/li&gt;
&lt;li&gt;IP addresses: block known bad actors&lt;/li&gt;
&lt;li&gt;Request patterns: stop suspicious behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Rate Limiting
&lt;/h3&gt;

&lt;p&gt;Think of rate limiting as crowd control for your store: it prevents any single source from flooding you with requests in a short window.&lt;/p&gt;
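&lt;p&gt;Conceptually, a rate-based rule is just a counter per source over a time window. Here is a minimal fixed-window sketch in Python; it is illustrative only, and AWS WAF's actual rate-based rules use a rolling window rather than the fixed one shown here.&lt;/p&gt;

```python
import time

class FixedWindowLimiter:
    """Toy fixed-window rate limiter: at most `limit` requests
    per `window` seconds from each source IP."""

    def __init__(self, limit, window=300.0):
        self.limit = limit
        self.window = window
        self.counts = {}  # ip -> (window_start, count)

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counts.get(ip, (now, 0))
        if now - start >= self.window:  # window expired, start fresh
            start, count = now, 0
        count += 1
        self.counts[ip] = (start, count)
        if count > self.limit:
            return False  # over the limit: throttle this request
        return True

limiter = FixedWindowLimiter(limit=3, window=300)
results = [limiter.allow("198.51.100.7", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False] -- the fourth request is throttled
```

&lt;p&gt;Per-IP state like this is exactly what WAF maintains for you at the edge, so you never have to run such a counter in your own application.&lt;/p&gt;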

&lt;h3&gt;
  
  
  Understanding Your Options: AWS WAF vs Alternatives
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;AWS WAF&lt;/th&gt;
&lt;th&gt;Cloudflare WAF&lt;/th&gt;
&lt;th&gt;ModSecurity&lt;/th&gt;
&lt;th&gt;Imperva WAF&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ease of Use&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Easy&lt;/td&gt;
&lt;td&gt;Complex&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pay-as-you-go&lt;/td&gt;
&lt;td&gt;$20+/month&lt;/td&gt;
&lt;td&gt;Free (open-source)&lt;/td&gt;
&lt;td&gt;Enterprise pricing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best For&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AWS Ecosystem&lt;/td&gt;
&lt;td&gt;Global CDN &amp;amp; DDoS&lt;/td&gt;
&lt;td&gt;Full customization&lt;/td&gt;
&lt;td&gt;Enterprise Security&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Integration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AWS native services&lt;/td&gt;
&lt;td&gt;CDN &amp;amp; edge servers&lt;/td&gt;
&lt;td&gt;Self-hosted&lt;/td&gt;
&lt;td&gt;Enterprise-grade&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High (AWS managed)&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Custom setup&lt;/td&gt;
&lt;td&gt;Very High&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Find Your Best WAF Solution
&lt;/h2&gt;

&lt;p&gt;Not sure which WAF is right for you? I have created an interactive tool to help you make this decision based on your particular needs.&lt;/p&gt;



&lt;h2&gt;
  
  
  When Should You Choose AWS WAF?
&lt;/h2&gt;

&lt;p&gt;In my opinion, AWS WAF is the right choice when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You Are Already Using AWS:&lt;/strong&gt; AWS WAF fits naturally into your infrastructure if your applications run on AWS services like CloudFront, Application Load Balancer, or API Gateway.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You Need Customizable Security:&lt;/strong&gt; you want security rules tailored to your application's unique needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You Want Cost Control:&lt;/strong&gt; you prefer paying for actual usage rather than a fixed subscription.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You Require Compliance:&lt;/strong&gt; your industry, such as healthcare or finance, must meet specific security standards.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cost Breakdown: What You'll Actually Pay
&lt;/h3&gt;

&lt;p&gt;Let me make AWS WAF pricing crystal clear with a concrete example:&lt;/p&gt;

&lt;p&gt;For an average small-to-medium website:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base cost: $5.00/month for the web ACL itself&lt;/li&gt;
&lt;li&gt;Rules: $1.00/month per rule group&lt;/li&gt;
&lt;li&gt;Requests: $0.60 per million requests&lt;/li&gt;
&lt;li&gt;Rule checks: $0.10 per million rule evaluations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Practical example for a website with 100,000 visitors per month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base WAF: $5&lt;/li&gt;
&lt;li&gt;Basic rule set: $5&lt;/li&gt;
&lt;li&gt;Request costs: ~$0.06&lt;/li&gt;
&lt;li&gt;Rule evaluations: ~$0.05&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total: Approximately $10-15 per month&lt;/p&gt;
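&lt;p&gt;The arithmetic above is easy to capture in a small helper. The rates below mirror the figures quoted in this post; check the current AWS pricing page before relying on them.&lt;/p&gt;

```python
def waf_monthly_cost(requests, rule_groups, rules_per_request,
                     base=5.00, per_rule_group=1.00,
                     per_million_requests=0.60, per_million_evals=0.10):
    """Estimate a monthly AWS WAF bill from the rates quoted above."""
    evaluations = requests * rules_per_request
    return (base
            + rule_groups * per_rule_group
            + requests / 1_000_000 * per_million_requests
            + evaluations / 1_000_000 * per_million_evals)

# ~100,000 requests/month, a 5-rule basic set checked on every request
total = waf_monthly_cost(requests=100_000, rule_groups=5, rules_per_request=5)
print(f"${total:.2f}/month")  # $10.11/month
```

&lt;p&gt;Plugging in the numbers from the example above lands at roughly $10/month, matching the estimate.&lt;/p&gt;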

&lt;h2&gt;
  
  
  Calculate AWS WAF Costs for Your Use Case
&lt;/h2&gt;

&lt;p&gt;Want to calculate costs for your use case? Try our interactive pricing calculator: &lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;Q: Do I need technical expertise to use AWS WAF?&lt;br&gt;&lt;br&gt;
A: Basic AWS knowledge helps, but you can always start with pre-configured rules. I recommend beginning with AWS managed rules and learning as you go.&lt;/p&gt;

&lt;p&gt;Q: Can I try AWS WAF before committing?&lt;br&gt;&lt;br&gt;
A: Yes! I often set up AWS WAF in "Count" mode first, which lets you see what it would block without actually blocking anything.&lt;/p&gt;

&lt;p&gt;Q: Will it slow down my website?&lt;br&gt;&lt;br&gt;
A: No. AWS WAF runs at AWS edge locations, so it introduces minimal latency, usually less than 1ms.&lt;/p&gt;

&lt;p&gt;Q: What if AWS WAF blocks legit traffic?&lt;br&gt;&lt;br&gt;
A: You can easily tune rules if you find false positives. I always recommend starting with looser rules and tightening them based on monitoring.&lt;/p&gt;

&lt;p&gt;Q: Can I use AWS WAF with services not hosted on AWS?&lt;br&gt;&lt;br&gt;
A: While possible, it is most effective with AWS services. For non-AWS applications, you might want to consider Cloudflare or ModSecurity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS WAF is a powerful tool for protecting your web applications, but it is not the only option out there. The best choice depends on your specific needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose AWS WAF if you are heavily invested in AWS&lt;/li&gt;
&lt;li&gt;Consider Cloudflare if you want simplicity and CDN integration&lt;/li&gt;
&lt;li&gt;Check out ModSecurity if you need complete control and have the expertise to exercise it&lt;/li&gt;
&lt;li&gt;Evaluate Imperva for enterprise-grade requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep in mind, web security is a process, not a one-time task. Start with the basic protections covered here and build on them as you learn what your application needs.&lt;/p&gt;

&lt;p&gt;Feel free to use the interactive tool above to find the right solution for your specific case, and don't hesitate to start with a simple configuration; you can always enhance it later.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Beyond Docker - A DevOps Engineer's Guide to Container Alternatives</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Thu, 26 Dec 2024 15:50:26 +0000</pubDate>
      <link>https://dev.to/cicube/beyond-docker-a-devops-engineers-guide-to-container-alternatives-4bk1</link>
      <guid>https://dev.to/cicube/beyond-docker-a-devops-engineers-guide-to-container-alternatives-4bk1</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The container landscape has evolved considerably since Docker changed the way we package and deploy applications. While Docker is still the most widely used option today, some compelling alternatives are well worth considering and might better fit your needs. Let me share my journey exploring these alternatives and what I've learned along the way.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Evolution of Container Runtimes
&lt;/h2&gt;

&lt;p&gt;When Docker first appeared, it was revolutionary in combining image building, container management, and the runtime in a single tool. As container usage matured, it became clear that teams needed specialized tools for particular aspects of containerization, and that need drove the rise of these alternatives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Container Standards
&lt;/h3&gt;

&lt;p&gt;The container ecosystem is built on open standards, most notably the &lt;a href="https://opencontainers.org/" rel="noopener noreferrer"&gt;Open Container Initiative (OCI)&lt;/a&gt;, which standardizes the image format, runtime behavior, and image distribution.&lt;/p&gt;

&lt;p&gt;This standardization means you are not locked into any particular tool. You can build your images with one tool and run them with another, which lets you pick the best tool for each job.&lt;/p&gt;
&lt;h2&gt;
  
  
  Podman: The Daemon-free Alternative
&lt;/h2&gt;

&lt;p&gt;Working day to day with containers as a DevOps engineer, I have found &lt;a href="https://podman.io/" rel="noopener noreferrer"&gt;Podman&lt;/a&gt; to be a game-changer for teams that take security seriously, starting with avoiding root privileges. Unlike Docker, it is daemonless, a significant architectural change that reshapes how teams approach container security in production environments.&lt;/p&gt;
&lt;h2&gt;
  
  
  Security by Design
&lt;/h2&gt;

&lt;p&gt;When I first switched to Podman for a security-conscious client, the daemonless architecture made total sense. Each container runs with your user permissions, not as a privileged daemon:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Running a container as your user
podman run nginx   &lt;span class="c"&gt;# No root, no daemon&lt;/span&gt;

&lt;span class="c"&gt;# Even rootless containers can bind to privileged ports&lt;/span&gt;
podman run &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 nginx   &lt;span class="c"&gt;# Works without root!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Kubernetes-like Experience on Desktop
&lt;/h3&gt;

&lt;p&gt;What was a really pleasant surprise was Podman's native pod support, which lets you try out Kubernetes-like concepts on a local system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a pod with multi containers&lt;/span&gt;
podman pod create &lt;span class="nt"&gt;--name&lt;/span&gt; my-app 
podman run &lt;span class="nt"&gt;--pod&lt;/span&gt; my-app &lt;span class="nt"&gt;-d&lt;/span&gt; nginx
podman run &lt;span class="nt"&gt;--pod&lt;/span&gt; my-app &lt;span class="nt"&gt;-d&lt;/span&gt; redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  containerd: The Kubernetes Runtime
&lt;/h2&gt;

&lt;p&gt;Having operated large Kubernetes clusters, one learns to appreciate the focused approach of &lt;a href="https://containerd.io/" rel="noopener noreferrer"&gt;containerd&lt;/a&gt;. A lightweight, high-performance container runtime, it powers many container platforms, including Kubernetes. In my experience, containerd does one thing and does it well: it runs containers efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Platform Building
&lt;/h3&gt;

&lt;p&gt;containerd's focus shines when you are building container platforms:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Simple integration with containerd&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;containerd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/run/containerd/containerd.sock"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewContainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"nginx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;containerd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithNewSnapshot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"nginx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="n"&gt;containerd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithNewSpec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;oci&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithImageConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  BuildKit: Reimagining Container Building
&lt;/h2&gt;

&lt;p&gt;I remember when container builds were slow, inefficient, and usually the bottleneck in our CI/CD pipelines. That changed when I discovered BuildKit. &lt;a href="https://github.com/moby/buildkit" rel="noopener noreferrer"&gt;BuildKit&lt;/a&gt; is the next-generation build engine for Docker, but it can also be used independently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Concurrent and Efficient Builds
&lt;/h3&gt;

&lt;p&gt;The best thing about BuildKit is how it parallelizes build stages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dockerfile
# These stages build concurrently
FROM golang:1.21 AS backend
COPY backend. 
RUN go build

FROM node:18 AS frontend
COPY frontend. 
RUN npm build

FROM alpine 
COPY --from=backend /app/backend.  
COPY --from=frontend /app/dist ./dist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
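&lt;p&gt;Under the hood, BuildKit turns a Dockerfile like the one above into a dependency graph and runs independent stages in parallel. Here is a toy Python model of that scheduling; it is illustrative only, and BuildKit's real solver is far more sophisticated.&lt;/p&gt;

```python
# Toy model of BuildKit-style stage scheduling: stages whose dependencies
# are satisfied run in the same parallel "wave". Illustrative only;
# BuildKit's real solver is far more sophisticated.

stages = {
    "backend": [],                     # FROM golang ...  (no dependencies)
    "frontend": [],                    # FROM node ...    (no dependencies)
    "final": ["backend", "frontend"],  # COPY --from=backend / --from=frontend
}

def build_order(stages):
    """Group stages into waves; every stage in a wave can build in parallel."""
    done, waves = set(), []
    while len(done) != len(stages):
        wave = sorted(s for s, deps in stages.items()
                      if s not in done and all(d in done for d in deps))
        waves.append(wave)
        done.update(wave)
    return waves

print(build_order(stages))  # [['backend', 'frontend'], ['final']]
```

&lt;p&gt;The backend and frontend stages land in the same wave, which is exactly why the Dockerfile above builds faster under BuildKit than under the old sequential builder.&lt;/p&gt;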



&lt;h2&gt;
  
  
  LXC/LXD: System Containers
&lt;/h2&gt;

&lt;p&gt;Working with legacy applications that needed full system access taught me a different approach to containerization: &lt;a href="https://linuxcontainers.org/" rel="noopener noreferrer"&gt;LXC/LXD&lt;/a&gt;. It focuses on system containers rather than application containers; think of them as lightweight VMs rather than what most consider the typical container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Development Environments
&lt;/h3&gt;

&lt;p&gt;LXD excels at isolated development environments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a full Ubuntu environment&lt;/span&gt;
lxc launch ubuntu:20.04 dev-env
lxc &lt;span class="nb"&gt;exec &lt;/span&gt;dev-env &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;python3

&lt;span class="c"&gt;# Share your project folder&lt;/span&gt;
lxc config device add dev-env code disk &lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/path/to/code &lt;span class="nv"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Through my journey exploring alternatives to Docker, I have learned that the container ecosystem is more about selecting the right tool for your needs than finding a perfect replacement. Podman's rootless approach brings security without sacrifice, containerd's simplicity fits perfectly in Kubernetes environments, BuildKit transforms how we build images, and LXC/LXD offers a unique take on system containerization.&lt;/p&gt;

&lt;p&gt;The cool thing about modern container tools is the way they interoperate: you can build efficient images with BuildKit, run them with Podman in development, and deploy to containerd in production. This flexibility, enabled by OCI standards, lets us create workflows that truly fit our needs rather than adapting our needs to fit a single tool.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Docker Swarm vs Kubernetes - A Deep Technical Analysis</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Sat, 21 Dec 2024 09:32:27 +0000</pubDate>
      <link>https://dev.to/cicube/docker-swarm-vs-kubernetes-a-deep-technical-analysis-3p1o</link>
      <guid>https://dev.to/cicube/docker-swarm-vs-kubernetes-a-deep-technical-analysis-3p1o</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Overview: What's the Difference Between Docker Swarm and Kubernetes?
&lt;/h3&gt;

&lt;p&gt;Docker Swarm and Kubernetes are both popular for container management, but they serve different needs. Swarm is simple, quick to set up, and ideal for smaller projects or teams, as it integrates directly with Docker. Kubernetes, with features like self-healing, scaling, and customization, suits complex, large-scale, or enterprise applications, though it requires more time to learn and set up.&lt;/p&gt;

&lt;p&gt;In a nutshell: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Docker Swarm when quick deployment and simplicity are required.&lt;/li&gt;
&lt;li&gt;Use Kubernetes when you are dealing with a large, complex system where you need powerful tools to scale and manage the workloads. Which one to use depends on your project's size, your team's expertise, and the level of complexity you're ready to handle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this guide, I will dive deep into their architectures and their strengths in real-world applications, giving you a foundation to decide which best fits your DevOps needs.&lt;/p&gt;

&lt;p&gt;As a DevOps engineer who has run both Docker Swarm and Kubernetes in production for several years, I want to share deep technical insight into both platforms. Having deployed and managed clusters for small startups and huge enterprises alike, I have firsthand insight into where each shines and where each struggles.&lt;/p&gt;

&lt;p&gt;Steps we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is Docker Swarm?&lt;/li&gt;
&lt;li&gt;What is Kubernetes?&lt;/li&gt;
&lt;li&gt;
Docker Swarm vs Kubernetes: Container Orchestration Architecture

&lt;ul&gt;
&lt;li&gt;Core Components and Architecture&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Service Management and Deployment&lt;/li&gt;
&lt;li&gt;
Docker Swarm Services

&lt;ul&gt;
&lt;li&gt;Kubernetes Deployment&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Docker Swarm vs. Kubernetes Networking Architecture

&lt;ul&gt;
&lt;li&gt;Docker Swarm Networking&lt;/li&gt;
&lt;li&gt;Kubernetes Networking&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
State Management

&lt;ul&gt;
&lt;li&gt;Docker Swarm State Management&lt;/li&gt;
&lt;li&gt;Kubernetes State Management&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Storage Architecture

&lt;ul&gt;
&lt;li&gt;Docker Swarm Storage&lt;/li&gt;
&lt;li&gt;Kubernetes Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  What is Docker Swarm?
&lt;/h3&gt;

&lt;p&gt;Having worked with &lt;a href="https://docs.docker.com/engine/swarm/" rel="noopener noreferrer"&gt;Docker Swarm&lt;/a&gt; since the early days, I can attest that it is Docker's native orchestrator: it turns a cluster of Docker hosts into a single, virtual Docker host. Because it fits cleanly into the Docker ecosystem I already knew, I was immediately comfortable with it.&lt;/p&gt;

&lt;p&gt;I use Swarm in my daily operations for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic service discovery and load balancing&lt;/li&gt;
&lt;li&gt;High availability of my applications&lt;/li&gt;
&lt;li&gt;Scaling services up or down with simple commands&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The cool thing about Swarm is its simplicity. I remember my first Swarm cluster; it took me less than 5 minutes to set up. Just one command, and I had a working orchestration platform. This simplicity does not mean it lacks power; I have run production workloads serving millions of requests on Swarm clusters.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Kubernetes?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, or K8s in my parlance, is a whole different animal. Having managed numerous Kubernetes clusters, I can tell you that this was indeed much more than just a container orchestrator; it's a full-featured platform to run distributed systems. It was open-sourced by Google, drawing on their experience running large-scale container deployments, and it is now the de facto standard in container orchestration.&lt;/p&gt;

&lt;p&gt;In my opinion, based on experience, Kubernetes really excels at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managing complex, microservices-based applications&lt;/li&gt;
&lt;li&gt;Providing strong self-healing capabilities&lt;/li&gt;
&lt;li&gt;Offering advanced deployment strategies&lt;/li&gt;
&lt;li&gt;Supporting extensive customization via its API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When I first encountered Kubernetes, I found it overwhelming, but the more I dug into it, the more the architecture grew on me. From its control plane to the networking model, every little feature seems designed with scalability and extensibility in mind. I have used it to run clusters with thousands of containers, and its ability to handle such scale is remarkable.&lt;/p&gt;
&lt;h3&gt;
  
  
  Docker Swarm vs Kubernetes: Container Orchestration Architecture
&lt;/h3&gt;

&lt;p&gt;Years of container orchestration have taught me that the first thing to understand is a platform's architectural foundation. Orchestration is not just about running containers; it is about managing their entire lifecycle, including high availability and desired state, across a distributed system.&lt;/p&gt;
&lt;h3&gt;
  
  
  Core Components and Architecture
&lt;/h3&gt;
&lt;h3&gt;
  
  
  Docker Swarm Architecture
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Basic Swarm architecture components&lt;/span&gt;
Manager Nodes &lt;span class="o"&gt;(&lt;/span&gt;Control Plane&lt;span class="o"&gt;)&lt;/span&gt;  
• Raft Consensus Group  
  API &lt;span class="o"&gt;(&lt;/span&gt;Extended Docker API&lt;span class="o"&gt;)&lt;/span&gt;  
  Orchestrator  
Schedulers  
Allocate

Worker Nodes
• Container Runtime  
  Network Driver  
  Volume Plugins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fimages%2Fdocker.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fimages%2Fdocker.png" alt="docker swarm architecture" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I like most about Swarm is that the architecture stays simple: the control plane is part of Docker Engine, which means:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control Plane Integration&lt;/strong&gt;: When I run the &lt;code&gt;docker swarm init&lt;/code&gt; command, by default, it does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Starts up the Raft consensus group&lt;/li&gt;
&lt;li&gt;Configures control plane TLS&lt;/li&gt;
&lt;li&gt;Initializes the overlay network&lt;/li&gt;
&lt;li&gt;Creates the internal DNS&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;State Management&lt;/strong&gt;: Raft consensus protocol maintains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leader election among managers&lt;/li&gt;
&lt;li&gt;Distributed state storage&lt;/li&gt;
&lt;li&gt;Configuration replication&lt;/li&gt;
&lt;li&gt;Failure detection&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Service Orchestration&lt;/strong&gt;: The orchestrator handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheduling services across nodes&lt;/li&gt;
&lt;li&gt;Maintaining desired state&lt;/li&gt;
&lt;li&gt;Managing the container lifecycle&lt;/li&gt;
&lt;li&gt;Configuring load balancing&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
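&lt;p&gt;The "maintaining desired state" step above is the heart of any orchestrator. Here is a toy reconciliation pass in Python; it assumes nothing about Swarm's internals and is purely illustrative.&lt;/p&gt;

```python
def reconcile(desired, current):
    """One pass of a toy reconciliation loop: compare desired replica
    counts per service with running replicas and emit the actions an
    orchestrator would schedule."""
    actions = []
    for service, want in desired.items():
        have = current.get(service, 0)
        if want > have:
            actions.append(("start", service, want - have))
        elif have > want:
            actions.append(("stop", service, have - want))
    for service in current:
        if service not in desired:
            actions.append(("stop", service, current[service]))
    return actions

desired = {"web": 3, "redis": 1}
current = {"web": 1, "worker": 2}  # e.g. after a node failure
print(reconcile(desired, current))
# [('start', 'web', 2), ('start', 'redis', 1), ('stop', 'worker', 2)]
```

&lt;p&gt;Both Swarm and Kubernetes run loops in this spirit continuously, which is why a failed container simply reappears without any operator action.&lt;/p&gt;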
&lt;h3&gt;
  
  
  Kubernetes Architecture
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Kubernetes control plane components&lt;/span&gt;
Control Plane  
• API Server &lt;span class="o"&gt;(&lt;/span&gt;REST API&lt;span class="o"&gt;)&lt;/span&gt;  
• etcd &lt;span class="o"&gt;(&lt;/span&gt;Distributed Storage&lt;span class="o"&gt;)&lt;/span&gt;  
Scheduler  
Controller Manager
Cloud Controller Manager

Node Components  
• Kubelet  
• Container Runtime  
• Kube-proxy  
• CNI Plugins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fimages%2Fk8s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Fimages%2Fk8s.png" alt="K8S architecture" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes takes a more modular approach, which I find gives me more flexibility to work with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Control Plane Components&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Server serves as the gateway for all operations&lt;/li&gt;
&lt;li&gt;etcd provides consistent, reliable state storage&lt;/li&gt;
&lt;li&gt;Scheduler makes Pod placement decisions&lt;/li&gt;
&lt;li&gt;Controller Manager runs the control loops&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Node Architecture&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubelet manages containers on each node&lt;/li&gt;
&lt;li&gt;Container Runtime Interface (CRI) allows multiple runtimes&lt;/li&gt;
&lt;li&gt;Container Network Interface (CNI) enables pluggable networking&lt;/li&gt;
&lt;li&gt;Container Storage Interface (CSI) provides storage extensibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
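&lt;p&gt;You can see these components on a running cluster with standard kubectl commands (a sketch; pod names and namespaces vary by distribution):&lt;/p&gt;

```shell
# Control plane components typically run as pods in kube-system
kubectl get pods -n kube-system

# Node components: kubelet and runtime status is reported per node
kubectl get nodes -o wide

# Which container runtime (CRI implementation) each node is using
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.containerRuntimeVersion}'
```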

&lt;h3&gt;
  
  
  Architectural Differences in Practice
&lt;/h3&gt;

&lt;p&gt;Here is how these architectural differences show up in my production deployments:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scaling Approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swarm: Scales well to hundreds of nodes with a simple architecture&lt;/li&gt;
&lt;li&gt;Kubernetes: Manages thousands of nodes thanks to its modular design&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Inter-Component Communication&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swarm: Secure internal network with automated TLS&lt;/li&gt;
&lt;li&gt;Kubernetes: Requires explicit configuration of secure communication&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;State Management&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swarm: Built-in Raft consensus among the manager nodes&lt;/li&gt;
&lt;li&gt;Kubernetes: External etcd cluster for reliable state storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;API Design&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swarm: Extends the Docker API, so it feels familiar to Docker users&lt;/li&gt;
&lt;li&gt;Kubernetes: Rich declarative API with a high degree of customization&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
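&lt;p&gt;The API difference is easy to feel on the command line. A roughly equivalent three-replica service in both systems, as a sketch (the image name is a placeholder; the &lt;code&gt;--replicas&lt;/code&gt; flag on &lt;code&gt;kubectl create deployment&lt;/code&gt; assumes a reasonably recent kubectl):&lt;/p&gt;

```shell
# Swarm: one imperative command against the extended Docker API
docker service create --name api --replicas 3 myapp:latest

# Kubernetes: create a Deployment object through the declarative API
kubectl create deployment api --image=myapp:latest --replicas=3
```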
&lt;h3&gt;
  
  
  Service Management and Deployment
&lt;/h3&gt;

&lt;p&gt;In a modern microservices architecture, orchestration platforms need robust service management: how services are defined, deployed, updated, and scaled.&lt;/p&gt;
&lt;h3&gt;
  
  
  Docker Swarm Services
&lt;/h3&gt;

&lt;p&gt;Services in Swarm are simple extensions of Docker containers. Swarm provides load balancing, service discovery, and rolling updates out of the box with minimal configuration. A typical service looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.8"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp:latest&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replicated&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="na"&gt;update_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;order&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;start-first&lt;/span&gt;
        &lt;span class="na"&gt;failure_action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rollback&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Kubernetes Deployment
&lt;/h3&gt;

&lt;p&gt;Kubernetes adds more abstraction with its deployment model, separating the concerns of deployment, service definition, and pod management. This provides fine-grained control but requires more configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp:latest&lt;/span&gt;
        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
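&lt;p&gt;Applying and rolling out such a Deployment uses standard kubectl commands (the manifest file name is a placeholder):&lt;/p&gt;

```shell
# Create or update the Deployment from the manifest above
kubectl apply -f deployment.yaml

# Wait for the rollout to complete
kubectl rollout status deployment/api

# Roll back if the new revision misbehaves
kubectl rollout undo deployment/api
```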



&lt;h3&gt;
  
  
  Docker Swarm vs. Kubernetes Networking Architecture
&lt;/h3&gt;

&lt;p&gt;Container networking enables communication between microservices. It covers service discovery, load balancing, and network isolation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Swarm Networking
&lt;/h3&gt;

&lt;p&gt;Swarm networking is designed to be both simple and automated. When an overlay network is created, Swarm automatically handles service discovery and load balancing for the services on it. The routing mesh exposes published services on every node in the cluster. This makes deploying and scaling an application straightforward, with no complex networking configuration.&lt;/p&gt;
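&lt;p&gt;In practice, attaching services to an overlay network takes two commands (standard Docker CLI; the network and image names are placeholders):&lt;/p&gt;

```shell
# Create an overlay network spanning the cluster
docker network create --driver overlay --attachable app-net

# Services on the same overlay reach each other by service name
# through Swarm's built-in DNS
docker service create --name api --network app-net myapp:latest
```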

&lt;h3&gt;
  
  
  Kubernetes Networking
&lt;/h3&gt;

&lt;p&gt;Kubernetes is more flexible: its CNI specification allows pluggable network implementations, from simple solutions such as Flannel to more feature-rich ones like Calico. This requires more setup, but gives you powerful features: network policies, ingress controllers, and service mesh integration.&lt;/p&gt;
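&lt;p&gt;Network policies, for instance, let you whitelist traffic between pods, something Swarm has no direct equivalent for. A minimal sketch (the label names are assumptions for illustration):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
spec:
  # Applies to pods labeled app=api
  podSelector:
    matchLabels:
      app: api
  ingress:
  # Only pods labeled app=frontend may reach port 8080
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Note that enforcement requires a CNI plugin that supports network policies, such as Calico.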

&lt;h3&gt;
  
  
  State Management
&lt;/h3&gt;

&lt;p&gt;State management is a core concern in container orchestration: cluster configuration, service state, and consistency across nodes all have to be maintained.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Swarm State Management
&lt;/h3&gt;

&lt;p&gt;Swarm uses a built-in Raft consensus algorithm on the manager nodes. This is a simple yet effective mechanism for maintaining cluster state: all managers participate in the consensus, with one leader coordinating updates. This works well for smaller clusters but can become a bottleneck as the cluster grows.&lt;/p&gt;
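&lt;p&gt;You can observe the Raft roles directly with the standard Docker CLI:&lt;/p&gt;

```shell
# The MANAGER STATUS column shows "Leader" for the current Raft leader
# and "Reachable" for the other managers
docker node ls
```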

&lt;h3&gt;
  
  
  Kubernetes State Management
&lt;/h3&gt;

&lt;p&gt;Kubernetes manages state in etcd, a distributed key-value store. This separation of concerns allows for greater scale and much more robust disaster-recovery options. The API server validates and persists all state changes, and controllers continuously work to maintain the desired state.&lt;/p&gt;
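&lt;p&gt;This separation also makes backup straightforward: etcd can be snapshotted independently of the rest of the control plane. A sketch using the standard etcdctl tool (the endpoint and certificate paths are assumptions for a typical kubeadm-provisioned cluster):&lt;/p&gt;

```shell
# Take a point-in-time snapshot of the entire cluster state
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```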




&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Storage Architecture
&lt;/h3&gt;

&lt;p&gt;Container storage addresses how to persist data across container restarts and node failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Swarm Storage
&lt;/h3&gt;

&lt;p&gt;Swarm keeps storage simple with volume plugins and host-mounted volumes. That makes it easy to get up and running, though it limits your options when requirements become more complex:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db-data:/var/lib/postgresql/data&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Kubernetes Storage
&lt;/h3&gt;

&lt;p&gt;Kubernetes introduces abstractions such as PersistentVolumes and StorageClasses that enable more sophisticated storage management:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;standard&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
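&lt;p&gt;A Pod (or Deployment) then consumes the claim by name, and the StorageClass provisions the underlying volume dynamically. A minimal sketch (the pod, volume, and claim names here are illustrative assumptions):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: postgres
    image: postgres
    volumeMounts:
    # Mount the claimed volume at the database's data directory
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data   # must match the PVC's metadata.name
```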



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Both Docker Swarm and Kubernetes excel in different scenarios. Swarm is great for teams that need simplicity and rapid deployment. Its integrated architecture is a good fit for small to medium-sized deployments where ease of use outweighs other considerations.&lt;/p&gt;

&lt;p&gt;Kubernetes is better suited to complex, large-scale deployments, thanks to its modular and extensible architecture. It requires more upfront investment in setup and learning, but that investment buys the flexibility and features container orchestration needs at enterprise scale.&lt;/p&gt;

&lt;p&gt;Which one to use depends on your specific needs, team expertise, and scale requirements.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>What is AWS Lightsail? - A DevOps Engineer's Perspective</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Fri, 20 Dec 2024 21:41:02 +0000</pubDate>
      <link>https://dev.to/cicube/what-is-aws-lightsail-a-devops-engineers-perspective-1ph3</link>
      <guid>https://dev.to/cicube/what-is-aws-lightsail-a-devops-engineers-perspective-1ph3</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  What is AWS Lightsail?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lightsail/latest/userguide/what-is-amazon-lightsail.html" rel="noopener noreferrer"&gt;AWS Lightsail&lt;/a&gt; is the easiest cloud platform for Amazon Web Services. It is going to enable users at minimal effort to deploy website applications, small applications, and development environments quickly.&lt;/p&gt;

&lt;p&gt;Lightsail bundles virtual servers, object storage, databases, and DNS management at a fixed monthly price, making it a good fit for developers who want something simpler than traditional AWS services, as well as for start-ups and small businesses.&lt;/p&gt;
&lt;h3&gt;
  
  
  What Drew Me to Lightsail?
&lt;/h3&gt;

&lt;p&gt;I was skeptical of Lightsail the first time I encountered it: it seemed too simple after working with the full AWS ecosystem as a DevOps engineer. It turned out to be perfect for specific projects. Think of it as AWS's answer to DigitalOcean or Linode: straightforward and predictably priced.&lt;/p&gt;

&lt;p&gt;Steps we will cover in this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is AWS Lightsail?&lt;/li&gt;
&lt;li&gt;AWS Lightsail Architecture&lt;/li&gt;
&lt;li&gt;The Good Parts&lt;/li&gt;
&lt;li&gt;The Not-So-Good Parts&lt;/li&gt;
&lt;li&gt;Is AWS Lightsail Right for You?&lt;/li&gt;
&lt;li&gt;Where I Actually Use Lightsail in Production&lt;/li&gt;
&lt;li&gt;When not to Use AWS Lightsail&lt;/li&gt;
&lt;li&gt;My Tips from the Trenches&lt;/li&gt;
&lt;li&gt;AWS Lightsail vs Amazon EC2: Comparison Table&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  AWS Lightsail Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Flight-sail.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2Flight-sail.png" alt="AWS Lightsail" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram shows the core setup of AWS Lightsail in a simple and clear way, making it perfect for small to medium projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lightsail Instances&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instance 1 and Instance 2 are Lightsail virtual servers, similar to EC2 instances.&lt;/li&gt;
&lt;li&gt;These are the servers on which your applications or websites run. Setting them up and managing them is quite easy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Load Balancer sits in front of the instances and distributes traffic evenly between them.&lt;/li&gt;
&lt;li&gt;This ensures your application is available and performs well even if traffic spikes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Database&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lightsail Database Service: The managed database service for Lightsail.&lt;/li&gt;
&lt;li&gt;It enables the storage of application data without any need to be concerned about backups, scaling, or maintenance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC: Lightsail connects to the network securely.&lt;/li&gt;
&lt;li&gt;Subnet: This is a subdivision of the VPC where your instances and services will sit.&lt;/li&gt;
&lt;li&gt;Internet Gateway: This provides a gateway to the internet for your Lightsail setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Storage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Storage section provides additional space for your data, files, or backups; it acts like an external hard drive for your Lightsail instances.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Why This Matters for DevOps
&lt;/h3&gt;

&lt;p&gt;Easy-to-Deploy Architecture: As the diagram shows, Lightsail can be put into place with very little configuration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built-in Features: Load balancers, storage, and networking are integrated, saving time and effort. &lt;/li&gt;
&lt;li&gt;Cost-Effective: Lightsail offers predictable pricing while still providing the essential features, including load balancing and databases. &lt;/li&gt;
&lt;li&gt;Appropriate for small projects: Lightsail supports the hosting of websites, simple applications, and development/test environments without requiring a full-fledged AWS setup. This is a perfect fit for DevOps engineers working on projects that do not need the complexity of EC2, VPCs, or autoscaling groups. It's straightforward, reliable, and gets the job done.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;
&lt;/h2&gt;
&lt;h3&gt;
  
  
  The Good Parts
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Simplicity that Actually Helps
&lt;/h4&gt;

&lt;p&gt;Remember the first time you tried to set up an EC2 instance? VPCs, security groups, IAM roles: it can be overwhelming. With Lightsail, I can have a server up and running in minutes. Here's a real example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="c"&gt;# The old EC2 way (simplified, but still complex)&lt;/span&gt;
  aws ec2 create-vpc
  aws ec2 create-subnet
  aws ec2 create-internet-gateway
  &lt;span class="c"&gt;# ... and about 5 more commands&lt;/span&gt;

  &lt;span class="c"&gt;# The Lightsail way&lt;/span&gt;
  aws lightsail create-instances &lt;span class="nt"&gt;--instance-names&lt;/span&gt; my-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Predictable Costs
&lt;/h4&gt;

&lt;p&gt;One thing I like about Lightsail is knowing exactly how much I'll pay at the end of the month. No surprises from unexpected data transfer or IOPS charges. Here's what I typically spend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  My Standard Setup:
  - Small instance ($10/month)
  - Load balancer ($18/month)
  - Database ($15/month)
  Total: $43/month
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Built-in Features I Actually Use
&lt;/h4&gt;

&lt;p&gt;After deploying dozens of instances, these features have saved me countless hours:&lt;br&gt;
    - Automatic snapshots, a lifesaver during updates&lt;br&gt;
    - One-click load balancers, no more manual configuration&lt;br&gt;
    - Simple DNS management (integrated with my domains)&lt;/p&gt;
&lt;h3&gt;
  
  
  The Not-So-Good Parts
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Performance Ceiling&lt;br&gt;&lt;br&gt;
I learned this the hard way: Lightsail instances have fixed resources. During a Black Friday sale, one of my client's sites hit the bandwidth limit. There's no "just scale it up" button like with regular AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Limited Integration&lt;br&gt;&lt;br&gt;
If you're used to the AWS ecosystem, you'll miss some familiar tools: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No integration with CloudWatch (I use my own custom monitoring scripts)&lt;/li&gt;
&lt;li&gt;Only basic VPC peering; complex networking is a challenge&lt;/li&gt;
&lt;li&gt;No auto-scaling (manual scaling only)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Is AWS Lightsail Right for You?
&lt;/h3&gt;

&lt;p&gt;In short: if you value simplicity and predictable costs over fine-grained control and elastic scaling, Lightsail is likely the right choice.&lt;/p&gt;
&lt;h2&gt;
  
  
  Where I Actually Use Lightsail in Production
&lt;/h2&gt;

&lt;p&gt;Over the last couple of years, I have successfully deployed a number of projects on Lightsail. Specific examples are given below.&lt;/p&gt;
&lt;h3&gt;
  
  
  WordPress Sites That Just Work
&lt;/h3&gt;

&lt;p&gt;I manage a portfolio of websites for small businesses, from local restaurants to boutique consulting firms. Lightsail's WordPress blueprint has been a good default choice because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setup takes minutes, not hours&lt;/li&gt;
&lt;li&gt;Backups are easy&lt;/li&gt;
&lt;li&gt;Updates are hassle-free&lt;/li&gt;
&lt;li&gt;Clients love the predictable costs&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Development and Staging Environments
&lt;/h3&gt;

&lt;p&gt;In larger AWS projects, I tend to use Lightsail for development and staging. Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quick to spin up and tear down&lt;/li&gt;
&lt;li&gt;Ideal for temporary workloads&lt;/li&gt;
&lt;li&gt;Costs are easy to track&lt;/li&gt;
&lt;li&gt;Great for client demos&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Small Business Apps that Scale
&lt;/h3&gt;

&lt;p&gt;I have built a few custom applications which found their home on Lightsail:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js-based booking system for a local gym&lt;/li&gt;
&lt;li&gt;Inventory management tool using Python/Flask &lt;/li&gt;
&lt;li&gt;Real estate listing platform built on the MEAN stack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These apps serve hundreds of users daily without breaking a sweat, or the bank.&lt;/p&gt;
&lt;h3&gt;
  
  
  When not to Use AWS Lightsail
&lt;/h3&gt;

&lt;p&gt;Lightsail is not always the best option. Here's when I typically recommend alternatives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High-Traffic E-commerce Sites&lt;/strong&gt;
Last Black Friday taught me a lesson when a client's site hit the bandwidth ceiling. For e-commerce, I now stick with regular EC2 instances and Auto Scaling groups.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data-Intensive Applications&lt;/strong&gt;
One example is a client's analytics platform that processes terabytes of data; we outgrew Lightsail's capabilities in record time and migrated to a proper AWS architecture built on ECS and RDS.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  My Tips from the Trenches
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Performance Optimization
&lt;/h3&gt;

&lt;p&gt;After managing numerous Lightsail instances, here's what works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a CDN for static content (I personally use Cloudflare)&lt;/li&gt;
&lt;li&gt;Implement aggressive caching&lt;/li&gt;
&lt;li&gt;Monitor resource usage (I use custom scripts)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Backup Strategy
&lt;/h3&gt;

&lt;p&gt;My tried-and-tested backup approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Daily snapshots 👉 Keep for 7 days
    Weekly snapshots 👉 Keep for 1 month
    Monthly snapshots 👉 Keep for 3 months
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
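&lt;p&gt;This rotation can be scripted with the standard Lightsail CLI (a sketch; the instance and snapshot names are placeholders, and the pruning logic is left out):&lt;/p&gt;

```shell
# Take a dated snapshot of an instance
aws lightsail create-instance-snapshot \
  --instance-name my-app \
  --instance-snapshot-name "my-app-$(date +%F)"

# List existing snapshots to drive the retention policy
aws lightsail get-instance-snapshots \
  --query 'instanceSnapshots[].{name:name,created:createdAt}'
```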



&lt;h3&gt;
  
  
  Security Best Practices
&lt;/h3&gt;

&lt;p&gt;Security should never be taken for granted, even for simple installations. Here's my personal checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable automatic updates&lt;/li&gt;
&lt;li&gt;Use custom firewall rules&lt;/li&gt;
&lt;li&gt;Always use HTTPS&lt;/li&gt;
&lt;li&gt;Run regular security audits&lt;/li&gt;
&lt;/ul&gt;
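&lt;p&gt;Custom firewall rules can be applied from the CLI as well. A sketch that allows only SSH and HTTPS (the instance name is a placeholder):&lt;/p&gt;

```shell
# put-instance-public-ports replaces the full set of open ports,
# so list every port you want to keep open
aws lightsail put-instance-public-ports \
  --instance-name my-app \
  --port-infos fromPort=22,toPort=22,protocol=TCP \
               fromPort=443,toPort=443,protocol=TCP
```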

&lt;h2&gt;
  
  
  AWS Lightsail vs Amazon EC2: Comparison Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;AWS Lightsail&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Amazon EC2&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Simple web applications, small projects, and prototyping&lt;/td&gt;
&lt;td&gt;Large-scale applications, enterprise workloads, and resource-heavy environments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fixed monthly pricing with clear limits for instances, bandwidth, and storage&lt;/td&gt;
&lt;td&gt;Pay-as-you-go pricing with usage-based charges for compute, bandwidth, and storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Limited to smaller workloads, fixed CPU and memory options&lt;/td&gt;
&lt;td&gt;Flexible performance scaling with advanced instance types and configurations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Setup and Ease of Use&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Extremely beginner-friendly; quick setup with minimal configuration&lt;/td&gt;
&lt;td&gt;Requires AWS knowledge; manual setup with granular configurations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manual scaling; suited for predictable or smaller setups&lt;/td&gt;
&lt;td&gt;Auto-scaling with tools like ASG (Auto Scaling Group); handles dynamic scaling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Networking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Basic VPC features with simplified configuration&lt;/td&gt;
&lt;td&gt;Full VPC integration with support for advanced networking (subnets, peering)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Storage Options&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Limited block storage and database options; fixed sizes&lt;/td&gt;
&lt;td&gt;Multiple storage types: EBS, S3, and custom block storage with flexible sizing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Minimal built-in monitoring; requires custom tools&lt;/td&gt;
&lt;td&gt;Full integration with CloudWatch and AWS monitoring services&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Customization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Simple blueprints for apps like WordPress, databases, or basic servers&lt;/td&gt;
&lt;td&gt;Complete customization: OS choice, security groups, IAM roles, and more&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Basic firewall rules, TLS integration&lt;/td&gt;
&lt;td&gt;Advanced security options with IAM, security groups, and NACLs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>What is CI (Continuous Integration)? A Guide with Interactive Tool</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Sun, 15 Dec 2024 10:14:40 +0000</pubDate>
      <link>https://dev.to/cicube/what-is-ci-continuous-integration-a-guide-with-interactive-tool-k8d</link>
      <guid>https://dev.to/cicube/what-is-ci-continuous-integration-a-guide-with-interactive-tool-k8d</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What is CI?
&lt;/h3&gt;

&lt;p&gt;Continuous Integration (CI) is a software engineering practice in which developers merge their code changes frequently into a shared main repository. Each integration triggers an automated build and test run, which catches bugs sooner rather than later, avoids large merge conflicts, and assures software quality through automated verification.&lt;/p&gt;

&lt;p&gt;Having worked in DevOps for more than a decade, I've watched numerous teams wrestle with code integration. Let me give an analogy: building a house. Each contractor takes care of a different part: the kitchen, a bathroom, or the living room. Now imagine they all complete their work separately and only then try to put it all together, and nothing fits! The kitchen pipes block the bathroom door, the living room is tiny, and the electrical wiring is all wrong.&lt;/p&gt;

&lt;p&gt;I've seen exactly this situation in software development when Continuous Integration is not implemented. As a DevOps engineer, I have introduced CI to many teams and watched it transform their productivity. Let me explain what CI is in simple terms, using real examples from my own work.&lt;/p&gt;

&lt;p&gt;Steps we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is CI?&lt;/li&gt;
&lt;li&gt;
The Problem CI Solves

&lt;ul&gt;
&lt;li&gt;🧩 Without CI: The Chaos I've Witnessed&lt;/li&gt;
&lt;li&gt;🌟 With CI: The Solution I Implemented&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;How CI Works: A Simple Example&lt;/li&gt;
&lt;li&gt;Common Problems CI Helps Solve&lt;/li&gt;
&lt;li&gt;How to Choose the Right CI Tool?&lt;/li&gt;
&lt;li&gt;Popular CI Tools Compared&lt;/li&gt;
&lt;li&gt;How to Know If CI is Working Well&lt;/li&gt;
&lt;li&gt;Getting Started with CI&lt;/li&gt;
&lt;li&gt;Common Questions About CI Tools&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;br&gt;
&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Problem CI Solves
&lt;/h3&gt;

&lt;p&gt;Let me illustrate a real-life scenario I faced with a team before we implemented CI:&lt;/p&gt;
&lt;h3&gt;
  
  
  🧩 Without CI: The Chaos I've Witnessed
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Monday&lt;/strong&gt;: A developer adds a login button&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tuesday&lt;/strong&gt;: Another developer changes how users' names are displayed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wednesday&lt;/strong&gt;: The database team updates the schema&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thursday&lt;/strong&gt;: I can still remember the panic in their eyes when they tried to put their work together.

&lt;ul&gt;
&lt;li&gt;Login button breaks with the new display changes&lt;/li&gt;
&lt;li&gt;The information cannot be stored properly in the database.&lt;/li&gt;
&lt;li&gt;Nobody knows which change has caused the problems.&lt;/li&gt;
&lt;li&gt;I watch the team spend days fixing these issues.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  🌟 With CI: The Solution I Implemented
&lt;/h3&gt;

&lt;p&gt;Here is how I transformed their workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monday Morning:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A developer creates the login button&lt;/li&gt;
&lt;li&gt;The CI pipeline verifies it works with everything else&lt;/li&gt;
&lt;li&gt;The team gets immediate feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monday Afternoon:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Another developer updates how names are displayed&lt;/li&gt;
&lt;li&gt;CI verifies it works with the login button&lt;/li&gt;
&lt;li&gt;Any problems are identified and fixed immediately&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tuesday Morning:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The database team updates the schema&lt;/li&gt;
&lt;li&gt;CI checks that it works with both previous changes &lt;/li&gt;
&lt;li&gt;Everything keeps on working together!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  How CI Works: A Simple Example
&lt;/h2&gt;

&lt;p&gt;Imagine you're writing a message in a group chat. Before sending, you: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Spell check &lt;/li&gt;
&lt;li&gt;Check it makes sense &lt;/li&gt;
&lt;li&gt;Ensure you send it to the right group.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;CI does the same for code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Simple CI Check&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Whenever someone saves their work&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;check-code&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get the code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check spelling, as spell-check does&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run lint&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure that it works (like preview)&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm test&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Try to build it (like send the message)&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Common Problems CI Helps Solve
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;"It Works on My Computer!"&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Without CI&lt;/strong&gt;: "It works fine for me, I don't know why it's broken for you!"&lt;br&gt;&lt;br&gt;
&lt;strong&gt;With CI&lt;/strong&gt;: That means, because it is CI testing in a clean environment, it will therefore work if it works there for everyone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finding Problems Late&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Without CI&lt;/strong&gt;: We learn about problems on Friday when everything is due.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;With CI&lt;/strong&gt;: We find and fix small problems throughout the week.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not Knowing What Broke&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Without CI&lt;/strong&gt;: "Something's broken, but we don't know what changed!"&lt;br&gt;&lt;br&gt;
&lt;strong&gt;With CI&lt;/strong&gt;: We know exactly which change caused it, straight away.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
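&lt;p&gt;To make the clean-environment point concrete, here is a sketch of a GitHub Actions job that runs the same checks on several fresh machines (the npm scripts are illustrative and assumed to exist in your project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Hypothetical workflow: run identical tests on a matrix of clean machines,
# so "works on my computer" becomes "works on every machine".
name: Clean Environment Tests

on: [push]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - run: npm ci    # install from the lockfile, not from local state
      - run: npm test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;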

&lt;h2&gt;
  
  
  How to Choose the Right CI Tool?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Popular CI Tools Compared
&lt;/h3&gt;

&lt;p&gt;When starting with CI, one of the first decisions you'll need to make is which CI tool to use. Let me break down the most popular options in simple terms:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;CI Tool&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;th&gt;Hosting&lt;/th&gt;
&lt;th&gt;Free Tier&lt;/th&gt;
&lt;th&gt;Setup Difficulty&lt;/th&gt;
&lt;th&gt;Key Feature&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;GitHub projects&lt;/td&gt;
&lt;td&gt;Cloud&lt;/td&gt;
&lt;td&gt;2000 mins/month&lt;/td&gt;
&lt;td&gt;Easy&lt;/td&gt;
&lt;td&gt;Direct GitHub integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.jenkins.io/" rel="noopener noreferrer"&gt;Jenkins&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Custom workflows&lt;/td&gt;
&lt;td&gt;Self-hosted&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Complex&lt;/td&gt;
&lt;td&gt;Highly customizable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://docs.gitlab.com/ee/ci/" rel="noopener noreferrer"&gt;GitLab CI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;GitLab projects&lt;/td&gt;
&lt;td&gt;Both&lt;/td&gt;
&lt;td&gt;400 mins/month&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Built into GitLab&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://circleci.com/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Quick setup&lt;/td&gt;
&lt;td&gt;Cloud&lt;/td&gt;
&lt;td&gt;6000 mins/month&lt;/td&gt;
&lt;td&gt;Easy&lt;/td&gt;
&lt;td&gt;Fast performance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://travis-ci.com/" rel="noopener noreferrer"&gt;Travis CI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Open source&lt;/td&gt;
&lt;td&gt;Cloud&lt;/td&gt;
&lt;td&gt;OSS only&lt;/td&gt;
&lt;td&gt;Easy&lt;/td&gt;
&lt;td&gt;Simple configuration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://azure.microsoft.com/en-us/products/devops/pipelines/" rel="noopener noreferrer"&gt;Azure Pipelines&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Microsoft ecosystem&lt;/td&gt;
&lt;td&gt;Cloud&lt;/td&gt;
&lt;td&gt;1800 mins/month&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;.NET integration&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  How to Know If CI is Working Well
&lt;/h2&gt;

&lt;p&gt;Think of CI as an eager assistant. It should:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Be Quick&lt;/strong&gt;: Like a spell-checker, it should give fast feedback&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be Reliable&lt;/strong&gt;: Like a calculator, it should give consistent results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be Clear&lt;/strong&gt;: Like a traffic light - it should be easy to understand&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be Helpful&lt;/strong&gt;: It should act like a GPS telling you how to fix problems.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Getting Started with CI
&lt;/h2&gt;

&lt;p&gt;If you are new to CI, here is how you get started.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start Small&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Begin with basic checks, such as linting (the spelling/grammar check for code)&lt;/li&gt;
&lt;li&gt;Add basic tests&lt;/li&gt;
&lt;li&gt;Keep it simple!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add Gradually&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Like learning to cook, start off with simple recipes&lt;/li&gt;
&lt;li&gt;Add more checks as you become comfortable&lt;/li&gt;
&lt;li&gt;Learn from mistakes and improve&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use the Right Tools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions - what we used in the examples&lt;/li&gt;
&lt;li&gt;Other popular tools include Jenkins and GitLab CI&lt;/li&gt;
&lt;li&gt;Select the best fit for your team&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
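&lt;p&gt;Putting the three steps together, a minimal first workflow might look like this (a sketch; the lint and test scripts are assumptions about your project's package.json):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# .github/workflows/ci.yml - a minimal starting point
name: Starter CI

on: [push, pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Lint    # the spelling/grammar check for code
        run: npm run lint
      - name: Test
        run: npm test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;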

&lt;h2&gt;
  
  
  Common Questions About CI Tools
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q: Must I be a guru at programming to use CI?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: No! If you can use spell-check or follow a recipe, you can understand and use CI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How often should CI run?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: Ideally every time someone saves their work - like spell-check checking as you type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: What if a Problem is found by CI?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: Like when spell-check has underlined a word for you: you fix it before moving on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Is CI expensive to set up?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A: Most of these tools, like GitHub Actions, are free for basic use. The time you save is worth it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Your CI Pipelines
&lt;/h2&gt;

&lt;p&gt;No matter which CI tool you decide to use, you still need to monitor its performance. That's where tools like &lt;a href="https://cicube.io/" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; come in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track build times across various tools&lt;/li&gt;
&lt;li&gt;Monitor success rates&lt;/li&gt;
&lt;li&gt;Compare performance&lt;/li&gt;
&lt;li&gt;Gain insights for optimization&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Docker Cheat Sheet - Most Useful Commands</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Fri, 13 Dec 2024 06:29:44 +0000</pubDate>
      <link>https://dev.to/cicube/docker-cheat-sheet-most-useful-commands-ghl</link>
      <guid>https://dev.to/cicube/docker-cheat-sheet-most-useful-commands-ghl</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;After working with Docker for a while, I've noticed that although there are hundreds of Docker commands out there, my workflow relies on the same 20-30 of them. Here's my personal cheat sheet of the most practical Docker commands I use regularly.&lt;/p&gt;
&lt;h2&gt;
  
  
  Container Management
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Running Containers
&lt;/h3&gt;

&lt;p&gt;The most basic Docker command is &lt;code&gt;docker run&lt;/code&gt;. Here is how I use it in various situations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; webserver nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command starts an Nginx container in detached mode (-d) with a given name. I use this when I need a quick web server for testing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Maps port 8080 on your host to port 80 in the container. Useful when you need to access container services from your host machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; 
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mysecret 
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;myapp 
  &lt;span class="nt"&gt;-v&lt;/span&gt; postgres_data:/var/lib/postgresql/data 
  postgres:13
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I use this pattern for database containers where persistence and configuration matter a lot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Lifecycle
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lists all running containers. I use this command constantly to check container status, ports, and names.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs &lt;span class="nt"&gt;-f&lt;/span&gt; container_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The -f flag follows the log output. Indispensable for debugging problems in running containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; container_name bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Gives you a shell inside the container. I use this constantly for debugging and one-off commands.&lt;/p&gt;

&lt;h2&gt;
  
  
  Image Management
&lt;/h2&gt;

&lt;p&gt;Working with images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pulls an image from Docker Hub. Always specify a tag to avoid getting unexpected versions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; myapp:1.0 &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Builds an image from a Dockerfile in the current directory. The -t flag tags the image with a name and version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lists all local images - I use this to check available images and their sizes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Volume Management
&lt;/h2&gt;

&lt;p&gt;Volumes are vital for persisting data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume create mydata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates a named volume. I use these for database data and other stateful applications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; 
  &lt;span class="nt"&gt;-v&lt;/span&gt; mydata:/data 
  nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mounts the volume 'mydata' at /data inside the container, so the data persists across container restarts and removals.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network Management
&lt;/h2&gt;

&lt;p&gt;Networking is key to container communication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create mynetwork
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates an isolated network for container communication. This is useful when I'm setting up multi-container applications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; mynetwork nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connects a container to a specific network. Useful for container-to-container communication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Compose
&lt;/h2&gt;

&lt;p&gt;Docker Compose is essential for managing multi-container applications:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Starts all the services as defined in docker-compose.yml. I use this daily to start my development environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Stops and removes all containers and networks created by docker-compose up.&lt;/p&gt;
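&lt;p&gt;For reference, here is a sketch of the kind of docker-compose.yml those commands operate on (service names and the password are illustrative only):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# docker-compose.yml - a minimal two-service sketch
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: mysecret   # illustrative; use secrets in real projects
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;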

&lt;h2&gt;
  
  
  Cleanup Commands
&lt;/h2&gt;

&lt;p&gt;These commands clean up the Docker environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker system prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Removes all stopped containers, unused networks, dangling images, and build cache. I run this once a week to reclaim disk space.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; container_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Removes a container together with its anonymous volumes. The volume data will be lost, so use this with care.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;These are some very helpful debugging commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stats container_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Displays live resource usage statistics. Great for debugging container performance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker inspect container_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Displays detailed configuration information about a container. This is useful for debugging networking and volume issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;p&gt;Based on my experience, here are some key practices to follow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Always tag your images&lt;/strong&gt;: Never use 'latest' in production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use named volumes&lt;/strong&gt;: Simplifies data management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regular cleanup&lt;/strong&gt;: Use system prune to keep disk space free&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor logs&lt;/strong&gt;: Regular log checking helps catch issues early&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Security and Compliance
&lt;/h2&gt;

&lt;p&gt;Security has become a crucial part of my Docker workflow, especially when working with enterprise clients. Here are some essential security-focused commands I use:&lt;/p&gt;

&lt;h3&gt;
  
  
  SBOM (Software Bill of Materials)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker sbom example-image:latest
docker sbom example-image:latest &lt;span class="nt"&gt;--output&lt;/span&gt; sbom.txt
docker sbom example-image:latest &lt;span class="nt"&gt;--format&lt;/span&gt; spdx-json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I generate these SBOMs to maintain transparency in our software supply chain. It's particularly important when working with security-conscious clients.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vulnerability Scanning
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker scan example-image:latest
docker scan example-image:latest &lt;span class="nt"&gt;--file&lt;/span&gt; Dockerfile
docker scan example-image:latest &lt;span class="nt"&gt;--severity&lt;/span&gt; high
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I always run these scans before deploying any container to production. It's saved me from potential security issues multiple times.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Hub Operations
&lt;/h2&gt;

&lt;p&gt;These are the commands I use daily for Docker Hub interactions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker login
docker &lt;span class="nb"&gt;logout
&lt;/span&gt;docker search nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Resource Monitoring
&lt;/h3&gt;

&lt;p&gt;For performance troubleshooting, I rely on these monitoring commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stats

docker stats &lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;={{&lt;/span&gt;.Names&lt;span class="o"&gt;}}&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;  &lt;span class="c"&gt;# Monitor all containers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Config Contexts
&lt;/h3&gt;

&lt;p&gt;When working with multiple Docker environments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context create my-remote &lt;span class="nt"&gt;--docker&lt;/span&gt; &lt;span class="s2"&gt;"host=ssh://user@remote-server"&lt;/span&gt;
docker context &lt;span class="nb"&gt;ls
&lt;/span&gt;docker context use my-remote
docker context &lt;span class="nb"&gt;rm &lt;/span&gt;old-context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Advanced Cleanup
&lt;/h3&gt;

&lt;p&gt;Here's my detailed cleanup routine that I use to manage disk space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker system prune &lt;span class="nt"&gt;--volumes&lt;/span&gt;  &lt;span class="c"&gt;# Remove everything unused&lt;/span&gt;
docker image prune &lt;span class="nt"&gt;-a&lt;/span&gt;  &lt;span class="c"&gt;# Remove all unused images&lt;/span&gt;
docker volume prune  &lt;span class="c"&gt;# Remove all unused volumes&lt;/span&gt;
docker container prune  &lt;span class="c"&gt;# Remove all stopped containers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Quick Reference
&lt;/h2&gt;

&lt;p&gt;Here's a comprehensive table of all the commands covered in this cheat sheet:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Container Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker run -d --name webserver nginx:latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Run container in detached mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker run -d -p 8080:80 nginx:latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Run with port mapping&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker ps&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List running containers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker logs -f container_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Follow container logs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker exec -it container_name bash&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Access container shell&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Image Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker pull nginx:latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pull image from registry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker build -t myapp:1.0 .&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Build image from Dockerfile&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker images&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List local images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Volume Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker volume create mydata&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Create volume&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker run -v mydata:/data nginx:latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Run with volume mount&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Network Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker network create mynetwork&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Create network&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker run --network mynetwork nginx:latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Run with network&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker-compose up -d&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Start services&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker-compose down&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Stop services&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cleanup&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker system prune&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Remove unused resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker rm -v container_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Remove container and volumes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker system prune --volumes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Remove all unused resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker image prune -a&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Remove all unused images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker volume prune&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Remove all unused volumes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker sbom example-image:latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Generate SBOM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker scan example-image:latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Scan for vulnerabilities&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docker Hub&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker login&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Log into Docker Hub&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker logout&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Log out from Docker Hub&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker search nginx&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Search images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Advanced&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker stats container_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Monitor container resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker context create my-remote&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Create new context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker context ls&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List contexts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker inspect container_name&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;View container details&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;These are the essentials for my day-to-day work with Docker. Of course, Docker has many more commands, but once mastered, these will cover about 90% of your container management needs. Keep this cheat sheet handy; I still refer back to it fairly often when working in different environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>What is AWS Step Functions? - A Complete Guide</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Wed, 11 Dec 2024 16:15:19 +0000</pubDate>
      <link>https://dev.to/cicube/what-is-aws-step-functions-a-complete-guide-4dg3</link>
      <guid>https://dev.to/cicube/what-is-aws-step-functions-a-complete-guide-4dg3</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Quick Summary: What You Need to Know About AWS Step Functions
&lt;/h4&gt;
&lt;h5&gt;
  
  
  What is AWS Step Functions and how does it work?
&lt;/h5&gt;

&lt;p&gt;AWS Step Functions is a serverless orchestration service that simplifies workflow management by connecting AWS Lambda and other AWS services. It visually organizes tasks and automates complex processes for seamless execution.&lt;/p&gt;
&lt;h5&gt;
  
  
  What are common use cases for AWS Step Functions?
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;E-commerce workflows: Order validation, inventory checks, and payment processing.&lt;/li&gt;
&lt;li&gt;Data processing: Managing large data sets with parallel tasks.&lt;/li&gt;
&lt;li&gt;Error handling: Retrying tasks and managing failures in critical processes.&lt;/li&gt;
&lt;li&gt;Automation: Automating multi-service operations efficiently.&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;
  
  
  How do you design a workflow with AWS Step Functions?
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Define your workflow: Use Amazon States Language (ASL) to structure steps.&lt;/li&gt;
&lt;li&gt;Add state types: Include Task, Choice, or Parallel states for specific actions.&lt;/li&gt;
&lt;li&gt;Handle errors: Use Catch and Retry blocks to manage failures.&lt;/li&gt;
&lt;li&gt;Integrate services: Connect with AWS services such as Lambda, DynamoDB, and S3.&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;
  
  
  What are the benefits of AWS Step Functions?
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Simplified workflows: Visually organize and manage tasks.&lt;/li&gt;
&lt;li&gt;Built-in error handling: Retry and catch mechanisms for reliability.&lt;/li&gt;
&lt;li&gt;Parallel execution: Simultaneous task execution for efficiency.&lt;/li&gt;
&lt;li&gt;Cost efficiency: Optimize workflows to minimize resource usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we've covered the key takeaways, let's dive into more detailed applications and real-world scenarios. During my years of working on serverless applications, one lesson stood out: managing numerous Lambda functions alongside other services becomes complex really fast.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions&lt;/a&gt; is one of those game-changing services that has completely changed how I approach this problem. &lt;br&gt;
Today, I want to share my experience with Step Functions and how it can simplify your serverless workflows.&lt;/p&gt;

&lt;p&gt;Steps we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understanding Step Functions&lt;/li&gt;
&lt;li&gt;Key Concepts&lt;/li&gt;
&lt;li&gt;Real-World Example: Order Processing System&lt;/li&gt;
&lt;li&gt;AWS Step Functions Best Practices&lt;/li&gt;
&lt;li&gt;Advanced Features&lt;/li&gt;
&lt;li&gt;Performance Optimization&lt;/li&gt;
&lt;li&gt;Cost Optimization&lt;/li&gt;
&lt;li&gt;State Types - My Implementation Guide&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Understanding Step Functions
&lt;/h3&gt;

&lt;p&gt;At its core, Step Functions is a serverless orchestration service that lets you combine AWS Lambda functions and other AWS services into business-critical applications. Think of it as a conductor in an orchestra, coordinating different services to work together harmoniously.&lt;/p&gt;

&lt;p&gt;Here is a simple example of what a Step Function state machine looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Comment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A simple order processing workflow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ValidateOrder"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ValidateOrder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:validateOrder"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CheckInventory"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"CheckInventory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:checkInventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProcessPayment"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ProcessPayment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:processPayment"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Key Concepts
&lt;/h3&gt;

&lt;p&gt;Let me break down the essential concepts that I work with day in and day out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State Machines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;State machines are the core of Step Functions. They define your workflow using Amazon States Language (ASL). Each state machine contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;States: Individual steps in your workflow&lt;/li&gt;
&lt;li&gt;Transitions: Rules for moving from one state to another&lt;/li&gt;
&lt;li&gt;Input/Output Processing: Data manipulation between states&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;State Types&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are the state types I use most often:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task States&lt;/strong&gt;: Execute work (Lambda, AWS services)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProcessOrder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:processOrder"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SendNotification"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Task states are used to perform specific tasks, like running a Lambda function or invoking an AWS service. In this example, the &lt;code&gt;processOrder&lt;/code&gt; Lambda function is executed, and the workflow then moves to &lt;code&gt;SendNotification&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choice States&lt;/strong&gt;: Add branching logic to the workflow
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"CheckOrderValue"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Choice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Choices"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Variable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.orderValue"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"NumericGreaterThan"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ApplyDiscount"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Default"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProcessNormally"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Choice states add branching logic to your workflow. Here, the workflow checks whether &lt;code&gt;orderValue&lt;/code&gt; is greater than 100. If true, it goes to &lt;code&gt;ApplyDiscount&lt;/code&gt;; otherwise, it defaults to &lt;code&gt;ProcessNormally&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Parallel States&lt;/strong&gt;: Execute multiple branches in parallel
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProcessOrder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Parallel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Branches"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"UpdateInventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"UpdateInventory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:updateInventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SendNotification"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"SendNotification"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:sendNotification"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Parallel states allow multiple tasks to run simultaneously. In this example, two branches are executed at the same time: &lt;code&gt;UpdateInventory&lt;/code&gt; and &lt;code&gt;SendNotification&lt;/code&gt;. The workflow waits for both branches to complete before the Parallel state itself finishes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-World Example: Order Processing System
&lt;/h3&gt;

&lt;p&gt;Let me walk through a workflow I implemented recently. It handles an e-commerce order processing system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Comment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"E-commerce Order Processing Workflow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ValidateOrder"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ValidateOrder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:validateOrder"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CheckInventory"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"CheckInventory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:checkInventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProcessPayment"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ProcessPayment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:processPayment"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FulfillOrder"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"FulfillOrder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Parallel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Branches"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"UpdateInventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"UpdateInventory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:updateInventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SendConfirmation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"SendConfirmation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:sendConfirmation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"HandleError"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:handleError"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This workflow encompasses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sequential task states for validation, inventory checks, and payment&lt;/li&gt;
&lt;li&gt;Independent fulfillment tasks executed in parallel&lt;/li&gt;
&lt;li&gt;A dedicated &lt;code&gt;HandleError&lt;/code&gt; state, meant to be wired up as the target of Catch blocks on the task states&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Step Functions Best Practices
&lt;/h3&gt;

&lt;p&gt;From my experience with workflows like this, I have developed the following best practices:&lt;/p&gt;

&lt;h4&gt;
  
  
  Error Handling
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Always retry on transient failures&lt;/li&gt;
&lt;li&gt;Employ Catch blocks to handle errors gracefully&lt;/li&gt;
&lt;li&gt;Log state transitions for debugging&lt;/li&gt;
&lt;/ul&gt;
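
&lt;p&gt;As a minimal sketch, here is how Retry and Catch can be attached to a Task state; the state names, function ARN, and error thresholds are placeholders, not values from a real deployment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "ProcessPayment": {
    "Type": "Task",
    "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:processPayment",
    "Retry": [
      {
        "ErrorEquals": ["Lambda.ServiceException", "Lambda.TooManyRequestsException"],
        "IntervalSeconds": 2,
        "MaxAttempts": 3,
        "BackoffRate": 2.0
      }
    ],
    "Catch": [
      {
        "ErrorEquals": ["States.ALL"],
        "Next": "HandleError"
      }
    ],
    "Next": "FulfillOrder"
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Retry runs first; only after all attempts are exhausted does the Catch block route the execution to the fallback state.&lt;/p&gt;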

&lt;h4&gt;
  
  
  State Machine Design
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Keep state machines focused and single-purpose&lt;/li&gt;
&lt;li&gt;Make use of built-in error handling in Step Functions instead of implementing error handling in Lambda&lt;/li&gt;
&lt;li&gt;Leverage parallel states for independent operations&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Input/Output Processing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Use InputPath and OutputPath to filter data&lt;/li&gt;
&lt;li&gt;Implement ResultSelector to shape task output&lt;/li&gt;
&lt;li&gt;Keep payload size below 256 KB&lt;/li&gt;
&lt;/ul&gt;
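
&lt;p&gt;A quick sketch of these filters on a Task state; the paths and field names here are illustrative, not from a real workflow:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "ProcessOrder": {
    "Type": "Task",
    "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:processOrder",
    "InputPath": "$.order",
    "ResultSelector": {
      "status.$": "$.status"
    },
    "ResultPath": "$.result",
    "OutputPath": "$",
    "End": true
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;code&gt;InputPath&lt;/code&gt; trims what the task receives, &lt;code&gt;ResultSelector&lt;/code&gt; reshapes what it returns, and &lt;code&gt;ResultPath&lt;/code&gt; controls where that result lands in the state's output.&lt;/p&gt;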

&lt;h4&gt;
  
  
  Monitoring and Debugging
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Enable detailed CloudWatch logging&lt;/li&gt;
&lt;li&gt;Use X-Ray for tracing&lt;/li&gt;
&lt;li&gt;Configure CloudWatch alarms on failed executions&lt;/li&gt;
&lt;/ul&gt;
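
&lt;p&gt;For reference, a state machine's logging configuration looks roughly like this (the log group ARN is a placeholder); X-Ray is turned on separately through the tracing configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "level": "ALL",
  "includeExecutionData": true,
  "destinations": [
    {
      "cloudWatchLogsLogGroup": {
        "logGroupArn": "arn:aws:logs:REGION:ACCOUNT:log-group:/aws/vendedlogs/states/order-processing:*"
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;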

&lt;h3&gt;
  
  
  Advanced Features of AWS Step Functions
&lt;/h3&gt;

&lt;p&gt;Here are some of the advanced features I use most frequently:&lt;/p&gt;

&lt;h4&gt;
  
  
  Dynamic Parallelism
&lt;/h4&gt;

&lt;p&gt;Dynamic Parallelism lets you process multiple tasks at once, even if you don’t know how many tasks there will be ahead of time. It's perfect for handling scenarios like processing a list of items that keeps changing.&lt;/p&gt;

&lt;p&gt;Using the Map state, you can run the same steps for each item in an array, in parallel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProcessBatch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Map"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ItemsPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.items"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"MaxConcurrency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Iterator"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProcessItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ProcessItem"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:processItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Breaking it down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Map state&lt;/code&gt;: Handles parallel processing for a list of items.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ItemsPath&lt;/code&gt;: Points to the array of items in your input JSON.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;MaxConcurrency&lt;/code&gt;: Sets how many tasks can run at the same time.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Iterator&lt;/code&gt;: Defines the steps to follow for each item in the list.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why it’s awesome:&lt;/p&gt;

&lt;p&gt;This setup is great for tasks like resizing images, processing payments, or transforming data. It keeps the system efficient by running tasks in parallel without overloading downstream resources.&lt;/p&gt;
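&lt;p&gt;As a rough local analogy (a thread pool rather than the actual Step Functions runtime), &lt;code&gt;MaxConcurrency&lt;/code&gt; behaves like capping the worker count:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def process_item(item):
    # Stand-in for the processItem Lambda: double the value.
    return item * 2

def process_batch(items, max_concurrency=10):
    # Mirror the Map state: run process_item over every item,
    # never exceeding max_concurrency workers at once.
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        return list(pool.map(process_item, items))

print(process_batch([1, 2, 3, 4, 5]))  # [2, 4, 6, 8, 10]
```

&lt;p&gt;Like the Map state, results come back in input order even though the items are processed concurrently.&lt;/p&gt;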

&lt;h4&gt;
  
  
  Integration Patterns
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"WaitForCallback"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:states:::lambda:invoke.waitForTaskToken"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"FunctionName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:longRunningTask"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Payload"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"taskToken.$"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$$.Task.Token"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProcessResult"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example shows how we can pause a workflow until a task finishes and sends a response. It’s perfect for long-running tasks where you need to wait for a callback before moving forward.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Type&lt;/code&gt;: Set to &lt;code&gt;Task&lt;/code&gt;, defining this as a task state.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Resource&lt;/code&gt;: Uses a special ARN to call a Lambda function and wait for a callback with a task token.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Parameters&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FunctionName&lt;/code&gt;: Points to the Lambda function handling the long-running task.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;taskToken.$&lt;/code&gt;: A unique token automatically generated by AWS Step Functions for this task. It’s included in the payload sent to the Lambda function.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the workflow reaches this task state, it invokes the Lambda function.&lt;/li&gt;
&lt;li&gt;The Lambda function receives the taskToken in the payload.&lt;/li&gt;
&lt;li&gt;The Step Function pauses and waits for the Lambda function to send a callback with the token.&lt;/li&gt;
&lt;li&gt;Once the callback is received, the workflow resumes and moves to the ProcessResult state (defined in the Next field).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why we use this setup?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's ideal for scenarios like manual approvals or asynchronous tasks where the next step depends on external input or a long-running process. The workflow remains efficient by pausing instead of polling or retrying.&lt;/p&gt;
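&lt;p&gt;On the worker side, the long-running task eventually reports back with the token. Here is a minimal sketch, assuming the token arrives in the event payload; the actual callback goes through boto3's &lt;code&gt;send_task_success&lt;/code&gt;:&lt;/p&gt;

```python
import json

def build_task_success(task_token, result):
    # Parameters for Step Functions' SendTaskSuccess API call:
    # the token identifies the paused execution, and output must
    # be a JSON string (subject to the 256 KB payload limit).
    return {"taskToken": task_token, "output": json.dumps(result)}

# In the real worker you would hand these to the SDK, e.g.:
#   boto3.client("stepfunctions").send_task_success(**params)
params = build_task_success("abc-123", {"status": "approved"})
print(params["output"])
```

&lt;p&gt;If the task fails instead, the worker would call &lt;code&gt;send_task_failure&lt;/code&gt; with the same token so the workflow can take its error path.&lt;/p&gt;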

&lt;h3&gt;
  
  
  Performance Optimization
&lt;/h3&gt;

&lt;p&gt;Based on my experience with Step Functions performance optimization, here are some best practices that I have learned:&lt;/p&gt;

&lt;h4&gt;
  
  
  Optimize State Transitions
&lt;/h4&gt;

&lt;p&gt;I find that transitions between states have a significant impact on both cost and performance; here is how I optimize them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProcessOrder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:processOrder"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ResultsSelector"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"relevantData.$"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.specificField"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NextState"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;I use &lt;code&gt;ResultSelector&lt;/code&gt; to pass only the needed data between states&lt;/li&gt;
&lt;li&gt;I keep payload sizes below 256 KB across states&lt;/li&gt;
&lt;li&gt;I combine states when possible to reduce transitions&lt;/li&gt;
&lt;/ul&gt;
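&lt;p&gt;A small pre-flight check I can run in Lambda code to catch oversized payloads before Step Functions rejects them (a sketch; Step Functions measures the limit against the UTF-8 encoded JSON):&lt;/p&gt;

```python
import json

MAX_PAYLOAD_BYTES = 256 * 1024  # Step Functions payload limit (256 KB)

def fits_payload_limit(payload):
    # Measure the serialized size as UTF-8 encoded JSON,
    # which is what the service limit applies to.
    return len(json.dumps(payload).encode("utf-8")) <= MAX_PAYLOAD_BYTES

print(fits_payload_limit({"orderId": "42", "items": ["a"] * 10}))  # True
```

&lt;p&gt;When a payload fails this check, the usual fix is to store the blob in S3 and pass only its key through the state machine.&lt;/p&gt;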

&lt;h4&gt;
  
  
  Lambda Optimization
&lt;/h4&gt;

&lt;p&gt;Since I work with Lambda functions often, here are a few optimization tricks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I adjust Lambda memory based on the workload being processed&lt;/li&gt;
&lt;li&gt;I use Provisioned Concurrency for frequently invoked Lambdas&lt;/li&gt;
&lt;li&gt;I set per-state timeouts based on observed execution times&lt;/li&gt;
&lt;/ul&gt;
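&lt;p&gt;To find the memory/duration sweet spot, I compare configurations with a quick estimate. The per-GB-second price below is illustrative only, so check current Lambda pricing for your region:&lt;/p&gt;

```python
def lambda_cost(memory_mb, duration_ms, invocations,
                price_per_gb_second=0.0000166667):  # illustrative rate
    # Lambda bills GB-seconds: memory (in GB) times billed duration.
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000) * invocations
    return gb_seconds * price_per_gb_second

# Doubling memory often halves duration, which breaks even on cost
# while finishing the workflow sooner:
print(lambda_cost(512, 1000, 1_000_000))
print(lambda_cost(1024, 500, 1_000_000))
```

&lt;p&gt;When the cost is a wash, the faster configuration usually wins because it also shortens the overall state machine execution.&lt;/p&gt;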

&lt;h4&gt;
  
  
  Parallel Processing Strategies
&lt;/h4&gt;

&lt;p&gt;This pattern I use when I have multiple items to process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProcessBatch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Map"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"MaxConcurrency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ItemsPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.items"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Iterator"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProcessItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ProcessItem"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:processItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;I set an optimal MaxConcurrency based on my workload&lt;/li&gt;
&lt;li&gt;I batch small operations together&lt;/li&gt;
&lt;li&gt;I use DynamoDB batch operations for high-volume writes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cost Optimization for AWS Step Functions
&lt;/h2&gt;

&lt;p&gt;Here is how I keep my Step Functions costs under control:&lt;/p&gt;

&lt;h3&gt;
  
  
  State Transition Costs
&lt;/h3&gt;

&lt;p&gt;I've learned that each state transition costs something:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard Workflows: $0.025 per 1,000 state transitions&lt;/li&gt;
&lt;li&gt;Express Workflows: charged by execution duration and memory use&lt;/li&gt;
&lt;/ul&gt;
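&lt;p&gt;To see what state count does to the bill, a quick back-of-the-envelope calculation using the Standard Workflow rate helps:&lt;/p&gt;

```python
def standard_workflow_cost(executions, transitions_per_execution,
                           price_per_1000=0.025):
    # Standard Workflows bill per state transition.
    total_transitions = executions * transitions_per_execution
    return total_transitions / 1000 * price_per_1000

# At 1M executions/month, trimming a 6-state workflow to 4 states:
print(standard_workflow_cost(1_000_000, 6))  # 150.0
print(standard_workflow_cost(1_000_000, 4))  # 100.0
```

&lt;p&gt;Two fewer transitions per execution saves a third of the bill at this volume, which is why combining states pays off.&lt;/p&gt;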

&lt;p&gt;Here's what I do to optimize costs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select Workflow Type:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I use Standard Workflows for long-running processes with few state transitions&lt;/li&gt;
&lt;li&gt;I use Express Workflows for high-volume, short-duration tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;State Combination:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"CombinedProcessing"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:combinedProcessor"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FinalState"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whenever possible, I combine small states.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda Cost Optimization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I balance memory and duration for optimal cost&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use batch processing to reduce the number of Lambda invocations&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Service Integrations
&lt;/h3&gt;

&lt;p&gt;I use direct service integrations to reduce costs:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"WriteToDynamoDB"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:states:::dynamodb:putItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"TableName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MyTable"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Item"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S.$"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.id"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"S.$"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.payload"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NextState"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code block shows how to write directly to a DynamoDB table from AWS Step Functions without requiring a Lambda function.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type: Task&lt;/strong&gt;: Specifies this as a task state in the workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource&lt;/strong&gt;: Connects directly to DynamoDB’s &lt;code&gt;putItem&lt;/code&gt; operation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameters&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TableName&lt;/strong&gt;: The name of the DynamoDB table where data will be written.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Item&lt;/strong&gt;: Maps the input values (e.g., &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;payload&lt;/code&gt;) to the corresponding attributes in the table.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How does it work?&lt;/p&gt;

&lt;p&gt;When the workflow reaches this task state, it writes the specified data directly to DynamoDB. Once the &lt;code&gt;putItem&lt;/code&gt; operation completes, the workflow transitions to the state named in the &lt;code&gt;Next&lt;/code&gt; field.&lt;/p&gt;

&lt;p&gt;Integrating AWS Step Functions directly with DynamoDB removes the need for a Lambda function whose only job is to forward data. The workflow talks to the AWS service itself, so each write is faster and cheaper, which is the best possible case for workflows that only need to store data in DynamoDB.&lt;/p&gt;
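&lt;p&gt;The attribute mapping that the &lt;code&gt;Parameters&lt;/code&gt; block performs declaratively can be mirrored in plain Python. This hypothetical helper just shows the typed shape DynamoDB expects; the table and field names are the example values from the state definition:&lt;/p&gt;

```python
def to_dynamodb_item(record):
    # Map plain input fields to DynamoDB's typed attribute format
    # ("S" marks a string attribute), mirroring the putItem Parameters.
    return {
        "TableName": "MyTable",
        "Item": {
            "id": {"S": record["id"]},
            "data": {"S": record["payload"]},
        },
    }

print(to_dynamodb_item({"id": "order-1", "payload": "shipped"}))
```

&lt;p&gt;Seeing the typed format side by side with the JSONPath version makes it easier to debug mapping errors in the state definition.&lt;/p&gt;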

&lt;h2&gt;
  
  
  State Types - My Implementation Guide
&lt;/h2&gt;

&lt;p&gt;How I use state types differently in my workflows:&lt;/p&gt;

&lt;h3&gt;
  
  
  Task States
&lt;/h3&gt;

&lt;p&gt;I use Task states to do the actual work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProcessPayment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:processPayment"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"TimeoutSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Retry"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ErrorEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"ServiceException"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"IntervalSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"MaxAttempts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"BackoffRate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Catch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ErrorEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"States.Timeout"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"HandleTimeout"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NextState"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key features that I always set up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I set timeouts based on the expected duration&lt;/li&gt;
&lt;li&gt;I add retry logic for transient failures&lt;/li&gt;
&lt;li&gt;I handle errors with Catch&lt;/li&gt;
&lt;li&gt;I filter output data with &lt;code&gt;ResultSelector&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Choice States
&lt;/h3&gt;

&lt;p&gt;I use Choice states to make a decision:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"EvaluateOrder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Choice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Choices"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"And"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Variable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.orderValue"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"NumericGreaterThan"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Variable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.customerType"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"premium"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ApplyPremiumProcess"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Variable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$.orderValue"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"NumericLessThan"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ApplyFastProcess"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Default"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"StandardProcess"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I use them for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Routing based on business logic&lt;/li&gt;
&lt;li&gt;Data validation&lt;/li&gt;
&lt;li&gt;Conditional processing&lt;/li&gt;
&lt;/ul&gt;
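&lt;p&gt;The same routing rules can be expressed as a plain function, which is a handy way to unit-test Choice logic before encoding it in Amazon States Language:&lt;/p&gt;

```python
def evaluate_order(order):
    # Same rules as the Choice state: premium high-value orders first,
    # then small orders, then the default path.
    if order["orderValue"] > 1000 and order["customerType"] == "premium":
        return "ApplyPremiumProcess"
    if order["orderValue"] < 100:
        return "ApplyFastProcess"
    return "StandardProcess"

print(evaluate_order({"orderValue": 1500, "customerType": "premium"}))
```

&lt;p&gt;Note that, like Choice rules, the order of the checks matters: the first match wins.&lt;/p&gt;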

&lt;h3&gt;
  
  
  Parallel States
&lt;/h3&gt;

&lt;p&gt;When I need to run several independent tasks simultaneously:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProcessOrder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Parallel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Branches"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"UpdateInventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"UpdateInventory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:updateInventory"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NotifyCustomer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"NotifyCustomer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT:function:notifyCustomer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CompleteOrder"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Important things I have learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each branch runs independently&lt;/li&gt;
&lt;li&gt;The next state waits for all branches to complete&lt;/li&gt;
&lt;li&gt;Each branch needs its own error handling&lt;/li&gt;
&lt;/ul&gt;
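&lt;p&gt;To make the error-handling point concrete, here is a minimal sketch of a branch with its own &lt;code&gt;Retry&lt;/code&gt; and &lt;code&gt;Catch&lt;/code&gt; configuration; the &lt;code&gt;HandleInventoryFailure&lt;/code&gt; state and its Lambda function are hypothetical names for illustration, not part of the original workflow:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "StartAt": "UpdateInventory",
  "States": {
    "UpdateInventory": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:updateInventory",
      "Retry": [
        { "ErrorEquals": ["States.TaskFailed"], "IntervalSeconds": 2, "MaxAttempts": 3, "BackoffRate": 2.0 }
      ],
      "Catch": [
        { "ErrorEquals": ["States.ALL"], "Next": "HandleInventoryFailure" }
      ],
      "End": true
    },
    "HandleInventoryFailure": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:handleInventoryFailure",
      "End": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because each branch is its own small state machine, a failure caught here stays inside the branch instead of aborting the entire &lt;code&gt;Parallel&lt;/code&gt; state.&lt;/p&gt;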

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS Step Functions changed how I build serverless applications. It gave me a robust way to orchestrate even the most complex workflows while keeping configurations clear and maintainable. The visual workflow editor, combined with the power of Amazon States Language, makes it much easier to design, implement, and maintain serverless applications.&lt;/p&gt;

&lt;p&gt;Remember the keys to a successful Step Functions implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clear workflow design&lt;/li&gt;
&lt;li&gt;Proper error handling&lt;/li&gt;
&lt;li&gt;Efficient state management&lt;/li&gt;
&lt;li&gt;Comprehensive monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are building serverless applications on AWS, I highly recommend checking out Step Functions; it may turn out to be the missing piece in your serverless architecture puzzle.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Kubernetes Custom Resources</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Thu, 14 Nov 2024 10:30:25 +0000</pubDate>
      <link>https://dev.to/cicube/kubernetes-custom-resources-29gg</link>
      <guid>https://dev.to/cicube/kubernetes-custom-resources-29gg</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Kubernetes Custom Resources extend the Kubernetes API with new resource types, making it possible to tailor a cluster to needs that the built-in resources do not cover. In the following, we will look at the particulars of custom resources, take a closer look at when to use them, and see how they enable extensibility. Whether you are considering a simple CRD or an Aggregated API, you will find detailed insights here to help you make informed decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Custom Resources?
&lt;/h3&gt;

&lt;p&gt;What makes Custom Resources so powerful in Kubernetes is that they allow me to extend the Kubernetes API with new resource types. This means I can tailor my environment to specific application needs. Unlike built-in resources, custom resources appear and disappear dynamically as their definitions are registered and removed, allowing for more modular cluster setups.&lt;/p&gt;

&lt;p&gt;For instance, I could define a CRD for a new resource type, say, &lt;code&gt;Database&lt;/code&gt;. That custom resource could then be managed through &lt;code&gt;kubectl&lt;/code&gt;, just like any other object in Kubernetes, and would make perfect sense to the team. When I run &lt;code&gt;kubectl create&lt;/code&gt;, &lt;code&gt;kubectl get&lt;/code&gt;, and &lt;code&gt;kubectl delete&lt;/code&gt;, I'm operating not just on the built-in resources but on my custom ones too. One nice feature is that these custom resources have a life cycle independent of the cluster itself: I can add, update, or remove them without upgrading or restarting the cluster. This allows a lot of flexibility and control over the kinds of resources I handle, which in turn really aids operational efficiency in a Kubernetes ecosystem.&lt;/p&gt;
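&lt;p&gt;As a sketch of what that could look like, a minimal CRD for a &lt;code&gt;Database&lt;/code&gt; type might be written as follows; the group name &lt;code&gt;example.com&lt;/code&gt; and the schema fields are illustrative, not prescriptive:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name is the plural form followed by the group
  name: databases.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                replicas:
                  type: integer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once such a CRD is registered, &lt;code&gt;kubectl get databases&lt;/code&gt; works just as it does for built-in resources.&lt;/p&gt;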

&lt;h3&gt;
  
  
  Role of Custom Controllers
&lt;/h3&gt;

&lt;p&gt;The custom controller is, to me, an integral part: it extends Kubernetes by acting on custom resources through the declarative API model. When I pair a custom resource with a custom controller, I can declare a desired state for my resource and have the controller ensure the actual state stays consistent with that declaration.&lt;/p&gt;

&lt;p&gt;This separation means I focus on what needs to be done rather than how that might be achieved, which is quite the reverse of an imperative API approach.&lt;/p&gt;

&lt;p&gt;One of the central patterns in this landscape is the Operator pattern. This allows me to encapsulate domain-specific knowledge of my applications into the Kubernetes API so that I have a smarter API that is more capable of managing complex scenarios itself. For instance, I could write a controller that automatically manages the lifecycle of a database application, scaling, performing backups, or failover without requiring manual intervention. Moreover, the deployment and updating of custom controllers take place without interfering with the life cycle of the Kubernetes cluster itself. This implies that I can upgrade the controllers easily without necessarily bringing down the entire cluster or causing disruptions to other workloads, hence greatly enhancing operational efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  API Integration: When to Choose Custom Resources
&lt;/h3&gt;

&lt;p&gt;When considering whether to extend Kubernetes with a new API or to create a standalone service, a few factors are decisive. In general, it boils down to whether your API describes a declarative model or is targeted at imperative operations. If your API specifies a desired state, for instance, defining how an application should be configured, then custom resources may be appropriate, and they integrate easily with Kubernetes tooling such as kubectl.&lt;/p&gt;

&lt;p&gt;Another aspect to decide upon is how you want your API to relate to the Kubernetes UI. If you want to have representation in the Kubernetes dashboard, then your API should be aggregated, so it can be presented with the native resource types. Another thing to consider is the scoping of resources. A CR naturally scopes itself to either a cluster or namespaces, which is perfect for resource isolation and ease of management.&lt;/p&gt;

&lt;p&gt;Lastly, the decision may be informed by the API support features built into Kubernetes: validation, authentication, and tooling can simplify development and maintenance for you if your API fits within the Kubernetes framework. Ultimately, weighing these considerations will lead you to the most effective integration strategy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ConfigMaps vs Custom Resources: Which One to Choose
&lt;/h3&gt;

&lt;p&gt;When choosing between a ConfigMap and a custom resource in Kubernetes, there is a set of criteria that can help with the decision. First, ConfigMaps are a good fit when the configuration file format already exists and is well-documented, such as &lt;code&gt;mysql.cnf&lt;/code&gt; or &lt;code&gt;pom.xml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is useful when the application that consumes the configuration is designed to read it from files inside a Pod or relies on environment variables. ConfigMaps also apply well when frequent rolling updates are required, as they fit easily into existing deployment strategies, ensuring that updated configurations roll out without downtime.&lt;/p&gt;
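&lt;p&gt;For example, a ConfigMap can carry an existing &lt;code&gt;mysql.cnf&lt;/code&gt; verbatim and be mounted into a Pod as a file; the contents below are just a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  # the key becomes the filename when mounted as a volume
  mysql.cnf: |
    [mysqld]
    max_connections=200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;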

&lt;p&gt;Custom resources, on the other hand, thrive in more complex scenarios where the features of the Kubernetes API become necessary. If your use case involves rich interaction through the Kubernetes API, such as watch capability, or requires a structured object representation of your application domain, then a custom resource becomes more helpful. With custom resources, you get complete flexibility to define an entire API model, which also enables automation using custom controllers. In the end, the decision between a ConfigMap and a custom resource depends on what your application needs: how it consumes configuration data, what interactions are required, and how complex its management logic is.&lt;/p&gt;

&lt;h3&gt;
  
  
  CustomResourceDefinitions (CRDs) vs Aggregated APIs
&lt;/h3&gt;

&lt;p&gt;There are primarily two ways to extend Kubernetes with personalized resources, namely CustomResourceDefinitions and Aggregated APIs. Understanding the differences will help you decide which one suits your requirements.&lt;/p&gt;

&lt;h4&gt;
  
  
  Ease of Use
&lt;/h4&gt;

&lt;p&gt;Generally speaking, CRDs offer an easier, less complicated way to employ custom resources without writing code. This makes CRDs appropriate for simple scenarios where no extra complexity is needed. Aggregated APIs, by contrast, call for programming skills and the construction of a dedicated service, which complicates deployment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Programming Requirements
&lt;/h4&gt;

&lt;p&gt;CRDs require no coding to set up the resource itself; you can create CRDs with simple YAML definitions. Aggregated APIs, on the other hand, require you to code an API server and are a good choice for advanced use cases that need custom business logic.&lt;/p&gt;

&lt;h4&gt;
  
  
  Service Dependencies
&lt;/h4&gt;

&lt;p&gt;CRDs are served directly by the Kubernetes API server, which means they work seamlessly without requiring an external service. Aggregated APIs are served by an additional API server that you have to maintain, an extra source of failure and added deployment overhead.&lt;/p&gt;

&lt;h4&gt;
  
  
  Validation Features
&lt;/h4&gt;

&lt;p&gt;CRDs support basic validation using OpenAPI standards by defining validation rules inside the resource specification. This is important for ensuring that only valid configurations are accepted. Aggregated APIs can offer even more complex validation mechanisms, such as arbitrary validation via webhooks, which makes them attractive for applications where strong data integrity is needed.&lt;/p&gt;
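&lt;p&gt;As a sketch, OpenAPI validation rules inside a CRD version could look like this; the field names and bounds are illustrative assumptions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;schema:
  openAPIV3Schema:
    type: object
    properties:
      spec:
        type: object
        required: ["engine"]
        properties:
          engine:
            type: string
            enum: ["mysql", "postgres"]
          replicas:
            type: integer
            minimum: 1
            maximum: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With rules like these, the API server rejects objects with an unknown engine or a replica count outside the allowed range before they are ever stored.&lt;/p&gt;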

&lt;h4&gt;
  
  
  Custom Storage Options
&lt;/h4&gt;

&lt;p&gt;Another big advantage of Aggregated APIs is flexibility in custom storage solutions. If your application stores data in some special way that standard Kubernetes storage options cannot cover, an Aggregated API allows you to implement your own storage. CRDs, on the other hand, are bound to the Kubernetes storage model, which may not fit every application's requirements.&lt;/p&gt;

&lt;h4&gt;
  
  
  Examples of Use Cases
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;CRDs&lt;/strong&gt; fit best when one needs to implement simple extensions to the API with simple CRUD operations, such as defining a custom resource for managing application configurations that hardly change. Aggregated APIs apply to more complex applications that involve a number of dependencies and require advanced functionality, such as services that rely heavily on business logic and data validation. In the end, both CRDs and Aggregated APIs have their merits and best-fit scenarios. Your choice should align with the complexity of your requirements, the skill set of your team, and the operational overhead you are willing to manage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Important Considerations for Custom Resources
&lt;/h3&gt;

&lt;p&gt;Among the factors to weigh when adding custom resources to a Kubernetes cluster, one major concern is the possibility of introducing new failure points. By nature, a custom resource is served by a custom controller or an additional API server, and bugs or misconfiguration in either can lead to failures affecting system performance. This risk underlines the need for exhaustive testing and monitoring before and after deployment.&lt;/p&gt;

&lt;p&gt;Another critical factor is storage consumption. Custom resources consume storage just as built-in Kubernetes resources such as ConfigMaps do. If you create a large number of them, they can overload the storage capacity of the API server, and performance will suffer.&lt;/p&gt;

&lt;p&gt;Moreover, authentication and authorization requirements must be taken into consideration. Custom resources, just like standard Kubernetes resources, use Role-Based Access Control (RBAC), so it's necessary to grant proper access privileges to block unauthorized access.&lt;/p&gt;
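&lt;p&gt;A minimal sketch of an RBAC Role for a hypothetical &lt;code&gt;databases&lt;/code&gt; custom resource might look like the following; the group name and verbs are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: database-editor
rules:
  # grant access to the custom resource just like any built-in kind
  - apiGroups: ["example.com"]
    resources: ["databases"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Bound to a user or service account via a RoleBinding, this restricts who can manage the custom resource within the namespace.&lt;/p&gt;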

&lt;p&gt;Finally, it's important to understand how client libraries support custom resources. Not all client libraries support every kind of custom resource, and that may affect how your applications interact with them. Knowing the client libraries available, especially for languages such as Go and Python, makes for smooth integration and operation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;With Custom Resources implemented, a Kubernetes cluster gains huge capability for customization and scaling. Understanding how CRDs and Aggregated APIs work lets you make well-informed decisions aligned with your project's goals. Whether you value ease of use or the flexibility of advanced features, knowing when to use CRDs versus an aggregated API server is important for effectively leveraging Kubernetes extensibility.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Kubernetes Horizontal Pod Autoscaler</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Tue, 12 Nov 2024 12:41:54 +0000</pubDate>
      <link>https://dev.to/cicube/kubernetes-horizontal-pod-autoscaler-19l0</link>
      <guid>https://dev.to/cicube/kubernetes-horizontal-pod-autoscaler-19l0</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Kubernetes is immensely powerful at managing and scaling applications with ease. Among its various features, the Horizontal Pod Autoscaler (HPA) stands out as an important ingredient for maintaining performance under variable load. The HPA automatically scales the number of pods based on demand at any given moment so that resources are used to maximum efficiency. We will cover in some detail how the HPA works, the mechanisms involved, and how it can be effectively implemented in your Kubernetes environment.&lt;/p&gt;
&lt;h2&gt;
  
  
  How Horizontal Pod Autoscalers Operate
&lt;/h2&gt;

&lt;p&gt;Let's look at how a Horizontal Pod Autoscaler in Kubernetes actually works. It is a controller that automatically scales the pod replicas of a deployment based on observed CPU utilization or any other metrics you specify. In other words, it tries to keep the demands on the pods balanced against the availability of resources.&lt;/p&gt;

&lt;p&gt;The autoscaler continuously monitors the resource metrics of the target workload, such as a Deployment, and changes the pod replica count accordingly. If the load rises, the HPA ramps up the pods to meet demand; conversely, when the load drops, it scales the pods down toward the configured minimum. The HPA works against defined targets, such as average CPU utilization. To use it, I define a metric configuration in the HPA's YAML file that triggers scaling activities based on current CPU usage. This is straightforward to set up and keeps workloads efficient: the HPA keeps a close watch on the metric to avoid wasting allocated resources while optimizing for performance.&lt;/p&gt;
&lt;h3&gt;
  
  
  Implementing Horizontal Scaling with Confidence
&lt;/h3&gt;

&lt;p&gt;Understanding the subtlety between horizontal and vertical scaling is key when deploying applications in Kubernetes. The HPA performs horizontal scaling: it increases the number of pod instances so that the increased load is distributed across more of them. Vertical scaling, by contrast, increases the resources allocated to a running pod, such as CPU or memory. Workloads whose demand changes over time are a natural fit for horizontal scaling.&lt;/p&gt;

&lt;p&gt;Below, I will configure HPA using a YAML file to monitor average CPU utilization. Here is a basic setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v2&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app-hpa&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app-deployment&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Resource&lt;/span&gt;
      &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Utilization&lt;/span&gt;
          &lt;span class="na"&gt;averageUtilization&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this setup, the HPA will keep average CPU utilization around 60%, scaling the number of replicas between 2 and 10. This is an effective, easy-to-manage balance between resource usage and application performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling with Resource and Custom Metrics
&lt;/h2&gt;

&lt;p&gt;HPAs drive scaling decisions based on resource and custom metrics. By defining resource metrics such as CPU, or user-defined values, the HPA dynamically adjusts the pod count. This adjustment ensures that applications remain responsive to user needs while optimizing resource usage. In Kubernetes you can use traditional resource metrics such as CPU and memory, or custom metrics depending on your application's needs; you define them in the HPA configuration file. Below is how you would set up an HPA YAML file to scale on both resource utilization and an external metric:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v2&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-custom-hpa&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Resource&lt;/span&gt;
      &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Utilization&lt;/span&gt;
          &lt;span class="na"&gt;averageUtilization&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;70&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;External&lt;/span&gt;
      &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;metric&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;requests_per_second&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AverageValue&lt;/span&gt;
          &lt;span class="na"&gt;averageValue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the preceding example, the HPA scales the replicas of my-app based on both CPU utilization and a custom metric representing requests per second, sizing the pods to current traffic and usage patterns. Using both resource and custom metrics in your HPA configuration allows a more responsive and efficient scaling strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Algorithm Behind the Scaling Decisions
&lt;/h2&gt;

&lt;p&gt;Diving into the algorithm powering HPA scaling logic shows just how dynamically Kubernetes adjusts workloads. At its core, the algorithm computes the ratio of the current metric value to the desired one and scales the replica count accordingly, in the service of efficient resource management.&lt;/p&gt;

&lt;p&gt;It queries the current metric values from all active pods when the HPA runs. The desired replicas are computed using the formula below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, if current CPU utilization is &lt;code&gt;200m&lt;/code&gt; and the target is &lt;code&gt;100m&lt;/code&gt;, the ratio is &lt;code&gt;2.0&lt;/code&gt;, so the algorithm doubles the number of replicas. If, on the other hand, current utilization falls to &lt;code&gt;50m&lt;/code&gt;, it halves the number of replicas in use. The HPA controller also checks pod readiness and the availability of metric data: pods that are still starting up or have missing metric values are excluded from the scaling calculation. This prevents aggressive scaling when environments are unpredictable during startup. The algorithm thus lets Kubernetes balance resource demand against dynamic workloads while ensuring high availability of the service.&lt;/p&gt;
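&lt;p&gt;Plugging numbers into the formula makes the behavior concrete; the current replica counts here are assumed for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;desiredReplicas = ceil[1 * (200m / 100m)] = ceil[2.0] = 2   (scale up)
desiredReplicas = ceil[2 * (50m / 100m)]  = ceil[1.0] = 1   (scale down)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;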

&lt;h2&gt;
  
  
  Handling Rolling Updates with HPAs
&lt;/h2&gt;

&lt;p&gt;When integrating Horizontal Pod Autoscalers with rolling updates in Kubernetes, performance should be maintained even as application versions change. The HPA watches the metrics and actively adjusts replica counts during updates, handling the application's fluctuating load without disturbance.&lt;/p&gt;

&lt;p&gt;During a rolling update, Kubernetes gradually replaces the old pods with new ones, avoiding downtime, while the HPA dynamically regulates the number of replicas. For example, if the load on the application increases during the upgrade, the HPA scales up the pods according to its configuration. If the load drops after the upgrade and more resources become available, it scales back down, while still respecting the minimum replica count. Here is an example of how to configure an HPA to manage a Deployment; it also illustrates how scaling works in the context of a rolling update:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app-hpa&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Resource&lt;/span&gt;
      &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Utilization&lt;/span&gt;
          &lt;span class="na"&gt;averageUtilization&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;70&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this configuration, the HPA keeps pod CPU utilization at around 70%. If a rolling update coincides with increased workload, the HPA can raise the replica count up to the defined maximum. The old pods are gradually replaced by new ones so that the remaining pods seamlessly handle the traffic, reducing potential service interruption.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Custom Scaling Behavior
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, the Horizontal Pod Autoscaler (HPA) allows you to fine-tune its scaling actions through custom configurations, such as scale-up and scale-down settings. This is particularly useful for ensuring that your applications respond quickly to changes in demand while maintaining stability and resource efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Scale-Up and Scale-Down Configurations
&lt;/h3&gt;

&lt;p&gt;The HPA lets you define separate scale-up and scale-down behaviors, so your system can react to changes in load while avoiding sharp swings in the number of pods. These behaviors are controlled through the &lt;code&gt;behavior&lt;/code&gt; field in the HPA configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stabilization Windows
&lt;/h3&gt;

&lt;p&gt;A stabilization window is important because it prevents the autoscaler from making rapid back-to-back changes that would themselves cause instability. It ensures that scale-down operations do not start too quickly and terminate pods that may still be needed a short while later.&lt;/p&gt;

&lt;p&gt;Example - adding a stabilization window to your HPA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;behavior&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleDown&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;stabilizationWindowSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;300&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the HPA considers a 5-minute window before deciding whether to scale down, avoiding sudden drops in the number of available pods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rate Limits for Scaling
&lt;/h3&gt;

&lt;p&gt;Aside from the stabilization windows, you can set rate limits on how fast your HPA is allowed to scale up or down. That's helpful when you want more granular control over how fast the number of replicas is allowed to change.&lt;/p&gt;

&lt;p&gt;Here is how you might configure a rate limit to scale down by no more than 4 pods per minute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;behavior&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleDown&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pods&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
        &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy allows the HPA to remove at most 4 pods within any 1-minute period. Scale-up operations can be restricted in the same way.&lt;/p&gt;
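&lt;p&gt;As a sketch of such a scale-up restriction (the specific values here are illustrative, not from the article), the same &lt;code&gt;behavior&lt;/code&gt; field also accepts a &lt;code&gt;scaleUp&lt;/code&gt; section:&lt;/p&gt;

```yaml
behavior:
  scaleUp:
    stabilizationWindowSeconds: 0   # react to load spikes immediately
    policies:
      - type: Pods
        value: 2            # add at most 2 pods...
        periodSeconds: 30   # ...per 30-second window
```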

&lt;h2&gt;
  
  
  Example YAML Configuration
&lt;/h2&gt;

&lt;p&gt;Combining the two - the stabilization window and the rate limit - you can configure your HPA as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v2&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app-hpa&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Resource&lt;/span&gt;
      &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cpu&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Utilization&lt;/span&gt;
          &lt;span class="na"&gt;averageUtilization&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;70&lt;/span&gt;
  &lt;span class="na"&gt;behavior&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;scaleDown&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;stabilizationWindowSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;300&lt;/span&gt;
      &lt;span class="na"&gt;policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pods&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup defines not only when the HPA scales but also how quickly, giving you tighter control over your application's pod lifecycle under fluctuating load.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;The Kubernetes Horizontal Pod Autoscaler is essential to modern cloud-native application management, enabling dynamic scaling driven by real-world demand. A clear understanding of its operational mechanics, from metric collection down to the scaling decision, lets you tune your application to meet user needs efficiently without wasting resources. With the best practices and custom configurations covered here, your workloads can achieve far better resilience and performance.&lt;/p&gt;




&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Working with Git Remotes</title>
      <dc:creator>CiCube</dc:creator>
      <pubDate>Tue, 12 Nov 2024 12:41:39 +0000</pubDate>
      <link>https://dev.to/cicube/working-with-git-remotes-an8</link>
      <guid>https://dev.to/cicube/working-with-git-remotes-an8</guid>
      <description>&lt;p&gt;&lt;a href="https://s.cicube.io/demo" rel="noopener noreferrer"&gt;&lt;br&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fc.cicube.io%2Fmarketing%2Fdevto-banner.png" alt="cicube.io" width="800" height="249"&gt; &lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Git was designed for collaboration, so managing remote repositories is a crucial part of using it. This article walks you through the core operations on remotes: changing a remote's URL, renaming a remote, and deleting one entirely. Along the way you will see practical command-line examples and the most common mistakes to avoid.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to change a remote repository URL
&lt;/h2&gt;

&lt;p&gt;To change an existing remote repository's URL, I use the &lt;code&gt;git remote set-url&lt;/code&gt; command, giving it the remote's name and the new URL. For instance, to point the &lt;code&gt;origin&lt;/code&gt; remote at a different HTTPS URL, I do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote set-url origin https://github.com/username/repository.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, to switch from HTTPS to SSH, I would run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote set-url origin git@github.com:username/repository.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running this command, I need to make sure that the change has taken effect. I check the list of current remotes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should show the updated URL for both fetch and push. A mistake I often make is triggering the "No such remote" error, which is almost always due to a typo in the remote name. To avoid that, I always first check which remotes are present using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything looks right but the problem persists, I check the URL itself for typos. Verifying remote names and URLs up front saves unnecessary headaches.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Mistakes When Changing Remotes
&lt;/h2&gt;

&lt;p&gt;When changing remote URLs in Git, a couple of frequent errors might pop up and interrupt your workflow. Perhaps the most common is the "No such remote" message; nine times out of ten this means you have mistyped the remote's name. Whenever I come across this issue, I always begin troubleshooting by listing the current remotes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This confirms whether the remote name is correct. If the name checks out and I'm still having problems, the cause is usually a simple typographical error in the URL itself, so I verify that the URL is correctly formatted and points to the repository I want to use.&lt;/p&gt;

&lt;p&gt;Another common mistake I make is attempting to rename a remote that does not exist. So, all too often I'll run a command like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rename nonexistent newname
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This fails because &lt;code&gt;nonexistent&lt;/code&gt; is not a configured remote, so Git reports that no such remote exists and nothing is renamed. Again, listing the existing remotes clears up the confusion. Conversely, if a rename fails because the new name is already in use, I have to choose another name or rename the conflicting remote first. A lot of frustration can be avoided simply by paying close attention to such details.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Rename a Remote Repository
&lt;/h2&gt;

&lt;p&gt;In this section I will describe how to rename an existing Git remote using the &lt;code&gt;git remote rename&lt;/code&gt; command. The syntax is straightforward: specify the remote's current name and the new name to assign. For example, to rename a remote named &lt;code&gt;origin&lt;/code&gt; to &lt;code&gt;upstream&lt;/code&gt;, I execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rename origin upstream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is run, I check that the rename has taken place by listing all current remotes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should show the new name for both fetch and push:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; upstream  https://github.com/username/repository.git (fetch)

&amp;gt; upstream  https://github.com/username/repository.git (push)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If Git reports that the remote to rename does not exist, I list my current remotes first, since the cause is usually a misspelled name. Likewise, if the new name is already in use, I can either pick a different name or rename the conflicting remote to something else first. These quick checks keep my remote management smooth and error-free.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Mistakes When Renaming Remotes
&lt;/h2&gt;

&lt;p&gt;When it comes to renaming remotes in Git, several slip-ups can disturb my workflow. The most common error I've hit is trying to rename a remote that does not exist. For example, running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rename nonexistent newname
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'll get an error like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; fatal: Could not rename config section 'remote.nonexistent' to 'remote.newname'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I avoid this by first checking the existing remotes with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That ensures I have not mistyped the remote name. Another common mistake is choosing a new name that already exists. If I run something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rename origin upstream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;when &lt;code&gt;upstream&lt;/code&gt; is already the name of another remote, I'll get an error saying the new name is not valid. If this happens, I should use a different new name, or first rename the conflicting remote. Being aware of such issues and checking remotes beforehand saves a lot of confusion and keeps the list of remotes tidy.&lt;/p&gt;
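&lt;p&gt;A minimal sketch of that two-step fix, in a throwaway repository (the remote names and URLs are illustrative): move the conflicting remote aside, then perform the intended rename.&lt;/p&gt;

```shell
# Throwaway repo with two remotes (illustrative names/URLs).
git init -q demo && cd demo
git remote add origin https://github.com/username/repository.git
git remote add upstream https://github.com/username/source.git

# Renaming origin -> upstream would collide with the existing remote,
# so rename the conflicting remote out of the way first.
git remote rename upstream upstream-old
git remote rename origin upstream

git remote   # now lists: upstream, upstream-old
```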

&lt;h2&gt;
  
  
  How to Delete a Remote Repository
&lt;/h2&gt;

&lt;p&gt;In this section, I will show how to delete a remote using the &lt;code&gt;git remote rm&lt;/code&gt; command followed by the name of the remote you want to remove. For instance, if you had a remote named &lt;code&gt;destination&lt;/code&gt;, you would run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rm destination
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will remove the specified remote from your configuration. After running it, it's crucial to ensure that the remote has been successfully removed. You can verify this by listing the remaining remotes with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should reflect only those remotes which are still configured. For instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;origin  https://github.com/username/repository.git (fetch)
&amp;gt; origin  https://github.com/username/repository.git (push)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This confirms that &lt;code&gt;destination&lt;/code&gt; has been removed.&lt;/p&gt;

&lt;p&gt;One easy mistake to make is trying to delete a remote that doesn't exist. If you run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rm nonexistent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you will get an error message such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; error: Could not remove config section 'remote.nonexistent'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To avoid this error, make sure you have correctly typed the name of the remote by checking with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are certain the name is spelled correctly, check the case as well, since remote names are case-sensitive. Note that this command only removes the reference from your local repository; it does not delete the repository itself from the hosting service. To keep your configuration clean, run &lt;code&gt;git remote -v&lt;/code&gt; regularly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Errors When Removing Remotes
&lt;/h2&gt;

&lt;p&gt;Removing a Git remote can also surface some confusing errors. Probably the most common is trying to delete a remote that does not exist. Suppose I run a command like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rm nonexistent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I will get an error message that says:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; error: Could not remove config section 'remote.nonexistent'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To resolve this, I check the list of existing remotes using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This confirms that I am using the correct remote names. The next common complication is case sensitivity. If I mistakenly use the wrong case, like &lt;code&gt;Destination&lt;/code&gt; instead of &lt;code&gt;destination&lt;/code&gt;, Git reports that the remote does not exist, and I can waste time chasing a nonexistent problem. Verifying remote names before deleting avoids such cases, and maintaining clear, meaningful names helps prevent confusion. Reviewing the list of remotes regularly keeps me oriented while making changes.&lt;/p&gt;
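&lt;p&gt;A quick sketch of the case-sensitivity trap in a throwaway repository (the remote name and URL are illustrative):&lt;/p&gt;

```shell
git init -q demo && cd demo
git remote add destination https://github.com/username/repository.git

# Wrong case: Git treats this as a different, nonexistent remote and errors out.
git remote rm Destination || echo "no such remote"

# Correct case succeeds.
git remote rm destination
```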

&lt;h2&gt;
  
  
  Best Practices for Managing Remotes
&lt;/h2&gt;

&lt;p&gt;Here I will summarize best practices for managing Git remotes. First and foremost, use consistent, meaningful naming conventions for your remotes; this avoids confusion and mistakes, particularly when collaborating in a team. For instance, you might name the original source repository &lt;code&gt;upstream&lt;/code&gt; and your forked repository &lt;code&gt;origin&lt;/code&gt;. This helps collaborators immediately know where each remote points.&lt;/p&gt;
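&lt;p&gt;Under this convention, a fork-based setup might be wired up like so (the URLs are placeholders):&lt;/p&gt;

```shell
git init -q demo && cd demo

# origin: your fork; upstream: the original source repository (placeholder URLs).
git remote add origin git@github.com:username/repository.git
git remote add upstream https://github.com/original-owner/repository.git

git remote -v   # shows fetch and push URLs for both remotes
```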

&lt;p&gt;Another key practice is to check your remotes regularly using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you a snapshot of all configured remotes, so you can keep track of your repository's connections. Finally, keeping your remotes organized matters when collaborating: with clear, up-to-date references, pulling, fetching, and pushing changes all go much more smoothly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Remote repositories form the foundation of working effectively with Git. Learning to change, rename, and delete these references keeps your repository setup tidy and free of clutter. Double-check your remote names for typos and keep the common mistakes above in mind so that collaboration goes off without a hitch. With that, handling your Git projects becomes much easier.&lt;/p&gt;




&lt;h2&gt;
  
  
  Monitoring GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;CICube&lt;/a&gt; is a GitHub Actions monitoring tool that provides you with detailed insights into your workflows to further optimize your CI/CD pipeline. With CICube, you will be able to track your workflow runs, understand where the bottlenecks are, and tease out the best from your build times. Go to &lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;cicube.io&lt;/a&gt; now and create a free account to better optimize your GitHub Actions workflows!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://s.cicube.io/try" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep2dsojs2m0j2fksabjd.png" alt="CICube GitHub Actions Workflow Duration Monitoring" width="800" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>security</category>
    </item>
  </channel>
</rss>
