<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Favour Lawrence</title>
    <description>The latest articles on DEV Community by Favour Lawrence (@favxlaw).</description>
    <link>https://dev.to/favxlaw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1673424%2Fd71a9f97-aced-4fcc-935d-7d949533ac50.jpg</url>
      <title>DEV Community: Favour Lawrence</title>
      <link>https://dev.to/favxlaw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/favxlaw"/>
    <language>en</language>
    <item>
      <title>Introduction to OpenTelemetry: A DevOps Beginner's Guide</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Sat, 03 May 2025 22:56:52 +0000</pubDate>
      <link>https://dev.to/favxlaw/introduction-to-opentelemetry-a-devops-beginners-guide-2ij6</link>
      <guid>https://dev.to/favxlaw/introduction-to-opentelemetry-a-devops-beginners-guide-2ij6</guid>
      <description>&lt;p&gt;As a DevOps engineer, understanding how your applications are performing in production is crucial to ensuring their reliability and stability. Whether you’re working with microservices, containers, or cloud-native environments, monitoring and observability play a vital role. This is where OpenTelemetry comes in.&lt;br&gt;
In this article, we’ll go over what OpenTelemetry is, why it’s important for your DevOps practices, and introduce some key concepts to help you get started on the path to improved observability.&lt;/p&gt;




&lt;h2&gt;
  
  
  How OpenTelemetry Works
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry is a collection of APIs, libraries, agents, and instrumentation tools designed to help you collect, process, and export telemetry data (traces, metrics, and logs) from your applications. Essentially, it offers a standardized way to capture this data, giving you a clear view of your system’s performance and behavior.&lt;/p&gt;

&lt;p&gt;This data is critical for understanding how your system is functioning, pinpointing issues, and optimizing overall performance. But how does OpenTelemetry gather all this useful information, and how can you use it to improve your system? Let’s break it down step-by-step.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Instrumentation: Making Your App Observable
&lt;/h3&gt;

&lt;p&gt;The first step is instrumentation. Basically, you’re wiring your application to be “observed.” This is where you decide how to generate signals (traces, metrics, and logs) from your code.&lt;/p&gt;

&lt;p&gt;Depending on your setup, you can take either of the approaches below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manually instrument: This means adding SDK calls in your code, like creating spans around functions or timing a DB query.&lt;/li&gt;
&lt;li&gt;Automatically instrument: OpenTelemetry offers auto-instrumentation for many popular frameworks (like Express for Node.js, Spring Boot for Java, Flask for Python). This way, telemetry is collected without you writing much extra code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once your application is instrumented, you’ll have the data needed to analyze performance, identify issues, and understand system behavior in real time.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Signal Generation: Traces, Metrics, and Logs
&lt;/h3&gt;

&lt;p&gt;Once your application is instrumented, it starts generating signals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traces track the path of a request as it moves through your system. They’re made up of spans, which capture operations like HTTP calls, database queries, or interactions with external services. Useful for pinpointing bottlenecks or latency issues.&lt;/li&gt;
&lt;li&gt;Metrics are numerical values you can aggregate, graph, and alert on, like request rates, response times, CPU and memory usage, error counts, etc. Ideal for monitoring trends and setting SLOs.&lt;/li&gt;
&lt;li&gt;Logs are timestamped text entries that capture specific events or states. They’re often used for debugging or to provide extra context alongside traces and metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These signals are structured in a consistent format, usually using the &lt;em&gt;OpenTelemetry Protocol (OTLP)&lt;/em&gt;, so they can be processed and exported to your observability backend without friction.&lt;/p&gt;
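&lt;p&gt;To make “structured in a consistent format” concrete, here is roughly what one span looks like. The field names below are simplified and only loosely modeled on OTLP; the real schema is protobuf-based:&lt;/p&gt;

```python
# Illustrative shape of a single span, loosely modeled on what OTLP carries.
span = {
    "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",  # shared by every span in one trace
    "span_id": "00f067aa0ba902b7",
    "parent_span_id": None,          # None means this is the root span
    "name": "GET /checkout",
    "start_unix_nano": 1_700_000_000_000_000_000,
    "end_unix_nano":   1_700_000_000_150_000_000,
    "attributes": {"http.method": "GET", "http.status_code": 200},
}

duration_ms = (span["end_unix_nano"] - span["start_unix_nano"]) / 1e6
print(duration_ms)  # 150.0
```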

&lt;h3&gt;
  
  
  3. Context Propagation: Keeping Traces Connected Across Services
&lt;/h3&gt;

&lt;p&gt;In a distributed system, especially with microservices, tracking a single request across multiple services can get messy fast. That’s where context propagation comes in.&lt;/p&gt;

&lt;p&gt;OpenTelemetry handles this by passing trace context (trace IDs and span IDs) along with each request. For HTTP, this happens via headers; for gRPC, it’s in the metadata. Each service picks up that context and attaches it to its own spans.&lt;/p&gt;

&lt;p&gt;The result: when you view a trace later, you get a complete picture of the request’s path through the system, without any gaps. This is what makes distributed tracing actually usable.&lt;/p&gt;
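&lt;p&gt;Over HTTP, that context travels in the W3C &lt;code&gt;traceparent&lt;/code&gt; header, whose format is &lt;code&gt;version-traceid-spanid-flags&lt;/code&gt; in lowercase hex. A minimal parser sketch shows what a downstream service actually receives:&lt;/p&gt;

```python
# Parse a W3C Trace Context `traceparent` header (the propagation format
# OpenTelemetry uses by default for HTTP).
def parse_traceparent(header):
    version, trace_id, span_id, flags = header.split("-")
    assert len(trace_id) == 32 and len(span_id) == 16
    return {"version": version, "trace_id": trace_id,
            "span_id": span_id, "sampled": flags == "01"}

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(ctx["trace_id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```

&lt;p&gt;The receiving service uses this &lt;code&gt;trace_id&lt;/code&gt; as the parent context for its own spans, which is how the pieces stitch into one end-to-end trace.&lt;/p&gt;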

&lt;h3&gt;
  
  
  4. The Collector: Centralized Ingestion and Processing
&lt;/h3&gt;

&lt;p&gt;When your app starts producing telemetry data, you need a way to handle it. That’s where the OpenTelemetry Collector comes in.&lt;br&gt;
Think of it as a central point that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ingests traces, metrics, and logs from instrumented services&lt;/li&gt;
&lt;li&gt;Processes the data: filtering, redacting, or enriching it with metadata&lt;/li&gt;
&lt;li&gt;Exports it to your monitoring or observability backend&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By separating data collection from where the data ends up, the Collector gives you a clean layer of control. You can standardize telemetry pipelines, apply consistent transformations, and avoid tight coupling with any single vendor.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Exporting: Routing Telemetry to the Right Tools
&lt;/h3&gt;

&lt;p&gt;After the Collector processes your telemetry, it sends it off to the systems you use to monitor and troubleshoot.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Metrics → Prometheus, Cloud Monitoring, etc.&lt;/li&gt;
&lt;li&gt;Traces → Jaeger, Zipkin, or other tracing backends&lt;/li&gt;
&lt;li&gt;Logs → Log aggregation tools or cloud logging platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;The best part? You don’t need to modify your application every time you want to switch or add a new backend. Just update the Collector’s config and you're good to go.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
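&lt;p&gt;For reference, a minimal Collector config wires those three stages (receive, process, export) together. The endpoints and backend below are placeholders, not values you should copy verbatim:&lt;/p&gt;

```yaml
# Minimal, illustrative OpenTelemetry Collector config.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}                       # batch telemetry before exporting

exporters:
  otlp:
    endpoint: my-backend:4317     # hypothetical backend address

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

&lt;p&gt;Swapping backends means changing the &lt;code&gt;exporters&lt;/code&gt; section, and nothing in your application.&lt;/p&gt;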




&lt;h2&gt;
  
  
  What Makes OpenTelemetry Different?
&lt;/h2&gt;

&lt;p&gt;So maybe you’re thinking, “Why bother with OpenTelemetry? I already use Prometheus for metrics and Jaeger for tracing. Isn’t that enough?”&lt;/p&gt;

&lt;p&gt;Fair question. But here’s the thing: OpenTelemetry isn’t here to replace those tools; it’s here to bring them together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Unified Telemetry Framework:&lt;/strong&gt; Instead of juggling separate tooling for traces, metrics, and logs, OpenTelemetry gives you a single, consistent way to collect all three. Same concepts, same configuration patterns, less cognitive load for your team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Vendor-Neutral by Design:&lt;/strong&gt; &lt;br&gt;
You can pipe your data to Prometheus today, switch to Datadog next week, or go fully open-source with Jaeger and Loki, and your application code stays exactly the same. No proprietary agents. No rewriting instrumentation. You own the pipeline, not the vendor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Pluggable and Extensible Pipeline:&lt;/strong&gt; &lt;br&gt;
The Collector acts as a configurable processing layer. Need to drop certain spans, mask sensitive fields, or add metadata before sending data out? Just plug in the processors you need; no custom tooling required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Community-Driven and CNCF-Backed&lt;/strong&gt;&lt;br&gt;
OpenTelemetry isn’t a startup’s side project. It’s part of the Cloud Native Computing Foundation (same folks behind Kubernetes), and it’s backed by a huge community of engineers and observability vendors. That means fast updates, strong documentation, and wide adoption.&lt;/p&gt;

&lt;p&gt;In summary, OpenTelemetry doesn’t replace Prometheus or Jaeger; it complements and unifies them. It gives you a consistent, flexible foundation for observability, no matter what your stack looks like.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Thanks for reading. Let me know in the comments if there’s a topic you’d like me to cover next. I know it’s been a while since I last posted. I’m still working on those practical, hands-on articles I promised, so keep an eye out. They’re on the way.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>beginners</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Squash, Rebase, Merge: Keeping Your CI/CD Pipelines Clean and Efficient 🚀</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Wed, 26 Mar 2025 05:05:28 +0000</pubDate>
      <link>https://dev.to/favxlaw/squash-rebase-merge-keeping-your-cicd-pipelines-clean-and-efficient-8cc</link>
      <guid>https://dev.to/favxlaw/squash-rebase-merge-keeping-your-cicd-pipelines-clean-and-efficient-8cc</guid>
      <description>&lt;p&gt;In DevOps, efficiency is everything. A messy Git history slows down pipelines, causes unnecessary conflicts, and makes debugging harder.&lt;/p&gt;

&lt;p&gt;When Git workflows are unmanaged, CI/CD pipelines can stall due to conflicting merge commits, and developers waste time digging through cluttered commit histories.&lt;/p&gt;

&lt;p&gt;But with a clean Git workflow:&lt;br&gt;
✅ Faster builds – No extra history slowing things down&lt;br&gt;
✅ Fewer merge conflicts – Smoother collaboration, less frustration&lt;br&gt;
✅ Clearer logs – Easier debugging and rollbacks&lt;/p&gt;

&lt;p&gt;A structured Git history isn’t just about keeping things tidy, it directly improves pipeline speed, code quality, and developer productivity. Using squashing, rebasing, and merging correctly keeps your CI/CD pipeline fast, reliable, and hassle-free.&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Squashing, Rebasing, and Merging – The Right Tool for the Job&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Git is a powerful version control system, but &lt;strong&gt;how you manage your commits can either streamline or slow down your CI/CD pipeline&lt;/strong&gt;. A cluttered history with unnecessary commits leads to:  &lt;/p&gt;

&lt;p&gt;❌ Slower builds due to excessive commit processing&lt;br&gt;&lt;br&gt;
❌ Merge conflicts that could have been avoided&lt;br&gt;&lt;br&gt;
❌ Hard-to-follow commit logs, making debugging difficult  &lt;/p&gt;

&lt;p&gt;To keep your workflow clean and efficient, you need to &lt;strong&gt;use the right Git strategy at the right time&lt;/strong&gt;. Squashing, rebasing, and merging each serve a unique purpose. Here’s how they work and when to use them.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;When to Squash: Keeping PRs Clean&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What is Squashing?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Squashing combines multiple commits into a single commit. This is useful when a pull request (PR) contains many small, incremental commits that don’t need to be preserved individually. Instead of polluting the Git history with minor fixes and adjustments, you &lt;strong&gt;merge everything into one meaningful commit&lt;/strong&gt; before merging into the main branch.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Squash?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Keeps commit history clean&lt;/strong&gt; – Instead of “Fixed bug,” “Fixed typo,” and “Final final fix,” you get a single commit with a meaningful message.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Reduces clutter&lt;/strong&gt; – A clean commit history makes it easier to review and debug.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Best for feature branches&lt;/strong&gt; – Squash commits before merging into &lt;code&gt;main&lt;/code&gt; or &lt;code&gt;develop&lt;/code&gt; to keep the history readable.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Scenario&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You’re working on a new &lt;strong&gt;login feature&lt;/strong&gt;. During development, you make multiple commits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit 1: Added login form  
commit 2: Fixed validation error  
commit 3: Adjusted button alignment  
commit 4: Updated error messages  
commit 5: Finalized login feature  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of merging all five commits into &lt;code&gt;main&lt;/code&gt;, squash them into one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit 1: Implemented login feature with validation and UI fixes  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  &lt;strong&gt;How to Squash Commits&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;&lt;strong&gt;Interactive Rebase (Local Changes)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git rebase &lt;span class="nt"&gt;-i&lt;/span&gt; HEAD~n  &lt;span class="c"&gt;# 'n' is the number of commits to squash&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens an interactive editor where you replace &lt;code&gt;"pick"&lt;/code&gt; with &lt;code&gt;"squash"&lt;/code&gt; for the commits you want to combine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Squash on GitHub (PR Merging)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When merging a PR, select &lt;strong&gt;"Squash &amp;amp; Merge"&lt;/strong&gt; to combine all commits into one before merging.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;When to Rebase: Keeping History Linear&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What is Rebasing?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Rebasing &lt;strong&gt;rewrites history&lt;/strong&gt; by moving your branch’s commits on top of the latest &lt;code&gt;main&lt;/code&gt; branch, as if your feature branch was built on the most up-to-date code. It prevents unnecessary merge commits and maintains a &lt;strong&gt;linear&lt;/strong&gt; commit history.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Rebase?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Keeps history clean&lt;/strong&gt; – Avoids merge commits like &lt;code&gt;Merge branch 'main' into feature-xyz&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Ensures a smooth merge&lt;/strong&gt; – Applying your changes on top of the latest &lt;code&gt;main&lt;/code&gt; helps prevent conflicts later.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Best for feature branches&lt;/strong&gt; – Before merging a feature, rebase it onto &lt;code&gt;main&lt;/code&gt; to ensure it integrates smoothly.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Scenario&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You started working on a &lt;strong&gt;new checkout feature&lt;/strong&gt; based on an older version of &lt;code&gt;main&lt;/code&gt;. Meanwhile, other developers pushed updates to &lt;code&gt;main&lt;/code&gt;. Now, your branch is behind:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* (main) commit A - Updated payment logic  
* (main) commit B - Fixed checkout bug  
|
|--- (feature-checkout) commit C - Added checkout form  
|--- (feature-checkout) commit D - Implemented discount logic  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of merging &lt;code&gt;main&lt;/code&gt; into your branch (which would create an unnecessary merge commit), &lt;strong&gt;rebase it&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git fetch origin
git rebase origin/main  &lt;span class="c"&gt;# Moves your commits on top of the latest main&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After rebasing, your history is &lt;strong&gt;clean and linear&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* (main) commit A - Updated payment logic  
* (main) commit B - Fixed checkout bug  
* (feature-checkout) commit C - Added checkout form  
* (feature-checkout) commit D - Implemented discount logic  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Handling Conflicts During Rebase&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If there are conflicts, Git will stop and ask you to resolve them. Once fixed, continue the rebase:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;
git rebase &lt;span class="nt"&gt;--continue&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;When to Merge: Preserving Full History&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What is Merging?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Merging combines one branch into another, preserving all commits and commit messages &lt;strong&gt;exactly as they were made&lt;/strong&gt;. This keeps a &lt;strong&gt;detailed commit history&lt;/strong&gt; but may introduce merge commits that clutter the log.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Merge?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Preserves full commit history&lt;/strong&gt; – Useful for tracking all incremental changes.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Best for team collaborations&lt;/strong&gt; – When multiple developers contribute to a feature branch, merging keeps individual contributions visible.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Used in Gitflow workflows&lt;/strong&gt; – Commonly used for merging &lt;code&gt;develop → main&lt;/code&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Scenario&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Your team worked on a &lt;strong&gt;new analytics dashboard&lt;/strong&gt; in a shared branch. The commit history looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit X: Setup analytics API  
commit Y: Implemented dashboard UI  
commit Z: Added data visualization  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since multiple developers contributed, merging into &lt;code&gt;main&lt;/code&gt; without squashing retains authorship and commit details.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Merge Properly&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Basic Merge (Fast-Forward)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git merge feature-branch  &lt;span class="c"&gt;# Merges feature branch into current branch&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your branch is ahead of &lt;code&gt;main&lt;/code&gt;, Git will &lt;strong&gt;fast-forward&lt;/strong&gt; (move the branch pointer) without creating a merge commit.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preserving a Merge Commit&lt;/strong&gt;&lt;br&gt;
To keep a merge commit for tracking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git merge &lt;span class="nt"&gt;--no-ff&lt;/span&gt; feature-branch  &lt;span class="c"&gt;# Creates a merge commit even if a fast-forward is possible&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;Which One Should You Use?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;✔️ &lt;strong&gt;Squash&lt;/strong&gt; → When cleaning up a messy PR with too many small commits.&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;Rebase&lt;/strong&gt; → When syncing with &lt;code&gt;main&lt;/code&gt; before merging to avoid extra merge commits.&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;Merge&lt;/strong&gt; → When you want to &lt;strong&gt;preserve full commit history&lt;/strong&gt;, especially in team collaborations.  &lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Git Strategies for DevOps Teams&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Choosing the right Git strategy is essential for maintaining an efficient CI/CD workflow. A poorly structured process can cause bottlenecks, increase merge conflicts, and slow down deployments. On the other hand, a well-defined strategy keeps the development pipeline smooth, ensuring faster releases and better collaboration.  &lt;/p&gt;

&lt;p&gt;There are two primary approaches teams use: &lt;strong&gt;Feature Branch Workflow&lt;/strong&gt; and &lt;strong&gt;Trunk-Based Development (TBD)&lt;/strong&gt;. Each has its place, and automation plays a key role in enforcing these workflows.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature Branch Workflow&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;This is the traditional approach where developers create separate branches for each feature, bug fix, or enhancement. The branch remains isolated from the main branch until the work is complete and ready for review. Merging happens through &lt;strong&gt;pull requests (PRs)&lt;/strong&gt;, allowing for code reviews before the changes are integrated.  &lt;/p&gt;

&lt;p&gt;This workflow is often used in structured development models like Gitflow, where teams work on long-running feature branches before merging into a staging or main branch. It provides stability and makes it easier to review code, but if branches are kept open for too long, they can diverge significantly from the main branch, leading to complex merge conflicts.  &lt;/p&gt;

&lt;p&gt;To prevent this, developers should &lt;strong&gt;rebase frequently&lt;/strong&gt;, ensuring their feature branch stays in sync with the latest changes from the main branch. Automated tests and linters should be triggered on every PR to catch potential issues early. Before merging, it’s good practice to &lt;strong&gt;squash commits&lt;/strong&gt; into a single, meaningful commit to keep the Git history clean.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trunk-Based Development (TBD)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Unlike the feature branch workflow, Trunk-Based Development encourages &lt;strong&gt;short-lived branches&lt;/strong&gt; or even &lt;strong&gt;direct commits to main&lt;/strong&gt;. Instead of working in isolation for extended periods, developers integrate changes continuously—sometimes multiple times a day. This approach ensures that the main branch is always in a deployable state and minimizes the complexity of long-lived branches.  &lt;/p&gt;

&lt;p&gt;Frequent integration reduces merge conflicts and makes debugging easier since each change set is small and easier to track. However, this strategy requires &lt;strong&gt;strict CI/CD enforcement&lt;/strong&gt; to prevent unstable code from being deployed. Automated testing and static analysis tools must be in place to verify every commit before it reaches production.  &lt;/p&gt;

&lt;p&gt;Since features may be merged incrementally, &lt;strong&gt;feature flags&lt;/strong&gt; are often used to hide unfinished work while allowing continuous integration. This allows developers to merge work early without exposing incomplete features to end users.  &lt;/p&gt;
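&lt;p&gt;At its simplest, a feature flag is just a config-driven conditional. This toy sketch (real teams often use a flag service or environment variables instead of a hardcoded dict) shows how unfinished work can live on &lt;code&gt;main&lt;/code&gt; while staying dark in production:&lt;/p&gt;

```python
# Toy feature-flag check: code for the new flow is merged early,
# but users only see it once the flag is flipped on.
FLAGS = {"new_checkout": False}   # disabled in production for now

def checkout(cart, flags=FLAGS):
    if flags.get("new_checkout"):
        return "new checkout flow"
    return "legacy checkout flow"

print(checkout([]))  # legacy checkout flow
```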

&lt;p&gt;Rolling back changes in Trunk-Based Development should be handled with &lt;strong&gt;&lt;code&gt;git revert&lt;/code&gt;&lt;/strong&gt; instead of &lt;code&gt;git reset&lt;/code&gt; to maintain a clear history of what was changed and why.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Choosing the Right Strategy&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The best approach depends on the team’s structure and release process. &lt;strong&gt;Feature Branch Workflow&lt;/strong&gt; works well for teams that need structured releases, code reviews, and stability. &lt;strong&gt;Trunk-Based Development&lt;/strong&gt; is better suited for high-velocity DevOps teams where frequent deployments and quick iteration cycles are necessary.  &lt;/p&gt;

&lt;p&gt;Some teams adopt a hybrid model—using feature branches for significant changes while following Trunk-Based Development for smaller, incremental updates. Regardless of the approach, automation is key to maintaining a clean workflow.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Enforcing Git Workflows with CI/CD Automation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Manual enforcement of Git best practices is not scalable. Teams must automate workflow rules to ensure consistency and reduce human errors.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Branch protection rules&lt;/strong&gt; should prevent direct commits to &lt;code&gt;main&lt;/code&gt; in feature branch workflows.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-commit hooks&lt;/strong&gt; can enforce commit message formats and prevent invalid commits.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PR approvals and CI/CD checks&lt;/strong&gt; should be mandatory before merging changes.
&lt;/li&gt;
&lt;/ul&gt;
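&lt;p&gt;Git’s client-side hooks make the commit-message rule concrete. For example, a &lt;code&gt;commit-msg&lt;/code&gt; hook could validate the format before a commit is created; the pattern below is just an example convention, not a standard:&lt;/p&gt;

```shell
# Hypothetical commit-msg hook check: accept only "type: summary" messages.
check_commit_msg() {
  printf '%s' "$1" | grep -qE '^(feat|fix|docs|chore|refactor|test)(\(.+\))?: .+'
}

check_commit_msg "feat: add login form"
echo "result=$?"   # 0 means the message passed the check
```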
&lt;h4&gt;
  
  
  &lt;strong&gt;Automating Git Workflow Enforcement with GitHub Actions&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;GitHub Actions can be used to enforce rules such as requiring squashed commits and ensuring branches are rebased before merging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enforce-workflow&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure commits are squashed&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git log --format="%s" origin/main..HEAD | wc -l | grep -q '^1$' || exit &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prevent merging without rebase&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git merge-base --is-ancestor main HEAD || exit &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This automation ensures that multiple commits are squashed before merging and that branches are properly rebased.  &lt;/p&gt;

&lt;p&gt;Beyond enforcing Git hygiene, integrating &lt;strong&gt;automated code quality checks&lt;/strong&gt; with tools like &lt;strong&gt;SonarQube, ESLint, or Prettier&lt;/strong&gt; ensures that coding standards are followed. Requiring all tests to pass before merging prevents broken changes from entering production.&lt;/p&gt;




&lt;p&gt;A well-structured Git workflow is essential for maintaining clean CI/CD pipelines, reducing conflicts, and ensuring efficient collaboration. Whether your team follows a &lt;strong&gt;Feature Branch Workflow&lt;/strong&gt; for stability or &lt;strong&gt;Trunk-Based Development&lt;/strong&gt; for faster iterations, enforcing best practices through automation is key to keeping development smooth and predictable.&lt;/p&gt;

&lt;p&gt;By using squashing, rebasing, and merging correctly, along with CI/CD automation, teams can improve code quality, streamline deployments, and eliminate unnecessary complexity in their repositories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thanks for reading! If you found this helpful, follow me for more DevOps concepts and best practices. If there’s a topic you’d like me to break down next, let me know! 🚀&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>cicd</category>
      <category>programming</category>
    </item>
    <item>
      <title>ELK Stack Explained: How Elasticsearch, Logstash &amp; Kibana Work Together for Real-Time Data Insights.</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Mon, 24 Mar 2025 11:38:17 +0000</pubDate>
      <link>https://dev.to/favxlaw/elk-stack-explained-how-elasticsearch-logstash-kibana-work-together-for-real-time-data-insights-2dfa</link>
      <guid>https://dev.to/favxlaw/elk-stack-explained-how-elasticsearch-logstash-kibana-work-together-for-real-time-data-insights-2dfa</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to ELK Stack
&lt;/h2&gt;

&lt;p&gt;Logs are everywhere: every app, server, and system generates them. But when something goes wrong, digging through endless log files to find the issue can be overwhelming. That’s where ELK comes in, turning raw log data into clear, searchable, and visual insights.&lt;/p&gt;

&lt;p&gt;What is the ELK Stack?&lt;br&gt;
The ELK Stack is an open-source log management and data analytics tool made up of:&lt;/p&gt;

&lt;p&gt;Elasticsearch – A search engine that stores and retrieves log data quickly.&lt;/p&gt;

&lt;p&gt;Logstash – A tool that collects, processes, and forwards logs.&lt;/p&gt;

&lt;p&gt;Kibana – A dashboard for visualizing and analyzing log data.&lt;/p&gt;

&lt;p&gt;Together, these tools make it easy to collect, search, and analyze logs in real time, helping teams troubleshoot issues, monitor systems, and make data-driven decisions.&lt;/p&gt;

&lt;p&gt;Why is ELK Important?&lt;br&gt;
Modern applications generate tons of log data, and manually searching through it isn’t practical. ELK helps by:&lt;br&gt;
✔️ Finding issues fast – Instantly search massive log files.&lt;br&gt;
✔️ Handling large data – Works across multiple servers and systems.&lt;br&gt;
✔️ Turning data into insights – Creates real-time dashboards for monitoring and decision-making.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Elasticsearch vs. Traditional RDBMS: A Developer’s Perspective&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you're used to working with relational databases like &lt;strong&gt;MySQL&lt;/strong&gt; or &lt;strong&gt;PostgreSQL&lt;/strong&gt;, switching to &lt;strong&gt;Elasticsearch&lt;/strong&gt; might feel like stepping into a whole new world. But Elasticsearch is just another way to &lt;strong&gt;store, retrieve, and search data&lt;/strong&gt;—the difference is in how it’s structured and optimized.  &lt;/p&gt;

&lt;p&gt;Instead of &lt;strong&gt;tables and rows&lt;/strong&gt;, Elasticsearch works with &lt;strong&gt;documents and indices&lt;/strong&gt;. Let’s break it down using concepts you already know.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Thinking in Tables vs. Thinking in Documents&lt;/strong&gt;
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;RDBMS Concept&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Elasticsearch Equivalent&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Database&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Index&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Table&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Type&lt;/strong&gt; (deprecated in newer versions)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Row&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Document&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Column&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Field&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Schema&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Mapping&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In relational databases, data is neatly organized into &lt;strong&gt;tables&lt;/strong&gt; with predefined schemas—every row must follow a fixed structure.  &lt;/p&gt;

&lt;p&gt;Elasticsearch, on the other hand, is &lt;strong&gt;schema-less (to an extent)&lt;/strong&gt;. Instead of rows, it stores &lt;strong&gt;JSON documents&lt;/strong&gt; inside an &lt;strong&gt;index&lt;/strong&gt; (similar to a table). Each document can have &lt;strong&gt;a flexible structure&lt;/strong&gt;, making it great for &lt;strong&gt;semi-structured or dynamic data&lt;/strong&gt;.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;How Data is Stored: Rows vs. JSON Documents&lt;/strong&gt;
&lt;/h4&gt;
&lt;h5&gt;
  
  
  🔹 &lt;strong&gt;RDBMS Example (Users Table in MySQL)&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Here’s how you’d define a simple &lt;code&gt;users&lt;/code&gt; table in MySQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  🔹 &lt;strong&gt;Elasticsearch Equivalent (JSON Document in an Index)&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;In Elasticsearch, each user record is stored as a &lt;strong&gt;JSON document&lt;/strong&gt; inside an index:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;json
{
    "id": 1,
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "age": 30
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of &lt;strong&gt;inserting data as rows&lt;/strong&gt;, Elasticsearch stores each entry as a &lt;strong&gt;self-contained JSON document&lt;/strong&gt;. This structure allows for &lt;strong&gt;fast searching and flexible querying&lt;/strong&gt; without requiring rigid table schemas.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Querying: SQL vs. Elasticsearch Query DSL&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;One of the biggest differences between RDBMS and Elasticsearch is &lt;strong&gt;how you search for data&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Traditional databases use &lt;strong&gt;SQL&lt;/strong&gt;, while Elasticsearch has its own &lt;strong&gt;Query DSL (Domain-Specific Language)&lt;/strong&gt;, which is JSON-based.  &lt;/p&gt;

&lt;h5&gt;
  
  
  🔹 &lt;strong&gt;Finding all users aged 30 in MySQL (SQL Query)&lt;/strong&gt;
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  🔹 &lt;strong&gt;Finding all users aged 30 in Elasticsearch (Query DSL)&lt;/strong&gt;
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;json
{
    "query": {
        "match": {
            "age": 30
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
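&lt;p&gt;To build intuition for what that match query selects, here’s a toy Python sketch (purely illustrative, not how Elasticsearch works internally) that filters JSON-like documents the way both queries above pick out users aged 30:&lt;/p&gt;

```python
# Toy sketch: how a SQL WHERE / Elasticsearch match on an exact value behaves.
# Illustrative only -- real Elasticsearch analyzes text and scores results.

users = [
    {"id": 1, "name": "Jane Doe", "email": "jane.doe@example.com", "age": 30},
    {"id": 2, "name": "John Roe", "email": "john.roe@example.com", "age": 25},
    {"id": 3, "name": "Amy Poe", "email": "amy.poe@example.com", "age": 30},
]

def match(docs, field, value):
    """Return every document whose field equals the given value."""
    return [doc for doc in docs if doc.get(field) == value]

aged_30 = match(users, "age", 30)
print([doc["id"] for doc in aged_30])  # -> [1, 3]
```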






&lt;h3&gt;
  
  
  Elasticsearch
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Fast Search &amp;amp; Distributed Indexing&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;At the core of the ELK Stack is &lt;strong&gt;Elasticsearch&lt;/strong&gt;, a &lt;strong&gt;highly scalable, distributed search engine&lt;/strong&gt; that enables &lt;strong&gt;rapid data retrieval&lt;/strong&gt;. Unlike traditional databases that are optimized for structured data and transactions, Elasticsearch is designed for:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full-text search&lt;/strong&gt; – Finds relevant results instantly, even in massive datasets.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time indexing&lt;/strong&gt; – New data becomes searchable almost immediately.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; – Distributes data across multiple nodes to handle petabytes of information.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s built on &lt;strong&gt;Apache Lucene&lt;/strong&gt;, a powerful search library, and uses an &lt;strong&gt;inverted index&lt;/strong&gt;, a structure specifically optimized for search queries.  &lt;/p&gt;
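&lt;p&gt;The inverted index idea can be sketched in a few lines of Python: instead of scanning every document for a term, we map each term to the set of documents containing it, so a search becomes a dictionary lookup (a big simplification of Lucene’s actual on-disk format):&lt;/p&gt;

```python
from collections import defaultdict

# Build a tiny inverted index: term -> set of document IDs containing it.
docs = {
    1: "database connection failed",
    2: "user login succeeded",
    3: "database query slow",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

# Search is now a dictionary lookup instead of a scan over every document.
print(sorted(index["database"]))  # -> [1, 3]
```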

&lt;h4&gt;
  
  
  &lt;strong&gt;How Elasticsearch Works&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Indices &amp;amp; Documents – The Building Blocks&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Elasticsearch doesn’t use tables and rows like a relational database. Instead, it stores data as &lt;strong&gt;JSON documents&lt;/strong&gt; inside an &lt;strong&gt;index&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Think of an index like a database&lt;/strong&gt;, and each document inside it as a record. Unlike relational databases, these documents can have different structures—offering &lt;strong&gt;flexibility&lt;/strong&gt; for handling dynamic or semi-structured data.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2️⃣ Shards &amp;amp; Replicas – How Elasticsearch Scales&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Handling massive amounts of data requires scalability, and that’s where &lt;strong&gt;sharding&lt;/strong&gt; and &lt;strong&gt;replication&lt;/strong&gt; come in.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shards&lt;/strong&gt;: Elasticsearch &lt;strong&gt;splits an index into smaller pieces&lt;/strong&gt; (shards) to distribute data across multiple nodes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replicas&lt;/strong&gt;: Each shard can have &lt;strong&gt;replicas&lt;/strong&gt;—copies stored across different nodes to improve &lt;strong&gt;redundancy and performance&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture makes Elasticsearch both &lt;strong&gt;fault-tolerant&lt;/strong&gt; and &lt;strong&gt;lightning fast&lt;/strong&gt;, even when dealing with billions of records.  &lt;/p&gt;
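&lt;p&gt;Conceptually, Elasticsearch picks the shard for a document by hashing a routing key (by default the document ID) modulo the number of primary shards. A simplified Python sketch of that routing idea (real Elasticsearch uses murmur3 hashing, not MD5):&lt;/p&gt;

```python
import hashlib

NUM_SHARDS = 3

def shard_for(doc_id: str, num_shards: int = NUM_SHARDS) -> int:
    """Route a document to a shard by hashing its ID (simplified; ES uses murmur3)."""
    # Use a stable hash so routing is deterministic across runs and nodes.
    digest = hashlib.md5(doc_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

for doc_id in ["user-1", "user-2", "user-3"]:
    print(doc_id, "-> shard", shard_for(doc_id))
```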
&lt;h4&gt;
  
  
  &lt;strong&gt;Why Elasticsearch is Powerful for Log Analysis&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Imagine you’re managing a cloud-based web application that logs thousands of events every second. A typical log entry might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-03-24T12:34:56Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Database connection failed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"authentication"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1234&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once indexed in Elasticsearch, you can &lt;strong&gt;instantly&lt;/strong&gt; search for all error messages related to the authentication service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"query"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"match"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"authentication"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes Elasticsearch incredibly powerful for &lt;strong&gt;log analysis, large-scale search applications, and real-time data insights&lt;/strong&gt;.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Logstash – The Data Pipeline&lt;/strong&gt;
&lt;/h4&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Collecting, Transforming, and Shipping Data&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;While Elasticsearch is great for &lt;strong&gt;searching and analyzing data&lt;/strong&gt;, it doesn’t &lt;strong&gt;collect or process&lt;/strong&gt; data on its own. That’s where &lt;strong&gt;Logstash&lt;/strong&gt; comes in.  &lt;/p&gt;

&lt;p&gt;Logstash acts as a &lt;strong&gt;data pipeline&lt;/strong&gt; that:  &lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Collects data&lt;/strong&gt; from multiple sources (logs, databases, cloud services).&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Transforms it&lt;/strong&gt; into a structured format (parsing, filtering, masking sensitive data).&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Sends it&lt;/strong&gt; to Elasticsearch (or other destinations like Kafka).  &lt;/p&gt;
&lt;h5&gt;
  
  
  &lt;strong&gt;How Logstash Works&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Logstash follows a simple &lt;strong&gt;ETL (Extract, Transform, Load)&lt;/strong&gt; workflow.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Input – Collecting Data&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Logstash gathers logs from multiple sources:&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Files&lt;/strong&gt; – System logs, application logs, web server logs.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Databases&lt;/strong&gt; – MySQL, PostgreSQL, MongoDB.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Cloud Services&lt;/strong&gt; – AWS CloudWatch, Google Cloud Logs.  &lt;/p&gt;

&lt;p&gt;Example: &lt;strong&gt;Collecting Logs from a File&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;input {&lt;/span&gt;
  &lt;span class="s"&gt;file {&lt;/span&gt;
    &lt;span class="s"&gt;path =&amp;gt; "/var/log/syslog"&lt;/span&gt;
    &lt;span class="s"&gt;start_position =&amp;gt; "beginning"&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2️⃣ Filter – Transforming &amp;amp; Enriching Data&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Before sending data to Elasticsearch, Logstash can &lt;strong&gt;clean, modify, and enrich logs&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Parse JSON logs&lt;/strong&gt; for better searchability.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Mask sensitive data&lt;/strong&gt; like passwords.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Geo-location enrichment&lt;/strong&gt; (find a user’s country based on IP).  &lt;/p&gt;

&lt;p&gt;Example: &lt;strong&gt;Masking Passwords in Logs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;filter {&lt;/span&gt;
  &lt;span class="s"&gt;json {&lt;/span&gt;
    &lt;span class="s"&gt;source =&amp;gt; "message"&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
  &lt;span class="s"&gt;mutate {&lt;/span&gt;
    &lt;span class="s"&gt;gsub =&amp;gt; ["password", ".*", "[REDACTED]"]&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
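&lt;p&gt;Conceptually, that filter parses the JSON log line and blanks out the sensitive field. A small Python sketch of the same idea (illustrative only, not Logstash’s implementation):&lt;/p&gt;

```python
import json

def redact(raw_line: str, field: str = "password") -> dict:
    """Parse a JSON log line and mask a sensitive field, like the mutate filter above."""
    event = json.loads(raw_line)
    if field in event:
        event[field] = "[REDACTED]"
    return event

line = '{"user": "jane", "password": "hunter2", "action": "login"}'
print(redact(line))  # -> {'user': 'jane', 'password': '[REDACTED]', 'action': 'login'}
```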



&lt;p&gt;&lt;strong&gt;3️⃣ Output – Sending Data to Elasticsearch&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
After processing, Logstash ships logs to Elasticsearch.  &lt;/p&gt;

&lt;p&gt;Example: &lt;strong&gt;Indexing logs in Elasticsearch&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;output {&lt;/span&gt;
  &lt;span class="s"&gt;elasticsearch {&lt;/span&gt;
    &lt;span class="s"&gt;hosts =&amp;gt; ["http://localhost:9200"]&lt;/span&gt;
    &lt;span class="s"&gt;index =&amp;gt; "logs-%{+YYYY.MM.dd}"&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;strong&gt;automates log ingestion&lt;/strong&gt; and ensures that logs are structured, searchable, and ready for analysis in Kibana.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Kibana – Bringing Data to Life&lt;/strong&gt;
&lt;/h4&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Dashboards, Analytics, and Insights&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Now that our logs are in Elasticsearch, how do we make sense of all this data? &lt;strong&gt;Kibana&lt;/strong&gt; makes it &lt;strong&gt;visual and interactive&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Kibana is a &lt;strong&gt;dashboard and analytics tool&lt;/strong&gt; that allows you to:&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Monitor logs and metrics in real-time.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Run searches and filter data with ease.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Set up alerts for anomalies or critical issues.&lt;/strong&gt;  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Key Features of Kibana&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Dashboards &amp;amp; Visualizations&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Kibana lets you build &lt;strong&gt;custom dashboards&lt;/strong&gt; using bar charts, line graphs, pie charts, and heatmaps.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See &lt;strong&gt;server performance trends&lt;/strong&gt; over time.
&lt;/li&gt;
&lt;li&gt;Track &lt;strong&gt;error rates&lt;/strong&gt; in real time.
&lt;/li&gt;
&lt;li&gt;Visualize &lt;strong&gt;traffic spikes&lt;/strong&gt; on your website.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2️⃣ Discover &amp;amp; Search&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Kibana’s search interface helps drill down into logs.  &lt;/p&gt;

&lt;p&gt;For example, you can filter logs to show only:&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;ERROR messages from a specific service&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;API requests made by a certain user&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;Security alerts from a particular IP range&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3️⃣ Spotting Patterns &amp;amp; Trends with Kibana&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Kibana makes it easy to spot patterns in your data over time. With tools like Timelion and Lens, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See sudden jumps in website visitors and understand why.&lt;/li&gt;
&lt;li&gt;Connect system crashes to specific events to troubleshoot faster.&lt;/li&gt;
&lt;li&gt;Identify trends in user activity to improve your services.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The ELK Stack (Elasticsearch, Logstash, and Kibana) turns raw logs into searchable, visual insights for better monitoring and decision-making. Elasticsearch handles fast searches, Logstash collects and processes data, and Kibana brings it to life with dashboards. Together, they power real-time analytics for DevOps, security, and business intelligence.&lt;/p&gt;

&lt;p&gt;Next, we’ll deploy ELK on AWS, covering setup, scaling, and optimization. Stay tuned for the hands-on guide! 🚀&lt;/p&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>database</category>
      <category>aws</category>
    </item>
    <item>
      <title>🚀 InfluxDB Architecture: A Beginner’s Guide for DevOps Engineers</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Sun, 02 Mar 2025 23:24:59 +0000</pubDate>
      <link>https://dev.to/favxlaw/influxdb-architecture-a-beginners-guide-for-devops-engineers-39n5</link>
      <guid>https://dev.to/favxlaw/influxdb-architecture-a-beginners-guide-for-devops-engineers-39n5</guid>
      <description>&lt;p&gt;Shoutout to &lt;a class="mentioned-user" href="https://dev.to/madhurima_rawat"&gt;@madhurima_rawat&lt;/a&gt;  for requesting a deep dive into InfluxDB architecture after reading my Prometheus and Grafana breakdown! 🚀&lt;/p&gt;

&lt;p&gt;If you're a DevOps engineer trying to understand how InfluxDB stacks up against Prometheus, you’re in the right place. We’ll keep it simple, comparing their architectures side by side, so you know when to use which tool.&lt;/p&gt;

&lt;p&gt;Also, if there's a DevOps concept you'd love me to explain next, drop a comment. I just might write about it next! 😉&lt;/p&gt;

&lt;p&gt;Now, let’s dive in! 🔥&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Core of InfluxDB Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;InfluxDB is a high-performance time-series database built to handle massive amounts of timestamped data efficiently. It is widely used for monitoring, real-time analytics, and IoT applications.&lt;/p&gt;

&lt;p&gt;InfluxDB is designed around four key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage Engine (TSM &amp;amp; TSI)&lt;/li&gt;
&lt;li&gt;Data Ingestion &amp;amp; Retention Policies&lt;/li&gt;
&lt;li&gt;Query Engine (InfluxQL &amp;amp; Flux)&lt;/li&gt;
&lt;li&gt;High Availability &amp;amp; Scaling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s break these down one by one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage Engine: TSM &amp;amp; TSI
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Time-Structured Merge Tree (TSM) – The Heart of InfluxDB Storage
&lt;/h4&gt;

&lt;p&gt;InfluxDB uses a TSM (Time-Structured Merge Tree) engine, which is optimized for:&lt;br&gt;
✅ Efficient writes – It writes new data to an in-memory cache and periodically flushes it to disk in compact, immutable files.&lt;br&gt;
✅ High compression – TSM stores data in compressed segments, reducing storage costs.&lt;br&gt;
✅ Fast reads – TSM is optimized for quick lookups, even in large datasets.&lt;/p&gt;

&lt;p&gt;🔹 How is this different from Prometheus?&lt;br&gt;
Prometheus chunks data into 2-hour blocks and doesn’t have built-in long-term retention. InfluxDB’s TSM engine allows for more efficient long-term storage and querying.&lt;/p&gt;
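&lt;p&gt;The TSM write path (buffer in memory, flush to immutable sorted files) is similar to an LSM tree. Here’s a heavily simplified Python sketch of that idea; the &lt;code&gt;TinyTSM&lt;/code&gt; class is invented for illustration and is nothing like the real engine’s code:&lt;/p&gt;

```python
class TinyTSM:
    """Toy model of a TSM-style engine: buffer writes in memory,
    then flush them to immutable, sorted segments. Illustration only."""

    def __init__(self, flush_threshold: int = 3):
        self.cache = []        # in-memory write buffer
        self.segments = []     # immutable, time-sorted "on-disk" segments
        self.flush_threshold = flush_threshold

    def write(self, timestamp: int, value: float) -> None:
        self.cache.append((timestamp, value))
        if len(self.cache) >= self.flush_threshold:
            # Flush: sort by time and freeze the segment (never mutated again).
            self.segments.append(tuple(sorted(self.cache)))
            self.cache = []

    def read_range(self, start: int, end: int):
        # A real engine would binary-search segments; we just merge everything.
        points = [p for seg in self.segments for p in seg] + self.cache
        return sorted(p for p in points if start <= p[0] <= end)

db = TinyTSM()
for ts, v in [(3, 0.5), (1, 0.2), (2, 0.9), (4, 0.7)]:
    db.write(ts, v)
print(db.read_range(1, 3))  # -> [(1, 0.2), (2, 0.9), (3, 0.5)]
```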

&lt;h4&gt;
  
  
  Time-Series Index (TSI) – Handling Millions of Tags
&lt;/h4&gt;

&lt;p&gt;InfluxDB also introduces the TSI (Time Series Index), which is crucial when dealing with millions of time series.&lt;/p&gt;

&lt;p&gt;✅ Fast queries on large datasets – Unlike databases that slow down with too many unique tags, TSI ensures smooth performance.&lt;br&gt;
✅ Disk-based indexing – TSI allows InfluxDB to scale efficiently without consuming too much RAM.&lt;/p&gt;

&lt;p&gt;🔹 Why does this matter?&lt;br&gt;
A common pitfall in Prometheus is high cardinality: when you have too many unique label combinations, queries become slow. InfluxDB handles high cardinality better thanks to TSI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Ingestion &amp;amp; Retention Policies
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Push-Based Data Collection
&lt;/h4&gt;

&lt;p&gt;InfluxDB primarily relies on a push-based model for ingesting data. This means data sources send metrics to InfluxDB rather than InfluxDB pulling them.&lt;br&gt;
✅ Telegraf – InfluxDB’s official data collection agent, supporting 300+ integrations.&lt;br&gt;
✅ Direct HTTP API writes – Developers can push metrics to InfluxDB using simple REST API calls.&lt;/p&gt;
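&lt;p&gt;Writes use InfluxDB’s line protocol, a plain-text format of the shape &lt;code&gt;measurement,tag=value field=value timestamp&lt;/code&gt;. A small Python helper that formats a point (in InfluxDB 1.x you would then POST the resulting line to &lt;code&gt;http://localhost:8086/write?db=mydb&lt;/code&gt;; the database name here is a placeholder):&lt;/p&gt;

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, timestamp_ns: int) -> str:
    """Format a data point in InfluxDB line protocol:
    measurement,tag1=v1 field1=v1 timestamp (simplified: no escaping of special chars)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "cpu", {"host": "server01", "region": "us-west"},
    {"usage": 0.64}, 1434055562000000000,
)
print(line)  # -> cpu,host=server01,region=us-west usage=0.64 1434055562000000000
```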

&lt;p&gt;🔹 If you recall from my &lt;a href="https://dev.to/favxlaw/prometheus-architecture-understanding-the-workflow-162o"&gt;Prometheus article&lt;/a&gt;,&lt;br&gt;
Prometheus scrapes metrics (pull-based), whereas InfluxDB expects data to be pushed to it.&lt;/p&gt;

&lt;h4&gt;
  
  
  Retention Policies (RP) &amp;amp; Continuous Queries (CQ)
&lt;/h4&gt;

&lt;p&gt;An RP automatically deletes old data after a set period, which is great for managing storage costs.&lt;br&gt;
A CQ precomputes and aggregates data in real time, reducing query load.&lt;br&gt;
🔹 Unlike Prometheus, where you need external tools (like Thanos) for long-term retention, InfluxDB manages the data lifecycle natively.&lt;/p&gt;
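&lt;p&gt;As a sketch, in InfluxDB 1.x an RP and a CQ look like the following InfluxQL statements (the database, measurement, and policy names are placeholders):&lt;/p&gt;

```sql
-- Keep data for 7 days, then delete it automatically.
CREATE RETENTION POLICY "one_week" ON "mydb" DURATION 7d REPLICATION 1 DEFAULT

-- Every 5 minutes, downsample raw CPU points into 5-minute means.
CREATE CONTINUOUS QUERY "cpu_5m" ON "mydb"
BEGIN
  SELECT mean("usage") INTO "cpu_5m_avg" FROM "cpu" GROUP BY time(5m)
END
```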

&lt;h3&gt;
  
  
  High Availability &amp;amp; Scaling
&lt;/h3&gt;

&lt;p&gt;Scaling InfluxDB&lt;br&gt;
InfluxDB supports horizontal scaling via:&lt;br&gt;
✅ Clustering (Enterprise Edition) – Distributes data across multiple nodes.&lt;br&gt;
✅ InfluxDB Cloud – Fully managed, scalable version.&lt;/p&gt;

&lt;p&gt;🔹 How is this different from Prometheus?&lt;br&gt;
Prometheus doesn’t natively support clustering—you need Thanos or Cortex for that. InfluxDB offers built-in clustering.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Should You Use InfluxDB?
&lt;/h3&gt;

&lt;p&gt;✅ Best for long-term storage &amp;amp; analytics – If you need to keep metrics for months/years.&lt;br&gt;
✅ Great for IoT, sensors, and business analytics – Ideal for financial, industrial, and IoT applications.&lt;br&gt;
✅ Advanced query capabilities – If you need complex joins, transformations, and external API integrations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;InfluxDB is a powerful time-series database designed for high-performance storage, flexible querying, and efficient scaling. Compared to Prometheus, it’s better suited for long-term retention, high-cardinality data, and deep analytics.&lt;/p&gt;

&lt;p&gt;🚀 Want to see InfluxDB in action? Let me know in the comments if you’d like a hands-on tutorial or use case examples! 🔥&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for reading! Don’t forget to follow, and feel free to leave a comment with the next DevOps concept you’d like me to dive into. Let’s keep the learning going!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
      <category>monitoring</category>
      <category>programming</category>
    </item>
    <item>
      <title>Grafana Architecture Explained: How the Backend and Data Flow Work.</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Thu, 20 Feb 2025 14:37:16 +0000</pubDate>
      <link>https://dev.to/favxlaw/grafana-architecture-explained-how-the-backend-and-data-flow-work-49d0</link>
      <guid>https://dev.to/favxlaw/grafana-architecture-explained-how-the-backend-and-data-flow-work-49d0</guid>
      <description>&lt;p&gt;Grafana is a powerful open-source tool that helps turn raw data into clear, interactive dashboards making it a go-to for DevOps teams. But what’s really happening behind the scenes? In this article, we’ll break down how Grafana processes and visualizes data, keeping things simple, practical, and to the point. Whether you’re new to DevOps or just curious about how it all works under the hood, this guide will give you a solid starting point.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Grafana at a Glance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Grafana is built on two main parts: the frontend and the backend.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend:
This is the part you see and interact with: the dashboards, graphs, and visualizations. Built with modern web technologies, it ensures a smooth and responsive experience, making it easy to explore and analyze your data.&lt;/li&gt;
&lt;li&gt;Backend:
This is where the heavy lifting happens. The backend processes data, runs queries, and connects to various data sources like Prometheus and InfluxDB. In short, it gathers and prepares the data that the frontend turns into useful insights.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔍 The Backend
&lt;/h3&gt;

&lt;p&gt;Grafana’s backend is where most of the activity happens, handling data requests, processing queries, and keeping everything running smoothly. Let’s break it down into two key parts:  &lt;/p&gt;

&lt;h4&gt;
  
  
  ⚙️ The Grafana Server &amp;amp; API Layer
&lt;/h4&gt;

&lt;p&gt;The Grafana server is the engine running behind the scenes. It acts as the bridge between your dashboards and your data sources, ensuring seamless communication. Here’s what it does:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🌍 &lt;strong&gt;Manages Requests:&lt;/strong&gt; When you interact with Grafana, the server processes your actions, whether it’s loading a dashboard, changing a time range, or modifying settings.
&lt;/li&gt;
&lt;li&gt;🔌 &lt;strong&gt;Connects to Data Sources:&lt;/strong&gt; Through its RESTful APIs, the server fetches data from sources like Prometheus, InfluxDB, or MySQL.
&lt;/li&gt;
&lt;li&gt;🔄 &lt;strong&gt;Enables Automation:&lt;/strong&gt; Beyond the web interface, the API lets you integrate Grafana into scripts and automation workflows, making it a flexible tool for DevOps teams.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  📊 How Data Queries &amp;amp; Processing Work
&lt;/h4&gt;

&lt;p&gt;Every time you load a dashboard, Grafana works behind the scenes to fetch and process data. Here’s a step-by-step breakdown:  &lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Request Initiation:&lt;/strong&gt; The frontend (your dashboard) sends a query request to the backend via the API.&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Data Retrieval:&lt;/strong&gt; The backend translates this request and reaches out to the right data source.&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Processing:&lt;/strong&gt; Once the data is retrieved, the server processes it, applying filters, aggregations, or calculations as needed.&lt;br&gt;&lt;br&gt;
4️⃣ &lt;strong&gt;Response &amp;amp; Rendering:&lt;/strong&gt; The processed data is sent back to the frontend, where it’s transformed into the visualizations you see.  &lt;/p&gt;

&lt;p&gt;This smooth backend operation is what makes Grafana such a powerful tool for real-time monitoring and analysis.  &lt;/p&gt;

&lt;h3&gt;
  
  
  🔗 Connecting to Data Sources
&lt;/h3&gt;

&lt;p&gt;Grafana is like a universal translator for data: it seamlessly connects to a wide range of sources, from time series databases like &lt;strong&gt;Prometheus&lt;/strong&gt; and &lt;strong&gt;InfluxDB&lt;/strong&gt; to search engines like &lt;strong&gt;Elasticsearch&lt;/strong&gt;. Whether you're monitoring server metrics, analyzing logs, or tracking application performance, Grafana knows how to fetch and display the data you need.  &lt;/p&gt;

&lt;h4&gt;
  
  
  🛠 Setting Up a Data Source
&lt;/h4&gt;

&lt;p&gt;Connecting a data source in Grafana is a straightforward process:  &lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Pick Your Source:&lt;/strong&gt; In Grafana’s intuitive UI, you select the database or service you want to connect to.&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Choose the Right Plugin:&lt;/strong&gt; Grafana has built-in plugins that "speak" the native query language of each data source, ensuring seamless communication.&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Configure &amp;amp; Authenticate:&lt;/strong&gt; You provide connection details like the database URL, credentials, and any necessary authentication tokens.&lt;br&gt;&lt;br&gt;
4️⃣ &lt;strong&gt;Test &amp;amp; Save:&lt;/strong&gt; Grafana lets you test the connection before saving, so you can ensure everything is working smoothly.  &lt;/p&gt;

&lt;p&gt;Once set up, Grafana sends queries directly to your data source in &lt;strong&gt;real time&lt;/strong&gt;, pulling in the latest metrics for visualization.  &lt;/p&gt;
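&lt;p&gt;Besides the UI, data sources can also be provisioned from a YAML file that Grafana reads at startup. A minimal sketch (the file path and connection details are example values):&lt;/p&gt;

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```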

&lt;h4&gt;
  
  
  🔄 Understanding Data Flow
&lt;/h4&gt;

&lt;p&gt;Every time you interact with a Grafana dashboard, there's a well-orchestrated sequence happening in the background. Let’s break it down step by step:  &lt;/p&gt;

&lt;h4&gt;
  
  
  🚀 &lt;strong&gt;1. User Action → Sending a Query&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;It all starts when you interact with a dashboard, maybe you &lt;strong&gt;select a different time range&lt;/strong&gt;, &lt;strong&gt;refresh a panel&lt;/strong&gt;, or &lt;strong&gt;zoom into a specific data point&lt;/strong&gt;. This triggers a request that gets sent to Grafana’s backend.  &lt;/p&gt;

&lt;h4&gt;
  
  
  🔍 &lt;strong&gt;2. Query Processing → Talking to the Data Source&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Grafana’s backend translates your request into a query that the selected data source understands. If you're using &lt;strong&gt;Prometheus&lt;/strong&gt;, for example, Grafana converts your request into a PromQL query. If it’s &lt;strong&gt;Elasticsearch&lt;/strong&gt;, it turns into a structured search request.  &lt;/p&gt;

&lt;h4&gt;
  
  
  📦 &lt;strong&gt;3. Data Retrieval &amp;amp; Processing → Cleaning &amp;amp; Formatting&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The data source processes the request and sends back raw data. But before it reaches your dashboard, Grafana’s backend &lt;strong&gt;cleans it up&lt;/strong&gt;, &lt;strong&gt;applies filters&lt;/strong&gt;, &lt;strong&gt;aggregates values&lt;/strong&gt;, and &lt;strong&gt;formats it properly&lt;/strong&gt;, making sure you get exactly what you need.  &lt;/p&gt;

&lt;h4&gt;
  
  
  📊 &lt;strong&gt;4. Visualization → Data Comes to Life&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Finally, the processed data is sent to the frontend, where Grafana transforms it into &lt;strong&gt;interactive graphs, charts, and tables&lt;/strong&gt;. This real-time flow ensures that what you're seeing is always &lt;strong&gt;current, accurate, and easy to interpret&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thanks for reading! If you found this helpful, follow for more DevOps concepts explained in a clear and simple way. Got a topic you'd like me to cover next? Let me know!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>developers</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Kubernetes Services vs. Ingress: What You Need to Know.</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Sat, 08 Feb 2025 22:51:29 +0000</pubDate>
      <link>https://dev.to/favxlaw/kubernetes-services-vs-ingress-what-you-need-to-know-20p6</link>
      <guid>https://dev.to/favxlaw/kubernetes-services-vs-ingress-what-you-need-to-know-20p6</guid>
      <description>&lt;p&gt;Ever tried accessing a containerized application running inside Kubernetes and realized it wasn’t as simple as running a server on your local machine? Unlike traditional setups where an app binds to a port and is instantly reachable, Kubernetes operates in a world of dynamic, ever-changing pods. If a pod dies and gets recreated, it might get a new IP, breaking direct access.&lt;/p&gt;

&lt;p&gt;So, how do applications running inside a Kubernetes cluster communicate reliably? And how do we expose these applications to the outside world? This is where Kubernetes Services and Ingress come in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Services ensure that even if pods come and go, your application remains accessible via a stable endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ingress provides a smarter way to manage external access, acting as a traffic controller to route requests to the right service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we’ll break down Kubernetes Services and Ingress, explaining when and why you need them with practical examples. Let's dive in!&lt;/p&gt;




&lt;h2&gt;
  
  
  🔹What is a Kubernetes Service?
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, a &lt;strong&gt;Service&lt;/strong&gt; is an abstraction that provides a stable network endpoint to access a group of pods. Since pods are dynamic (they can be created, deleted, or rescheduled), their IPs keep changing. A &lt;strong&gt;Service&lt;/strong&gt; ensures that applications can communicate reliably without worrying about changing pod IPs.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🔸 Why Do We Need Services?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;🚀 Pods are &lt;strong&gt;ephemeral&lt;/strong&gt;—they can restart or move to another node, getting a new IP.&lt;br&gt;&lt;br&gt;
🚀 Directly accessing a pod’s IP is unreliable since it might change at any moment.&lt;br&gt;&lt;br&gt;
🚀 A &lt;strong&gt;Service&lt;/strong&gt; creates a fixed &lt;strong&gt;Cluster IP&lt;/strong&gt; that stays the same, ensuring &lt;strong&gt;stable communication&lt;/strong&gt; between pods and external users.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 Types of Kubernetes Services
&lt;/h3&gt;

&lt;p&gt;Kubernetes offers different types of Services based on how you want your application to be accessible. Let’s break them down:  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔸 &lt;strong&gt;ClusterIP (Default &amp;amp; Internal-Only)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ The &lt;strong&gt;default&lt;/strong&gt; service type in Kubernetes.&lt;br&gt;&lt;br&gt;
✅ Creates an &lt;strong&gt;internal-only&lt;/strong&gt; IP, meaning it’s &lt;strong&gt;only accessible inside the cluster&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ Perfect for &lt;strong&gt;internal communication&lt;/strong&gt; between microservices. &lt;br&gt;
&lt;strong&gt;Example: A backend API serving a frontend within the cluster&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  🔸 &lt;strong&gt;NodePort (Basic External Access)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ Exposes the service &lt;strong&gt;on every node&lt;/strong&gt; using a high-numbered port (30000–32767).&lt;br&gt;&lt;br&gt;
✅ You can access it via &lt;code&gt;NodeIP:NodePort&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Easy to set up&lt;/strong&gt; but not ideal for production—managing ports can get messy!  &lt;/p&gt;
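&lt;p&gt;For illustration, here’s a minimal NodePort sketch (the service name, label, and &lt;code&gt;nodePort&lt;/code&gt; value are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service    # placeholder name
spec:
  type: NodePort
  selector:
    app: nginx                 # matches pods labeled app: nginx
  ports:
    - protocol: TCP
      port: 80                 # the Service's internal port
      targetPort: 80           # the Pod's port
      nodePort: 30080          # optional; must be in 30000–32767, auto-assigned if omitted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;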
&lt;h3&gt;
  
  
  🔸 &lt;strong&gt;LoadBalancer (Cloud-Managed External Access)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ Works in &lt;strong&gt;cloud environments&lt;/strong&gt; like AWS, GCP, or Azure.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Automatically provisions&lt;/strong&gt; a cloud load balancer to handle traffic.&lt;br&gt;&lt;br&gt;
✅ The best option for &lt;strong&gt;production-grade&lt;/strong&gt; external access.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔸 &lt;strong&gt;Headless Service (For Direct Pod Access)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ Used when &lt;strong&gt;you don’t need a stable Cluster IP&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ Instead of routing traffic, it helps apps &lt;strong&gt;discover&lt;/strong&gt; individual pods &lt;strong&gt;via DNS&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ Useful for databases, stateful applications, and custom service discovery.  &lt;/p&gt;
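&lt;p&gt;A headless Service is declared by setting &lt;code&gt;clusterIP: None&lt;/code&gt;. A minimal sketch (the names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-headless-service    # placeholder name
spec:
  clusterIP: None              # this is what makes the Service headless
  selector:
    app: postgres              # placeholder label
  ports:
    - port: 5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Instead of a single virtual IP, a DNS lookup for this Service returns the IPs of the individual matching pods.&lt;/p&gt;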
&lt;h3&gt;
  
  
  🔹  Practical Example: ClusterIP Service YAML
&lt;/h3&gt;

&lt;p&gt;Here’s a simple YAML configuration for a &lt;strong&gt;ClusterIP&lt;/strong&gt; Service that exposes an Nginx pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;      &lt;span class="c1"&gt;# The Service's Port&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt; &lt;span class="c1"&gt;# The Pod's Port&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ &lt;strong&gt;selector:&lt;/strong&gt; Matches pods with the label &lt;code&gt;app: nginx&lt;/code&gt;, so the Service knows where to send traffic.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;port:&lt;/strong&gt; The port where the Service is exposed inside the cluster.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;targetPort:&lt;/strong&gt; The actual port inside the pod where traffic should go.  &lt;/p&gt;


&lt;h2&gt;
  
  
  Understanding Ingress
&lt;/h2&gt;

&lt;p&gt;Kubernetes gives us multiple ways to expose applications, but &lt;strong&gt;Ingress&lt;/strong&gt; is the &lt;strong&gt;smartest&lt;/strong&gt; option. Instead of creating separate external access points for each service, Ingress acts as a &lt;strong&gt;single entryway&lt;/strong&gt;, efficiently routing traffic to the right service inside your cluster.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 What is Ingress in Kubernetes?
&lt;/h3&gt;

&lt;p&gt;Ingress is a &lt;strong&gt;traffic manager&lt;/strong&gt; for your cluster. It controls &lt;strong&gt;external access&lt;/strong&gt; to services using &lt;strong&gt;rules&lt;/strong&gt; based on domains, paths, and protocols like HTTP/HTTPS. Think of it as a &lt;strong&gt;router&lt;/strong&gt; that directs requests to the correct backend service.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 Why Use Ingress Instead of NodePort or LoadBalancer?
&lt;/h3&gt;

&lt;p&gt;While &lt;strong&gt;NodePort&lt;/strong&gt; and &lt;strong&gt;LoadBalancer&lt;/strong&gt; work, they have limitations:  &lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;NodePort&lt;/strong&gt; exposes each service on a random high-numbered port—not ideal for production.&lt;br&gt;&lt;br&gt;
❌ &lt;strong&gt;LoadBalancer&lt;/strong&gt; works better but &lt;strong&gt;creates a new cloud load balancer per service&lt;/strong&gt;, which can get &lt;strong&gt;expensive and complex&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Ingress&lt;/strong&gt; solves both problems by &lt;strong&gt;allowing multiple services to share a single entry point&lt;/strong&gt;, reducing cost and simplifying management.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 How Ingress Routes External Traffic
&lt;/h3&gt;

&lt;p&gt;1️⃣ A user sends an &lt;strong&gt;HTTP/HTTPS request&lt;/strong&gt; to your cluster.&lt;br&gt;&lt;br&gt;
2️⃣ The &lt;strong&gt;Ingress resource&lt;/strong&gt; checks its rules to decide which service should handle the request.&lt;br&gt;&lt;br&gt;
3️⃣ Traffic is forwarded to the correct &lt;strong&gt;pod&lt;/strong&gt; inside the cluster.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 The Role of an Ingress Controller
&lt;/h3&gt;

&lt;p&gt;An &lt;strong&gt;Ingress Controller&lt;/strong&gt; is needed to process Ingress rules. Popular choices include:  &lt;/p&gt;

&lt;p&gt;✔ &lt;strong&gt;NGINX Ingress Controller&lt;/strong&gt; (most common)&lt;br&gt;&lt;br&gt;
✔ &lt;strong&gt;Traefik, HAProxy, AWS ALB, Istio&lt;/strong&gt;, etc.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 Setting Up Ingress: Simple Example
&lt;/h3&gt;

&lt;p&gt;Here’s a &lt;strong&gt;basic Ingress configuration&lt;/strong&gt; that routes traffic based on a hostname:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-ingress&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp.local&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-service&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ &lt;code&gt;host: myapp.local&lt;/code&gt; → Only requests whose host is &lt;code&gt;myapp.local&lt;/code&gt; match this rule.&lt;br&gt;&lt;br&gt;
✅ &lt;code&gt;path: /&lt;/code&gt; → All requests are sent to the backend service.&lt;br&gt;&lt;br&gt;
✅ &lt;code&gt;backend.service.name: my-service&lt;/code&gt; → Traffic goes to &lt;strong&gt;my-service&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ &lt;code&gt;port.number: 80&lt;/code&gt; → The port where the service listens.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 How to Apply and Test Ingress in Minikube
&lt;/h3&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Enable the NGINX Ingress Controller&lt;/strong&gt; in Minikube:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube addons &lt;span class="nb"&gt;enable &lt;/span&gt;ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ &lt;strong&gt;Apply the Ingress YAML file:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; my-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3️⃣ &lt;strong&gt;Modify &lt;code&gt;/etc/hosts&lt;/code&gt; to point to Minikube’s IP (Linux/macOS):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;minikube ip&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; myapp.local"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4️⃣ &lt;strong&gt;Test it in a browser or with curl:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://myapp.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🚀 Service vs. Ingress: When to Use Each
&lt;/h2&gt;

&lt;p&gt;Choosing between a Service (NodePort/LoadBalancer) and Ingress depends on how you want to expose your applications. Here’s the breakdown:&lt;/p&gt;

&lt;p&gt;Use a Service when you need direct access to a single service, either internally or externally. However, each externally exposed service needs its own NodePort or cloud load balancer, which gets expensive and hard to manage as the number of services grows.&lt;br&gt;
Use Ingress when you want to manage multiple services under one entry point. It routes traffic based on domain names or paths, reducing both complexity and cost.&lt;br&gt;
A Service is simple but lacks advanced traffic control. Ingress, on the other hand, supports path- and host-based routing, TLS termination, and virtual hosts, making it ideal for large-scale apps.&lt;/p&gt;
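&lt;p&gt;To make the “multiple services under one entry point” idea concrete, here’s a hedged sketch of a path-based Ingress. The service names &lt;code&gt;frontend-service&lt;/code&gt; and &lt;code&gt;api-service&lt;/code&gt; are hypothetical:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shared-entry-ingress   # placeholder name
spec:
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /api             # requests to myapp.local/api/...
        pathType: Prefix
        backend:
          service:
            name: api-service        # hypothetical backend API service
            port:
              number: 8080
      - path: /                # everything else
        pathType: Prefix
        backend:
          service:
            name: frontend-service   # hypothetical frontend service
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both services share one external entry point, with routing decided purely by path.&lt;/p&gt;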

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Thanks for reading! If you found this helpful, please like and follow for more DevOps content. Feel free to comment with any questions or topics you'd like to see next!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>container</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Understanding Kubernetes Volumes: Persistent Volume and Persistent Volume Claim</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Fri, 07 Feb 2025 10:51:43 +0000</pubDate>
      <link>https://dev.to/favxlaw/understanding-kubernetes-volumes-persistent-volume-and-persistent-volume-claim-4600</link>
      <guid>https://dev.to/favxlaw/understanding-kubernetes-volumes-persistent-volume-and-persistent-volume-claim-4600</guid>
<description>&lt;p&gt;Consider a case where data gets added or updated in a PostgreSQL database running inside a pod: when the pod restarts, all of those changes are gone. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes doesn’t automatically persist data when a pod restarts. By default, when a pod dies, its storage disappears with it. This is because Kubernetes treats pods as ephemeral, they come and go, and their associated storage doesn’t stick around unless you explicitly configure it to.&lt;/p&gt;

&lt;p&gt;For databases, logs, and any application that needs to retain state, this is a huge problem. That’s where Persistent Volumes (PV) and Persistent Volume Claims (PVC) come in. They allow Kubernetes to handle storage separately from pods, ensuring your data doesn’t vanish every time a pod is replaced.&lt;/p&gt;

&lt;p&gt;If you’re coming from Docker, you might wonder how this compares to Docker volumes:&lt;br&gt;
&lt;em&gt;"Doesn’t Docker have volumes for persistent storage?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Yes, it does. But Kubernetes handles storage in a more decentralized, scalable way. Unlike Docker, where volumes are tied to a single host, Kubernetes volumes are designed to be cluster-wide and can be provisioned dynamically.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll break down:&lt;br&gt;
✅ What Persistent Volumes (PV) are and how they’re managed by administrators.&lt;br&gt;
✅ How developers request storage using Persistent Volume Claims (PVC).&lt;/p&gt;

&lt;p&gt;By the end, you’ll not only understand how Kubernetes storage works but also be able to set up persistent storage for your own applications. Let’s get started 🚀&lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;How Does Kubernetes Handle Storage?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Kubernetes uses &lt;strong&gt;Volumes&lt;/strong&gt; to provide persistent storage, but here’s the thing:  &lt;/p&gt;

&lt;p&gt;➡️ A &lt;strong&gt;Kubernetes Volume is just an abstraction&lt;/strong&gt;—it doesn’t store data itself. It needs to be backed by actual physical storage.  &lt;/p&gt;

&lt;p&gt;So, where does this storage come from?  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Types of Storage in Kubernetes&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Local Storage (Node-Specific)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tied to a single node.
&lt;/li&gt;
&lt;li&gt;If a pod moves to another node, the data doesn’t follow.
&lt;/li&gt;
&lt;li&gt;Examples:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;emptyDir&lt;/code&gt; (temporary storage that lasts as long as the pod exists).
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hostPath&lt;/code&gt; (uses a directory on the host machine).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
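
&lt;p&gt;As a quick illustration of node-local storage, here’s a minimal pod using &lt;code&gt;emptyDir&lt;/code&gt; (the names are placeholders). The volume lives only as long as the pod does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod            # placeholder name
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /cache    # where the container sees the volume
          name: scratch
  volumes:
    - name: scratch
      emptyDir: {}             # temporary storage, deleted with the pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;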

&lt;p&gt;2️⃣ &lt;strong&gt;Remote Storage (Cluster-Wide)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decoupled from any single node, so pods can move freely without losing data.
&lt;/li&gt;
&lt;li&gt;Provided by external storage systems.
&lt;/li&gt;
&lt;li&gt;Examples:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud storage&lt;/strong&gt;: AWS EBS, Google Persistent Disks, Azure Disk.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network storage&lt;/strong&gt;: NFS, Ceph, GlusterFS.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Persistent Volumes (PV) and Persistent Volume Claims (PVC)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To make storage management easier, Kubernetes introduces:&lt;br&gt;&lt;br&gt;
✔ &lt;strong&gt;Persistent Volumes (PV):&lt;/strong&gt; The actual storage backend.&lt;br&gt;&lt;br&gt;
✔ &lt;strong&gt;Persistent Volume Claims (PVC):&lt;/strong&gt; A way for pods to request storage dynamically.  &lt;/p&gt;

&lt;p&gt;Think of it like a hotel:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;PV&lt;/strong&gt; is a hotel room.
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;PVC&lt;/strong&gt; is a reservation.
&lt;/li&gt;
&lt;li&gt;When a pod needs storage, it "books" a room (PV) through a PVC.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📌 &lt;strong&gt;Kubernetes Volumes are not actual storage&lt;/strong&gt;—they just connect your pod to a real storage system.  &lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Persistent Volume (PV) – The Admin’s Role&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We now know that &lt;strong&gt;Kubernetes doesn’t provide storage by itself&lt;/strong&gt;, so who sets it up?  &lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;PV is a pre-configured storage unit&lt;/strong&gt; set up by the cluster administrator. Once it's available, developers can claim it using a &lt;strong&gt;Persistent Volume Claim (PVC)&lt;/strong&gt; (which we’ll cover next). But first, let’s see how admins actually set up storage in Kubernetes.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How Do Admins Provision Storage?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Admins set up storage by:&lt;br&gt;&lt;br&gt;
1️⃣ &lt;strong&gt;Setting up the storage backend&lt;/strong&gt; (Local disk, NFS, AWS EBS, etc.).&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Defining a Persistent Volume (PV)&lt;/strong&gt; that connects to this storage.&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Making the PV available&lt;/strong&gt; for developers to claim via PVCs.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Example: Creating a Persistent Volume (PV)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s a simple YAML configuration for a &lt;strong&gt;local storage PV&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pv&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5Gi&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;persistentVolumeReclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Retain&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-storage&lt;/span&gt;
  &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/mnt/data"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Breaking It Down&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;🔹 &lt;code&gt;capacity.storage: 5Gi&lt;/code&gt; → Provides &lt;strong&gt;5GB of storage&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;accessModes: ReadWriteOnce&lt;/code&gt; → Can be &lt;strong&gt;mounted by only one node at a time&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;persistentVolumeReclaimPolicy: Retain&lt;/code&gt; → Storage &lt;strong&gt;remains&lt;/strong&gt; even after the pod using it is deleted.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;storageClassName: local-storage&lt;/code&gt; → Specifies &lt;strong&gt;which storage class&lt;/strong&gt; to use.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;hostPath: /mnt/data&lt;/code&gt; → Uses a &lt;strong&gt;local directory as storage&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Of course, admins can also configure &lt;strong&gt;networked storage&lt;/strong&gt; like NFS, AWS EBS, or Google Persistent Disk instead of using a local directory.  &lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;PVs exist independently of pods, ensuring data persists even if a pod is deleted or rescheduled.&lt;/strong&gt;  &lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Persistent Volume Claim (PVC) – The Developer’s Role&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The admin has set up a &lt;strong&gt;Persistent Volume (PV)&lt;/strong&gt;—now how do developers actually use it?  &lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;Persistent Volume Claims (PVCs)&lt;/strong&gt; come in.  &lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;PVC is a storage request&lt;/strong&gt; from a developer. It’s like saying:  &lt;/p&gt;

&lt;p&gt;🗣️ &lt;em&gt;“Hey Kubernetes, I need 5GB of storage with read/write access. Find me a PV that matches!”&lt;/em&gt;  &lt;/p&gt;

&lt;p&gt;If a suitable &lt;strong&gt;PV&lt;/strong&gt; is available, Kubernetes automatically &lt;strong&gt;binds the PVC to it&lt;/strong&gt;, making storage available for the pod.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How Developers Request Storage Using PVC&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Create a PVC&lt;/strong&gt; specifying storage requirements.&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Kubernetes finds a matching PV&lt;/strong&gt; and binds the PVC to it.&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Mount the PVC inside a pod&lt;/strong&gt; to store data persistently.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Example: Creating a PVC&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s a &lt;strong&gt;simple YAML configuration&lt;/strong&gt; for a PVC requesting 5GB of storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pvc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5Gi&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-storage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Breaking It Down&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;🔹 &lt;code&gt;accessModes: ReadWriteOnce&lt;/code&gt; → Storage &lt;strong&gt;can be mounted by only one node at a time&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;resources.requests.storage: 5Gi&lt;/code&gt; → Requests &lt;strong&gt;5GB of storage&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;storageClassName: local-storage&lt;/code&gt; → Uses a PV &lt;strong&gt;with the matching storage class&lt;/strong&gt;.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How the PVC Gets Bound to a PV&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ If a PV with the right &lt;strong&gt;storage size, access mode, and storage class&lt;/strong&gt; exists, Kubernetes &lt;strong&gt;automatically binds&lt;/strong&gt; the PVC to it.&lt;br&gt;&lt;br&gt;
✅ Once bound, the PVC can be &lt;strong&gt;used inside a pod&lt;/strong&gt; for persistent data storage.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Mounting the PVC in a Pod&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that we have a &lt;strong&gt;PVC&lt;/strong&gt;, let’s use it inside a pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/lib/postgresql/data"&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-storage&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-storage&lt;/span&gt;
      &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pvc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📌 Now, &lt;strong&gt;even if the pod restarts, the PostgreSQL database will still have its data&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thanks for reading! Be sure to follow for more DevOps content, and feel free to comment with the DevOps concepts you'd like to see covered next.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>docker</category>
    </item>
    <item>
      <title>Very Insightful</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Wed, 05 Feb 2025 23:32:25 +0000</pubDate>
      <link>https://dev.to/favxlaw/very-insightful-1c0p</link>
      <guid>https://dev.to/favxlaw/very-insightful-1c0p</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/bobbyiliev" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F191651%2F8bf0512d-f06c-47e9-a8d8-981b754b25ab.webp" alt="bobbyiliev"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/bobbyiliev/5-terraform-best-practices-i-wish-i-knew-when-i-started-2dc" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;5 Terraform Best Practices I Wish I Knew When I Started&lt;/h2&gt;
      &lt;h3&gt;Bobby ・ Jan 31&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#terraform&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloud&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#beginners&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>terraform</category>
      <category>devops</category>
      <category>cloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Prometheus Architecture: Understanding the Workflow 🚀</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Mon, 03 Feb 2025 22:18:10 +0000</pubDate>
      <link>https://dev.to/favxlaw/prometheus-architecture-understanding-the-workflow-162o</link>
      <guid>https://dev.to/favxlaw/prometheus-architecture-understanding-the-workflow-162o</guid>
<description>&lt;p&gt;Have you ever used Prometheus for monitoring systems? It’s great at collecting and storing metrics, but have you ever stopped to wonder how it actually works under the hood? What makes its architecture so efficient, and why is it the go-to choice for cloud-native monitoring?&lt;br&gt;
Unlike traditional monitoring tools that passively wait for data, Prometheus actively scrapes metrics from defined targets and stores them efficiently in a time-series database.&lt;br&gt;
We’ll explore how it collects metrics, how its components interact, and why its design makes it a favorite among developers and SREs. Let’s get started! 🚀&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Prometheus Architecture: Breaking It Down&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you want to truly understand how Prometheus works, you need to go beyond just “it collects metrics” and dive into its architecture. At its core, Prometheus is built on three essential pillars:  &lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Time-Series Database (TSDB)&lt;/strong&gt; – Where all metrics are efficiently stored.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Data Retrieval Engine&lt;/strong&gt; – Responsible for actively pulling (scraping) metrics.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Query &amp;amp; API Layer (Web Server)&lt;/strong&gt; – The interface where you analyze and visualize data.  &lt;/p&gt;

&lt;p&gt;Each of these components plays a critical role in making Prometheus &lt;em&gt;fast, scalable, and cloud-native&lt;/em&gt;. Now, let’s break them down in detail.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Prometheus Server – The Command Center&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Prometheus Server&lt;/strong&gt; is the central hub that coordinates everything, ensuring your metrics are collected, stored, and made accessible. Here’s what it does:&lt;br&gt;&lt;br&gt;
🔹 Pulls metrics from configured targets (applications, databases, and exporters).&lt;br&gt;&lt;br&gt;
🔹 Stores the collected data in a time-series format.&lt;br&gt;&lt;br&gt;
🔹 Provides a powerful query interface to analyze and visualize the data. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Time-Series Database (TSDB) – Storing Metrics Efficiently&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once Prometheus scrapes metrics, it needs a way to store them efficiently. That’s where the &lt;strong&gt;Time-Series Database (TSDB)&lt;/strong&gt; comes in. This isn’t your average database; it’s specifically designed for handling time-series data. Here’s what happens behind the scenes:  &lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;Metrics are stored as time-series data&lt;/strong&gt; – each metric is recorded with a timestamp and value.&lt;br&gt;&lt;br&gt;
📌 &lt;strong&gt;Compression techniques&lt;/strong&gt; – Prometheus uses advanced compression to store data efficiently without slowing down performance.&lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;A label-based system&lt;/strong&gt; – metrics are tagged with labels (e.g., &lt;code&gt;http_requests_total{status="200"}&lt;/code&gt;), making it easy to filter and query data with precision.  &lt;/p&gt;
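&lt;p&gt;As a quick illustration of that label model, a single scraped sample in Prometheus’ text exposition format looks like this (the label set and value are made up for the example):&lt;/p&gt;

```
http_requests_total{status="200",method="GET"} 1027
```

&lt;p&gt;The metric name plus its unique combination of labels identifies one time series in the TSDB.&lt;/p&gt;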

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Data Retrieval Engine – How Prometheus Collects Metrics&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus doesn’t sit around waiting for data—it actively goes out and &lt;strong&gt;pulls&lt;/strong&gt; it from defined targets. This is known as the &lt;strong&gt;pull-based model&lt;/strong&gt;.   &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How It Works:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus periodically &lt;strong&gt;scrapes &lt;code&gt;/metrics&lt;/code&gt; endpoints&lt;/strong&gt; from configured targets. These can be:&lt;br&gt;&lt;br&gt;
✔️ Applications exposing Prometheus-compatible metrics&lt;br&gt;&lt;br&gt;
✔️ Databases and external services&lt;br&gt;&lt;br&gt;
✔️ Exporters that convert non-Prometheus metrics into a readable format  &lt;/p&gt;

&lt;p&gt;This approach ensures Prometheus collects data efficiently while remaining highly adaptable.  &lt;/p&gt;
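&lt;p&gt;To make the pull model concrete, here is a minimal sketch of a scrape configuration. The job name and target address are hypothetical placeholders, not real endpoints:&lt;/p&gt;

```yaml
# prometheus.yml (sketch)
scrape_configs:
  - job_name: "example-app"          # hypothetical job name
    scrape_interval: 15s             # how often Prometheus pulls /metrics
    static_configs:
      - targets: ["localhost:8080"]  # where the app exposes its /metrics endpoint
```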

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Query &amp;amp; API Layer (Web Server) – Making Data Useful&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Storing metrics is one thing, but being able to &lt;strong&gt;query, analyze, and visualize&lt;/strong&gt; them is where the real power comes in. This is where the &lt;strong&gt;Query &amp;amp; API Layer&lt;/strong&gt; plays its role.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Key Responsibilities:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;🔎 &lt;strong&gt;Handles PromQL (Prometheus Query Language)&lt;/strong&gt; for in-depth metric analysis.&lt;br&gt;&lt;br&gt;
🔎 &lt;strong&gt;Runs an HTTP API server&lt;/strong&gt;, allowing external tools (like Grafana) to pull data.&lt;br&gt;&lt;br&gt;
🔎 &lt;strong&gt;Provides built-in graphing&lt;/strong&gt; for quick insights.  &lt;/p&gt;
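&lt;p&gt;For example, PromQL can slice stored series by their labels. A small hedged sketch, reusing the metric shown earlier in this article:&lt;/p&gt;

```promql
# Per-second rate of successful HTTP requests over the last 5 minutes
rate(http_requests_total{status="200"}[5m])
```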

&lt;h3&gt;
  
  
  &lt;strong&gt;How It All Comes Together&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;1️⃣ Prometheus &lt;strong&gt;scrapes metrics&lt;/strong&gt; from various targets.&lt;br&gt;&lt;br&gt;
2️⃣ It &lt;strong&gt;stores data efficiently&lt;/strong&gt; in TSDB.&lt;br&gt;&lt;br&gt;
3️⃣ The &lt;strong&gt;query engine&lt;/strong&gt; allows users to analyze trends and set up alerts.&lt;br&gt;&lt;br&gt;
4️⃣ Other tools (like Grafana) &lt;strong&gt;fetch data via Prometheus' API&lt;/strong&gt; for visualization.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Pull Mechanism&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s how it works: Prometheus is set up with a list of targets (applications, databases, or exporters) that provide metrics through a &lt;code&gt;/metrics&lt;/code&gt; endpoint. At regular intervals, Prometheus sends an HTTP request to these endpoints, grabs the metrics, adds a timestamp to each one, and then stores everything in its Time-Series Database (TSDB).&lt;br&gt;
It’s like Prometheus is constantly checking in on these targets, gathering fresh data, and keeping everything organized for easy analysis later on.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Prometheus?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No External Storage Needed: Unlike some monitoring systems that rely on external storage, Prometheus keeps things simple by storing data locally—cutting down on complexity and external dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resilient Pull-Based Monitoring: By actively scraping metrics instead of waiting for them, Prometheus is more resilient to network issues, ensuring data is consistently collected even when connections are not stable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handles Short-Lived Jobs: For tasks that don’t run long enough to be scraped, Prometheus offers the Pushgateway. This lets ephemeral jobs push their metrics before exiting, ensuring no data is lost.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;There are plenty of reasons why Prometheus is used worldwide—its architecture truly sets it apart. I hope this article helped you get a clear understanding of how it all works.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for reading! Don’t forget to follow, and feel free to leave a comment with the next DevOps concept you’d like me to dive into. Let’s keep the learning going!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Namespaces in Kubernetes Explained: 🔍 Understanding Isolation and Sharing</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Sat, 01 Feb 2025 21:46:39 +0000</pubDate>
      <link>https://dev.to/favxlaw/namespaces-in-kubernetes-explained-understanding-isolation-and-sharing-5ki</link>
      <guid>https://dev.to/favxlaw/namespaces-in-kubernetes-explained-understanding-isolation-and-sharing-5ki</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Kubernetes Namespace?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you've worked with Kubernetes long enough, you’ve probably seen how quickly things can spiral out of control. One team deploys a new service, another updates their staging environment, and suddenly, production is down because someone accidentally messed with the wrong resources. Sound familiar?&lt;/p&gt;

&lt;p&gt;That’s where Kubernetes namespaces come in.  Instead of stuffing everything into one disorganized cluster or deploying individual clusters for each project, namespaces help maintain order, enforce security boundaries, and enhance resource management efficiency.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What Kubernetes namespaces are and how they work.&lt;/li&gt;
&lt;li&gt;Why they’re essential for managing multi-team, multi-application clusters.&lt;/li&gt;
&lt;li&gt;Resources that can and can't be shared across namespaces.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end, you’ll have a solid grasp of how to use namespaces to keep your cluster structured and scalable. Let’s dive in. 🚀&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What Are Kubernetes Namespaces, Really?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A namespace in Kubernetes is a way to split your cluster into separate environments. At their core, namespaces are like virtual partitions in your cluster: they let you split your resources (pods, services, deployments, and more) into separate, logical groups.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Default Namespace&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When you first start with Kubernetes, everything you deploy lands in the &lt;em&gt;default namespace&lt;/em&gt;. It’s quick, easy, and works fine for small projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes comes with a few built-in namespaces:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;default&lt;/strong&gt; – The catch-all for resources if you don’t specify a namespace.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;kube-system&lt;/strong&gt; – Reserved for critical system components (like the Kubernetes API server).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;kube-public&lt;/strong&gt; – Mostly unused, but contains publicly accessible resources.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;kube-node-lease&lt;/strong&gt; – Helps track node health and optimize performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, you can create your own namespaces. Here’s how:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace dev  
kubectl create namespace staging  
kubectl create namespace production  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, &lt;code&gt;dev&lt;/code&gt; doesn’t interfere with &lt;code&gt;staging&lt;/code&gt;, and &lt;code&gt;staging&lt;/code&gt; doesn’t break &lt;code&gt;production&lt;/code&gt;.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;When to Use Namespaces (and When Not To)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;✅ &lt;strong&gt;Use namespaces if:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have multiple teams sharing the same cluster.
&lt;/li&gt;
&lt;li&gt;You need clear separation between environments (&lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, &lt;code&gt;prod&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;You want to enforce security policies and resource limits per group.
&lt;/li&gt;
&lt;/ul&gt;
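&lt;p&gt;As a sketch of that last point, per-namespace resource limits are typically enforced with a ResourceQuota. The name and the numbers below are arbitrary examples:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota        # hypothetical name
  namespace: dev
spec:
  hard:
    requests.cpu: "4"    # total CPU all pods in `dev` may request
    requests.memory: 8Gi
    pods: "20"           # cap on pod count in the namespace
```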

&lt;p&gt;❌ &lt;strong&gt;Don’t bother with namespaces if:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your cluster is small and managed by a single team.
&lt;/li&gt;
&lt;li&gt;You need &lt;strong&gt;hard&lt;/strong&gt; isolation—separate clusters might be the better option.
&lt;/li&gt;
&lt;li&gt;You’re dealing with global resources like cluster-wide CRDs.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the end of the day, namespaces help keep your Kubernetes setup clean and organized. &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Working with Namespaces: Let’s Get Hands-On&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that you understand the concept of namespaces, let’s dive into how you can actually work with them in your Kubernetes cluster. This section will cover the essential commands you need to list, create, and manage namespaces, as well as how to deploy resources to specific namespaces.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Listing Existing Namespaces&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To see what namespaces you’ve got in your cluster, run this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will give you a list of namespaces, including the default ones like &lt;code&gt;default&lt;/code&gt;, &lt;code&gt;kube-system&lt;/code&gt;, and any you’ve created yourself. It's a quick way to check what namespaces are active and available.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Creating a New Namespace&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Creating a new namespace is super straightforward. Just run the &lt;code&gt;kubectl create namespace&lt;/code&gt; command followed by the name you want for your new namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it! Your new namespace is ready to go. Now you can deploy your resources into it, keeping everything organized.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Deploying Resources to a Namespace&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you want to deploy resources like pods, services, or deployments into a specific namespace, you can use the &lt;code&gt;-n&lt;/code&gt; flag with &lt;code&gt;kubectl apply&lt;/code&gt;. For example, to apply a YAML configuration to the &lt;code&gt;my-namespace&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; app.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the resources defined in &lt;code&gt;app.yaml&lt;/code&gt; within the &lt;code&gt;my-namespace&lt;/code&gt; namespace. Just make sure your YAML file doesn’t already have a &lt;code&gt;namespace&lt;/code&gt; field in the metadata unless you want to override the command-line flag.&lt;/p&gt;
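&lt;p&gt;For reference, this is roughly what that &lt;code&gt;namespace&lt;/code&gt; field looks like in a manifest; the pod name and image are placeholders:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod             # hypothetical pod name
  namespace: my-namespace    # pins the pod to this namespace, overriding the -n flag
spec:
  containers:
    - name: web
      image: nginx
```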

&lt;h3&gt;
  
  
  &lt;strong&gt;Switching Between Namespaces&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Sometimes you’ll need to switch between namespaces while working with &lt;code&gt;kubectl&lt;/code&gt;. To make this easier, you can set the default namespace for your session with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running this, all your subsequent &lt;code&gt;kubectl&lt;/code&gt; commands will default to &lt;code&gt;my-namespace&lt;/code&gt;, so you won’t have to keep adding the &lt;code&gt;-n&lt;/code&gt; flag. To switch back to the default namespace, just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;Resources That Can and Can’t Be Shared Across Namespaces&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, namespaces help isolate different resources within a cluster, but not all resources are confined to a single namespace. Some can be shared across namespaces, while others stay restricted. Knowing which resources can be shared (and which can't) is crucial when it comes to managing your cluster effectively. Let's break it down.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Resources That Can’t Be Shared Across Namespaces&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Some resources are strictly bound to a specific namespace. These are isolated within their own namespace to ensure everything remains organized and secure.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Pods&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Pods are created within a namespace, and they can’t be directly accessed by pods in other namespaces. They’re local to the namespace they reside in, which means if you want to let pods in other namespaces communicate, you'll need to set up things like network policies.&lt;/p&gt;

&lt;p&gt;Why is this important? Pods are isolated by default, so if you need cross-namespace communication, you'll have to jump through a few hoops, like using services or network policies.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Services&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In Kubernetes, services are tied to specific namespaces. They expose resources like pods only within their own namespace. If you need a service accessible across multiple namespaces, you'll have to either create an external service or configure DNS settings between the namespaces.&lt;/p&gt;

&lt;p&gt;Why is this important? For inter-namespace communication, managing service discovery becomes key. You’ll have to reference the service using a fully qualified domain name, like &lt;code&gt;my-service.my-namespace.svc.cluster.local&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;ConfigMaps &amp;amp; Secrets&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Both ConfigMaps and Secrets are tied to namespaces. While you can reference them within the same namespace or copy them to another namespace, you can't share them directly across namespaces.&lt;/p&gt;

&lt;p&gt;Note: It’s important to scope things like app configuration and sensitive data to the correct namespace. While you can replicate or reference them elsewhere, they can’t just be shared freely across namespaces.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Deployments and StatefulSets&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Just like pods, Deployments and StatefulSets live within a namespace. These resources manage pods within that namespace, so they won’t span multiple namespaces.&lt;/p&gt;

&lt;p&gt;Why this matters: This helps keep things isolated and manageable, especially when you're scaling applications or managing their lifecycle.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Resources That Can Be Shared Across Namespaces&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Not everything in Kubernetes is namespace-bound. There are a few resources that can span multiple namespaces, which helps Kubernetes maintain global management while respecting the boundaries that namespaces provide.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Nodes&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Nodes exist at the cluster level, not tied to any specific namespace. Every pod, regardless of which namespace it belongs to, can run on any available node in the cluster.&lt;br&gt;
Nodes make it possible to efficiently manage resources across the whole cluster without worrying about namespace boundaries.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Cluster-wide Resources (e.g., CRDs, ClusterRoles)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Resources like Custom Resource Definitions (CRDs) and ClusterRoles are not limited to namespaces. These resources are designed to work across the entire cluster, whether you’re defining custom resources or setting cluster-wide access policies.&lt;br&gt;
CRDs let you create custom objects that can be accessed from anywhere, while ClusterRoles manage permissions at the cluster level, allowing users and services to access resources across namespaces.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Network Policies&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;While network policies are generally scoped to namespaces, they can still define how pods in different namespaces communicate with each other. By setting up cross-namespace rules, you can control traffic between pods in different namespaces.&lt;/p&gt;

&lt;p&gt;Why this is important: Network policies allow you to maintain control over which namespaces can talk to each other, even if they’re isolated.&lt;/p&gt;
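&lt;p&gt;As a rough sketch, a NetworkPolicy can admit traffic from pods in another namespace by matching on namespace labels. The names and the label key/value here are hypothetical:&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring   # hypothetical name
  namespace: production
spec:
  podSelector: {}               # applies to every pod in `production`
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: monitoring  # assumed label on the peer namespace
```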

&lt;h4&gt;
  
  
  &lt;strong&gt;Persistent Volumes (PVs)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Persistent Volumes are another cluster-level resource that isn’t tied to a specific namespace. However, Persistent Volume Claims (PVCs) are namespace-bound, and while PVCs can request storage from a PV, the PV itself can be accessed by resources in multiple namespaces (depending on access mode).&lt;/p&gt;

&lt;p&gt;Why this matters: While you can manage storage across namespaces with PVs, the data and requests are still scoped to namespaces through PVCs.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Ingress Resources&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Ingress resources allow external access to services and can be configured to route traffic to services across different namespaces. By setting up the right rules, a single Ingress controller can manage traffic to services across multiple namespaces.&lt;/p&gt;

&lt;p&gt;Note:  You can centralize traffic management with one Ingress, even when your services span multiple namespaces.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for reading! Stay tuned for more deep dives into Kubernetes concepts!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Feel free to DM me; let’s talk DevOps on &lt;a href="https://x.com/favxlaw" rel="noopener noreferrer"&gt;X&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>Pods in Kubernetes: Lifecycle, Networking, and the Role of Sidecars.</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Mon, 13 Jan 2025 23:02:53 +0000</pubDate>
      <link>https://dev.to/favxlaw/pods-in-kubernetes-lifecycle-networking-and-the-role-of-sidecars-50cn</link>
      <guid>https://dev.to/favxlaw/pods-in-kubernetes-lifecycle-networking-and-the-role-of-sidecars-50cn</guid>
      <description>&lt;p&gt;In Kubernetes, pods are the smallest deployable units and serve as a crucial component for running one or more containers. If you’re unfamiliar with Kubernetes or need a quick overview of its architecture, I’ve written an article that explains key elements such as the master node, API server, controller manager, scheduler, and worker nodes. Feel free to check it &lt;a href="https://dev.to/favxlaw/kubernetes-for-beginners-making-sense-of-container-orchestration-in-devops-3dmd"&gt;out&lt;/a&gt; for a solid foundation before diving into this topic.&lt;/p&gt;

&lt;p&gt;Here, we’ll focus specifically on pods, examining their structure, functionality, and how they operate within a Kubernetes cluster. &lt;em&gt;You’ll learn how pods differ from standalone containers, how they are assigned Cluster IPs for communication, and the role of kube-proxy in networking. We’ll also cover how pods retrieve configurations during deployment and explore the use of support containers to enhance their functionality.&lt;/em&gt;&lt;br&gt;
By the end of this article, you’ll have a clear understanding of what pods are and gain hands-on experience creating and managing them through a simple demo.&lt;/p&gt;

&lt;p&gt;Let’s dive into the details of Kubernetes pods!&lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;What is a Pod?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you're new to Kubernetes, the term "pod" might sound abstract. Simply put, a pod is the smallest deployable unit in Kubernetes. Think of it as a container’s home, providing everything the container needs to function effectively.&lt;/p&gt;

&lt;p&gt;Let’s break it down: containers are lightweight virtual environments where your applications run. A pod is a group of one or more containers that share the same network, storage, and lifecycle, allowing them to work seamlessly together. Here's how it works:&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Why Group Containers in a Pod?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let's say you're deploying a web application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Web Server + Logging Sidecar&lt;br&gt;
A pod could contain a web server container alongside a "sidecar" container. The sidecar might handle tasks like log aggregation or metrics collection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database + Backup Helper&lt;br&gt;
Another example is a database container paired with a helper container responsible for scheduled backups.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Kubernetes, pods are designed to group containers that need to collaborate closely or share resources. By colocating them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Networking Simplified: All containers within the same pod share the same IP address, so the pod behaves like a single application unit.&lt;/li&gt;
&lt;li&gt;Shared Storage: Containers in a pod can access the same persistent storage volumes, making data sharing straightforward.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;How Pods Enable Efficient Collaboration&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Kubernetes encourages grouping containers that share a common purpose or need to communicate efficiently.&lt;br&gt;
Since all containers in a pod share the same network namespace and storage volumes, they can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Communicate with each other effortlessly using localhost.&lt;/li&gt;
&lt;li&gt;Work together more effectively as cohesive application components.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Pods and Networking in Kubernetes: Simplifying Cluster Communication&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When it comes to networking in Kubernetes, pods are designed to simplify communication within the cluster.&lt;br&gt;
Each pod in your cluster is designed to communicate efficiently with other pods and services. This is made possible through a Cluster IP—an internal virtual IP address assigned to every pod upon creation.&lt;br&gt;
How does this work?&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How Does a Pod Get Its Cluster IP?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The moment you create a pod, Kubernetes dynamically assigns it an IP address from the Cluster IP range configured in the cluster.&lt;br&gt;
This IP enables seamless communication within the Kubernetes network, ensuring that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pods can interact with other pods.&lt;/li&gt;
&lt;li&gt;Pods can connect to services running in the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But assigning IPs is only half the story: traffic still has to reach the right pod. That’s where a key component, &lt;strong&gt;kube-proxy&lt;/strong&gt;, comes into play.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;What is kube-proxy, and Why Does It Matter?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;kube-proxy is a fundamental part of Kubernetes networking. Acting as a traffic manager, kube-proxy ensures that traffic destined for pods and services flows to the right places. Here’s how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Watches for Changes:
kube-proxy constantly monitors the Kubernetes API for updates, like new pods being created or deleted.&lt;/li&gt;
&lt;li&gt;Configures Network Rules:
It updates the cluster’s networking rules to ensure traffic is routed to the correct pod based on its Cluster IP.&lt;/li&gt;
&lt;li&gt;Manages Traffic:
kube-proxy enables smooth communication between pods and between pods and services, even if they are running on different nodes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Without kube-proxy, Kubernetes wouldn’t be able to provide the reliable networking infrastructure developers depend on.&lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;The Lifecycle of a Pod in Kubernetes: Step by Step&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A pod’s journey from definition to execution in Kubernetes is an intricate yet well-orchestrated process. Let’s break down the lifecycle of a pod, using a practical example to clarify each step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: YAML File Definition&lt;/strong&gt;&lt;br&gt;
Everything begins with a YAML file that defines the desired state of the pod. This file specifies the pod’s metadata, container image, and configuration details such as resource limits and environment variables.&lt;/p&gt;

&lt;p&gt;Here’s a simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1  
kind: Pod  
metadata:  
  name: example-pod  
spec:  
  containers:  
  - name: nginx-container  
    image: nginx  

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This YAML file describes a pod named example-pod running a single container using the official NGINX image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Sending the YAML to the Kubernetes API Server&lt;/strong&gt;&lt;br&gt;
When you execute the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pod.yaml  

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The YAML file is sent to the API Server, validated, and stored in the &lt;strong&gt;etcd&lt;/strong&gt; database as the desired state of the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Scheduling the Pod&lt;/strong&gt;&lt;br&gt;
Next, the Kubernetes Scheduler steps in. Its role is to decide which node in the cluster is best suited to run the pod. The scheduler considers various factors, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Available Resources: Does the node have enough CPU and memory?&lt;/li&gt;
&lt;li&gt;Placement Preferences: Should this pod be co-located with or separated from other pods?&lt;/li&gt;
&lt;li&gt;Node Rules and Compatibility: Does the pod meet specific conditions set for nodes?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once the scheduler makes a decision, the pod is assigned to a node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Kubelet on the Node Takes Over&lt;/strong&gt;&lt;br&gt;
On the assigned node, the kubelet (a small agent running on each node) takes charge. It:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fetches the pod’s definition from the API server.&lt;/li&gt;
&lt;li&gt;Pulls the specified container image (e.g., nginx) from the container registry, if it’s not already cached on the node.&lt;/li&gt;
&lt;li&gt;Configures the runtime environment based on the pod’s specifications, such as mounting volumes or setting environment variables.&lt;/li&gt;
&lt;li&gt;Starts the container(s) using the container runtime (like Docker or containerd).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Networking Setup&lt;/strong&gt;&lt;br&gt;
Once the pod is created, the Kubernetes network component assigns it a unique Cluster IP. This IP enables the pod to communicate with other pods and services in the cluster. At this point, the pod becomes an active participant in the cluster’s network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Pod is Running&lt;/strong&gt;&lt;br&gt;
Finally, the pod is up and running on the assigned node, ready to execute its tasks.&lt;/p&gt;

&lt;p&gt;Here’s a sketch chart showing the workflow of the pod lifecycle:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93832a9wm4aahs8jt30u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93832a9wm4aahs8jt30u.png" alt="Pod lifecycle-flowchart" width="800" height="43"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Support Containers in Kubernetes Pods&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, a pod is not limited to a single container. While the primary focus is often on the main container running your application, support containers (commonly known as sidecars or helper containers) play an invaluable role in enhancing the functionality and performance of the main container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Are Support Containers?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Support containers are auxiliary containers that run alongside the main container within the same pod. These containers share:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network Namespace: They can communicate using localhost.&lt;/li&gt;
&lt;li&gt;Storage Volumes: They can access shared data or persist information together.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Their purpose is to perform tasks that complement the main container, such as log processing, monitoring, or handling initialization processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Support Containers?&lt;/strong&gt;&lt;br&gt;
The use of support containers promotes modularity and adheres to the principle of separation of concerns. By delegating specific responsibilities to dedicated containers, you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplify the Main Container: Keep the application container focused on its core task.&lt;/li&gt;
&lt;li&gt;Enhance Reusability: Support containers can be reused across multiple pods or deployments.&lt;/li&gt;
&lt;li&gt;Improve Scalability: Offloading tasks to support containers reduces the workload of the main container, enabling independent scaling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Examples of Support Containers&lt;/strong&gt;&lt;br&gt;
Here are some common use cases for support containers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Logging Sidecars
A logging sidecar can handle log aggregation and forwarding for the main container. &lt;/li&gt;
&lt;li&gt;Monitoring Sidecars
Monitoring containers collect metrics, such as CPU usage, memory consumption, or application-specific statistics, and send them to tools like Prometheus or Grafana.&lt;/li&gt;
&lt;li&gt;File Synchronization
When an application relies on up-to-date configuration files or external data, a file synchronization container can keep those files fresh.&lt;/li&gt;
&lt;li&gt;Proxy Sidecars
Proxy containers act as intermediaries, managing communication between the main container and external services. They’re often used in service meshes.&lt;/li&gt;
&lt;li&gt;Initialization Tasks
Some support containers are designed to prepare the environment before the main container starts.&lt;/li&gt;
&lt;/ol&gt;
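&lt;p&gt;To make the sidecar pattern concrete, here is a minimal sketch of a pod that pairs a main container with a logging sidecar. The two containers share an &lt;code&gt;emptyDir&lt;/code&gt; volume; the image names and paths are placeholders, not from a real deployment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  volumes:
    - name: logs              # shared volume both containers mount
      emptyDir: {}
  containers:
    - name: app               # main container writes its logs here
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-forwarder     # sidecar reads and ships the same logs
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because both containers sit in the same pod, they also share a network namespace, so a sidecar could just as easily scrape a metrics endpoint on &lt;code&gt;localhost&lt;/code&gt;.&lt;/p&gt;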

&lt;p&gt;Thanks for reading! In my next article, I'll go into detail on other Kubernetes components.&lt;/p&gt;

&lt;p&gt;Please like, follow and drop a comment if this article was helpful.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>containers</category>
      <category>beginners</category>
    </item>
    <item>
      <title>☸️ Kubernetes Architecture Deep Dive: Understanding the Control Plane and Worker Nodes</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Thu, 09 Jan 2025 18:46:23 +0000</pubDate>
      <link>https://dev.to/favxlaw/kubernetes-architecture-deep-dive-understanding-the-control-plane-and-worker-nodes-2p5o</link>
      <guid>https://dev.to/favxlaw/kubernetes-architecture-deep-dive-understanding-the-control-plane-and-worker-nodes-2p5o</guid>
      <description>&lt;p&gt;In this guide, we’ll break down Kubernetes architecture into practical components to help you understand how it orchestrates containerized applications at scale. Whether you’re new to Kubernetes or aiming to refine your expertise, this guide offers actionable insights drawn from real-world experience.&lt;br&gt;
If you’re not yet familiar with Kubernetes or container orchestration, I recommend reading my earlier article:&lt;br&gt;
&lt;a href="https://dev.to/favxlaw/kubernetes-for-beginners-making-sense-of-container-orchestration-in-devops-3dmd"&gt;Kubernetes for Beginners: Making Sense of Container Orchestration in DevOps&lt;/a&gt;. It provides a solid foundation to help you grasp the "why" behind Kubernetes.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;What Is Kubernetes Architecture?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Imagine Kubernetes as a supercharged version of Docker. If Docker is like running containers manually on your laptop, Kubernetes is like having an automated, highly organized team that ensures your containers are always deployed, monitored, and scaled efficiently.&lt;/p&gt;

&lt;p&gt;Kubernetes architecture consists of two main parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Control Plane – The mastermind overseeing everything.&lt;/li&gt;
&lt;li&gt;The Worker Nodes – The hands-on team running the containers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s break it down by comparing it to Docker components you might already know.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The Control Plane: The Mastermind&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;If Docker Compose is your tool for managing multiple containers, think of the Control Plane as the “Docker Compose on steroids.” It doesn’t just orchestrate containers—it keeps the entire cluster healthy and responsive.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;API Server:&lt;br&gt;
The API Server is like Docker’s CLI or REST API. It’s your main interface to Kubernetes. Instead of running &lt;code&gt;docker run&lt;/code&gt; commands, you send requests to the API Server, and it tells the cluster what to do.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scheduler:&lt;br&gt;
Remember how you manually decide which container should run on which server when using Docker? The Scheduler automates that for you. It analyzes the available resources in your cluster and assigns workloads (Pods) to the best-fit Worker Nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;etcd:&lt;br&gt;
Think of etcd as Docker’s storage for container configurations, but much more advanced. It stores the entire state of the Kubernetes cluster—like a logbook of what’s running, where, and how it should behave.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Controller Manager:&lt;br&gt;
If you’ve ever written a bash script to restart a Docker container when it crashes, the Controller Manager is the automated version of that. It continuously monitors the cluster and takes action to maintain the desired state (e.g., restarting Pods if they fail).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
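&lt;p&gt;To make the analogy concrete: where Docker has you start a container imperatively with &lt;code&gt;docker run&lt;/code&gt;, Kubernetes has you describe the desired state in a manifest and submit it to the API Server. A minimal sketch (names and image are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Docker way (imperative):
#   docker run -d --name web -p 80:80 nginx
#
# Kubernetes way (declarative): kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;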

&lt;h4&gt;
  
  
  &lt;strong&gt;The Worker Nodes: The Doers&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Worker Nodes are where the actual magic happens: containers run here. If the Control Plane is your Docker management system, Worker Nodes are the machines where your &lt;code&gt;docker run&lt;/code&gt; commands execute.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubelet:&lt;br&gt;
This is like the Docker Daemon, which runs on each node and manages the containers. The Kubelet listens to the Control Plane and ensures the containers are running exactly as specified.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kube-proxy:&lt;br&gt;
Imagine the complexities of networking in Docker Swarm, where you manage overlay networks and service discovery. Kube-proxy simplifies this by automatically routing traffic within the cluster, ensuring services can communicate seamlessly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Container Runtime:&lt;br&gt;
The Container Runtime is the heart of a Worker Node. It’s what actually runs your containers. If you’ve used Docker Engine before, this is the Kubernetes equivalent. Other runtimes like containerd or CRI-O can also be used here.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is a simple block diagram to help you visualize the Kubernetes architecture:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      Control Plane (Master Node)
      ---------------------------
      |  API Server             |  &amp;lt;- Docker CLI/API equivalent
      |  Scheduler              |  &amp;lt;- Automated workload placement
      |  etcd (Cluster State)   |  &amp;lt;- Advanced state storage
      |  Controller Manager     |  &amp;lt;- Automated "bash scripts"
      ---------------------------
                 |
                 v
      Worker Nodes (x N)
      ---------------------------
      |  Kubelet                |  &amp;lt;- Docker Daemon equivalent
      |  Kube-proxy             |  &amp;lt;- Networking made simple
      |  Container Runtime      |  &amp;lt;- Docker Engine/Runtime
      ---------------------------
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The Role of Kubernetes Architecture&lt;/strong&gt;&lt;br&gt;
Kubernetes architecture builds on the familiar concepts of Docker but adds powerful features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automating where containers are placed.&lt;/li&gt;
&lt;li&gt;Healing itself when something fails.&lt;/li&gt;
&lt;li&gt;Scaling workloads up or down automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, Kubernetes transforms container management from a manual process into an elegant, automated system designed for reliability and scalability.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Control Plane Components&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Control Plane is the brain of the Kubernetes cluster, managing everything from workloads to Worker Nodes. Think of it as the decision maker that keeps everything organized, healthy, and running smoothly. Each component has a specific role, working together like a well-oiled machine.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The API Server&lt;/strong&gt;&lt;br&gt;
is the main communication hub for Kubernetes. It’s where all requests—whether from &lt;code&gt;kubectl&lt;/code&gt;, CI/CD pipelines, or other tools—are sent to interact with the cluster. It’s also called the &lt;em&gt;cluster gateway&lt;/em&gt;.&lt;br&gt;
&lt;strong&gt;What It Does:&lt;/strong&gt; Processes requests to create, delete, or update cluster resources like Pods and Nodes.&lt;br&gt;
When you use &lt;code&gt;kubectl apply -f deployment.yaml&lt;/code&gt;, the API Server ensures the cluster understands and implements your request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;etcd: The Memory Bank&lt;/strong&gt;&lt;br&gt;
etcd is Kubernetes' memory bank a consistent, distributed key-value store that holds all the data about your cluster.&lt;br&gt;
&lt;strong&gt;Function:&lt;/strong&gt; Tracks both the desired state (what you want the cluster to do) and the actual state (what’s currently happening).&lt;br&gt;
etcd ensures Kubernetes knows which Pods are running and where, so if a Node fails, the cluster can recover seamlessly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduler: The Matchmaker&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Scheduler is Kubernetes' matchmaker, deciding where new workloads (Pods) should run.&lt;br&gt;
It assigns Pods to Worker Nodes based on available resources and other requirements like CPU, memory, or specific labels.&lt;br&gt;
For example, if you deploy a web app, the Scheduler ensures it runs on a Node with enough free memory and CPU to handle the workload.&lt;/p&gt;
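&lt;p&gt;Resource requests like these are what the Scheduler reads when matching a Pod to a Node. A minimal illustrative spec (the values are arbitrary, not a recommendation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: web-app
      image: nginx
      resources:
        requests:
          memory: "256Mi"   # only Nodes with this much free memory qualify
          cpu: "500m"       # and with at least half a CPU core available
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;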

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Controller Manager: The State Keeper&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Controller Manager is Kubernetes’ state enforcer, ensuring that the cluster’s actual state matches its desired state.&lt;br&gt;
It runs controllers (small programs) that handle specific tasks, like creating new Pods if one fails or scaling replicas up or down.&lt;br&gt;
If you specify 5 replicas for a deployment but one Pod crashes, the Deployment Controller creates a new Pod to maintain the desired count.&lt;/p&gt;
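&lt;p&gt;The 5-replica scenario would look roughly like this as a manifest (the names and image are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 5               # desired state: five Pods at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If one Pod crashes, the Deployment Controller sees 4 running instead of 5 and creates a replacement.&lt;/p&gt;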

&lt;p&gt;&lt;strong&gt;Quickie Points&lt;/strong&gt;&lt;br&gt;
Let’s simplify the Control Plane components with a quick Q&amp;amp;A-style breakdown:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who directs the flow of information?&lt;/strong&gt;&lt;br&gt;
→ API Server: Acts as the communication hub, deciding how information flows within the cluster.&lt;br&gt;
&lt;strong&gt;Who remembers everything?&lt;/strong&gt;&lt;br&gt;
→ etcd: The memory bank, storing all cluster data in a consistent, distributed key-value store.&lt;br&gt;
&lt;strong&gt;Who acts on the information?&lt;/strong&gt;&lt;br&gt;
→ Scheduler: Assigns workloads (Pods) to the best available Worker Nodes.&lt;br&gt;
&lt;strong&gt;Who ensures everything runs smoothly?&lt;/strong&gt;&lt;br&gt;
→ Controller Manager: Maintains the desired state of the cluster, fixing any discrepancies automatically.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Worker Node Components&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Worker Nodes are the essential components of the Kubernetes cluster where your actual applications (containers) run. These Nodes execute the workloads assigned by the Control Plane and ensure everything is functioning as expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Kubelet: The Supervisor&lt;/strong&gt;&lt;br&gt;
The Kubelet is like the supervisor of the Worker Node, ensuring that containers are running as defined.&lt;br&gt;
Constantly communicates with the Control Plane to ensure the desired state of Pods is achieved.&lt;br&gt;
If a container crashes, the Kubelet ensures it is restarted as per the Pod specification.&lt;/p&gt;
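&lt;p&gt;The restart behavior the Kubelet enforces comes from the Pod spec’s &lt;code&gt;restartPolicy&lt;/code&gt; field. A small sketch (the image and command are just for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: crashy
spec:
  restartPolicy: Always     # Kubelet restarts the container whenever it exits
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "echo working; sleep 5; exit 1"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;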

&lt;p&gt;&lt;strong&gt;- Kube-proxy: The Traffic Manager&lt;/strong&gt;&lt;br&gt;
Kube-proxy is the traffic manager of the Worker Node, handling networking for the Pods running on it.&lt;br&gt;
It maintains network rules and routes traffic within the cluster so that requests reach the right Pods.&lt;br&gt;
For instance, when a service receives a request, Kube-proxy ensures it is forwarded to a healthy Pod backing that service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Container Runtime: The Executor&lt;/strong&gt;&lt;br&gt;
The Container Runtime is the software that actually runs your containers, such as Docker or containerd.&lt;br&gt;
It pulls container images, starts containers, and manages their lifecycle.&lt;br&gt;
For example, when you deploy a Pod, the Container Runtime pulls the necessary images and spins up the containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quickie Points&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Who ensures containers are running as planned?&lt;/strong&gt;&lt;br&gt;
→ Kubelet: The supervisor that keeps everything aligned with the Control Plane's instructions.&lt;br&gt;
&lt;strong&gt;Who directs traffic to the right place?&lt;/strong&gt;&lt;br&gt;
→ Kube-proxy: The traffic manager that ensures network requests reach the right Pods.&lt;br&gt;
&lt;strong&gt;Who makes containers run?&lt;/strong&gt;&lt;br&gt;
→ Container Runtime: The executor that pulls images and starts your applications.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for reading&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the next post, we’ll explore more key concepts in Kubernetes, so stay tuned!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>containers</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
