<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Favour Lawrence</title>
    <description>The latest articles on DEV Community by Favour Lawrence (@favxlaw).</description>
    <link>https://dev.to/favxlaw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1673424%2Fd71a9f97-aced-4fcc-935d-7d949533ac50.jpg</url>
      <title>DEV Community: Favour Lawrence</title>
      <link>https://dev.to/favxlaw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/favxlaw"/>
    <language>en</language>
    <item>
      <title>Why Your Microservices Should Talk Like Functions, Not URLs (A Practical gRPC Walkthrough in Go)</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Fri, 24 Apr 2026 12:14:41 +0000</pubDate>
      <link>https://dev.to/favxlaw/why-your-microservices-should-talk-like-functions-not-urls-a-practical-grpc-walkthrough-in-go-1nf7</link>
      <guid>https://dev.to/favxlaw/why-your-microservices-should-talk-like-functions-not-urls-a-practical-grpc-walkthrough-in-go-1nf7</guid>
      <description>&lt;p&gt;In most microservice setups, service-to-service communication starts the same way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;GET /api/v1/users/{id}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It works. It’s familiar. It’s easy to debug.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;But it forces service-to-service calls into a URL-driven model.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Internal services aren’t browsers. They don’t benefit from clean URLs or REST-style resource modeling. They don’t need JSON payloads designed around human readability. And they don’t need APIs designed around manual testing workflows.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What they need is a strict contract and a predictable call interface.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They need communication that behaves like calling a function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;typed requests and responses&lt;/li&gt;
&lt;li&gt;enforced schemas&lt;/li&gt;
&lt;li&gt;consistent error semantics&lt;/li&gt;
&lt;li&gt;backward-compatible evolution&lt;/li&gt;
&lt;li&gt;explicit timeouts and deadlines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With gRPC, your microservices don’t “hit endpoints”. They call methods. You define the interface once using Protocol Buffers, generate strongly typed clients, and treat cross-service communication like a normal function call, except it happens over the network.&lt;/p&gt;

&lt;p&gt;In this walkthrough, we’ll build a gRPC service in Go from scratch, implement a client, and cover the production details that actually matter.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Problem: Why Are Microservices Talking Like Web Browsers?
&lt;/h4&gt;

&lt;p&gt;Say you have two internal services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;billing-service&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;auth-service&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;billing-service&lt;/em&gt;&lt;/strong&gt; needs to charge a user. Before doing that, it needs to validate a few things with &lt;strong&gt;&lt;em&gt;auth-service&lt;/em&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;does the user exist?&lt;/li&gt;
&lt;li&gt;is the user active?&lt;/li&gt;
&lt;li&gt;what role does the user have?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A common approach is to expose a REST endpoint from &lt;strong&gt;&lt;em&gt;auth-service&lt;/em&gt;&lt;/strong&gt; and call it from &lt;strong&gt;&lt;em&gt;billing-service&lt;/em&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"http://auth-service:8080/api/v1/users/123"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Body&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;UserID&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"userId"`&lt;/span&gt;
 &lt;span class="n"&gt;Active&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;   &lt;span class="s"&gt;`json:"active"`&lt;/span&gt;
 &lt;span class="n"&gt;Role&lt;/span&gt;   &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"role"`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewDecoder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Active&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"user is not active"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works, and REST is a perfectly valid choice for internal communication.&lt;/p&gt;

&lt;p&gt;But it comes with tradeoffs that tend to show up as systems grow.&lt;/p&gt;

&lt;p&gt;This isn’t a browser fetching a page; it’s one backend service depending on another backend service. Yet REST forces that dependency to be expressed through URLs, HTTP verbs, and JSON payloads. Over time, those implementation details become the de facto contract between services.&lt;/p&gt;

&lt;p&gt;That introduces a few common pain points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The contract is mostly implicit.&lt;/strong&gt; The client learns the response shape through documentation and conventions, not enforced types.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Breaking changes are easy to introduce.&lt;/strong&gt; A renamed JSON field or missing attribute can break consumers at runtime.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error semantics rely on discipline.&lt;/strong&gt; A &lt;strong&gt;&lt;em&gt;404&lt;/em&gt;&lt;/strong&gt; might mean “user not found”, but it can also mean “wrong route”, “bad version”, or “proxy misconfiguration”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JSON adds overhead.&lt;/strong&gt; It’s text-based, requires encoding/decoding, and failures often surface at runtime.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Boilerplate spreads everywhere.&lt;/strong&gt; Every service ends up rewriting HTTP client logic, decoding, validation, and retries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this makes REST “bad”. It just means that for internal service-to-service calls, where you want strict contracts and predictable behavior, REST often starts to feel like the wrong tool for the job.&lt;/p&gt;

&lt;p&gt;And that’s usually when teams start looking at gRPC.&lt;/p&gt;

&lt;h4&gt;
  
  
  REST Inside Microservices Has a Silent Problem: Fake Contracts
&lt;/h4&gt;

&lt;p&gt;The problem isn’t REST itself.&lt;/p&gt;

&lt;p&gt;The problem is what REST often turns into inside a microservices environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;endpoints become “agreements”&lt;/li&gt;
&lt;li&gt;JSON becomes “schema”&lt;/li&gt;
&lt;li&gt;Slack threads become “documentation”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unless you enforce schemas and versioning aggressively, the contract between services is mostly social, not technical.&lt;/p&gt;

&lt;p&gt;For example, if the &lt;strong&gt;&lt;em&gt;auth-service&lt;/em&gt;&lt;/strong&gt; team changes a response from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"active"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"isActive"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;billing-service&lt;/em&gt;&lt;/strong&gt; still compiles. Tests might even pass if they don’t cover that path.&lt;/p&gt;

&lt;p&gt;But production breaks.&lt;/p&gt;

&lt;p&gt;And that’s the worst kind of failure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;builds fine&lt;/li&gt;
&lt;li&gt;deploys fine&lt;/li&gt;
&lt;li&gt;fails at runtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, you’re not relying on a contract; you’re relying on hope.&lt;/p&gt;

&lt;h4&gt;
  
  
  What If Services Could Talk Like Functions Instead?
&lt;/h4&gt;

&lt;p&gt;Instead of thinking:&lt;br&gt;&lt;br&gt;
“call this URL and parse whatever JSON comes back”&lt;br&gt;&lt;br&gt;
what if &lt;strong&gt;&lt;em&gt;billing-service&lt;/em&gt;&lt;/strong&gt; could just do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;authClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;pb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUserRequest&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"123"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s not an endpoint. That’s a method call.&lt;/p&gt;

&lt;p&gt;And the difference matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the request is typed&lt;/li&gt;
&lt;li&gt;the response is typed&lt;/li&gt;
&lt;li&gt;the contract is defined in one place&lt;/li&gt;
&lt;li&gt;both sides generate code from the same definition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s the gRPC model.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You stop building internal APIs around URLs and start defining service interfaces the same way you’d define a package in Go: by its functions and the data structures they accept and return.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  What gRPC Actually Is
&lt;/h4&gt;

&lt;p&gt;gRPC is an &lt;strong&gt;RPC&lt;/strong&gt; (Remote Procedure Call) framework for service-to-service communication.&lt;br&gt;&lt;br&gt;
Instead of exposing resources through HTTP routes, a service exposes &lt;strong&gt;methods&lt;/strong&gt;. Another service calls those methods using a generated client.&lt;/p&gt;

&lt;p&gt;It’s still a network call. You still deal with latency, timeouts, retries, and failures.&lt;br&gt;&lt;br&gt;
The main difference is that gRPC enforces a defined interface using &lt;strong&gt;Protocol Buffers&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  What Happens When You Call a gRPC Method?
&lt;/h4&gt;

&lt;p&gt;When &lt;strong&gt;&lt;em&gt;billing-service&lt;/em&gt;&lt;/strong&gt; calls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;this is what happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the request struct is serialized using protobuf&lt;/li&gt;
&lt;li&gt;the payload is sent over HTTP/2&lt;/li&gt;
&lt;li&gt;the server deserializes the request&lt;/li&gt;
&lt;li&gt;the server handler executes&lt;/li&gt;
&lt;li&gt;the response is serialized and returned&lt;/li&gt;
&lt;li&gt;the client deserializes the response into a typed struct&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both sides use generated code from the same .proto definition. That .proto file is the contract.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Define the Contract (auth.proto)
&lt;/h4&gt;

&lt;p&gt;📁 proto/auth.proto&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight protobuf"&gt;&lt;code&gt;&lt;span class="na"&gt;syntax&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"proto3"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;option&lt;/span&gt; &lt;span class="na"&gt;go_package&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"github.com/example/microservices-grpc/proto/authpb;authpb"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;service&lt;/span&gt; &lt;span class="n"&gt;AuthService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;rpc&lt;/span&gt; &lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;GetUserRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;GetUserResponse&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;message&lt;/span&gt; &lt;span class="nc"&gt;GetUserRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="na"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;message&lt;/span&gt; &lt;span class="nc"&gt;GetUserResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="na"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="na"&gt;active&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This defines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the service interface (AuthService)&lt;/li&gt;
&lt;li&gt;available RPC methods (GetUser)&lt;/li&gt;
&lt;li&gt;request and response message types&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 2: Generate Go Code
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go &lt;span class="nb"&gt;install &lt;/span&gt;google.golang.org/protobuf/cmd/protoc-gen-go@latest
go &lt;span class="nb"&gt;install &lt;/span&gt;google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure the binaries are in your PATH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;go &lt;span class="nb"&gt;env &lt;/span&gt;GOPATH&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/bin"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now generate the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;protoc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--go_out&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--go-grpc_out&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  proto/auth.proto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This generates Go files under:&lt;/p&gt;

&lt;p&gt;📁 proto/authpb/&lt;/p&gt;

&lt;p&gt;Those files are &lt;strong&gt;machine-generated output&lt;/strong&gt;; never edit them by hand.&lt;br&gt;&lt;br&gt;
If you ever need to change anything about the API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;edit the .proto file&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;regenerate the Go code with protoc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inside those generated files you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;request/response structs&lt;/li&gt;
&lt;li&gt;the server interface&lt;/li&gt;
&lt;li&gt;the client stub&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That client stub is what makes gRPC calls feel like function calls.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 3: Implement auth-service (Server)
&lt;/h4&gt;

&lt;p&gt;📁 auth-service/main.go&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="s"&gt;"context"&lt;/span&gt;
 &lt;span class="s"&gt;"log"&lt;/span&gt;
 &lt;span class="s"&gt;"net"&lt;/span&gt;

 &lt;span class="s"&gt;"github.com/example/microservices-grpc/proto/authpb"&lt;/span&gt;
 &lt;span class="s"&gt;"google.golang.org/grpc"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;authServer&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnimplementedAuthServiceServer&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;authServer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUserRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUserResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"GetUser called with user_id=%s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="c"&gt;// fake DB lookup&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UserId&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"123"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUserResponse&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;Active&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;Role&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;"premium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUserResponse&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;Active&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;Role&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;"unknown"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;lis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"tcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;":50051"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"listen failed: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="n"&gt;srv&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RegisterAuthServiceServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;srv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;authServer&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;

 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"auth-service listening on :50051"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;srv&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lis&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"serve failed: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4: Implement billing-service (Client)
&lt;/h4&gt;

&lt;p&gt;📁 billing-service/main.go&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="s"&gt;"context"&lt;/span&gt;
 &lt;span class="s"&gt;"log"&lt;/span&gt;
 &lt;span class="s"&gt;"time"&lt;/span&gt;

 &lt;span class="s"&gt;"github.com/example/microservices-grpc/proto/authpb"&lt;/span&gt;
 &lt;span class="s"&gt;"google.golang.org/grpc"&lt;/span&gt;
 &lt;span class="s"&gt;"google.golang.org/grpc/credentials/insecure"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Dial&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s"&gt;"localhost:50051"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithTransportCredentials&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;insecure&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewCredentials&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"dial failed: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

 &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewAuthServiceClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cancel&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;cancel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

 &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUserRequest&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"123"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"GetUser failed: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"User=%s active=%v role=%s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Active&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Role&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Active&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"user not active, abort billing"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"billing can proceed"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt; grpc.WithInsecure() is deprecated; the code above uses the currently supported grpc.WithTransportCredentials approach.&lt;/strong&gt;&lt;/p&gt;
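For anyone migrating older snippets, the mapping looks like this. A minimal sketch, assuming grpc-go v1.63+ (which the go.mod below pins); the address is illustrative:

```go
package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// dialAuthService shows the supported replacement for the deprecated
// grpc.Dial(addr, grpc.WithInsecure()) pattern.
func dialAuthService() (*grpc.ClientConn, error) {
	return grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
}
```

In production you’d swap insecure.NewCredentials() for real TLS credentials; the insecure option only makes sense on a trusted network or in local development.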

&lt;h4&gt;
  
  
  Step 5: Run It
&lt;/h4&gt;

&lt;p&gt;At the project root, create a 📁 go.mod file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module github.com/example/microservices-grpc

go 1.22

require google.golang.org/grpc v1.63.2

// after adding code, run "go mod tidy" to pull in the remaining dependencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the services:&lt;/p&gt;

&lt;p&gt;Terminal 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go run auth-service/main.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terminal 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go run billing-service/main.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output: the billing service logs the fetched user (the User=... active=... role=... line) followed by “billing can proceed”.&lt;/p&gt;

&lt;h4&gt;
  
  
  gRPC Call Flow (Diagram)
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ukd43hy1hb1rbys7fba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ukd43hy1hb1rbys7fba.png" alt="flowchart" width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Server Streaming
&lt;/h4&gt;

&lt;p&gt;Now let’s extend the example into something that shows where gRPC becomes strictly better than REST for event-style communication.&lt;/p&gt;

&lt;p&gt;Say billing-service wants to subscribe to auth-related events like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;user logged in&lt;/li&gt;
&lt;li&gt;password changed&lt;/li&gt;
&lt;li&gt;account locked&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a REST world, you’d usually end up doing some form of polling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;GET /api/v1/events?since=...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then you’d run it every few seconds like a caveman with a cron job.&lt;/p&gt;

&lt;p&gt;Polling works, but it’s wasteful: most requests come back empty, you hammer the API on a timer, and you still trade freshness for load.&lt;/p&gt;

&lt;p&gt;With gRPC, you don’t fake real-time communication.&lt;br&gt;&lt;br&gt;
You just stream.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Defining a Streaming RPC&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Update the protobuf contract:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight protobuf"&gt;&lt;code&gt;&lt;span class="k"&gt;rpc&lt;/span&gt; &lt;span class="n"&gt;WatchUserEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;WatchUserEventsRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stream&lt;/span&gt; &lt;span class="n"&gt;UserEvent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;message&lt;/span&gt; &lt;span class="nc"&gt;WatchUserEventsRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="na"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;message&lt;/span&gt; &lt;span class="nc"&gt;UserEvent&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="na"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="na"&gt;event_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;int64&lt;/span&gt; &lt;span class="na"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That single keyword, &lt;em&gt;stream&lt;/em&gt;, changes everything.&lt;/p&gt;

&lt;p&gt;Instead of “request → response”, the server holds the connection open and pushes events as they occur.&lt;/p&gt;

&lt;p&gt;Then regenerate the Go code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;protoc &lt;span class="nt"&gt;--go_out&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--go-grpc_out&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; proto/auth.proto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now both services share the same contract, and your compiler becomes the enforcer of compatibility.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Implementing Server Streaming in auth-service&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Inside auth-service/main.go, implement the streaming method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;authServer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;WatchUserEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WatchUserEventsRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;stream&lt;/span&gt; &lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AuthService_WatchUserEventsServer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"WatchUserEvents started for user_id=%s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"LOGIN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"PASSWORD_CHANGED"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ACCOUNT_LOCKED"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UserEvent&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;EventType&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="n"&gt;Timestamp&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Unix&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t forget:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"time"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example is simplified (we’re just emitting fake events), but the shape is realistic.&lt;/p&gt;

&lt;p&gt;In production, this loop would usually be backed by something like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a Kafka consumer&lt;/li&gt;
&lt;li&gt;Redis pub/sub&lt;/li&gt;
&lt;li&gt;a database WAL stream&lt;/li&gt;
&lt;li&gt;an internal event bus&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key idea stays the same: the server pushes messages &lt;em&gt;as they happen&lt;/em&gt;.&lt;/p&gt;
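To make that concrete, here is a hypothetical variant of the handler backed by an internal event channel instead of the hard-coded slice. The events field is an assumption (fed by whatever source you use: a Kafka consumer, pub/sub subscriber, and so on):

```go
// Hypothetical sketch: forward events from an internal channel.
// Assumes authServer has an events field of type chan *authpb.UserEvent.
func (s *authServer) WatchUserEvents(
	req *authpb.WatchUserEventsRequest,
	stream authpb.AuthService_WatchUserEventsServer,
) error {
	for {
		select {
		case ev, open := <-s.events:
			if !open {
				return nil // event source shut down
			}
			if err := stream.Send(ev); err != nil {
				return err // client went away; stop streaming
			}
		case <-stream.Context().Done():
			// client cancelled or disconnected
			return stream.Context().Err()
		}
	}
}
```

Watching stream.Context() is what keeps the goroutine from leaking when the client disappears.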

&lt;h4&gt;
  
  
  &lt;strong&gt;Consuming the Stream in billing-service&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;On the client side, you call the RPC once and then continuously receive messages. Use a long-lived context here (not the two-second timeout from the unary call, or the stream will be cancelled mid-way):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WatchUserEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;authpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WatchUserEventsRequest&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;UserId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"WatchUserEvents failed: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Recv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"stream ended:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;break&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"EVENT: %s at %d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;EventType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what “real-time service communication” actually looks like in clean engineering terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one connection&lt;/li&gt;
&lt;li&gt;one contract&lt;/li&gt;
&lt;li&gt;structured messages&lt;/li&gt;
&lt;li&gt;backpressure handled by the transport&lt;/li&gt;
&lt;li&gt;no polling loops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is gRPC solving a real system problem in the most direct way possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;At the end of the day, gRPC just solves a different problem.&lt;/p&gt;

&lt;p&gt;If you’re building service-to-service communication, you quickly realize URLs and JSON start feeling like a workaround. You’re passing strings around, hoping everybody remembers the exact response shape, and most breakages only show up at runtime. With gRPC, the .proto file becomes the source of truth, your types are enforced, and calling another service feels like calling a real method, because the client stub is literally generated from that contract.&lt;/p&gt;

&lt;p&gt;That said, gRPC isn’t always the smoothest experience everywhere. Debugging isn’t as simple as running curl and reading JSON; most of the time you’ll reach for grpcurl or Postman, or enable server reflection, just to inspect and test things quickly. Also, browsers don’t speak gRPC natively, so if your consumers are frontend clients, you’ll probably keep REST at the edge or introduce gRPC-Web / a gateway.&lt;/p&gt;

&lt;p&gt;And you still need discipline when evolving schemas. Protobuf makes it easier, but you can’t just reuse field numbers or delete fields carelessly without breaking older clients.&lt;/p&gt;
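Protobuf has a built-in guard for exactly this: once a field is removed, reserve its number (and name) so it can’t be reused by accident. An illustrative sketch (the removed email field and the field numbers here are hypothetical, not from the earlier contract):

```protobuf
message GetUserResponse {
  // Field 2 used to be "email"; reserving it prevents accidental reuse
  // that would silently corrupt data for older clients.
  reserved 2;
  reserved "email";

  string user_id = 1;
  bool active = 3;
  string role = 4;
}
```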

&lt;blockquote&gt;
&lt;p&gt;So the rule is pretty simple: if it’s internal microservices talking to each other, gRPC feels natural. If it’s a public API meant for browsers and humans, REST still makes sense.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thanks for reading.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedium.com%2F_%2Fstat%3Fevent%3Dpost.clientViewed%26referrerSource%3Dfull_rss%26postId%3Dd57ace95d2f3" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedium.com%2F_%2Fstat%3Fevent%3Dpost.clientViewed%26referrerSource%3Dfull_rss%26postId%3Dd57ace95d2f3" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
      <category>microservicearchitecture</category>
    </item>
    <item>
      <title>What DevOps Really Means (and Why It’s So Hard to Explain)</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Mon, 20 Apr 2026 09:49:28 +0000</pubDate>
      <link>https://dev.to/favxlaw/what-devops-really-means-and-why-its-so-hard-to-explain-5c6j</link>
      <guid>https://dev.to/favxlaw/what-devops-really-means-and-why-its-so-hard-to-explain-5c6j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx41vtc711vhrc40l0npc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx41vtc711vhrc40l0npc.jpeg" width="471" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You know how people still, even now, don’t really know what DevOps is? Honestly, even most DevOps engineers struggle to explain what they actually do. So, to make life easier, we just say “&lt;em&gt;we’re software engineers&lt;/em&gt;”. I do it too; it’s funny, really. We &lt;em&gt;are&lt;/em&gt; software engineers, sure, but saying “DevOps engineer” opens the door to a conversation I’m never ready for.&lt;/p&gt;

&lt;p&gt;I already stress enough trying to explain what I studied (biomedical engineering), and now I’m in DevOps? Yeah, no thanks. I just say “software engineering.” Much simpler.&lt;/p&gt;

&lt;p&gt;So one day, I called up a friend who’s also a DevOps engineer and asked him, “Hey, what exactly &lt;em&gt;is&lt;/em&gt; DevOps engineering?” He paused and went, “Umm… it’s developer operations.” I said “okay,” waiting for more. Then he added, “You know, we automate things… CI/CD pipelines, containers, all that stuff.” I asked three other DevOps folks the same question and got pretty much the same answer. These are senior people, by the way. But somehow, explaining what they do is still a challenge.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What do they say about engineers again? Hehehe.&lt;br&gt;&lt;br&gt;
“Engineers solve problems no one else understands, then fail to explain what they just did in plain English.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So yeah, I started thinking, how do we actually break this down? This isn’t meant to be a long article, just a short one. I’m just here to tell you what we, the DevOps engineers, &lt;em&gt;actually&lt;/em&gt; do. Here to explain what DevOps really is.&lt;/p&gt;

&lt;p&gt;First off, this whole piece was inspired by an interesting definition I came across in Samuel’s article titled &lt;em&gt;“&lt;/em&gt;&lt;a href="https://medium.com/nerd-for-tech/principles-in-striking-the-balance-for-devops-c2f7c70b56f" rel="noopener noreferrer"&gt;&lt;em&gt;Principles in Striking the Balance for DevOps.&lt;/em&gt;&lt;/a&gt;&lt;em&gt;”&lt;/em&gt; He described DevOps as;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“a name given to a culture designed to increase an organization’s ability to deliver applications and services faster than traditional software development processes.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Pretty solid definition, right? But then I paused and thought: okay, what exactly &lt;em&gt;was&lt;/em&gt; the traditional software development process? And are we saying that it’s no longer in use?&lt;/p&gt;

&lt;p&gt;So what’s the big picture with DevOps then? Is it a replacement? A mindset? A fancy label for what engineers were already doing, but now with more YAML files? That’s what I wanted to unpack in plain, human language.&lt;/p&gt;

&lt;p&gt;Back then, things were… let’s say, “well-structured,” which is just a polite way of saying &lt;em&gt;slow&lt;/em&gt;. Developers stayed in their corner writing code, tossing it over to the operations team and saying, “It works on my machine.” The ops folks would then spend nights trying to figure out why it &lt;em&gt;doesn’t&lt;/em&gt; work on the actual server.&lt;/p&gt;

&lt;p&gt;It was a clear divide; devs built, ops deployed, and whenever things broke (which they did, often), both sides pointed fingers until someone fixed it or gave up. It wasn’t that people were lazy or careless; the process itself just didn’t support speed or collaboration. Each team had its own tools, goals, and sometimes even vocabulary.&lt;/p&gt;

&lt;p&gt;Deployments happened maybe once every few weeks or months. If something went wrong, rolling back meant pain, panic, and debugging marathons at 2 a.m. It worked, sort of, but it wasn’t sustainable, especially as software grew more complex and user expectations skyrocketed.&lt;/p&gt;

&lt;p&gt;That’s where DevOps came in: not as a job title (though that’s what it’s become), but as a mindset. It’s about bridging that gap between developers and operations, breaking the silos, and automating the stuff that used to drain everyone’s sanity.&lt;/p&gt;

&lt;p&gt;Now DevOps isn’t just a trendy term or a job title, it’s a way of working that brings developers and operations teams together. It is about collaboration and automation, with the goal of making software development, testing, and deployment more efficient and reliable.&lt;/p&gt;

&lt;p&gt;Instead of handing code off between teams, DevOps promotes shared ownership through automated pipelines that everyone can depend on. Concepts like CI/CD, containers, infrastructure as code, and monitoring all support this approach. The idea is simple: move from “it works on my machine” to “it works everywhere.”&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In practice, DevOps means automating repetitive tasks, testing early and frequently, deploying in small, manageable updates, and continuously monitoring systems in production. These practices lead to faster releases, fewer issues, and a more stable environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But beyond the tools and processes, DevOps is ultimately about culture. It’s built on trust, accountability, and collaboration. The mindset shifts from “developers versus operations” to “we build it, we run it.” When something goes wrong (and it will), teams work together to resolve it.&lt;/p&gt;

&lt;p&gt;In the end, DevOps isn’t defined by a specific set of tools or tasks. It’s defined by how teams work: communicating openly, automating where it makes sense, and focusing on delivering reliable software.&lt;/p&gt;

&lt;p&gt;So, if someone asks what DevOps is, the simplest answer might be: &lt;em&gt;it’s about making sure the software we build actually works everywhere.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I’d love to hear your thoughts on DevOps, if you found this explanation useful, consider giving it a like and a follow for more content like this.&lt;/p&gt;

</description>
      <category>technology</category>
      <category>devops</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
    </item>
    <item>
      <title>Introduction to OpenTelemetry: A DevOps Beginner's Guide</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Sat, 03 May 2025 22:56:52 +0000</pubDate>
      <link>https://dev.to/favxlaw/introduction-to-opentelemetry-a-devops-beginners-guide-2ij6</link>
      <guid>https://dev.to/favxlaw/introduction-to-opentelemetry-a-devops-beginners-guide-2ij6</guid>
      <description>&lt;p&gt;As a DevOps engineer, understanding how your applications are performing in production is crucial to ensuring their reliability and stability. Whether you’re working with microservices, containers, or cloud-native environments, monitoring and observability play a vital role. This is where OpenTelemetry comes in.&lt;br&gt;
In this article, we’ll go over what OpenTelemetry is, why it’s important for your DevOps practices, and introduce some key concepts to help you get started on the path to improved observability.&lt;/p&gt;




&lt;h2&gt;
  
  
  How OpenTelemetry Works
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry is a collection of APIs, libraries, agents, and instrumentation tools designed to help you collect, process, and export telemetry data (traces, metrics, and logs) from your applications. Essentially, it offers a standardized way to capture data, providing you with a clear view of your system’s performance and behavior.&lt;/p&gt;

&lt;p&gt;This data is critical for understanding how your system is functioning, pinpointing issues, and optimizing overall performance. But how does OpenTelemetry gather all this useful information, and how can you use it to improve your system? Let’s break it down step-by-step.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Instrumentation: Making Your App Observable
&lt;/h3&gt;

&lt;p&gt;The first step is instrumentation. Basically, you’re wiring your application to be “observed.” This is where you decide how to generate signals (traces, metrics, and logs) from your code.&lt;/p&gt;

&lt;p&gt;Now, depending on your setup, you can go with either of the options below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manually instrument: This means adding SDK calls in your code, like creating spans around functions or timing a DB query.&lt;/li&gt;
&lt;li&gt;Automatically instrument: OpenTelemetry offers auto-instrumentation for many popular frameworks (like Express for Node.js, Spring Boot for Java, Flask for Python). This way, telemetry is collected without you writing much extra code.&lt;/li&gt;
&lt;/ul&gt;
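A minimal sketch of the manual route in Go with the OpenTelemetry SDK; the tracer name, span name, and attribute are illustrative:

```go
package billing

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

// chargeUser wraps one logical operation in a span. The span becomes a
// child of whatever span is already carried in ctx, so traces nest.
func chargeUser(ctx context.Context, userID string) error {
	ctx, span := otel.Tracer("billing-service").Start(ctx, "charge-user")
	defer span.End()

	span.SetAttributes(attribute.String("user.id", userID))

	_ = ctx // pass this ctx to downstream calls (DB queries, RPCs, ...)
	return nil
}
```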

&lt;p&gt;Once your application is instrumented, you’ll have the data needed to analyze performance, identify issues, and understand system behavior in real time.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Signal Generation: Traces, Metrics, and Logs
&lt;/h3&gt;

&lt;p&gt;Now that your application is instrumented, it starts generating signals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traces track the path of a request as it moves through your system. They’re made up of spans, which capture operations like HTTP calls, database queries, or interactions with external services. Useful for pinpointing bottlenecks or latency issues.&lt;/li&gt;
&lt;li&gt;Metrics are numerical values you can aggregate, graph, and alert on, like request rates, response times, CPU and memory usage, error counts, etc. Ideal for monitoring trends and setting SLOs.&lt;/li&gt;
&lt;li&gt;Logs are timestamped text entries that capture specific events or states. They’re often used for debugging or to provide extra context alongside traces and metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These signals are structured in a consistent format, usually using the &lt;em&gt;OpenTelemetry Protocol (OTLP)&lt;/em&gt;, so they can be processed and exported to your observability backend without friction.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Context Propagation: Keeping Traces Connected Across Services
&lt;/h3&gt;

&lt;p&gt;In a distributed system, especially with microservices, tracking a single request across multiple services can get messy fast. That’s where context propagation comes in.&lt;/p&gt;

&lt;p&gt;OpenTelemetry handles this by passing trace context (things like trace IDs and span IDs) along with each request. For HTTP, this happens via headers; for gRPC, it’s in the metadata. Each service picks up that context and attaches it to its own spans.&lt;/p&gt;

&lt;p&gt;The result: when you view a trace later, you get a complete picture of the request’s path through the system, without any gaps. This is what makes distributed tracing actually usable.&lt;/p&gt;
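Concretely, for HTTP the context travels in the W3C traceparent header. This stdlib-only sketch (an illustration of the header’s shape, not the OpenTelemetry API itself) pulls out the IDs that keep spans connected:

```go
package main

import (
	"fmt"
	"strings"
)

// parseTraceparent splits a W3C traceparent header into its four parts:
// version, trace-id (32 hex chars), parent span-id (16 hex chars), flags.
func parseTraceparent(h string) (version, traceID, spanID, flags string, ok bool) {
	parts := strings.Split(h, "-")
	if len(parts) != 4 || len(parts[1]) != 32 || len(parts[2]) != 16 {
		return "", "", "", "", false
	}
	return parts[0], parts[1], parts[2], parts[3], true
}

func main() {
	_, traceID, spanID, _, ok := parseTraceparent(
		"00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
	// Every service that receives this header attaches its spans to the
	// same trace ID, which is what keeps the distributed trace in one piece.
	fmt.Println(ok, traceID, spanID)
}
```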

&lt;h3&gt;
  
  
  4. The Collector: Centralized Ingestion and Processing
&lt;/h3&gt;

&lt;p&gt;When your app starts producing telemetry data, you need a way to handle it. That’s where the OpenTelemetry Collector comes in.&lt;br&gt;
Think of it as a central point that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ingests traces, metrics, and logs from instrumented services&lt;/li&gt;
&lt;li&gt;Processes the data: filtering, redacting, or enriching it with metadata&lt;/li&gt;
&lt;li&gt;Exports it to your monitoring or observability backend&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By separating data collection from where the data ends up, the Collector gives you a clean layer of control. You can standardize telemetry pipelines, apply consistent transformations, and avoid tight coupling with any single vendor.&lt;/p&gt;
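A minimal Collector configuration wiring those three stages together; the endpoints and backend choices here are illustrative:

```yaml
receivers:
  otlp:                 # ingest OTLP traces/metrics/logs from your services
    protocols:
      grpc:
      http:

processors:
  batch:                # batch telemetry before export to reduce overhead

exporters:
  otlp/jaeger:          # send traces to a Jaeger instance speaking OTLP
    endpoint: "jaeger:4317"
  prometheus:           # expose metrics for Prometheus to scrape
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```

Swapping backends later means editing the exporters section, not your application code.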

&lt;h3&gt;
  
  
  5. Exporting: Routing Telemetry to the Right Tools
&lt;/h3&gt;

&lt;p&gt;After the Collector processes your telemetry, it sends it off to the systems you use to monitor and troubleshoot.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Metrics → Prometheus, Cloud Monitoring, etc.&lt;/li&gt;
&lt;li&gt;Traces → Jaeger, Zipkin, or other tracing backends&lt;/li&gt;
&lt;li&gt;Logs → Log aggregation tools or cloud logging platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;The best part? You don’t need to modify your application every time you want to switch or add a new backend. Just update the Collector’s config and you're good to go.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Makes OpenTelemetry Different?
&lt;/h2&gt;

&lt;p&gt;So maybe you’re thinking, “Why bother with OpenTelemetry? I already use Prometheus for metrics or Jaeger for tracing; isn’t that enough?”&lt;/p&gt;

&lt;p&gt;Fair question. But here’s the thing: OpenTelemetry isn’t here to replace those tools; it’s here to bring them together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Unified Telemetry Framework:&lt;/strong&gt; Instead of juggling separate tooling for traces, metrics, and logs, OpenTelemetry gives you a single, consistent way to collect all three. Same concepts, same configuration patterns, less cognitive load for your team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Vendor-Neutral by Design:&lt;/strong&gt; &lt;br&gt;
You can pipe your data to Prometheus today, switch to Datadog next week, or go full open-source with Jaeger and Loki, and your application code stays exactly the same. No proprietary agents. No rewriting instrumentation. You own the pipeline, not the vendor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Pluggable and Extensible Pipeline:&lt;/strong&gt; &lt;br&gt;
The Collector acts as a configurable processing layer. Need to drop certain spans, mask sensitive fields, or add metadata before sending data out? Just plug in the processors you need; no custom tooling required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Community-Driven and CNCF-Backed&lt;/strong&gt;&lt;br&gt;
OpenTelemetry isn’t a startup’s side project. It’s part of the Cloud Native Computing Foundation (same folks behind Kubernetes), and it’s backed by a huge community of engineers and observability vendors. That means fast updates, strong documentation, and wide adoption.&lt;/p&gt;

&lt;p&gt;In summary, OpenTelemetry doesn’t replace Prometheus or Jaeger; it complements and unifies them. It gives you a consistent, flexible foundation for observability, no matter what your stack looks like.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Thanks for reading. Let me know in the comments if there’s a topic you’d like me to cover next. I know it’s been a while since I last posted. I’m still working on those practical, hands-on articles I promised, so keep an eye out. They’re on the way.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>beginners</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Squash, Rebase, Merge: Keeping Your CI/CD Pipelines Clean and Efficient 🚀</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Wed, 26 Mar 2025 05:05:28 +0000</pubDate>
      <link>https://dev.to/favxlaw/squash-rebase-merge-keeping-your-cicd-pipelines-clean-and-efficient-8cc</link>
      <guid>https://dev.to/favxlaw/squash-rebase-merge-keeping-your-cicd-pipelines-clean-and-efficient-8cc</guid>
      <description>&lt;p&gt;In DevOps, efficiency is everything. A messy Git history slows down pipelines, causes unnecessary conflicts, and makes debugging harder.&lt;/p&gt;

&lt;p&gt;When Git workflows are unmanaged, CI/CD pipelines can stall due to conflicting merge commits, and developers waste time digging through cluttered commit histories.&lt;/p&gt;

&lt;p&gt;But with a clean Git workflow:&lt;br&gt;
✅ Faster builds – No extra history slowing things down&lt;br&gt;
✅ Fewer merge conflicts – Smoother collaboration, less frustration&lt;br&gt;
✅ Clearer logs – Easier debugging and rollbacks&lt;/p&gt;

&lt;p&gt;A structured Git history isn’t just about keeping things tidy, it directly improves pipeline speed, code quality, and developer productivity. Using squashing, rebasing, and merging correctly keeps your CI/CD pipeline fast, reliable, and hassle-free.&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Squashing, Rebasing, and Merging – The Right Tool for the Job&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Git is a powerful version control system, but &lt;strong&gt;how you manage your commits can either streamline or slow down your CI/CD pipeline&lt;/strong&gt;. A cluttered history with unnecessary commits leads to:  &lt;/p&gt;

&lt;p&gt;❌ Slower builds due to excessive commit processing&lt;br&gt;&lt;br&gt;
❌ Merge conflicts that could have been avoided&lt;br&gt;&lt;br&gt;
❌ Hard-to-follow commit logs, making debugging difficult  &lt;/p&gt;

&lt;p&gt;To keep your workflow clean and efficient, you need to &lt;strong&gt;use the right Git strategy at the right time&lt;/strong&gt;. Squashing, rebasing, and merging each serve a unique purpose. Here’s how they work and when to use them.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;When to Squash: Keeping PRs Clean&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What is Squashing?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Squashing combines multiple commits into a single commit. This is useful when a pull request (PR) contains many small, incremental commits that don’t need to be preserved individually. Instead of polluting the Git history with minor fixes and adjustments, you &lt;strong&gt;merge everything into one meaningful commit&lt;/strong&gt; before merging into the main branch.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Squash?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Keeps commit history clean&lt;/strong&gt; – Instead of “Fixed bug,” “Fixed typo,” and “Final final fix,” you get a single commit with a meaningful message.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Reduces clutter&lt;/strong&gt; – A clean commit history makes it easier to review and debug.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Best for feature branches&lt;/strong&gt; – Squash commits before merging into &lt;code&gt;main&lt;/code&gt; or &lt;code&gt;develop&lt;/code&gt; to keep the history readable.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Scenario&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You’re working on a new &lt;strong&gt;login feature&lt;/strong&gt;. During development, you make multiple commits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit 1: Added login form  
commit 2: Fixed validation error  
commit 3: Adjusted button alignment  
commit 4: Updated error messages  
commit 5: Finalized login feature  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of merging all five commits into &lt;code&gt;main&lt;/code&gt;, squash them into one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit 1: Implemented login feature with validation and UI fixes  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  &lt;strong&gt;How to Squash Commits&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;&lt;strong&gt;Interactive Rebase (Local Changes)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git rebase &lt;span class="nt"&gt;-i&lt;/span&gt; HEAD~n  &lt;span class="c"&gt;# 'n' is the number of commits to squash&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens an interactive editor where you replace &lt;code&gt;"pick"&lt;/code&gt; with &lt;code&gt;"squash"&lt;/code&gt; for the commits you want to combine.&lt;/p&gt;
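&lt;p&gt;For the login feature above, the rebase todo list might look like this (the commit hashes are made up for illustration). Keep &lt;code&gt;pick&lt;/code&gt; on the first commit and mark the rest as &lt;code&gt;squash&lt;/code&gt;:&lt;/p&gt;

```
pick   a1b2c3d Added login form
squash e4f5a6b Fixed validation error
squash c7d8e9f Adjusted button alignment
squash 0d1e2f3 Updated error messages
squash 4a5b6c7 Finalized login feature
```

&lt;p&gt;On saving, Git combines all five into a single commit and prompts you for the final commit message.&lt;/p&gt;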

&lt;p&gt;&lt;strong&gt;Squash on GitHub (PR Merging)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When merging a PR, select &lt;strong&gt;"Squash &amp;amp; Merge"&lt;/strong&gt; to combine all commits into one before merging.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;When to Rebase: Keeping History Linear&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What is Rebasing?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Rebasing &lt;strong&gt;rewrites history&lt;/strong&gt; by moving your branch’s commits on top of the latest &lt;code&gt;main&lt;/code&gt; branch, as if your feature branch was built on the most up-to-date code. It prevents unnecessary merge commits and maintains a &lt;strong&gt;linear&lt;/strong&gt; commit history.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Rebase?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Keeps history clean&lt;/strong&gt; – Avoids merge commits like &lt;code&gt;Merge branch 'main' into feature-xyz&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Ensures a smooth merge&lt;/strong&gt; – Applying your changes on top of the latest &lt;code&gt;main&lt;/code&gt; helps prevent conflicts later.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Best for feature branches&lt;/strong&gt; – Before merging a feature, rebase it onto &lt;code&gt;main&lt;/code&gt; to ensure it integrates smoothly.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Scenario&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You started working on a &lt;strong&gt;new checkout feature&lt;/strong&gt; based on an older version of &lt;code&gt;main&lt;/code&gt;. Meanwhile, other developers pushed updates to &lt;code&gt;main&lt;/code&gt;. Now, your branch is behind:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* (main) commit A - Updated payment logic  
* (main) commit B - Fixed checkout bug  
|
|--- (feature-checkout) commit C - Added checkout form  
|--- (feature-checkout) commit D - Implemented discount logic  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of merging &lt;code&gt;main&lt;/code&gt; into your branch (which would create an unnecessary merge commit), &lt;strong&gt;rebase it&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git fetch origin
git rebase origin/main  &lt;span class="c"&gt;# Moves your commits on top of the latest main&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After rebasing, your history is &lt;strong&gt;clean and linear&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* (main) commit A - Updated payment logic  
* (main) commit B - Fixed checkout bug  
* (feature-checkout) commit C - Added checkout form  
* (feature-checkout) commit D - Implemented discount logic  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Handling Conflicts During Rebase&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If there are conflicts, Git will stop and ask you to resolve them. Once fixed, continue the rebase:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;
git rebase &lt;span class="nt"&gt;--continue&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;When to Merge: Preserving Full History&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What is Merging?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Merging combines one branch into another, preserving all commits and commit messages &lt;strong&gt;exactly as they were made&lt;/strong&gt;. This keeps a &lt;strong&gt;detailed commit history&lt;/strong&gt; but may introduce merge commits that clutter the log.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use Merge?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Preserves full commit history&lt;/strong&gt; – Useful for tracking all incremental changes.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Best for team collaborations&lt;/strong&gt; – When multiple developers contribute to a feature branch, merging keeps individual contributions visible.&lt;br&gt;&lt;br&gt;
🔹 &lt;strong&gt;Used in Gitflow workflows&lt;/strong&gt; – Commonly used for merging &lt;code&gt;develop → main&lt;/code&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Scenario&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Your team worked on a &lt;strong&gt;new analytics dashboard&lt;/strong&gt; in a shared branch. The commit history looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit X: Setup analytics API  
commit Y: Implemented dashboard UI  
commit Z: Added data visualization  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since multiple developers contributed, merging into &lt;code&gt;main&lt;/code&gt; without squashing retains authorship and commit details.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Merge Properly&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Basic Merge (Fast-Forward)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git merge feature-branch  &lt;span class="c"&gt;# Merges feature branch into current branch&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;main&lt;/code&gt; has no new commits of its own since you branched, Git will &lt;strong&gt;fast-forward&lt;/strong&gt; (simply move the branch pointer) without creating a merge commit.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preserving a Merge Commit&lt;/strong&gt;&lt;br&gt;
To keep a merge commit for tracking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git merge &lt;span class="nt"&gt;--no-ff&lt;/span&gt; feature-branch  &lt;span class="c"&gt;# Creates a merge commit even if a fast-forward is possible&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;Which One Should You Use?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;✔️ &lt;strong&gt;Squash&lt;/strong&gt; → When cleaning up a messy PR with too many small commits.&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;Rebase&lt;/strong&gt; → When syncing with &lt;code&gt;main&lt;/code&gt; before merging to avoid extra merge commits.&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;Merge&lt;/strong&gt; → When you want to &lt;strong&gt;preserve full commit history&lt;/strong&gt;, especially in team collaborations.  &lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Git Strategies for DevOps Teams&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Choosing the right Git strategy is essential for maintaining an efficient CI/CD workflow. A poorly structured process can cause bottlenecks, increase merge conflicts, and slow down deployments. On the other hand, a well-defined strategy keeps the development pipeline smooth, ensuring faster releases and better collaboration.  &lt;/p&gt;

&lt;p&gt;There are two primary approaches teams use: &lt;strong&gt;Feature Branch Workflow&lt;/strong&gt; and &lt;strong&gt;Trunk-Based Development (TBD)&lt;/strong&gt;. Each has its place, and automation plays a key role in enforcing these workflows.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature Branch Workflow&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;This is the traditional approach where developers create separate branches for each feature, bug fix, or enhancement. The branch remains isolated from the main branch until the work is complete and ready for review. Merging happens through &lt;strong&gt;pull requests (PRs)&lt;/strong&gt;, allowing for code reviews before the changes are integrated.  &lt;/p&gt;

&lt;p&gt;This workflow is often used in structured development models like Gitflow, where teams work on long-running feature branches before merging into a staging or main branch. It provides stability and makes it easier to review code, but if branches are kept open for too long, they can diverge significantly from the main branch, leading to complex merge conflicts.  &lt;/p&gt;

&lt;p&gt;To prevent this, developers should &lt;strong&gt;rebase frequently&lt;/strong&gt;, ensuring their feature branch stays in sync with the latest changes from the main branch. Automated tests and linters should be triggered on every PR to catch potential issues early. Before merging, it’s good practice to &lt;strong&gt;squash commits&lt;/strong&gt; into a single, meaningful commit to keep the Git history clean.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trunk-Based Development (TBD)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Unlike the feature branch workflow, Trunk-Based Development encourages &lt;strong&gt;short-lived branches&lt;/strong&gt; or even &lt;strong&gt;direct commits to main&lt;/strong&gt;. Instead of working in isolation for extended periods, developers integrate changes continuously—sometimes multiple times a day. This approach ensures that the main branch is always in a deployable state and minimizes the complexity of long-lived branches.  &lt;/p&gt;

&lt;p&gt;Frequent integration reduces merge conflicts and makes debugging easier since each change set is small and easier to track. However, this strategy requires &lt;strong&gt;strict CI/CD enforcement&lt;/strong&gt; to prevent unstable code from being deployed. Automated testing and static analysis tools must be in place to verify every commit before it reaches production.  &lt;/p&gt;

&lt;p&gt;Since features may be merged incrementally, &lt;strong&gt;feature flags&lt;/strong&gt; are often used to hide unfinished work while allowing continuous integration. This allows developers to merge work early without exposing incomplete features to end users.  &lt;/p&gt;

&lt;p&gt;Rolling back changes in Trunk-Based Development should be handled with &lt;strong&gt;&lt;code&gt;git revert&lt;/code&gt;&lt;/strong&gt; instead of &lt;code&gt;git reset&lt;/code&gt; to maintain a clear history of what was changed and why.  &lt;/p&gt;
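&lt;p&gt;A quick sketch of why &lt;code&gt;git revert&lt;/code&gt; is the safer rollback (the repo, file, and commit messages here are invented for illustration):&lt;/p&gt;

```shell
# Throwaway repo comparing revert vs reset (illustrative only)
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "price: 100" > config.txt
git add config.txt && git commit -qm "Set base price"
echo "price: 50" > config.txt
git add config.txt && git commit -qm "Apply broken discount"

# Safe rollback: revert adds a THIRD commit that restores the old content,
# so shared history is never rewritten
git revert --no-edit HEAD
cat config.txt               # back to "price: 100"
git log --oneline            # both original commits plus the revert

# By contrast, 'git reset --hard HEAD~1' would erase the bad commit entirely,
# breaking anyone who already pulled it from the trunk.
```

&lt;p&gt;The revert commit documents both what was rolled back and why, which is exactly the audit trail you want on a shared trunk.&lt;/p&gt;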
&lt;h4&gt;
  
  
  &lt;strong&gt;Choosing the Right Strategy&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The best approach depends on the team’s structure and release process. &lt;strong&gt;Feature Branch Workflow&lt;/strong&gt; works well for teams that need structured releases, code reviews, and stability. &lt;strong&gt;Trunk-Based Development&lt;/strong&gt; is better suited for high-velocity DevOps teams where frequent deployments and quick iteration cycles are necessary.  &lt;/p&gt;

&lt;p&gt;Some teams adopt a hybrid model—using feature branches for significant changes while following Trunk-Based Development for smaller, incremental updates. Regardless of the approach, automation is key to maintaining a clean workflow.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Enforcing Git Workflows with CI/CD Automation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Manual enforcement of Git best practices is not scalable. Teams must automate workflow rules to ensure consistency and reduce human errors.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Branch protection rules&lt;/strong&gt; should prevent direct commits to &lt;code&gt;main&lt;/code&gt; in feature branch workflows.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-commit hooks&lt;/strong&gt; can enforce commit message formats and prevent invalid commits.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PR approvals and CI/CD checks&lt;/strong&gt; should be mandatory before merging changes.
&lt;/li&gt;
&lt;/ul&gt;
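&lt;p&gt;As a sketch of the hook idea, the check below rejects commit messages that don’t follow a &lt;code&gt;type: subject&lt;/code&gt; convention. The exact pattern and allowed types are assumptions; adapt them to your team’s format:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of a commit-msg hook check. Returns non-zero when the first line
# of the message file doesn't match a "type: subject" convention
# (the allowed types below are an assumption, not a standard).
check_commit_msg() {
  head -n 1 "$1" | grep -qE '^(feat|fix|docs|chore|refactor|test)(\(.+\))?: .+'
}

# Inside .git/hooks/commit-msg you would wire it up as:
#   check_commit_msg "$1" || { echo "Bad commit message format" >&2; exit 1; }
```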
&lt;h4&gt;
  
  
  &lt;strong&gt;Automating Git Workflow Enforcement with GitHub Actions&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;GitHub Actions can be used to enforce rules such as requiring squashed commits and ensuring branches are rebased before merging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enforce-workflow&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure commits are squashed&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git log --format="%s" origin/main..HEAD | wc -l | grep -q '^1$' || exit &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prevent merging without rebase&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git merge-base --is-ancestor main HEAD || exit &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This automation ensures that multiple commits are squashed before merging and that branches are properly rebased.  &lt;/p&gt;

&lt;p&gt;Beyond enforcing Git hygiene, integrating &lt;strong&gt;automated code quality checks&lt;/strong&gt; with tools like &lt;strong&gt;SonarQube, ESLint, or Prettier&lt;/strong&gt; ensures that coding standards are followed. Requiring all tests to pass before merging prevents broken changes from entering production.&lt;/p&gt;




&lt;p&gt;A well-structured Git workflow is essential for maintaining clean CI/CD pipelines, reducing conflicts, and ensuring efficient collaboration. Whether your team follows a &lt;strong&gt;Feature Branch Workflow&lt;/strong&gt; for stability or &lt;strong&gt;Trunk-Based Development&lt;/strong&gt; for faster iterations, enforcing best practices through automation is key to keeping development smooth and predictable.&lt;/p&gt;

&lt;p&gt;By using squashing, rebasing, and merging correctly, along with CI/CD automation, teams can improve code quality, streamline deployments, and eliminate unnecessary complexity in their repositories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thanks for reading! If you found this helpful, follow me for more DevOps concepts and best practices. If there’s a topic you’d like me to break down next, let me know! 🚀&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>cicd</category>
      <category>programming</category>
    </item>
    <item>
      <title>ELK Stack Explained: How Elasticsearch, Logstash &amp; Kibana Work Together for Real-Time Data Insights.</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Mon, 24 Mar 2025 11:38:17 +0000</pubDate>
      <link>https://dev.to/favxlaw/elk-stack-explained-how-elasticsearch-logstash-kibana-work-together-for-real-time-data-insights-2dfa</link>
      <guid>https://dev.to/favxlaw/elk-stack-explained-how-elasticsearch-logstash-kibana-work-together-for-real-time-data-insights-2dfa</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to ELK Stack
&lt;/h2&gt;

&lt;p&gt;Logs are everywhere: every app, server, and system generates them. But when something goes wrong, digging through endless log files to find the issue can be overwhelming. This is where ELK comes in, turning raw log data into clear, searchable, visual insights.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the ELK Stack?&lt;/strong&gt;&lt;br&gt;
The ELK Stack is an open-source log management and data analytics toolset made up of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Elasticsearch&lt;/strong&gt; – A search engine that stores and retrieves log data quickly.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logstash&lt;/strong&gt; – A tool that collects, processes, and forwards logs.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kibana&lt;/strong&gt; – A dashboard for visualizing and analyzing log data.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these tools make it easy to collect, search, and analyze logs in real time, helping teams troubleshoot issues, monitor systems, and make data-driven decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is ELK Important?&lt;/strong&gt;&lt;br&gt;
Modern applications generate tons of log data, and manually searching through it isn’t practical. ELK helps by:&lt;br&gt;
✔️ Finding issues fast – Instantly search massive log files.&lt;br&gt;
✔️ Handling large data – Works across multiple servers and systems.&lt;br&gt;
✔️ Turning data into insights – Creates real-time dashboards for monitoring and decision-making.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Elasticsearch vs. Traditional RDBMS: A Developer’s Perspective&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you're used to working with relational databases like &lt;strong&gt;MySQL&lt;/strong&gt; or &lt;strong&gt;PostgreSQL&lt;/strong&gt;, switching to &lt;strong&gt;Elasticsearch&lt;/strong&gt; might feel like stepping into a whole new world. But Elasticsearch is just another way to &lt;strong&gt;store, retrieve, and search data&lt;/strong&gt;—the difference is in how it’s structured and optimized.  &lt;/p&gt;

&lt;p&gt;Instead of &lt;strong&gt;tables and rows&lt;/strong&gt;, Elasticsearch works with &lt;strong&gt;documents and indices&lt;/strong&gt;. Let’s break it down using concepts you already know.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Thinking in Tables vs. Thinking in Documents&lt;/strong&gt;
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;RDBMS Concept&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Elasticsearch Equivalent&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Database&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Index&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Table&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Type&lt;/strong&gt; (deprecated in newer versions)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Row&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Document&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Column&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Field&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Schema&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Mapping&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In relational databases, data is neatly organized into &lt;strong&gt;tables&lt;/strong&gt; with predefined schemas—every row must follow a fixed structure.  &lt;/p&gt;

&lt;p&gt;Elasticsearch, on the other hand, is &lt;strong&gt;schema-less (to an extent)&lt;/strong&gt;. Instead of rows, it stores &lt;strong&gt;JSON documents&lt;/strong&gt; inside an &lt;strong&gt;index&lt;/strong&gt; (similar to a table). Each document can have &lt;strong&gt;a flexible structure&lt;/strong&gt;, making it great for &lt;strong&gt;semi-structured or dynamic data&lt;/strong&gt;.  &lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;How Data is Stored: Rows vs. JSON Documents&lt;/strong&gt;
&lt;/h4&gt;
&lt;h5&gt;
  
  
  🔹 &lt;strong&gt;RDBMS Example (Users Table in MySQL)&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Here’s how you’d define a simple &lt;code&gt;users&lt;/code&gt; table in MySQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  🔹 &lt;strong&gt;Elasticsearch Equivalent (JSON Document in an Index)&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;In Elasticsearch, each user record is stored as a &lt;strong&gt;JSON document&lt;/strong&gt; inside an index:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;json
{
    "id": 1,
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "age": 30
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of &lt;strong&gt;inserting data as rows&lt;/strong&gt;, Elasticsearch stores each entry as a &lt;strong&gt;self-contained JSON document&lt;/strong&gt;. This structure allows for &lt;strong&gt;fast searching and flexible querying&lt;/strong&gt; without requiring rigid table schemas.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Querying: SQL vs. Elasticsearch Query DSL&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;One of the biggest differences between RDBMS and Elasticsearch is &lt;strong&gt;how you search for data&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Traditional databases use &lt;strong&gt;SQL&lt;/strong&gt;, while Elasticsearch has its own &lt;strong&gt;Query DSL (Domain-Specific Language)&lt;/strong&gt;, which is JSON-based.  &lt;/p&gt;

&lt;h5&gt;
  
  
  🔹 &lt;strong&gt;Finding all users aged 30 in MySQL (SQL Query)&lt;/strong&gt;
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  🔹 &lt;strong&gt;Finding all users aged 30 in Elasticsearch (Query DSL)&lt;/strong&gt;
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;json
{
    "query": {
        "match": {
            "age": 30
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
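&lt;p&gt;To actually run that query, you send it to the index’s &lt;code&gt;_search&lt;/code&gt; endpoint. In the request style used by Kibana’s Dev Tools (the &lt;code&gt;users&lt;/code&gt; index name is assumed):&lt;/p&gt;

```
POST /users/_search
{
    "query": {
        "match": {
            "age": 30
        }
    }
}
```

&lt;p&gt;The response comes back as JSON too, with the matching documents under &lt;code&gt;hits.hits&lt;/code&gt; and a relevance score for each.&lt;/p&gt;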






&lt;h3&gt;
  
  
  Elasticsearch
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Fast Search &amp;amp; Distributed Indexing&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;At the core of the ELK Stack is &lt;strong&gt;Elasticsearch&lt;/strong&gt;, a &lt;strong&gt;highly scalable, distributed search engine&lt;/strong&gt; that enables &lt;strong&gt;rapid data retrieval&lt;/strong&gt;. Unlike traditional databases that are optimized for structured data and transactions, Elasticsearch is designed for:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full-text search&lt;/strong&gt; – Finds relevant results instantly, even in massive datasets.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time indexing&lt;/strong&gt; – New data becomes searchable almost immediately.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; – Distributes data across multiple nodes to handle petabytes of information.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s built on &lt;strong&gt;Apache Lucene&lt;/strong&gt;, a powerful search library, and uses an &lt;strong&gt;inverted index&lt;/strong&gt;, a structure specifically optimized for search queries.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;How Elasticsearch Works&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Indices &amp;amp; Documents – The Building Blocks&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Elasticsearch doesn’t use tables and rows like a relational database. Instead, it stores data as &lt;strong&gt;JSON documents&lt;/strong&gt; inside an &lt;strong&gt;index&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Think of an index like a database&lt;/strong&gt;, and each document inside it as a record. Unlike relational databases, these documents can have different structures—offering &lt;strong&gt;flexibility&lt;/strong&gt; for handling dynamic or semi-structured data.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2️⃣ Shards &amp;amp; Replicas – How Elasticsearch Scales&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Handling massive amounts of data requires scalability, and that’s where &lt;strong&gt;sharding&lt;/strong&gt; and &lt;strong&gt;replication&lt;/strong&gt; come in.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shards&lt;/strong&gt;: Elasticsearch &lt;strong&gt;splits an index into smaller pieces&lt;/strong&gt; (shards) to distribute data across multiple nodes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replicas&lt;/strong&gt;: Each shard can have &lt;strong&gt;replicas&lt;/strong&gt;—copies stored across different nodes to improve &lt;strong&gt;redundancy and performance&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture makes Elasticsearch both &lt;strong&gt;fault-tolerant&lt;/strong&gt; and &lt;strong&gt;lightning fast&lt;/strong&gt;, even when dealing with billions of records.  &lt;/p&gt;
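&lt;p&gt;Shard and replica counts are set when an index is created. Sticking with the JSON request style above, a hypothetical &lt;code&gt;logs&lt;/code&gt; index split into three primary shards, each with one replica, would look like:&lt;/p&gt;

```
PUT /logs
{
    "settings": {
        "number_of_shards": 3,
        "number_of_replicas": 1
    }
}
```

&lt;p&gt;With one replica per shard, every piece of data lives on at least two nodes, so a single node failure costs you no data and no availability.&lt;/p&gt;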
&lt;h4&gt;
  
  
  &lt;strong&gt;Why Elasticsearch is Powerful for Log Analysis&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Imagine you’re managing a cloud-based web application that logs thousands of events every second. A typical log entry might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-03-24T12:34:56Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Database connection failed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"authentication"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1234&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once indexed in Elasticsearch, you can &lt;strong&gt;instantly&lt;/strong&gt; search for all log entries from the authentication service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"query"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"match"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"authentication"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes Elasticsearch incredibly powerful for &lt;strong&gt;log analysis, large-scale search applications, and real-time data insights&lt;/strong&gt;.  &lt;/p&gt;
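&lt;p&gt;The same idea works programmatically. Here is a minimal Python sketch (the field names come from the example log above; the helper itself is hypothetical) that builds such a query body for any service, optionally narrowed to one log level:&lt;br&gt;
&lt;/p&gt;

```python
import json

def build_log_query(service, level=None):
    # Build an Elasticsearch query body matching one service,
    # optionally restricted to a single log level.
    must = [{"match": {"service": service}}]
    if level:
        must.append({"match": {"level": level}})
    return {"query": {"bool": {"must": must}}}

# Same shape as the hand-written query above, plus a level filter.
body = build_log_query("authentication", level="ERROR")
print(json.dumps(body, indent=2))
```

&lt;p&gt;Posting a body like this to an index’s search endpoint would return the matching documents.&lt;/p&gt;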

&lt;h4&gt;
  
  
  &lt;strong&gt;Logstash – The Data Pipeline&lt;/strong&gt;
&lt;/h4&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Collecting, Transforming, and Shipping Data&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;While Elasticsearch is great for &lt;strong&gt;searching and analyzing data&lt;/strong&gt;, it doesn’t &lt;strong&gt;collect or process&lt;/strong&gt; data on its own. That’s where &lt;strong&gt;Logstash&lt;/strong&gt; comes in.  &lt;/p&gt;

&lt;p&gt;Logstash acts as a &lt;strong&gt;data pipeline&lt;/strong&gt; that:  &lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Collects data&lt;/strong&gt; from multiple sources (logs, databases, cloud services).&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Transforms it&lt;/strong&gt; into a structured format (parsing, filtering, masking sensitive data).&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Sends it&lt;/strong&gt; to Elasticsearch (or other destinations like Kafka).  &lt;/p&gt;
&lt;h5&gt;
  
  
  &lt;strong&gt;How Logstash Works&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Logstash follows a simple &lt;strong&gt;ETL (Extract, Transform, Load)&lt;/strong&gt; workflow.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Input – Collecting Data&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Logstash gathers logs from multiple sources:&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Files&lt;/strong&gt; – System logs, application logs, web server logs.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Databases&lt;/strong&gt; – MySQL, PostgreSQL, MongoDB.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Cloud Services&lt;/strong&gt; – AWS CloudWatch, Google Cloud Logs.  &lt;/p&gt;

&lt;p&gt;Example: &lt;strong&gt;Collecting Logs from a File&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;input {&lt;/span&gt;
  &lt;span class="s"&gt;file {&lt;/span&gt;
    &lt;span class="s"&gt;path =&amp;gt; "/var/log/syslog"&lt;/span&gt;
    &lt;span class="s"&gt;start_position =&amp;gt; "beginning"&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2️⃣ Filter – Transforming &amp;amp; Enriching Data&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Before sending data to Elasticsearch, Logstash can &lt;strong&gt;clean, modify, and enrich logs&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Parse JSON logs&lt;/strong&gt; for better searchability.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Mask sensitive data&lt;/strong&gt; like passwords.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Geo-location enrichment&lt;/strong&gt; (find a user’s country based on IP).  &lt;/p&gt;

&lt;p&gt;Example: &lt;strong&gt;Masking Passwords in Logs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;filter {&lt;/span&gt;
  &lt;span class="s"&gt;json {&lt;/span&gt;
    &lt;span class="s"&gt;source =&amp;gt; "message"&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
  &lt;span class="s"&gt;mutate {&lt;/span&gt;
    &lt;span class="s"&gt;gsub =&amp;gt; ["password", ".*", "[REDACTED]"]&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3️⃣ Output – Sending Data to Elasticsearch&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
After processing, Logstash ships logs to Elasticsearch.  &lt;/p&gt;

&lt;p&gt;Example: &lt;strong&gt;Indexing logs in Elasticsearch&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;output {&lt;/span&gt;
  &lt;span class="s"&gt;elasticsearch {&lt;/span&gt;
    &lt;span class="s"&gt;hosts =&amp;gt; ["http://localhost:9200"]&lt;/span&gt;
    &lt;span class="s"&gt;index =&amp;gt; "logs-%{+YYYY.MM.dd}"&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;strong&gt;automates log ingestion&lt;/strong&gt; and ensures that logs are structured, searchable, and ready for analysis in Kibana.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Kibana – Bringing Data to Life&lt;/strong&gt;
&lt;/h4&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Dashboards, Analytics, and Insights&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Now that our logs are in Elasticsearch, how do we make sense of all this data? &lt;strong&gt;Kibana&lt;/strong&gt; makes it &lt;strong&gt;visual and interactive&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Kibana is a &lt;strong&gt;dashboard and analytics tool&lt;/strong&gt; that allows you to:&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Monitor logs and metrics in real-time.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Run searches and filter data with ease.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Set up alerts for anomalies or critical issues.&lt;/strong&gt;  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Key Features of Kibana&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Dashboards &amp;amp; Visualizations&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Kibana lets you build &lt;strong&gt;custom dashboards&lt;/strong&gt; using bar charts, line graphs, pie charts, and heatmaps.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See &lt;strong&gt;server performance trends&lt;/strong&gt; over time.
&lt;/li&gt;
&lt;li&gt;Track &lt;strong&gt;error rates&lt;/strong&gt; in real time.
&lt;/li&gt;
&lt;li&gt;Visualize &lt;strong&gt;traffic spikes&lt;/strong&gt; on your website.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2️⃣ Discover &amp;amp; Search&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Kibana’s search interface helps drill down into logs.  &lt;/p&gt;

&lt;p&gt;For example, you can filter logs to show only:&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;ERROR messages from a specific service&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;API requests made by a certain user&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✔️ &lt;strong&gt;Security alerts from a particular IP range&lt;/strong&gt;  &lt;/p&gt;
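&lt;p&gt;In Kibana’s query bar, such filters are short KQL expressions. For example (field names assume the log structure shown earlier and are illustrative):&lt;br&gt;
&lt;/p&gt;

```
level : "ERROR" and service : "authentication"
user_id : 1234
message : *connection*
```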

&lt;p&gt;&lt;strong&gt;3️⃣ Spotting Patterns &amp;amp; Trends with Kibana&lt;/strong&gt;&lt;br&gt;
Kibana makes it easy to spot patterns in your data over time. With simple tools like Timelion and Lens, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See sudden jumps in website visitors and understand why.&lt;/li&gt;
&lt;li&gt;Connect system crashes to specific events to troubleshoot faster.&lt;/li&gt;
&lt;li&gt;Identify trends in user activity to improve your services.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The ELK Stack (Elasticsearch, Logstash, and Kibana) turns raw logs into searchable, visual insights for better monitoring and decision-making. Elasticsearch handles fast searches, Logstash collects and processes data, and Kibana brings it to life with dashboards. Together, they power real-time analytics for DevOps, security, and business intelligence.&lt;/p&gt;

&lt;p&gt;Next, we’ll deploy ELK on AWS, covering setup, scaling, and optimization. Stay tuned for the hands-on guide! 🚀&lt;/p&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>database</category>
      <category>aws</category>
    </item>
    <item>
      <title>🚀 InfluxDB Architecture: A Beginner’s Guide for DevOps Engineers</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Sun, 02 Mar 2025 23:24:59 +0000</pubDate>
      <link>https://dev.to/favxlaw/influxdb-architecture-a-beginners-guide-for-devops-engineers-39n5</link>
      <guid>https://dev.to/favxlaw/influxdb-architecture-a-beginners-guide-for-devops-engineers-39n5</guid>
      <description>&lt;p&gt;Shoutout to &lt;a class="mentioned-user" href="https://dev.to/madhurima_rawat"&gt;@madhurima_rawat&lt;/a&gt;  for requesting a deep dive into InfluxDB architecture after reading my Prometheus and Grafana breakdown! 🚀&lt;/p&gt;

&lt;p&gt;If you're a DevOps engineer trying to understand how InfluxDB stacks up against Prometheus, you’re in the right place. We’ll keep it simple, comparing their architectures side by side, so you know when to use which tool.&lt;/p&gt;

&lt;p&gt;Also, if there's a DevOps concept you'd love me to explain next, drop a comment. I just might write about it next! 😉&lt;/p&gt;

&lt;p&gt;Now, let’s dive in! 🔥&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Core of InfluxDB Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;InfluxDB is a high-performance time-series database built to handle massive amounts of timestamped data efficiently. It is widely used for monitoring, real-time analytics, and IoT applications.&lt;/p&gt;

&lt;p&gt;InfluxDB is designed around four key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage Engine (TSM &amp;amp; TSI)&lt;/li&gt;
&lt;li&gt;Data Ingestion &amp;amp; Retention Policies&lt;/li&gt;
&lt;li&gt;Query Engine (InfluxQL &amp;amp; Flux)&lt;/li&gt;
&lt;li&gt;High Availability &amp;amp; Scaling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s break these down one by one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage Engine: TSM &amp;amp; TSI
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Time-Structured Merge Tree (TSM) – The Heart of InfluxDB Storage
&lt;/h4&gt;

&lt;p&gt;InfluxDB uses a TSM (Time-Structured Merge Tree) engine, which is optimized for:&lt;br&gt;
✅ Efficient writes – It writes new data to an in-memory cache and periodically flushes it to disk in compact, immutable files.&lt;br&gt;
✅ High compression – TSM stores data in compressed segments, reducing storage costs.&lt;br&gt;
✅ Fast reads – TSM is optimized for quick lookups, even in large datasets.&lt;/p&gt;

&lt;p&gt;🔹 How is this different from Prometheus?&lt;br&gt;
Prometheus chunks data into 2-hour blocks and doesn’t have built-in long-term retention. InfluxDB’s TSM engine allows for more efficient long-term storage and querying.&lt;/p&gt;

&lt;h4&gt;
  
  
  Time-Series Index (TSI) – Handling Millions of Tags
&lt;/h4&gt;

&lt;p&gt;InfluxDB also introduces TSI (Time-Series Index), which is crucial when dealing with millions of time-series labels.&lt;/p&gt;

&lt;p&gt;✅ Fast queries on large datasets – Unlike databases that slow down with too many unique tags, TSI ensures smooth performance.&lt;br&gt;
✅ Disk-based indexing – TSI allows InfluxDB to scale efficiently without consuming too much RAM.&lt;/p&gt;

&lt;p&gt;🔹 Why does this matter?&lt;br&gt;
One of the common pitfalls in Prometheus is high cardinality: when you have too many unique labels, queries become slow. InfluxDB handles high cardinality better thanks to TSI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Ingestion &amp;amp; Retention Policies
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Push-Based Data Collection
&lt;/h4&gt;

&lt;p&gt;InfluxDB primarily relies on a push-based model for ingesting data. This means data sources send metrics to InfluxDB rather than InfluxDB pulling them.&lt;br&gt;
✅ Telegraf – InfluxDB’s official data collection agent, supporting 300+ integrations.&lt;br&gt;
✅ Direct HTTP API writes – Developers can push metrics to InfluxDB using simple REST API calls.&lt;/p&gt;
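&lt;p&gt;The write endpoint accepts data points in InfluxDB’s “line protocol” format. A minimal Python sketch (the helper is hypothetical and skips the escaping rules real line protocol needs for spaces, commas, and string field values):&lt;br&gt;
&lt;/p&gt;

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    # Format one data point in InfluxDB line protocol:
    #   measurement,tag=value,... field=value,... timestamp
    # Simplified sketch: no escaping of special characters.
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

point = to_line_protocol(
    "cpu",
    tags={"host": "web-01", "region": "eu-west"},
    fields={"usage": 64.2},
    timestamp_ns=1741000000000000000,
)
print(point)  # cpu,host=web-01,region=eu-west usage=64.2 1741000000000000000
```

&lt;p&gt;A string like this is what Telegraf (or your own code) ultimately POSTs to InfluxDB’s HTTP write API.&lt;/p&gt;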

&lt;p&gt;🔹 If you recall from my &lt;a href="https://dev.to/favxlaw/prometheus-architecture-understanding-the-workflow-162o"&gt;Prometheus article&lt;/a&gt;,&lt;br&gt;
Prometheus scrapes metrics (pull-based). InfluxDB, by contrast, expects data to be pushed to it.&lt;/p&gt;

&lt;h4&gt;
  
  
  Retention Policies (RP) &amp;amp; Continuous Queries (CQ)
&lt;/h4&gt;

&lt;p&gt;A &lt;strong&gt;retention policy (RP)&lt;/strong&gt; automatically deletes old data after a set period, which is great for managing storage costs.&lt;br&gt;
A &lt;strong&gt;continuous query (CQ)&lt;/strong&gt; precomputes and aggregates data in real time, reducing query load.&lt;br&gt;
🔹 Unlike Prometheus, where you need external tools (like Thanos) for long-term retention, InfluxDB manages the data lifecycle natively.&lt;/p&gt;
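&lt;p&gt;For illustration (the database, measurement, and policy names here are hypothetical), an RP and a CQ are defined with InfluxQL statements like:&lt;br&gt;
&lt;/p&gt;

```sql
CREATE RETENTION POLICY "two_weeks" ON "metrics"
  DURATION 14d REPLICATION 1 DEFAULT

CREATE CONTINUOUS QUERY "cpu_hourly_mean" ON "metrics"
BEGIN
  SELECT mean("usage") INTO "cpu_hourly" FROM "cpu" GROUP BY time(1h)
END
```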

&lt;h3&gt;
  
  
  High Availability &amp;amp; Scaling
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Scaling InfluxDB&lt;/strong&gt;&lt;br&gt;
InfluxDB supports horizontal scaling via:&lt;br&gt;
✅ Clustering (Enterprise Edition) – Distributes data across multiple nodes.&lt;br&gt;
✅ InfluxDB Cloud – Fully managed, scalable version.&lt;/p&gt;

&lt;p&gt;🔹 How is this different from Prometheus?&lt;br&gt;
Prometheus doesn’t natively support clustering—you need Thanos or Cortex for that. InfluxDB offers built-in clustering.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Should You Use InfluxDB?
&lt;/h3&gt;

&lt;p&gt;✅ Best for long-term storage &amp;amp; analytics – If you need to keep metrics for months/years.&lt;br&gt;
✅ Great for IoT, sensors, and business analytics – Ideal for financial, industrial, and IoT applications.&lt;br&gt;
✅ Advanced query capabilities – If you need complex joins, transformations, and external API integrations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;InfluxDB is a powerful time-series database designed for high-performance storage, flexible querying, and efficient scaling. Compared to Prometheus, it’s better suited for long-term retention, high-cardinality data, and deep analytics.&lt;/p&gt;

&lt;p&gt;🚀 Want to see InfluxDB in action? Let me know in the comments if you’d like a hands-on tutorial or use case examples! 🔥&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for reading! Don’t forget to follow, and feel free to leave a comment with the next DevOps concept you’d like me to dive into. Let’s keep the learning going!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
      <category>monitoring</category>
      <category>programming</category>
    </item>
    <item>
      <title>Grafana Architecture Explained: How the Backend and Data Flow Work.</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Thu, 20 Feb 2025 14:37:16 +0000</pubDate>
      <link>https://dev.to/favxlaw/grafana-architecture-explained-how-the-backend-and-data-flow-work-49d0</link>
      <guid>https://dev.to/favxlaw/grafana-architecture-explained-how-the-backend-and-data-flow-work-49d0</guid>
<description>&lt;p&gt;Grafana is a powerful open-source tool that helps turn raw data into clear, interactive dashboards, making it a go-to for DevOps teams. But what’s really happening behind the scenes? In this article, we’ll break down how Grafana processes and visualizes data, keeping things simple, practical, and to the point. Whether you’re new to DevOps or just curious about how it all works under the hood, this guide will give you a solid starting point.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Grafana at a Glance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Grafana is built on two main parts: the frontend and the backend.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend:
This is the part you see and interact with: the dashboards, graphs, and visualizations. Built with modern web technologies, it ensures a smooth and responsive experience, making it easy to explore and analyze your data.&lt;/li&gt;
&lt;li&gt;Backend:
This is where the heavy lifting happens. The backend processes data, runs queries, and connects to various data sources like Prometheus and InfluxDB. In short, it gathers and prepares the data that the frontend turns into useful insights.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔍 The Backend
&lt;/h3&gt;

&lt;p&gt;Grafana’s backend is where most of the activity happens, handling data requests, processing queries, and keeping everything running smoothly. Let’s break it down into two key parts:  &lt;/p&gt;

&lt;h4&gt;
  
  
  ⚙️ The Grafana Server &amp;amp; API Layer
&lt;/h4&gt;

&lt;p&gt;The Grafana server is the engine running behind the scenes. It acts as the bridge between your dashboards and your data sources, ensuring seamless communication. Here’s what it does:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🌍 &lt;strong&gt;Manages Requests:&lt;/strong&gt; When you interact with Grafana, the server processes your actions, whether it’s loading a dashboard, changing a time range, or modifying settings.
&lt;/li&gt;
&lt;li&gt;🔌 &lt;strong&gt;Connects to Data Sources:&lt;/strong&gt; Through its RESTful APIs, the server fetches data from sources like Prometheus, InfluxDB, or MySQL.
&lt;/li&gt;
&lt;li&gt;🔄 &lt;strong&gt;Enables Automation:&lt;/strong&gt; Beyond the web interface, the API lets you integrate Grafana into scripts and automation workflows, making it a flexible tool for DevOps teams.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  📊 How Data Queries &amp;amp; Processing Work
&lt;/h4&gt;

&lt;p&gt;Every time you load a dashboard, Grafana works behind the scenes to fetch and process data. Here’s a step-by-step breakdown:  &lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Request Initiation:&lt;/strong&gt; The frontend (your dashboard) sends a query request to the backend via the API.&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Data Retrieval:&lt;/strong&gt; The backend translates this request and reaches out to the right data source.&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Processing:&lt;/strong&gt; Once the data is retrieved, the server processes it, applying filters, aggregations, or calculations as needed.&lt;br&gt;&lt;br&gt;
4️⃣ &lt;strong&gt;Response &amp;amp; Rendering:&lt;/strong&gt; The processed data is sent back to the frontend, where it’s transformed into the visualizations you see.  &lt;/p&gt;

&lt;p&gt;This smooth backend operation is what makes Grafana such a powerful tool for real-time monitoring and analysis.  &lt;/p&gt;
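&lt;p&gt;Because everything flows through the same REST API, the request shape is easy to reason about. A small Python sketch (the dashboard UID and token are placeholders) of what fetching a dashboard by UID through Grafana’s HTTP API looks like:&lt;br&gt;
&lt;/p&gt;

```python
def dashboard_request(base_url, dashboard_uid, api_token):
    # Describe the HTTP request for fetching a dashboard by UID
    # through Grafana's REST API.
    return {
        "method": "GET",
        "url": f"{base_url}/api/dashboards/uid/{dashboard_uid}",
        "headers": {"Authorization": f"Bearer {api_token}"},
    }

req = dashboard_request("http://localhost:3000", "abc123", "glsa_example_token")
print(req["url"])  # http://localhost:3000/api/dashboards/uid/abc123
```

&lt;p&gt;Sending this request with any HTTP client returns the dashboard’s JSON model, which is what makes Grafana easy to drive from scripts and automation workflows.&lt;/p&gt;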

&lt;h3&gt;
  
  
  🔗 Connecting to Data Sources
&lt;/h3&gt;

&lt;p&gt;Grafana is like a universal translator for data: it seamlessly connects to a wide range of sources, from time series databases like &lt;strong&gt;Prometheus&lt;/strong&gt; and &lt;strong&gt;InfluxDB&lt;/strong&gt; to search engines like &lt;strong&gt;Elasticsearch&lt;/strong&gt;. Whether you're monitoring server metrics, analyzing logs, or tracking application performance, Grafana knows how to fetch and display the data you need.  &lt;/p&gt;

&lt;h4&gt;
  
  
  🛠 Setting Up a Data Source
&lt;/h4&gt;

&lt;p&gt;Connecting a data source in Grafana is a straightforward process:  &lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Pick Your Source:&lt;/strong&gt; In Grafana’s intuitive UI, you select the database or service you want to connect to.&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Choose the Right Plugin:&lt;/strong&gt; Grafana has built-in plugins that "speak" the native query language of each data source, ensuring seamless communication.&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Configure &amp;amp; Authenticate:&lt;/strong&gt; You provide connection details like the database URL, credentials, and any necessary authentication tokens.&lt;br&gt;&lt;br&gt;
4️⃣ &lt;strong&gt;Test &amp;amp; Save:&lt;/strong&gt; Grafana lets you test the connection before saving, so you can ensure everything is working smoothly.  &lt;/p&gt;

&lt;p&gt;Once set up, Grafana sends queries directly to your data source in &lt;strong&gt;real time&lt;/strong&gt;, pulling in the latest metrics for visualization.  &lt;/p&gt;

&lt;h4&gt;
  
  
  🔄 Understanding Data Flow
&lt;/h4&gt;

&lt;p&gt;Every time you interact with a Grafana dashboard, there's a well-orchestrated sequence happening in the background. Let’s break it down step by step:  &lt;/p&gt;

&lt;h4&gt;
  
  
  🚀 &lt;strong&gt;1. User Action → Sending a Query&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;It all starts when you interact with a dashboard, maybe you &lt;strong&gt;select a different time range&lt;/strong&gt;, &lt;strong&gt;refresh a panel&lt;/strong&gt;, or &lt;strong&gt;zoom into a specific data point&lt;/strong&gt;. This triggers a request that gets sent to Grafana’s backend.  &lt;/p&gt;

&lt;h4&gt;
  
  
  🔍 &lt;strong&gt;2. Query Processing → Talking to the Data Source&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Grafana’s backend translates your request into a query that the selected data source understands. If you're using &lt;strong&gt;Prometheus&lt;/strong&gt;, for example, Grafana converts your request into a PromQL query. If it’s &lt;strong&gt;Elasticsearch&lt;/strong&gt;, it turns into a structured search request.  &lt;/p&gt;

&lt;h4&gt;
  
  
  📦 &lt;strong&gt;3. Data Retrieval &amp;amp; Processing → Cleaning &amp;amp; Formatting&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The data source processes the request and sends back raw data. But before it reaches your dashboard, Grafana’s backend &lt;strong&gt;cleans it up&lt;/strong&gt;, &lt;strong&gt;applies filters&lt;/strong&gt;, &lt;strong&gt;aggregates values&lt;/strong&gt;, and &lt;strong&gt;formats it properly&lt;/strong&gt;, making sure you get exactly what you need.  &lt;/p&gt;

&lt;h4&gt;
  
  
  📊 &lt;strong&gt;4. Visualization → Data Comes to Life&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Finally, the processed data is sent to the frontend, where Grafana transforms it into &lt;strong&gt;interactive graphs, charts, and tables&lt;/strong&gt;. This real-time flow ensures that what you're seeing is always &lt;strong&gt;current, accurate, and easy to interpret&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thanks for reading! If you found this helpful, follow for more DevOps concepts explained in a clear and simple way. Got a topic you'd like me to cover next? Let me know!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>developers</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Kubernetes Services vs. Ingress: What You Need to Know.</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Sat, 08 Feb 2025 22:51:29 +0000</pubDate>
      <link>https://dev.to/favxlaw/kubernetes-services-vs-ingress-what-you-need-to-know-20p6</link>
      <guid>https://dev.to/favxlaw/kubernetes-services-vs-ingress-what-you-need-to-know-20p6</guid>
      <description>&lt;p&gt;Ever tried accessing a containerized application running inside Kubernetes and realized it wasn’t as simple as running a server on your local machine? Unlike traditional setups where an app binds to a port and is instantly reachable, Kubernetes operates in a world of dynamic, ever-changing pods. If a pod dies and gets recreated, it might get a new IP, breaking direct access.&lt;/p&gt;

&lt;p&gt;So, how do applications running inside a Kubernetes cluster communicate reliably? And how do we expose these applications to the outside world? This is where Kubernetes Services and Ingress come in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Services ensure that even if pods come and go, your application remains accessible via a stable endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ingress provides a smarter way to manage external access, acting as a traffic controller to route requests to the right service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we’ll break down Kubernetes Services and Ingress, explaining when and why you need them with practical examples. Let's dive in!&lt;/p&gt;




&lt;h2&gt;
  
  
  🔹What is a Kubernetes Service?
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, a &lt;strong&gt;Service&lt;/strong&gt; is an abstraction that provides a stable network endpoint to access a group of pods. Since pods are dynamic (they can be created, deleted, or rescheduled), their IPs keep changing. A &lt;strong&gt;Service&lt;/strong&gt; ensures that applications can communicate reliably without worrying about changing pod IPs.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🔸 Why Do We Need Services?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;🚀 Pods are &lt;strong&gt;ephemeral&lt;/strong&gt;—they can restart or move to another node, getting a new IP.&lt;br&gt;&lt;br&gt;
🚀 Directly accessing a pod’s IP is unreliable since it might change at any moment.&lt;br&gt;&lt;br&gt;
🚀 A &lt;strong&gt;Service&lt;/strong&gt; creates a fixed &lt;strong&gt;Cluster IP&lt;/strong&gt; that stays the same, ensuring &lt;strong&gt;stable communication&lt;/strong&gt; between pods and external users.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 Types of Kubernetes Services
&lt;/h3&gt;

&lt;p&gt;Kubernetes offers different types of Services based on how you want your application to be accessible. Let’s break them down:  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔸 &lt;strong&gt;ClusterIP (Default &amp;amp; Internal-Only)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ The &lt;strong&gt;default&lt;/strong&gt; service type in Kubernetes.&lt;br&gt;&lt;br&gt;
✅ Creates an &lt;strong&gt;internal-only&lt;/strong&gt; IP, meaning it’s &lt;strong&gt;only accessible inside the cluster&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ Perfect for &lt;strong&gt;internal communication&lt;/strong&gt; between microservices. &lt;br&gt;
&lt;strong&gt;Example: A backend API serving a frontend within the cluster&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  🔸 &lt;strong&gt;NodePort (Basic External Access)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ Exposes the service &lt;strong&gt;on every node&lt;/strong&gt; using a high-numbered port (30000–32767).&lt;br&gt;&lt;br&gt;
✅ You can access it via &lt;code&gt;NodeIP:NodePort&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Easy to set up&lt;/strong&gt; but not ideal for production—managing ports can get messy!  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔸 &lt;strong&gt;LoadBalancer (Cloud-Managed External Access)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ Works in &lt;strong&gt;cloud environments&lt;/strong&gt; like AWS, GCP, or Azure.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Automatically provisions&lt;/strong&gt; a cloud load balancer to handle traffic.&lt;br&gt;&lt;br&gt;
✅ The best option for &lt;strong&gt;production-grade&lt;/strong&gt; external access.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔸 &lt;strong&gt;Headless Service (For Direct Pod Access)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ Used when &lt;strong&gt;you don’t need a stable Cluster IP&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ Instead of routing traffic, it helps apps &lt;strong&gt;discover&lt;/strong&gt; individual pods &lt;strong&gt;via DNS&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ Useful for databases, stateful applications, and custom service discovery.  &lt;/p&gt;
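&lt;p&gt;A headless Service is declared by setting &lt;code&gt;clusterIP: None&lt;/code&gt;. A minimal sketch (the names and port are illustrative):&lt;br&gt;
&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db-headless
spec:
  clusterIP: None   # headless: DNS resolves to individual pod IPs
  selector:
    app: my-db
  ports:
    - port: 5432
```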
&lt;h3&gt;
  
  
  🔹  Practical Example: ClusterIP Service YAML
&lt;/h3&gt;

&lt;p&gt;Here’s a simple YAML configuration for a &lt;strong&gt;ClusterIP&lt;/strong&gt; Service that exposes an Nginx pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;      &lt;span class="c1"&gt;# The Service's Port&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt; &lt;span class="c1"&gt;# The Pod's Port&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ &lt;strong&gt;selector:&lt;/strong&gt; Matches pods with the label &lt;code&gt;app: nginx&lt;/code&gt;, so the Service knows where to send traffic.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;port:&lt;/strong&gt; The port where the Service is exposed inside the cluster.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;targetPort:&lt;/strong&gt; The actual port inside the pod where traffic should go.  &lt;/p&gt;


&lt;h2&gt;
  
  
  Understanding Ingress
&lt;/h2&gt;

&lt;p&gt;Kubernetes gives us multiple ways to expose applications, but &lt;strong&gt;Ingress&lt;/strong&gt; is the &lt;strong&gt;smartest&lt;/strong&gt; option. Instead of creating separate external access points for each service, Ingress acts as a &lt;strong&gt;single entryway&lt;/strong&gt;, efficiently routing traffic to the right service inside your cluster.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 What is Ingress in Kubernetes?
&lt;/h3&gt;

&lt;p&gt;Ingress is a &lt;strong&gt;traffic manager&lt;/strong&gt; for your cluster. It controls &lt;strong&gt;external access&lt;/strong&gt; to services using &lt;strong&gt;rules&lt;/strong&gt; based on domains, paths, and protocols like HTTP/HTTPS. Think of it as a &lt;strong&gt;router&lt;/strong&gt; that directs requests to the correct backend service.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 Why Use Ingress Instead of NodePort or LoadBalancer?
&lt;/h3&gt;

&lt;p&gt;While &lt;strong&gt;NodePort&lt;/strong&gt; and &lt;strong&gt;LoadBalancer&lt;/strong&gt; work, they have limitations:  &lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;NodePort&lt;/strong&gt; exposes each service on a high-numbered port (30000–32767 by default), which isn’t ideal for production.&lt;br&gt;&lt;br&gt;
❌ &lt;strong&gt;LoadBalancer&lt;/strong&gt; works better but &lt;strong&gt;creates a new cloud load balancer per service&lt;/strong&gt;, which can get &lt;strong&gt;expensive and complex&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Ingress&lt;/strong&gt; solves both problems by &lt;strong&gt;allowing multiple services to share a single entry point&lt;/strong&gt;, reducing cost and simplifying management.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 How Ingress Routes External Traffic
&lt;/h3&gt;

&lt;p&gt;1️⃣ A user sends an &lt;strong&gt;HTTP/HTTPS request&lt;/strong&gt; to your cluster.&lt;br&gt;&lt;br&gt;
2️⃣ The &lt;strong&gt;Ingress resource&lt;/strong&gt; checks its rules to decide which service should handle the request.&lt;br&gt;&lt;br&gt;
3️⃣ Traffic is forwarded to the correct &lt;strong&gt;pod&lt;/strong&gt; inside the cluster.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 The Role of an Ingress Controller
&lt;/h3&gt;

&lt;p&gt;An &lt;strong&gt;Ingress Controller&lt;/strong&gt; is needed to process Ingress rules. Popular choices include:  &lt;/p&gt;

&lt;p&gt;✔ &lt;strong&gt;NGINX Ingress Controller&lt;/strong&gt; (most common)&lt;br&gt;&lt;br&gt;
✔ &lt;strong&gt;Traefik, HAProxy, AWS ALB, Istio&lt;/strong&gt;, etc.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 Setting Up Ingress: Simple Example
&lt;/h3&gt;

&lt;p&gt;Here’s a &lt;strong&gt;basic Ingress configuration&lt;/strong&gt; that routes traffic based on a hostname:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-ingress&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp.local&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-service&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ &lt;code&gt;host: myapp.local&lt;/code&gt; → Only requests whose &lt;code&gt;Host&lt;/code&gt; header is &lt;code&gt;myapp.local&lt;/code&gt; match this rule.&lt;br&gt;&lt;br&gt;
✅ &lt;code&gt;path: /&lt;/code&gt; → All requests are sent to the backend service.&lt;br&gt;&lt;br&gt;
✅ &lt;code&gt;backend.service.name: my-service&lt;/code&gt; → Traffic goes to &lt;strong&gt;my-service&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
✅ &lt;code&gt;port.number: 80&lt;/code&gt; → The port where the service listens.  &lt;/p&gt;
&lt;h3&gt;
  
  
  🔹 How to Apply and Test Ingress in Minikube
&lt;/h3&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Enable the NGINX Ingress Controller&lt;/strong&gt; in Minikube:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube addons &lt;span class="nb"&gt;enable &lt;/span&gt;ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ &lt;strong&gt;Apply the Ingress YAML file:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; my-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3️⃣ &lt;strong&gt;Modify &lt;code&gt;/etc/hosts&lt;/code&gt; to point to Minikube’s IP (Linux/macOS):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;minikube ip&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; myapp.local"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4️⃣ &lt;strong&gt;Test it in a browser or with curl:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://myapp.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🚀 Service vs. Ingress: When to Use Each
&lt;/h2&gt;

&lt;p&gt;Choosing between a Service (NodePort/LoadBalancer) and Ingress depends on how you want to expose your applications. Here’s the breakdown:&lt;/p&gt;

&lt;p&gt;Use a Service when you need direct access to a single service, either internally or externally. However, each exposed service requires its own endpoint, which can get expensive and inefficient if you have many services.&lt;br&gt;
Use Ingress when you want to manage multiple services under one entry point. It routes traffic based on domain names or paths, reducing complexity and cost.&lt;br&gt;
A Service is simple but lacks advanced traffic control. Ingress, on the other hand, supports routing, TLS termination, and virtual hosts, making it ideal for large-scale apps.&lt;/p&gt;
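
&lt;p&gt;To make the single-entry-point idea concrete, here’s a sketch of a path-based Ingress that fans one hostname out to two services. The service names &lt;code&gt;api-service&lt;/code&gt; and &lt;code&gt;web-service&lt;/code&gt; are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shared-entry
spec:
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /api              # myapp.local/api → API backend
        pathType: Prefix
        backend:
          service:
            name: api-service   # placeholder name
            port:
              number: 8080
      - path: /                 # everything else → web frontend
        pathType: Prefix
        backend:
          service:
            name: web-service   # placeholder name
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;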

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Thanks for reading! If you found this helpful, please like and follow for more DevOps content. Feel free to comment with any questions or topics you'd like to see next!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>container</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Understanding Kubernetes Volumes: Persistent Volume and Persistent Volume Claim</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Fri, 07 Feb 2025 10:51:43 +0000</pubDate>
      <link>https://dev.to/favxlaw/understanding-kubernetes-volumes-persistent-volume-and-persistent-volume-claim-4600</link>
      <guid>https://dev.to/favxlaw/understanding-kubernetes-volumes-persistent-volume-and-persistent-volume-claim-4600</guid>
<description>&lt;p&gt;Consider a case where data is added or updated in PostgreSQL: when the pod restarts, all of those changes are gone. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes doesn’t automatically persist data when a pod restarts. By default, when a pod dies, its storage disappears with it. This is because Kubernetes treats pods as ephemeral: they come and go, and their associated storage doesn’t stick around unless you explicitly configure it to.&lt;/p&gt;

&lt;p&gt;For databases, logs, and any application that needs to retain state, this is a huge problem. That’s where Persistent Volumes (PV) and Persistent Volume Claims (PVC) come in. They allow Kubernetes to handle storage separately from pods, ensuring your data doesn’t vanish every time a pod is replaced.&lt;/p&gt;

&lt;p&gt;If you’re coming from Docker, you might wonder how this compares to Docker volumes,&lt;br&gt;
&lt;em&gt;"Doesn’t Docker have volumes for persistent storage?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Yes, it does. But Kubernetes handles storage in a more decentralized, scalable way. Unlike Docker, where volumes are tied to a single host, Kubernetes volumes are designed to be cluster-wide and can be provisioned dynamically.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll break down:&lt;br&gt;
✅ What Persistent Volumes (PV) are and how they’re managed by administrators.&lt;br&gt;
✅ How developers request storage using Persistent Volume Claims (PVC).&lt;/p&gt;

&lt;p&gt;By the end, you’ll not only understand how Kubernetes storage works but also be able to set up persistent storage for your own applications. Let’s get started 🚀&lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;How Does Kubernetes Handle Storage?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Kubernetes uses &lt;strong&gt;Volumes&lt;/strong&gt; to provide persistent storage, but here’s the thing:  &lt;/p&gt;

&lt;p&gt;➡️ A &lt;strong&gt;Kubernetes Volume is just an abstraction&lt;/strong&gt;—it doesn’t store data itself. It needs to be backed by actual physical storage.  &lt;/p&gt;

&lt;p&gt;So, where does this storage come from?  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Types of Storage in Kubernetes&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Local Storage (Node-Specific)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tied to a single node.
&lt;/li&gt;
&lt;li&gt;If a pod moves to another node, the data doesn’t follow.
&lt;/li&gt;
&lt;li&gt;Examples:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;emptyDir&lt;/code&gt; (temporary storage that lasts as long as the pod exists).
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hostPath&lt;/code&gt; (uses a directory on the host machine).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
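
&lt;p&gt;As a quick illustration, a pod can request &lt;code&gt;emptyDir&lt;/code&gt; scratch space directly in its spec, with no PV involved (a minimal sketch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /cache   # temporary scratch space
          name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}            # removed when the pod is deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;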

&lt;p&gt;2️⃣ &lt;strong&gt;Remote Storage (Cluster-Wide)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decoupled from any single node, so pods can move freely without losing data.
&lt;/li&gt;
&lt;li&gt;Provided by external storage systems.
&lt;/li&gt;
&lt;li&gt;Examples:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud storage&lt;/strong&gt;: AWS EBS, Google Persistent Disks, Azure Disk.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network storage&lt;/strong&gt;: NFS, Ceph, GlusterFS.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Persistent Volumes (PV) and Persistent Volume Claims (PVC)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To make storage management easier, Kubernetes introduces:&lt;br&gt;&lt;br&gt;
✔ &lt;strong&gt;Persistent Volumes (PV):&lt;/strong&gt; The actual storage backend.&lt;br&gt;&lt;br&gt;
✔ &lt;strong&gt;Persistent Volume Claims (PVC):&lt;/strong&gt; A way for pods to request storage dynamically.  &lt;/p&gt;

&lt;p&gt;Think of it like a hotel:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;PV&lt;/strong&gt; is a hotel room.
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;PVC&lt;/strong&gt; is a reservation.
&lt;/li&gt;
&lt;li&gt;When a pod needs storage, it "books" a room (PV) through a PVC.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📌 &lt;strong&gt;Kubernetes Volumes are not actual storage&lt;/strong&gt;—they just connect your pod to a real storage system.  &lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Persistent Volume (PV) – The Admin’s Role&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We now know that &lt;strong&gt;Kubernetes doesn’t provide storage by itself&lt;/strong&gt;, so who sets it up?  &lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;PV is a pre-configured storage unit&lt;/strong&gt; set up by the cluster administrator. Once it's available, developers can claim it using a &lt;strong&gt;Persistent Volume Claim (PVC)&lt;/strong&gt; (which we’ll cover next). But first, let’s see how admins actually set up storage in Kubernetes.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How Do Admins Provision Storage?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Admins set up storage by:&lt;br&gt;&lt;br&gt;
1️⃣ &lt;strong&gt;Setting up the storage backend&lt;/strong&gt; (Local disk, NFS, AWS EBS, etc.).&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Defining a Persistent Volume (PV)&lt;/strong&gt; that connects to this storage.&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Making the PV available&lt;/strong&gt; for developers to claim via PVCs.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Example: Creating a Persistent Volume (PV)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s a simple YAML configuration for a &lt;strong&gt;local storage PV&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pv&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5Gi&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;persistentVolumeReclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Retain&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-storage&lt;/span&gt;
  &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/mnt/data"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Breaking It Down&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;🔹 &lt;code&gt;capacity.storage: 5Gi&lt;/code&gt; → Provides &lt;strong&gt;5GB of storage&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;accessModes: ReadWriteOnce&lt;/code&gt; → Can be &lt;strong&gt;mounted by only one node at a time&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;persistentVolumeReclaimPolicy: Retain&lt;/code&gt; → Storage &lt;strong&gt;remains&lt;/strong&gt; even after the pod using it is deleted.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;storageClassName: local-storage&lt;/code&gt; → Specifies &lt;strong&gt;which storage class&lt;/strong&gt; to use.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;hostPath: /mnt/data&lt;/code&gt; → Uses a &lt;strong&gt;local directory as storage&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Of course, admins can also configure &lt;strong&gt;networked storage&lt;/strong&gt; like NFS, AWS EBS, or Google Persistent Disk instead of using a local directory.  &lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;PVs exist independently of pods, ensuring data persists even if a pod is deleted or rescheduled.&lt;/strong&gt;  &lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Persistent Volume Claim (PVC) – The Developer’s Role&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The admin has set up a &lt;strong&gt;Persistent Volume (PV)&lt;/strong&gt;—now how do developers actually use it?  &lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;Persistent Volume Claims (PVCs)&lt;/strong&gt; come in.  &lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;PVC is a storage request&lt;/strong&gt; from a developer. It’s like saying:  &lt;/p&gt;

&lt;p&gt;🗣️ &lt;em&gt;“Hey Kubernetes, I need 5GB of storage with read/write access. Find me a PV that matches!”&lt;/em&gt;  &lt;/p&gt;

&lt;p&gt;If a suitable &lt;strong&gt;PV&lt;/strong&gt; is available, Kubernetes automatically &lt;strong&gt;binds the PVC to it&lt;/strong&gt;, making storage available for the pod.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How Developers Request Storage Using PVC&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Create a PVC&lt;/strong&gt; specifying storage requirements.&lt;br&gt;&lt;br&gt;
2️⃣ &lt;strong&gt;Kubernetes finds a matching PV&lt;/strong&gt; and binds the PVC to it.&lt;br&gt;&lt;br&gt;
3️⃣ &lt;strong&gt;Mount the PVC inside a pod&lt;/strong&gt; to store data persistently.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Example: Creating a PVC&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s a &lt;strong&gt;simple YAML configuration&lt;/strong&gt; for a PVC requesting 5GB of storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pvc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5Gi&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-storage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Breaking It Down&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;🔹 &lt;code&gt;accessModes: ReadWriteOnce&lt;/code&gt; → Storage &lt;strong&gt;can be mounted by only one node at a time&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;resources.requests.storage: 5Gi&lt;/code&gt; → Requests &lt;strong&gt;5GB of storage&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
🔹 &lt;code&gt;storageClassName: local-storage&lt;/code&gt; → Uses a PV &lt;strong&gt;with the matching storage class&lt;/strong&gt;.  &lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How the PVC Gets Bound to a PV&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ If a PV with the right &lt;strong&gt;storage size, access mode, and storage class&lt;/strong&gt; exists, Kubernetes &lt;strong&gt;automatically binds&lt;/strong&gt; the PVC to it.&lt;br&gt;&lt;br&gt;
✅ Once bound, the PVC can be &lt;strong&gt;used inside a pod&lt;/strong&gt; for persistent data storage.  &lt;/p&gt;
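
&lt;p&gt;You can check the binding from the command line; once a matching PV is found, the claim’s &lt;code&gt;STATUS&lt;/code&gt; column reads &lt;code&gt;Bound&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pv            # lists volumes and which claim holds each one
kubectl get pvc my-pvc    # STATUS should show "Bound" once matched
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;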
&lt;h3&gt;
  
  
  &lt;strong&gt;Mounting the PVC in a Pod&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that we have a &lt;strong&gt;PVC&lt;/strong&gt;, let’s use it inside a pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/lib/postgresql/data"&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-storage&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-storage&lt;/span&gt;
      &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pvc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📌 Now, &lt;strong&gt;even if the pod restarts, the PostgreSQL database will still have its data&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thanks for reading! Be sure to follow for more DevOps content, and feel free to comment with the DevOps concepts you'd like to see covered next.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>docker</category>
    </item>
    <item>
      <title>Very Insightful</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Wed, 05 Feb 2025 23:32:25 +0000</pubDate>
      <link>https://dev.to/favxlaw/very-insightful-1c0p</link>
      <guid>https://dev.to/favxlaw/very-insightful-1c0p</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/bobbyiliev" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F191651%2F8bf0512d-f06c-47e9-a8d8-981b754b25ab.webp" alt="bobbyiliev"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/bobbyiliev/5-terraform-best-practices-i-wish-i-knew-when-i-started-2dc" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;5 Terraform Best Practices I Wish I Knew When I Started&lt;/h2&gt;
      &lt;h3&gt;Bobby ・ Jan 31&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#terraform&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloud&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#beginners&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>terraform</category>
      <category>devops</category>
      <category>cloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Prometheus Architecture: Understanding the Workflow 🚀</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Mon, 03 Feb 2025 22:18:10 +0000</pubDate>
      <link>https://dev.to/favxlaw/prometheus-architecture-understanding-the-workflow-162o</link>
      <guid>https://dev.to/favxlaw/prometheus-architecture-understanding-the-workflow-162o</guid>
<description>&lt;p&gt;Have you ever used Prometheus for monitoring systems? It’s great at collecting and storing metrics, but have you ever stopped to wonder how it actually works under the hood? What makes its architecture so efficient, and why is it the go-to choice for cloud-native monitoring?&lt;br&gt;
Unlike traditional monitoring tools that passively wait for data, Prometheus actively scrapes metrics from defined targets and stores them efficiently in a time-series database.&lt;br&gt;
We’ll explore how it collects metrics, how its components interact, and why its design makes it a favorite among developers and SREs. Let’s get started! 🚀&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Prometheus Architecture: Breaking It Down&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you want to truly understand how Prometheus works, you need to go beyond just “it collects metrics” and dive into its architecture. At its core, Prometheus is built on three essential pillars:  &lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Time-Series Database (TSDB)&lt;/strong&gt; – Where all metrics are efficiently stored.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Data Retrieval Engine&lt;/strong&gt; – Responsible for actively pulling (scraping) metrics.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Query &amp;amp; API Layer (Web Server)&lt;/strong&gt; – The interface where you analyze and visualize data.  &lt;/p&gt;

&lt;p&gt;Each of these components plays a critical role in making Prometheus &lt;em&gt;fast, scalable, and cloud-native&lt;/em&gt;. Now, let’s break them down in detail.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Prometheus Server – Command center&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Prometheus Server&lt;/strong&gt; is the central hub that coordinates everything, ensuring your metrics are collected, stored, and made accessible. Here’s what it does:&lt;br&gt;&lt;br&gt;
🔹 Pulls metrics from configured targets (applications, databases, and exporters).&lt;br&gt;&lt;br&gt;
🔹 Stores the collected data in a time-series format.&lt;br&gt;&lt;br&gt;
🔹 Provides a powerful query interface to analyze and visualize the data. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Time-Series Database (TSDB) – Storing Metrics Efficiently&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once Prometheus scrapes metrics, it needs a way to store them efficiently. That’s where the &lt;strong&gt;Time-Series Database (TSDB)&lt;/strong&gt; comes in. This isn’t your average database; it’s specifically designed for handling time-series data. Here’s what happens behind the scenes:  &lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;Metrics are stored as time-series data:&lt;/strong&gt; each metric is recorded with a timestamp and a value.&lt;br&gt;&lt;br&gt;
📌 &lt;strong&gt;Compression techniques:&lt;/strong&gt; Prometheus uses advanced compression to store data efficiently without hurting performance.&lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;A label-based system:&lt;/strong&gt; metrics are tagged with labels (e.g., &lt;code&gt;http_requests_total{status="200"}&lt;/code&gt;), making it easy to filter and query data with precision.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Data Retrieval Engine – How Prometheus Collects Metrics&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus doesn’t sit around waiting for data—it actively goes out and &lt;strong&gt;pulls&lt;/strong&gt; it from defined targets. This is known as the &lt;strong&gt;pull-based model&lt;/strong&gt;.   &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How It Works:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prometheus periodically &lt;strong&gt;scrapes &lt;code&gt;/metrics&lt;/code&gt; endpoints&lt;/strong&gt; from configured targets. These can be:&lt;br&gt;&lt;br&gt;
✔️ Applications exposing Prometheus-compatible metrics&lt;br&gt;&lt;br&gt;
✔️ Databases and external services&lt;br&gt;&lt;br&gt;
✔️ Exporters that convert non-Prometheus metrics into a readable format  &lt;/p&gt;

&lt;p&gt;This approach ensures Prometheus collects data efficiently while remaining highly adaptable.  &lt;/p&gt;
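
&lt;p&gt;In practice, scrape targets are declared in &lt;code&gt;prometheus.yml&lt;/code&gt;. A minimal sketch, where the job name and target address are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;scrape_configs:
  - job_name: "my-app"              # placeholder job name
    scrape_interval: 15s            # how often to scrape this job
    static_configs:
      - targets: ["localhost:8080"] # app exposing /metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;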

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Query &amp;amp; API Layer (Web Server) – Making Data Useful&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Storing metrics is one thing, but being able to &lt;strong&gt;query, analyze, and visualize&lt;/strong&gt; them is where the real power comes in. This is where the &lt;strong&gt;Query &amp;amp; API Layer&lt;/strong&gt; comes into play.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Key Responsibilities:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;🔎 &lt;strong&gt;Handles PromQL (Prometheus Query Language)&lt;/strong&gt; for in-depth metric analysis.&lt;br&gt;&lt;br&gt;
🔎 &lt;strong&gt;Runs an HTTP API server&lt;/strong&gt;, allowing external tools (like Grafana) to pull data.&lt;br&gt;&lt;br&gt;
🔎 &lt;strong&gt;Provides built-in graphing&lt;/strong&gt; for quick insights.  &lt;/p&gt;
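
&lt;p&gt;For example, a common PromQL query computes the per-second request rate over a five-minute window, assuming a counter named &lt;code&gt;http_requests_total&lt;/code&gt; exists:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rate(http_requests_total{status="200"}[5m])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;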

&lt;h3&gt;
  
  
  &lt;strong&gt;How It All Comes Together&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;1️⃣ Prometheus &lt;strong&gt;scrapes metrics&lt;/strong&gt; from various targets.&lt;br&gt;&lt;br&gt;
2️⃣ It &lt;strong&gt;stores data efficiently&lt;/strong&gt; in TSDB.&lt;br&gt;&lt;br&gt;
3️⃣ The &lt;strong&gt;query engine&lt;/strong&gt; allows users to analyze trends and set up alerts.&lt;br&gt;&lt;br&gt;
4️⃣ Other tools (like Grafana) &lt;strong&gt;fetch data via Prometheus' API&lt;/strong&gt; for visualization.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Pull Mechanism&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s how it works: Prometheus is set up with a list of targets (applications, databases, or exporters) that provide metrics through a &lt;code&gt;/metrics&lt;/code&gt; endpoint. At regular intervals, Prometheus sends an HTTP request to these endpoints, grabs the metrics, adds a timestamp to each one, and then stores everything in its Time-Series Database (TSDB).&lt;br&gt;
It’s like Prometheus is constantly checking in on these targets, gathering fresh data, and keeping everything organized for easy analysis later on.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Prometheus?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No External Storage Needed: Unlike some monitoring systems that rely on external storage, Prometheus keeps things simple by storing data locally—cutting down on complexity and external dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resilient Pull-Based Monitoring: By actively scraping metrics instead of waiting for them, Prometheus is more resilient to network issues, ensuring data is consistently collected even when connections are not stable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handles Short-Lived Jobs: For tasks that don’t run long enough to be scraped, Prometheus offers the Pushgateway. This lets ephemeral jobs push their metrics before exiting, ensuring no data is lost.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
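
&lt;p&gt;For the short-lived-job case, a batch script can push a metric to the Pushgateway just before it exits. A minimal sketch, where the Pushgateway address, job name, and metric name are all placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Record when the backup job last succeeded, then exit
echo "backup_last_success_timestamp $(date +%s)" \
  | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/nightly_backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;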




&lt;p&gt;There are plenty of reasons why Prometheus is used worldwide—its architecture truly sets it apart. I hope this article helped you get a clear understanding of how it all works.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for reading! Don’t forget to follow, and feel free to leave a comment with the next DevOps concept you’d like me to dive into. Let’s keep the learning going!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Namespaces in Kubernetes Explained: 🔍 Understanding Isolation and Sharing</title>
      <dc:creator>Favour Lawrence</dc:creator>
      <pubDate>Sat, 01 Feb 2025 21:46:39 +0000</pubDate>
      <link>https://dev.to/favxlaw/namespaces-in-kubernetes-explained-understanding-isolation-and-sharing-5ki</link>
      <guid>https://dev.to/favxlaw/namespaces-in-kubernetes-explained-understanding-isolation-and-sharing-5ki</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Kubernetes Namespace?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you've worked with Kubernetes long enough, you’ve probably seen how quickly things can spiral out of control. One team deploys a new service, another updates their staging environment, and suddenly, production is down because someone accidentally messed with the wrong resources. Sound familiar?&lt;/p&gt;

&lt;p&gt;That’s where Kubernetes namespaces come in.  Instead of stuffing everything into one disorganized cluster or deploying individual clusters for each project, namespaces help maintain order, enforce security boundaries, and enhance resource management efficiency.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What Kubernetes namespaces are and how they work.&lt;/li&gt;
&lt;li&gt;Why they’re essential for managing multi-team, multi-application clusters.&lt;/li&gt;
&lt;li&gt;Resources that can and can't be shared across namespaces.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end, you’ll have a solid grasp of how to use namespaces to keep your cluster structured and scalable. Let’s dive in. 🚀&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What Are Kubernetes Namespaces, Really?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At their core, namespaces are virtual partitions within your cluster. They let you split your resources (pods, services, deployments, and more) into separate, logical groups, so each team or environment gets its own slice of the same cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Default Namespace&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When you first start with Kubernetes, everything you deploy lands in the &lt;em&gt;default namespace&lt;/em&gt;. It’s quick, easy, and works fine for small projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes comes with a few built-in namespaces:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;default&lt;/strong&gt; – The catch-all for resources if you don’t specify a namespace.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;kube-system&lt;/strong&gt; – Reserved for critical system components (like the Kubernetes API server).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;kube-public&lt;/strong&gt; – Mostly unused, but contains publicly accessible resources.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;kube-node-lease&lt;/strong&gt; – Helps track node health and optimize performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Beyond these built-ins, you can create your own namespaces. Here’s how:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace dev  
kubectl create namespace staging  
kubectl create namespace production  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, &lt;code&gt;dev&lt;/code&gt; doesn’t interfere with &lt;code&gt;staging&lt;/code&gt;, and &lt;code&gt;staging&lt;/code&gt; doesn’t break &lt;code&gt;production&lt;/code&gt;.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;When to Use Namespaces (and When Not To)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;✅ &lt;strong&gt;Use namespaces if:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have multiple teams sharing the same cluster.
&lt;/li&gt;
&lt;li&gt;You need clear separation between environments (&lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, &lt;code&gt;prod&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;You want to enforce security policies and resource limits per group.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ &lt;strong&gt;Don’t bother with namespaces if:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your cluster is small and managed by a single team.
&lt;/li&gt;
&lt;li&gt;You need &lt;strong&gt;hard&lt;/strong&gt; isolation—separate clusters might be the better option.
&lt;/li&gt;
&lt;li&gt;You’re dealing with global resources like cluster-wide CRDs.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the end of the day, namespaces help keep your Kubernetes setup clean and organized. &lt;/p&gt;
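&lt;p&gt;As a concrete sketch of the “resource limits per group” point above, a ResourceQuota caps what a single namespace can consume (the name and numbers here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev        # the quota applies only inside this namespace
spec:
  hard:
    pods: "20"          # at most 20 pods in dev
    requests.cpu: "4"   # total CPU requests capped at 4 cores
    requests.memory: 8Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once applied, the API server rejects any new resource in &lt;code&gt;dev&lt;/code&gt; that would push the namespace past these limits.&lt;/p&gt;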




&lt;h2&gt;
  
  
  &lt;strong&gt;Working with Namespaces: Let’s Get Hands-On&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that you understand the concept of namespaces, let’s dive into how you can actually work with them in your Kubernetes cluster. This section will cover the essential commands you need to list, create, and manage namespaces, as well as how to deploy resources to specific namespaces.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Listing Existing Namespaces&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To see what namespaces you’ve got in your cluster, run this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will give you a list of namespaces, including the default ones like &lt;code&gt;default&lt;/code&gt;, &lt;code&gt;kube-system&lt;/code&gt;, and any you’ve created yourself. It's a quick way to check what namespaces are active and available.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Creating a New Namespace&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Creating a new namespace is super straightforward. Just run the &lt;code&gt;kubectl create namespace&lt;/code&gt; command followed by the name you want for your new namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it! Your new namespace is ready to go. Now you can deploy your resources into it, keeping everything organized.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Deploying Resources to a Namespace&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you want to deploy resources like pods, services, or deployments into a specific namespace, you can use the &lt;code&gt;-n&lt;/code&gt; flag with &lt;code&gt;kubectl apply&lt;/code&gt;. For example, to apply a YAML configuration to the &lt;code&gt;my-namespace&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; app.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the resources defined in &lt;code&gt;app.yaml&lt;/code&gt; within the &lt;code&gt;my-namespace&lt;/code&gt; namespace. Note that if the manifest’s &lt;code&gt;metadata&lt;/code&gt; already sets a &lt;code&gt;namespace&lt;/code&gt; that differs from the &lt;code&gt;-n&lt;/code&gt; flag, &lt;code&gt;kubectl&lt;/code&gt; will reject the apply with an error rather than silently override it.&lt;/p&gt;
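&lt;p&gt;For reference, pinning the namespace inside the manifest itself looks like this (a minimal, hypothetical Deployment):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace   # takes effect even without the -n flag
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the namespace baked in, a plain &lt;code&gt;kubectl apply -f app.yaml&lt;/code&gt; always lands in &lt;code&gt;my-namespace&lt;/code&gt;.&lt;/p&gt;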

&lt;h3&gt;
  
  
  &lt;strong&gt;Switching Between Namespaces&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Sometimes you’ll need to switch between namespaces while working with &lt;code&gt;kubectl&lt;/code&gt;. To make this easier, you can set the default namespace for your session with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running this, all your subsequent &lt;code&gt;kubectl&lt;/code&gt; commands will default to &lt;code&gt;my-namespace&lt;/code&gt;, so you won’t have to keep adding the &lt;code&gt;-n&lt;/code&gt; flag. To switch back to the default namespace, just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;Resources That Can and Can’t Be Shared Across Namespaces&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, namespaces help isolate different resources within a cluster, but not all resources are confined to a single namespace. Some can be shared across namespaces, while others stay restricted. Knowing which resources can be shared (and which can't) is crucial when it comes to managing your cluster effectively. Let's break it down.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Resources That Can’t Be Shared Across Namespaces&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Some resources are strictly bound to a specific namespace. These are isolated within their own namespace to ensure everything remains organized and secure.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Pods&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Pods are created within a namespace, and their names are only unique within that namespace. By default, however, Kubernetes networking is flat: a pod can reach pods in other namespaces unless you restrict traffic with network policies.&lt;/p&gt;

&lt;p&gt;Why is this important? Namespaces give you naming and management isolation for pods, but not network isolation out of the box. For that you’ll need network policies, and for stable cross-namespace access you’ll typically go through services.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Services&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In Kubernetes, services are namespaced objects. Their short DNS name (say, &lt;code&gt;my-service&lt;/code&gt;) only resolves within their own namespace, so by default they expose pods to local consumers. Pods in other namespaces can still reach the service, but they have to address it by its fully qualified name.&lt;/p&gt;

&lt;p&gt;Why is this important? For inter-namespace communication, managing service discovery becomes key. You’ll have to reference the service using a fully qualified domain name, like &lt;code&gt;my-service.my-namespace.svc.cluster.local&lt;/code&gt;.&lt;/p&gt;
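&lt;p&gt;As a sketch, an app in one namespace can point at a service in another purely through DNS; the names below are hypothetical:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
  namespace: frontend
data:
  # The fully qualified service name resolves from any namespace
  BACKEND_URL: "http://my-service.my-namespace.svc.cluster.local:8080"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;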

&lt;h4&gt;
  
  
  &lt;strong&gt;ConfigMaps &amp;amp; Secrets&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Both ConfigMaps and Secrets are tied to namespaces. While you can reference them within the same namespace or copy them to another namespace, you can't share them directly across namespaces.&lt;/p&gt;

&lt;p&gt;Note: It’s important to scope things like app configuration and sensitive data to the correct namespace. While you can replicate or reference them elsewhere, they can’t just be shared freely across namespaces.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Deployments and StatefulSets&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Just like pods, Deployments and StatefulSets live within a namespace. These resources manage pods within that namespace, so they won’t span multiple namespaces.&lt;/p&gt;

&lt;p&gt;Why this matters: This helps keep things isolated and manageable, especially when you're scaling applications or managing their lifecycle.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Resources That Can Be Shared Across Namespaces&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Not everything in Kubernetes is namespace-bound. There are a few resources that can span multiple namespaces, which helps Kubernetes maintain global management while respecting the boundaries that namespaces provide.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Nodes&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Nodes exist at the cluster level, not tied to any specific namespace. Every pod, regardless of which namespace it belongs to, can run on any available node in the cluster.&lt;br&gt;
Nodes make it possible to efficiently manage resources across the whole cluster without worrying about namespace boundaries.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Cluster-wide Resources (e.g., CRDs, ClusterRoles)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Resources like Custom Resource Definitions (CRDs) and ClusterRoles are not limited to namespaces. These resources are designed to work across the entire cluster, whether you’re defining custom resources or setting cluster-wide access policies.&lt;br&gt;
CRDs let you create custom objects that can be accessed from anywhere, while ClusterRoles manage permissions at the cluster level, allowing users and services to access resources across namespaces.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Network Policies&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;While network policies are generally scoped to namespaces, they can still define how pods in different namespaces communicate with each other. By setting up cross-namespace rules, you can control traffic between pods in different namespaces.&lt;/p&gt;

&lt;p&gt;Why this is important: Network policies allow you to maintain control over which namespaces can talk to each other, even if they’re isolated.&lt;/p&gt;
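&lt;p&gt;Here’s a sketch of such a cross-namespace rule. It allows pods in &lt;code&gt;my-namespace&lt;/code&gt; to receive traffic only from a (hypothetical) &lt;code&gt;frontend&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: my-namespace
spec:
  podSelector: {}            # applies to every pod in my-namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # this label is set automatically on every namespace
          kubernetes.io/metadata.name: frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Keep in mind that enforcement depends on your cluster’s network plugin; if the plugin doesn’t support NetworkPolicy, the rule is silently ignored.&lt;/p&gt;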

&lt;h4&gt;
  
  
  &lt;strong&gt;Persistent Volumes (PVs)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Persistent Volumes are another cluster-level resource that isn’t tied to a specific namespace. Persistent Volume Claims (PVCs), however, are namespace-bound. Because PVs are cluster-scoped, a claim from any namespace can bind an available PV, but each PV binds to exactly one PVC at a time; PVs are shared across namespaces in the sense of being claimable from anywhere, not simultaneously mounted by everyone.&lt;/p&gt;

&lt;p&gt;Why this matters: While you can manage storage across namespaces with PVs, the data and requests are still scoped to namespaces through PVCs.&lt;/p&gt;
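&lt;p&gt;A quick sketch of that split: the claim below is namespaced, while the volume it eventually binds is cluster-scoped (names and sizes are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  namespace: my-namespace   # the claim lives in this namespace...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # ...but it can bind any matching cluster-level PV
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;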

&lt;h4&gt;
  
  
  &lt;strong&gt;Ingress Resources&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Ingress resources allow external access to services. An Ingress object is namespaced and can only reference services in its own namespace, but a single Ingress controller watches Ingress resources across all namespaces. With one controller and an Ingress per namespace, you can still centralize how external traffic enters the cluster.&lt;/p&gt;

&lt;p&gt;Note: You can centralize traffic management with one Ingress controller, even when your services span multiple namespaces.&lt;/p&gt;
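&lt;p&gt;A minimal sketch: this Ingress routes a (hypothetical) host to a service in its own namespace, while the cluster’s single controller handles similar Ingress objects from every other namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: my-namespace
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # must live in my-namespace too
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;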

&lt;p&gt;&lt;em&gt;Thanks for reading! Stay tuned for more deep dives into Kubernetes concepts!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Feel free to DM me; let’s talk DevOps on &lt;a href="https://x.com/favxlaw" rel="noopener noreferrer"&gt;X&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
