<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohamed Sambo</title>
    <description>The latest articles on DEV Community by Mohamed Sambo (@sambo2021).</description>
    <link>https://dev.to/sambo2021</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F694017%2F8770c442-1c42-4dd5-b3a1-c2a6b92ae84f.jpg</url>
      <title>DEV Community: Mohamed Sambo</title>
      <link>https://dev.to/sambo2021</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sambo2021"/>
    <language>en</language>
    <item>
      <title>When Kafka Goes Kaiju: Building Monster-Scale Streaming Architectures</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Sat, 29 Nov 2025 19:37:43 +0000</pubDate>
      <link>https://dev.to/sambo2021/when-kafka-goes-kaiju-building-monster-scale-streaming-architectures-3ccp</link>
      <guid>https://dev.to/sambo2021/when-kafka-goes-kaiju-building-monster-scale-streaming-architectures-3ccp</guid>
      <description>&lt;h4&gt;
  
  
  🔥 Kafka: The Event Backbone Behind Real-Time Systems
&lt;/h4&gt;

&lt;p&gt;In every modern distributed system, one subtle truth emerges:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your system is only as real-time, resilient, and trustworthy as the event pipeline behind it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From fintech fraud scoring to telecom usage tracking to global e-commerce personalization, Apache Kafka has become the backbone of that pipeline.&lt;/p&gt;

&lt;p&gt;There’s a quote often attributed to Franz Kafka:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Start with what is right rather than what is acceptable.” (Franz Kafka)&lt;br&gt;
And ironically, that’s exactly how Apache Kafka should be architected.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Many engineers think Kafka is “just a message queue,” but the moment you try to build a real-time, end-to-end, fault-tolerant event platform, you quickly realize:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kafka is a distributed log, a real-time streaming engine, a replication protocol, a storage layer, and an ecosystem — all in one.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This blog will walk through Kafka not as a theoretical concept but through a story-driven, practical, senior-level exploration of how real production systems use it.&lt;/p&gt;

&lt;h4&gt;
  
  
  🧭 What You’ll Learn (Explained for Senior Engineers)
&lt;/h4&gt;

&lt;p&gt;This post covers the full lifecycle of events in Kafka:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How brokers, partitions, and replicas actually work&lt;/li&gt;
&lt;li&gt;Why leader partitions exist and how failover affects data safety&lt;/li&gt;
&lt;li&gt;How producers choose partitions and how Kafka guarantees ordering&lt;/li&gt;
&lt;li&gt;How TLS vs mTLS protects event traffic&lt;/li&gt;
&lt;li&gt;How certificates, keystores, and truststores fit together&lt;/li&gt;
&lt;li&gt;How Kafka behaves on Kubernetes (and why it’s tricky)&lt;/li&gt;
&lt;li&gt;When to use a Kafka operator like Strimzi&lt;/li&gt;
&lt;li&gt;Where Kafka Streams fits into modern architectures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything is explained through a real e-commerce event pipeline, which mirrors how today’s largest digital platforms operate.&lt;/p&gt;

&lt;h4&gt;
  
  
  🏬 A Real Scenario: The E-Commerce Event Firehose
&lt;/h4&gt;

&lt;p&gt;Imagine you're leading platform engineering for a major e-commerce business.&lt;br&gt;
Every second, thousands of events flow through your systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product views&lt;/li&gt;
&lt;li&gt;Search queries&lt;/li&gt;
&lt;li&gt;Add-to-cart actions&lt;/li&gt;
&lt;li&gt;Purchases&lt;/li&gt;
&lt;li&gt;Fraud anomalies&lt;/li&gt;
&lt;li&gt;Inventory updates&lt;/li&gt;
&lt;li&gt;Delivery status changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each downstream team wants a specific live stream of this data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Engineering → analytics&lt;/li&gt;
&lt;li&gt;ML/Recommendations → personalization&lt;/li&gt;
&lt;li&gt;Fraud &amp;amp; Security → anomaly detection&lt;/li&gt;
&lt;li&gt;Finance → auditing + reconciliation&lt;/li&gt;
&lt;li&gt;Mobile/Web Teams → real-time experiences&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And every team expects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No downtime&lt;/li&gt;
&lt;li&gt;No data loss&lt;/li&gt;
&lt;li&gt;Guaranteed ordering (per user/session)&lt;/li&gt;
&lt;li&gt;Low latency&lt;/li&gt;
&lt;li&gt;End-to-end encryption&lt;/li&gt;
&lt;li&gt;Horizontal scalability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A traditional message queue can’t do this.&lt;br&gt;
A database absolutely can’t do this in real time.&lt;/p&gt;

&lt;p&gt;So you choose Apache Kafka, and its real value becomes visible only once you understand how it works internally.&lt;/p&gt;
&lt;h4&gt;
  
  
  🎬 Turning Abstractions Into Reality: The User-Interactions Stream
&lt;/h4&gt;

&lt;p&gt;People often talk about Kafka in abstract terms:&lt;br&gt;
“Kafka for real-time analytics.”&lt;/p&gt;

&lt;p&gt;Let’s make it real.&lt;/p&gt;

&lt;p&gt;Here’s the event the frontend emits on every user action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user_id": "213884",
  "event_type": "product_view",
  "product_id": "P-4440",
  "timestamp": 1732707200,
  "device": "iOS",
  "session_id": "S-9912"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;page views&lt;/li&gt;
&lt;li&gt;product views&lt;/li&gt;
&lt;li&gt;add-to-cart&lt;/li&gt;
&lt;li&gt;checkout started&lt;/li&gt;
&lt;li&gt;purchase completed&lt;/li&gt;
&lt;li&gt;login attempts&lt;/li&gt;
&lt;li&gt;search queries&lt;/li&gt;
&lt;li&gt;ad clicks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these events flow into a single Kafka topic:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;user-interactions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But raw events alone don’t create value — processing them does.&lt;/p&gt;

&lt;p&gt;And that’s where Kafka Streams comes in.&lt;/p&gt;

&lt;h4&gt;
  
  
  🧠 Kafka Streams: The Real-Time Brain of the Event Pipeline
&lt;/h4&gt;

&lt;p&gt;Kafka Streams is not a separate cluster or a heavy engine.&lt;br&gt;
It’s a lightweight library embedded inside your microservices.&lt;/p&gt;

&lt;p&gt;Think of it as:&lt;/p&gt;

&lt;p&gt;“A distributed execution engine inside your services that reads from Kafka, transforms data, maintains state, and writes enriched results back to Kafka — with exactly-once semantics.”&lt;/p&gt;

&lt;p&gt;Your e-commerce architecture typically has multiple Kafka Streams applications working together.&lt;/p&gt;
&lt;h5&gt;
  
  
  A. Recommendation Engine Stream Processor
&lt;/h5&gt;
&lt;h6&gt;
  
  
  Input:
&lt;/h6&gt;

&lt;p&gt;user-interactions topic&lt;/p&gt;
&lt;h6&gt;
  
  
  Processing:
&lt;/h6&gt;

&lt;p&gt;sessionization&lt;br&gt;
affinity scoring&lt;br&gt;
behavior aggregation&lt;br&gt;
similar-item calculations&lt;/p&gt;
&lt;h6&gt;
  
  
  Output:
&lt;/h6&gt;

&lt;p&gt;processed-events&lt;/p&gt;

&lt;p&gt;Your recommendation service subscribes to this topic to update UI suggestions in under 200 ms.&lt;/p&gt;
&lt;h5&gt;
  
  
  B. Real-Time Analytics / Clickstream Pipeline
&lt;/h5&gt;
&lt;h6&gt;
  
  
  This Streams app:
&lt;/h6&gt;

&lt;p&gt;counts events&lt;br&gt;
aggregates metrics per minute/hour&lt;br&gt;
computes funnel drop-offs&lt;br&gt;
pushes results into Druid / ClickHouse / BigQuery&lt;/p&gt;
&lt;h6&gt;
  
  
  Output:
&lt;/h6&gt;

&lt;p&gt;aggregated-metrics&lt;/p&gt;
&lt;h5&gt;
  
  
  C. Fraud Detection Stream
&lt;/h5&gt;
&lt;h6&gt;
  
  
  This service monitors patterns like:
&lt;/h6&gt;

&lt;p&gt;rapid login attempts&lt;br&gt;
too many purchases in a short window&lt;br&gt;
mismatched session IDs&lt;br&gt;
abnormal add-to-cart actions&lt;/p&gt;
&lt;h6&gt;
  
  
  Output:
&lt;/h6&gt;

&lt;p&gt;fraud-events&lt;/p&gt;

&lt;p&gt;Downstream systems react instantly.&lt;/p&gt;
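&lt;p&gt;The pattern matching above can be sketched as a sliding-window counter. This is an illustrative stand-in for what a Kafka Streams fraud app computes, not Streams API code; the names WINDOW_SECONDS and MAX_EVENTS are hypothetical:&lt;/p&gt;

```python
# Toy sliding-window fraud detector: count each user's events inside a
# time window and flag a burst. Thresholds are illustrative only.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_EVENTS = 5  # e.g. more than 5 purchases per minute looks suspicious

windows = defaultdict(deque)  # user_id -> timestamps of recent events

def is_suspicious(user_id, timestamp):
    q = windows[user_id]
    q.append(timestamp)
    # drop events that fell out of the window
    while q and q[0] <= timestamp - WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_EVENTS

# six purchases within a few seconds trip the detector on the sixth event
events = [("u1", t) for t in range(100, 106)]
flags = [is_suspicious(u, t) for u, t in events]
```

A real Streams app would keep this state in a fault-tolerant state store and emit the flagged events to the fraud-events topic.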

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyseynrxnp5dsnaqkez5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyseynrxnp5dsnaqkez5u.png" alt="kafka overview" width="761" height="571"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  ⚙️ How Kafka Distributes Partitions &amp;amp; Replicas Internally
&lt;/h4&gt;

&lt;p&gt;A Deep Dive Into Leadership Balancing, Replica Placement, and Cluster Stability&lt;/p&gt;

&lt;p&gt;One of Kafka’s most critical architectural responsibilities is determining where partitions live and which broker becomes their leader.&lt;br&gt;
This decision directly affects latency, fault-tolerance, throughput, and cluster stability.&lt;/p&gt;

&lt;p&gt;To understand how Kafka optimizes these properties, we analyze a realistic, production-grade scenario.&lt;/p&gt;
&lt;h6&gt;
  
  
  🎯 Scenario Setup
&lt;/h6&gt;

&lt;p&gt;You operate a Kafka cluster with:&lt;/p&gt;

&lt;p&gt;3 Brokers&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Broker-01&lt;/li&gt;
&lt;li&gt;Broker-02&lt;/li&gt;
&lt;li&gt;Broker-03&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2 topics, each with 3 partitions&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;topicA   → p0, p1, p2&lt;/li&gt;
&lt;li&gt;topicB   → p0, p1, p2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Replication factor = 3&lt;/p&gt;

&lt;p&gt;This means:&lt;br&gt;
Each partition is replicated across all brokers (1 leader + 2 followers).&lt;/p&gt;

&lt;p&gt;But the important question is:&lt;br&gt;
How does Kafka decide which broker is the leader for each partition?&lt;/p&gt;

&lt;p&gt;This is where Kafka’s internal placement strategy becomes crucial.&lt;/p&gt;
&lt;h4&gt;
  
  
  🎛️ Why Kafka Must Distribute Leaders Evenly
&lt;/h4&gt;

&lt;p&gt;If Kafka randomly or naïvely assigned all leaders to one broker, that broker would immediately become a bottleneck.&lt;/p&gt;

&lt;p&gt;Every leader handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All incoming writes from producers&lt;/li&gt;
&lt;li&gt;All read requests from consumers (unless follower-fetching is enabled)&lt;/li&gt;
&lt;li&gt;All coordination with followers (replication, ISR management, log divergence detection)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If one broker hosted all leaders:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write load → concentrated on a single machine
&lt;/li&gt;
&lt;li&gt;Consumer fetch load → same single machine
&lt;/li&gt;
&lt;li&gt;Network traffic → same machine
&lt;/li&gt;
&lt;li&gt;Risk of failure → extremely high&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kafka prevents this collapse by intentionally distributing leaders evenly across brokers.&lt;br&gt;
This design principle is known as:&lt;/p&gt;

&lt;p&gt;Partition Leadership Balancing&lt;/p&gt;
&lt;h4&gt;
  
  
  📦 Example: How Kafka Balances topicA
&lt;/h4&gt;

&lt;p&gt;Kafka tries to assign leaders in a round-robin fashion across brokers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Topic A (3 partitions, replication factor = 3):
Partition   Leader          Followers
p0          Broker-01   Broker-02, Broker-03
p1          Broker-02   Broker-01, Broker-03
p2          Broker-03   Broker-01, Broker-02
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result:&lt;br&gt;
Each broker leads exactly one partition.&lt;/p&gt;

&lt;p&gt;This ensures leadership load is evenly distributed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Broker-01 leads p0&lt;/li&gt;
&lt;li&gt;Broker-02 leads p1&lt;/li&gt;
&lt;li&gt;Broker-03 leads p2&lt;/li&gt;
&lt;/ul&gt;
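&lt;p&gt;The round-robin placement above can be modeled in a few lines. This is a simplified sketch, not Kafka’s actual assignment code, which also staggers follower offsets and can be rack-aware:&lt;/p&gt;

```python
# Simplified model of round-robin leader/replica placement:
# partition p's leader rotates across brokers, followers fill in behind it.
def assign(brokers, num_partitions, replication_factor):
    n = len(brokers)
    layout = {}
    for p in range(num_partitions):
        leader = brokers[p % n]
        followers = [brokers[(p + i) % n] for i in range(1, replication_factor)]
        layout[p] = (leader, followers)
    return layout

layout = assign(["Broker-01", "Broker-02", "Broker-03"], 3, 3)
# each broker ends up leading exactly one partition
```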
&lt;h4&gt;
  
  
  📦 Example: How Kafka Balances topicB
&lt;/h4&gt;

&lt;p&gt;Kafka repeats the same balancing logic for each topic independently.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Topic B:
Partition   Leader          Followers
p0          Broker-02   Broker-01, Broker-03
p1          Broker-01   Broker-02, Broker-03
p2          Broker-03   Broker-01, Broker-02
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, every broker receives one leader:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Broker-01 → p1&lt;/li&gt;
&lt;li&gt;Broker-02 → p0&lt;/li&gt;
&lt;li&gt;Broker-03 → p2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This avoids hotspots and keeps the cluster balanced for both topics.&lt;/p&gt;

&lt;h4&gt;
  
  
  🔍 Why This Architecture Matters
&lt;/h4&gt;

&lt;h6&gt;
  
  
  ✅ 1. High Throughput
&lt;/h6&gt;

&lt;p&gt;Producers send data only to the leader of a partition.&lt;/p&gt;

&lt;p&gt;By distributing leaders evenly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write traffic is spread out.&lt;/li&gt;
&lt;li&gt;No single broker becomes saturated.&lt;/li&gt;
&lt;li&gt;Replication remains efficient because followers are also spread across nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  ✅ 2. High Availability
&lt;/h6&gt;

&lt;p&gt;Because each partition is replicated to all brokers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Any broker can fail.&lt;/li&gt;
&lt;li&gt;A different replica can immediately take over as leader.&lt;/li&gt;
&lt;li&gt;Cluster continues operating smoothly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kafka ensures high availability by maintaining the ISR (In-Sync Replica) list, which tracks replicas fully caught up with the leader.&lt;/p&gt;

&lt;h6&gt;
  
  
  ✅ 3. Predictable Latency
&lt;/h6&gt;

&lt;p&gt;Consumers also read from the leader (unless using follower-fetching).&lt;/p&gt;

&lt;p&gt;Balanced leadership:&lt;br&gt;
prevents consumer load concentration,&lt;br&gt;
and produces predictable read latency across the system.&lt;/p&gt;

&lt;h6&gt;
  
  
  ✅ 4. Clean, Safe Failover
&lt;/h6&gt;

&lt;p&gt;When a broker fails, Kafka automatically triggers leader election.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;Broker-01 fails → leaders of topicA/p0 and topicB/p1 must move.&lt;/p&gt;

&lt;p&gt;Kafka chooses the next leader from the ISR list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;p0 → Broker-02 or Broker-03&lt;/li&gt;
&lt;li&gt;p1 → Broker-02 or Broker-03&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No data loss occurs if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;unclean.leader.election.enable=false
&lt;/li&gt;
&lt;li&gt;acks=all
&lt;/li&gt;
&lt;li&gt;min.insync.replicas=2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures Kafka promotes only replicas that are fully caught up, preventing log truncation or loss of committed messages.&lt;/p&gt;
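&lt;p&gt;The ISR-driven election can be sketched as a toy model (not Kafka’s controller logic): only in-sync replicas are eligible unless unclean election is explicitly allowed.&lt;/p&gt;

```python
# Toy model of ISR-based leader election: with unclean election disabled,
# only replicas that are in-sync may be promoted.
def elect_leader(replicas, isr, unclean_allowed=False):
    candidates = [r for r in replicas if r in isr]
    if candidates:
        return candidates[0]
    if unclean_allowed:
        return replicas[0]  # may lose committed data
    return None  # partition stays offline rather than losing data

# Broker-01 (old leader) failed; Broker-02 and Broker-03 are in the ISR.
new_leader = elect_leader(["Broker-02", "Broker-03"], {"Broker-02", "Broker-03"})
```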

&lt;h4&gt;
  
  
  🧠 Summary: How Kafka Achieves Balanced, Fault-tolerant Partition Placement
&lt;/h4&gt;

&lt;p&gt;Kafka’s internal placement logic ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leaders are evenly distributed across all brokers.&lt;/li&gt;
&lt;li&gt;Replicas are also evenly spread.&lt;/li&gt;
&lt;li&gt;High availability is automatically maintained.&lt;/li&gt;
&lt;li&gt;Producers and consumers experience consistent performance.&lt;/li&gt;
&lt;li&gt;Failures trigger safe ISR-driven elections (when configured properly).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This intelligent distribution architecture is what makes Kafka capable of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;linear scalability,&lt;/li&gt;
&lt;li&gt;fault isolation,&lt;/li&gt;
&lt;li&gt;stable throughput, and&lt;/li&gt;
&lt;li&gt;predictable performance in large production deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmfx8o0jpgmnal0aj6h8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmfx8o0jpgmnal0aj6h8.png" alt="kafka-replication" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  How Events Reach Kafka (Producers):
&lt;/h4&gt;

&lt;p&gt;Modern distributed systems rely heavily on event-driven architectures, and Apache Kafka sits at the center as the backbone of high-throughput, low-latency ingestion.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How Events Are Produced: The Producer Application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A Kafka producer is any application, service, or component that sends data into Kafka.&lt;/p&gt;

&lt;p&gt;Producers can be built in many forms:&lt;br&gt;
🔹 1.1 Native Kafka Clients (Official Libraries)&lt;/p&gt;

&lt;p&gt;Kafka provides native client libraries in many languages; these libraries directly manage:&lt;br&gt;
connections, batching, retries, compression, and partition selection&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;How Events Are Serialized Before Sending&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Kafka accepts bytes, not objects, so the producer must serialize the event. Common choices:&lt;/p&gt;

&lt;p&gt;🔹 2.1 JSON Serialization&lt;br&gt;
Simple, human-readable, slower, larger size.&lt;/p&gt;

&lt;p&gt;🔹 2.2 Avro Serialization&lt;br&gt;
Schema-based&lt;br&gt;
Requires a Schema Registry&lt;br&gt;
Backward/forward compatibility&lt;/p&gt;

&lt;p&gt;🔹 2.3 Protobuf / gRPC&lt;br&gt;
Strongly typed&lt;br&gt;
Efficient&lt;br&gt;
Great for evolving microservices&lt;/p&gt;

&lt;p&gt;🔹 2.4 Thrift / FlatBuffers&lt;br&gt;
Ultra-low latency use cases.&lt;/p&gt;

&lt;p&gt;🔹 2.5 Custom binary formats&lt;br&gt;
For extreme performance.&lt;/p&gt;
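&lt;p&gt;A quick size comparison shows why schema-based binary formats win. The struct layout below stands in for Avro/Protobuf and is invented purely for illustration:&lt;/p&gt;

```python
# Compare payload sizes of the same event under two encodings.
# JSON is readable but verbose; a packed binary layout (struct, as a
# stand-in for Avro/Protobuf) is far smaller on the wire.
import json
import struct

event = {
    "user_id": 213884,
    "event_type": "product_view",
    "product_id": 4440,
    "timestamp": 1732707200,
}

json_bytes = json.dumps(event).encode("utf-8")
# hypothetical fixed layout: three unsigned ints + a short event-type code
binary_bytes = struct.pack("!IIIH", event["user_id"], event["product_id"],
                           event["timestamp"], 1)
```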

&lt;ol start="3"&gt;
&lt;li&gt;How Producers Connect to Kafka&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A producer connects to bootstrap brokers: bootstrap.servers = broker1:9092, broker2:9092&lt;/p&gt;

&lt;p&gt;The bootstrap server is NOT responsible for all messages — it only provides:&lt;br&gt;
Cluster metadata, List of brokers, Topic→partition→leader mapping&lt;/p&gt;

&lt;p&gt;After the initial metadata fetch, the producer does direct leader communication.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;How Kafka Decides Which Partition Gets the Message&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Kafka MUST assign every message to exactly one partition.&lt;/p&gt;

&lt;p&gt;✅ 4.1 Key-based Partitioning (Most Common)&lt;br&gt;
producer.send("customer_events", key="customer_123", value="event...")&lt;br&gt;
Kafka hashes the key (murmur2 in the Java client):&lt;br&gt;
partition = hash(key) % number_of_partitions&lt;br&gt;
➡️ Ensures ordering per key&lt;/p&gt;
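&lt;p&gt;A minimal sketch of key-based partitioning, using CRC32 instead of Kafka’s murmur2; any deterministic hash gives the same property, one key always maps to one partition:&lt;/p&gt;

```python
# Simplified key-based partition selection: a deterministic hash of the
# key modulo the partition count, so one key always lands on one partition.
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    return zlib.crc32(key.encode("utf-8")) % num_partitions

p1 = partition_for("customer_123", 6)
p2 = partition_for("customer_123", 6)
# same key, same partition -> per-key ordering is preserved
```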

&lt;p&gt;✅ 4.2 Round-Robin Partitioning (No Key)&lt;br&gt;
If key = null:&lt;br&gt;
Producer sends one batch → partition 0&lt;br&gt;
Next batch → partition 1 And so on&lt;br&gt;
➡️ Ordering NOT guaranteed&lt;/p&gt;

&lt;p&gt;✅ 4.3 Custom Partitioners&lt;br&gt;
Used for special routing logic.&lt;/p&gt;

&lt;p&gt;✅ 4.4 Sticky Partitioning (newer behavior)&lt;br&gt;
Kafka 2.4+ introduced a "sticky" policy:&lt;br&gt;
If no key is provided, producer sticks to a single partition for batching efficiency, then switches after a batch is sent.&lt;/p&gt;
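&lt;p&gt;Sticky partitioning can be modeled with a tiny class; this is an illustrative toy, not the client’s actual partitioner:&lt;/p&gt;

```python
# Toy sticky partitioner: keyless records stick to one partition until
# the current batch is flushed, then a different partition is picked.
import random

class StickyPartitioner:
    def __init__(self, num_partitions):
        self.num_partitions = num_partitions
        self.current = random.randrange(num_partitions)

    def partition(self):
        return self.current

    def on_batch_sent(self):
        # switch to a different partition for the next batch
        choices = [p for p in range(self.num_partitions) if p != self.current]
        self.current = random.choice(choices)

sp = StickyPartitioner(6)
batch1 = {sp.partition() for _ in range(100)}  # all records share one partition
sp.on_batch_sent()
batch2 = {sp.partition() for _ in range(100)}
```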

&lt;ol start="5"&gt;
&lt;li&gt;Producer Batching &amp;amp; Compression&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before sending, producers batch records:&lt;br&gt;
Batching improves throughput&lt;br&gt;
More events per network call → less overhead&lt;/p&gt;

&lt;p&gt;Compression types:&lt;br&gt;
gzip (high CPU)&lt;br&gt;
snappy (balanced)&lt;br&gt;
lz4 (high-performance)&lt;br&gt;
zstd (best modern choice)&lt;/p&gt;

&lt;p&gt;Producers compress batches before sending.&lt;/p&gt;
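&lt;p&gt;A stdlib-only sketch of why batch-level compression pays off (gzip shown here; snappy/lz4/zstd need extra libraries):&lt;/p&gt;

```python
# Many small, similar JSON events compress far better as one batch than
# individually, which is why producers compress whole batches.
import gzip
import json

events = [json.dumps({"user_id": i, "event_type": "product_view"}).encode()
          for i in range(1000)]

batch = b"\n".join(events)
compressed = gzip.compress(batch)

ratio = len(compressed) / len(batch)  # well under 1.0 for repetitive data
```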

&lt;ol start="6"&gt;
&lt;li&gt;TLS / mTLS Authentication (Optional but Common in Production)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For production clusters with security:&lt;br&gt;
Producer must provide:&lt;br&gt;
Truststore → to validate broker cert&lt;br&gt;
Contains:&lt;br&gt;
cluster CA certificate&lt;br&gt;
intermediate CAs&lt;/p&gt;

&lt;p&gt;Keystore → to prove the producer’s identity (mTLS)&lt;br&gt;
Contains:&lt;br&gt;
client certificate signed by CA&lt;br&gt;
client private key&lt;/p&gt;

&lt;p&gt;Handshake verification:&lt;br&gt;
client verifies broker → broker certificate chains to a trusted CA (truststore)&lt;br&gt;
broker verifies client → client certificate signed by the cluster CA (mTLS)&lt;/p&gt;
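&lt;p&gt;A typical Java-client SSL configuration looks like this; all paths and passwords are placeholders:&lt;/p&gt;

```properties
# mTLS client configuration for a Kafka producer (placeholder paths/passwords)
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/etc/kafka/secrets/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```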

&lt;ol start="7"&gt;
&lt;li&gt;Producer Sends the Message Over TCP&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the batch is ready:&lt;br&gt;
Producer opens a TCP connection to the partition leader broker&lt;br&gt;
Uses binary Kafka protocol&lt;br&gt;
Sends a ProduceRequest&lt;br&gt;
Broker responds with ProduceResponse&lt;/p&gt;

&lt;ol start="8"&gt;
&lt;li&gt;Broker Handles the Message&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The leader:&lt;br&gt;
Writes event to local log segment on disk&lt;br&gt;
Appends metadata and offsets&lt;br&gt;
Replicates the event to ISR (In-Sync Replicas) followers&lt;br&gt;
Depending on producer durability settings…&lt;/p&gt;

&lt;ol start="9"&gt;
&lt;li&gt;Producer Durability Settings (acks)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;acks=0 → fire and forget (may lose data)&lt;br&gt;
acks=1 → leader write only (faster; data is lost if the leader crashes before replicating)&lt;br&gt;
acks=all → safest; waits for the in-sync replicas&lt;/p&gt;

&lt;p&gt;Most financial systems use:&lt;br&gt;
acks=all&lt;br&gt;
min.insync.replicas=2&lt;/p&gt;
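&lt;p&gt;As a properties snippet (note that min.insync.replicas is a topic/broker-level setting, shown alongside for completeness):&lt;/p&gt;

```properties
# Durable producer setup, assuming the topic has replication.factor >= 3
acks=all
min.insync.replicas=2   # topic/broker config, not a producer property
```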

&lt;ol start="10"&gt;
&lt;li&gt;What Happens If Leader Broker Fails?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the leader goes down → Kafka elects a new leader:&lt;br&gt;
If unclean.leader.election.enable=false → only ISR replicas can become leader (no data loss)&lt;br&gt;
If true → an out-of-sync replica may become leader (possible data loss)&lt;/p&gt;

&lt;p&gt;Producers retry automatically using:&lt;br&gt;
retries&lt;br&gt;
retry.backoff.ms&lt;br&gt;
max.in.flight.requests.per.connection&lt;/p&gt;
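&lt;p&gt;A resilient retry configuration might look like this; the values are illustrative:&lt;/p&gt;

```properties
# Retry behavior for a resilient producer (illustrative values)
retries=2147483647
retry.backoff.ms=100
max.in.flight.requests.per.connection=5
enable.idempotence=true   # retries cannot duplicate or reorder records
```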

</description>
      <category>kafka</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>security</category>
    </item>
    <item>
      <title>RDP to your EC2-Ubuntu</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Wed, 26 Mar 2025 20:09:32 +0000</pubDate>
      <link>https://dev.to/sambo2021/rdp-to-your-ec2-ubuntu-c4n</link>
      <guid>https://dev.to/sambo2021/rdp-to-your-ec2-ubuntu-c4n</guid>
      <description>&lt;p&gt;1- Create security group with outpund to everywhere and inbound on port 3350 and 3389 to your ip only&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mzub1uakqt7290s2ofp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mzub1uakqt7290s2ofp.png" alt=" " width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9ollyf9zuib7rxwzdx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9ollyf9zuib7rxwzdx1.png" alt=" " width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- Create a role with the AmazonSSMManagedInstanceCore policy&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoyuw9ksw9qs72ombs0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoyuw9ksw9qs72ombs0j.png" alt=" " width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Launch an Ubuntu machine in a public subnet with a public IP enabled and no key pair;&lt;br&gt;
attach the security group created at step 1 to the machine and the role created at step 2 to the instance profile&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm7aharn88756jz71bxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm7aharn88756jz71bxx.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycgjkxuhgoe0ndqes9ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycgjkxuhgoe0ndqes9ii.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Connect to the machine using an SSM session&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjhcxkj63oxl535rie2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjhcxkj63oxl535rie2q.png" alt=" " width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;then run the following commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su - 
apt-get update
apt install xrdp 
systemctl enable xrdp

###add-apt-repository tool that adds new software repositories to your system's APT (Advanced Package Tool) sources.
###ppa:gnome3-team/gnome3 a Personal Package Archive (unofficial Ubuntu repository). maintained by the GNOME 3 development team.

add-apt-repository ppa:gnome3-team/gnome3
apt-get install gnome-shell ubuntu-gnome-desktop
passwd ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5- Then connect with a Remote Desktop application from your local machine to the public IP of the EC2 machine&lt;/p&gt;

</description>
      <category>rdp</category>
      <category>ec2</category>
      <category>ubuntu</category>
      <category>aws</category>
    </item>
    <item>
      <title>RDP to your EC2-Ubuntu</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Wed, 26 Mar 2025 20:09:32 +0000</pubDate>
      <link>https://dev.to/sambo2021/rdp-to-your-ec2-ubuntu-k1o</link>
      <guid>https://dev.to/sambo2021/rdp-to-your-ec2-ubuntu-k1o</guid>
      <description>&lt;p&gt;1- Create security group with outpund to everywhere and inbound on port 3350 to your ip only&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw8wn2qh7ray5651iip6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw8wn2qh7ray5651iip6.png" alt=" " width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9ollyf9zuib7rxwzdx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9ollyf9zuib7rxwzdx1.png" alt=" " width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- Create a role with the AmazonSSMManagedInstanceCore policy&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoyuw9ksw9qs72ombs0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoyuw9ksw9qs72ombs0j.png" alt=" " width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Launch an Ubuntu machine in a public subnet with a public IP enabled and no key pair&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm7aharn88756jz71bxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm7aharn88756jz71bxx.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycgjkxuhgoe0ndqes9ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycgjkxuhgoe0ndqes9ii.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Connect to the machine using an SSM session&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjhcxkj63oxl535rie2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjhcxkj63oxl535rie2q.png" alt=" " width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;then run the following commands&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su -
apt-get update
apt-get install -y xrdp
systemctl enable --now xrdp
add-apt-repository ppa:gnome3-team/gnome3
apt-get install -y gnome-shell ubuntu-gnome-desktop
passwd ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;5- Then connect with a Remote Desktop application from your local machine to the public IP of the EC2 machine&lt;/p&gt;

</description>
      <category>rdp</category>
      <category>ec2</category>
      <category>ubuntu</category>
      <category>aws</category>
    </item>
    <item>
      <title>Server Certificate Chain</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Wed, 31 Jul 2024 22:37:00 +0000</pubDate>
      <link>https://dev.to/sambo2021/certificate-chain-5a80</link>
      <guid>https://dev.to/sambo2021/certificate-chain-5a80</guid>
      <description>&lt;p&gt;&lt;strong&gt;What happens while curl https domains?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;-The simplest command to use, for example, is curl "&lt;a href="https://www.vodafone.com/" rel="noopener noreferrer"&gt;https://www.vodafone.com/&lt;/a&gt;"&lt;br&gt;
-curl makes a GET request and returns the page source without any error because the server uses a trusted-CA-signed SSL certificate.&lt;br&gt;
This means that the server is using a certificate that was signed by a trusted authority.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is certificate chain?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;-when you curl a domain name, a sequence of steps happens in order to&lt;br&gt;
 establish a secure connection between the remote server and the client:&lt;br&gt;
 beyond DNS resolution, the TCP handshake, and the SSL/TLS handshake, the server sends&lt;br&gt;
 its SSL/TLS certificate chain to the client &lt;em&gt;[including all intermediate&lt;br&gt;
 certificates up to (but not including) the root certificate]&lt;/em&gt;.&lt;br&gt;
-a certificate chain is an ordered list of certificates in which each certificate&lt;br&gt;
 is signed by the entity identified by the next certificate&lt;br&gt;
 in the chain, which enables the receiver to verify that the sender and all&lt;br&gt;
 CAs are trustworthy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Certificate chain typically includes:&lt;/strong&gt;&lt;br&gt;
  1-Leaf Certificate:&lt;br&gt;
    Issued to the domain (e.g., &lt;a href="http://www.vodafone.com" rel="noopener noreferrer"&gt;www.vodafone.com&lt;/a&gt;). This certificate is&lt;br&gt;
    used to encrypt the communication between the client and the server.&lt;br&gt;
    It is also known as the SSL/TLS certificate.&lt;br&gt;
  2-Intermediate Certificate(s):&lt;br&gt;
    -Any certificate that sits between the SSL/TLS certificate and the&lt;br&gt;
     Root Certificate is called a chain or Intermediate Certificate. &lt;br&gt;
    -An Intermediate Certificate is the signer/issuer of the certificate&lt;br&gt;
     below it in the chain (the leaf or the previous intermediate). &lt;br&gt;
    -The Root CA Certificate is the signer/issuer of the top-most&lt;br&gt;
     Intermediate Certificate. &lt;br&gt;
    -If the Intermediate Certificates are not installed on the server (where&lt;br&gt;
     the SSL/TLS certificate is installed), some browsers, mobile devices,&lt;br&gt;
     and applications may not trust the SSL/TLS certificate. &lt;br&gt;
    -To make the SSL/TLS certificate compatible with all clients,&lt;br&gt;
     the Intermediate Certificates must be installed.&lt;br&gt;
  3-Root CA Certificate: &lt;a href="https://www.checktls.com/showcas.html" rel="noopener noreferrer"&gt;Trusted Root Certificate Authority List&lt;/a&gt;&lt;br&gt;
    -The root CA (Certificate Authority) certificate is the top-most&lt;br&gt;
     certificate in the certificate chain. It is self-signed, meaning the&lt;br&gt;
     issuer and subject are the same, and every chain must ultimately&lt;br&gt;
     verify up to a trusted root.&lt;br&gt;
    -It serves as the ultimate trust anchor for all certificates issued&lt;br&gt;
     under it.&lt;br&gt;
    -Root CA certificates are typically distributed with operating systems&lt;br&gt;
     and browsers so that they are trusted by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzooojyyu4w5427ekt621.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzooojyyu4w5427ekt621.png" alt="Image Certificate-Chain" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt; checking "&lt;a href="https://www.vodafone.com" rel="noopener noreferrer"&gt;https://www.vodafone.com&lt;/a&gt;" at &lt;a href="https://www.ssllabs.com/ssltest/analyze.html?d=vodafone.com" rel="noopener noreferrer"&gt;https://www.ssllabs.com/ssltest/analyze.html?d=vodafone.com&lt;/a&gt;&lt;br&gt;
you can see that the leaf certificate &lt;a href="http://www.vodafone.com" rel="noopener noreferrer"&gt;www.vodafone.com&lt;/a&gt; is issued by the intermediate DigiCert SHA2 Secure Server CA, and that intermediate certificate is issued by the trusted root CA DigiCert Global Root CA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd5dxalpnb86hbffpncz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd5dxalpnb86hbffpncz.png" alt="Image vodafone.com" width="621" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can connect to the server and retrieve the certificate chain using openssl s_client. This command shows the certificates sent by the server:&lt;/strong&gt;&lt;br&gt;
-showcerts: displays all certificates sent by the server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[sh@ip-10-160-78-8 ~]$ openssl s_client -connect www.vodafone.com:443 -showcerts

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What may cause a problem?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;-If the server has two intermediate certificates but sends only one (the one directly issuing the leaf certificate) and omits the second (the one issuing the first intermediate), the client may not be able to validate the certificate chain properly.&lt;br&gt;
For more issues and &lt;a href="https://help.zerossl.com/hc/en-us/articles/360058296114-Missing-Intermediate-Certificate" rel="noopener noreferrer"&gt;troubleshooting&lt;/a&gt;, see the linked guide.&lt;/p&gt;
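&lt;p&gt;A hedged toy example (not the real vodafone chain): create a throwaway root CA, sign a leaf certificate with it, then show that verification only succeeds when the issuing certificate is available to the client:&lt;/p&gt;

```shell
# throwaway root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
  -days 1 -subj "/CN=toy-root"
# leaf key + CSR, signed by the toy root
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=toy-leaf"
openssl x509 -req -in leaf.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -out leaf.pem -days 1
# verification succeeds when the issuer is supplied
openssl verify -CAfile root.pem leaf.pem          # prints: leaf.pem: OK
# without the issuer certificate, chain validation fails
openssl verify leaf.pem || echo "chain validation fails without the issuer"
```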

&lt;p&gt;For more info: &lt;a href="https://www.digicert.com/blog/what-is-a-certificate-authority" rel="noopener noreferrer"&gt;https://www.digicert.com/blog/what-is-a-certificate-authority&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>security</category>
      <category>learning</category>
      <category>openssl</category>
    </item>
    <item>
      <title>Nginx Ingress Controller-Part01</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Sat, 25 May 2024 18:37:46 +0000</pubDate>
      <link>https://dev.to/sambo2021/kubernetes-ingress-in-a-nutshell-part01-28j</link>
      <guid>https://dev.to/sambo2021/kubernetes-ingress-in-a-nutshell-part01-28j</guid>
      <description>&lt;p&gt;nginx-ingress-controller is one of the open-source Kubernetes ingress controllers.&lt;/p&gt;

&lt;p&gt;What you actually deploy for your services are Ingress resources, but an Ingress Controller is required for those Ingress resources to come to life.&lt;br&gt;
So please keep in mind that an ingress resource is different from an ingress controller.&lt;/p&gt;

&lt;p&gt;What is Ingress?&lt;br&gt;
Ingress is an API object for routing and load balancing requests to a kubernetes service. Ingress can run on HTTP or HTTPS protocols and performs redirection by applying the rules we define as developers.&lt;/p&gt;

&lt;p&gt;What is Ingress Controller?&lt;br&gt;
Ingress Controller is a backend service developed with the Ingress API. It reads Ingress objects and takes actions to properly route incoming requests. Ingress Controllers can perform load balancing as well as forwarding operations. There are many Ingress Controllers in use&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/?source=post_page-----7b448f6314f6--------------------------------" rel="noopener noreferrer"&gt;ingress-controllers&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Install Ingress Controller
firstly, you need to deploy nginx-controller by helm chart&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx \
             --repo https://kubernetes.github.io/ingress-nginx \
             --namespace ingress-nginx \
             --create-namespace \
             --set-string controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;-- the chart's default service type is LoadBalancer: &lt;a href="https://github.com/kubernetes/ingress-nginx/blob/3b1908e20693c57a97b55d8a563da284a5dbf36c/charts/ingress-nginx/values.yaml#L482" rel="noopener noreferrer"&gt;https://github.com/kubernetes/ingress-nginx/blob/3b1908e20693c57a97b55d8a563da284a5dbf36c/charts/ingress-nginx/values.yaml#L482&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzjvef3wvkduujlm7u53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzjvef3wvkduujlm7u53.png" alt="nginx-ingress-helm-chart" width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;-- to specify that the created load balancer should be an NLB, add an annotation on the nginx-ingress-controller service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, suppose we have the public domain "tools.com"&lt;br&gt;
-- To set up SSL/TLS termination on the AWS load balancer:&lt;br&gt;
simply put, this abstracts TLS handling by terminating it on the load balancer and using plain HTTP inside the cluster by default.&lt;br&gt;
Request a public certificate for your custom domain &lt;code&gt;"api.tools.com"&lt;/code&gt; and wildcard custom domain &lt;code&gt;"*.api.tools.com"&lt;/code&gt;, and don't forget to create the certificate's validation CNAME record under your public hosted zone.&lt;br&gt;
Then use the ACM certificate ARN in the controller service annotation and define the SSL port as "https".&lt;br&gt;
And of course, don't forget to create records for all needed subdomains &lt;code&gt;"api.tools.com"&lt;/code&gt; and &lt;code&gt;"*.api.tools.com"&lt;/code&gt; routing to your NLB as type A records (the certificate validation record is a CNAME).&lt;br&gt;
You can run not just one ingress controller but multiple, each with its own NLB and hostname, for example:&lt;br&gt;
hostnameA: ui.api.tools.com -&amp;gt; routes to all UI websites&lt;br&gt;
hostnameB: services.api.tools.com -&amp;gt; routes to RESTful API services&lt;br&gt;
And how does an ingress resource know which ingress-nginx-controller to use?&lt;br&gt;
--- each controller has its own unique ingress class&lt;/p&gt;
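&lt;p&gt;For instance, a sketch of an Ingress resource pinned to one controller via its class name (the class and service names here are illustrative):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui-demo
spec:
  ingressClassName: nginx-ui   # must match the class served by that controller
  rules:
    - host: ui.api.tools.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ui-frontend
                port:
                  number: 80
```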

&lt;p&gt;Another important note: the ingress controller needs the minimum permissions required to create a load balancer on AWS; this should use an IRSA role whose ARN is passed as an annotation on the serviceAccount inside the helm chart&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:55xxxxxxx:certificate/5k0c5513-a947-6cc5-a506-b3yxxx
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
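&lt;p&gt;A sketch of passing such an IRSA role to the controller's ServiceAccount (the role ARN is hypothetical; check the serviceAccount values for your chart version):&lt;/p&gt;

```yaml
controller:
  serviceAccount:
    annotations:
      # hypothetical IRSA role ARN granting load-balancer permissions
      eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/ingress-nginx-irsa
```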



&lt;p&gt;-- Choosing public accessibility&lt;br&gt;
This will configure the AWS load balancer for public access&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-- The NLB in front of the NGINX Ingress Controller may overwrite the client IP; to retain the actual client IP:&lt;br&gt;
you need proxy protocol enabled on your NLB and the appropriate configuration in ingress-nginx.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controller:
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    use-forwarded-headers: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-- So finally, this may be all you need&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controller:
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    use-forwarded-headers: "true"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:55xxxxxxx:certificate/5k0c5513-a947-6cc5-a506-b3yxxx
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-- what the chart deploys:&lt;br&gt;
-&lt;u&gt;ingress-nginx&lt;/u&gt; namespace&lt;br&gt;
-&lt;u&gt;ingress-nginx-controller-7ed7998c-j2er5&lt;/u&gt; pod&lt;br&gt;
-&lt;u&gt;ingress-nginx-controller&lt;/u&gt; service of type LoadBalancer&lt;br&gt;
-&lt;u&gt;ingress-nginx-controller-admission&lt;/u&gt; service of type ClusterIP (Validating admission controller which helps in preventing outages due to wrong ingress configuration)&lt;br&gt;
-EXTERNAL-IP -&amp;gt; points to the AWS load balancer DNS name, which is created when the Ingress Controller is installed because the chart creates a Service of type LoadBalancer&lt;/p&gt;

&lt;p&gt;-- In detail:&lt;br&gt;
The controller deploys, configures, and manages Pods that contain instances of nginx, which is a popular open-source HTTP and reverse proxy server. These Pods are exposed via the controller’s Service resource, which receives all the traffic intended for the relevant applications represented by the Ingress and backend Services resources. The controller translates Ingress and Services’ configurations, in combination with additional parameters provided to it statically, into a standard nginx configuration. It then injects the configuration into the nginx Pods, which route the traffic to the application’s Pods.&lt;br&gt;
The Ingress-Nginx Controller Service is exposed for external traffic via a load balancer. That same Service can be consumed internally via the usual &lt;u&gt;ingress-nginx-controller.ingress-nginx.svc.cluster.local&lt;/u&gt; cluster DNS name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd21518csvea39w3ln8vi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd21518csvea39w3ln8vi.png" alt="nginx-ingress-controller-graph" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create Deployment and Expose it as a service&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create deployment
kubectl create deployment demo --image=nginx --port=80 
# expose deployment as a service
kubectl expose deployment demo
# Create Ingress resource to route request to demo service
kubectl create ingress demo --class=nginx \
  --rule your-public-domain/=demo:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95mbxjbd1weg5vwt8bww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95mbxjbd1weg5vwt8bww.png" alt="nginx-demo" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.github.io/ingress-nginx/" rel="noopener noreferrer"&gt;https://kubernetes.github.io/ingress-nginx/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/containers/exposing-kubernetes-applications-part-3-nginx-ingress-controller/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/containers/exposing-kubernetes-applications-part-3-nginx-ingress-controller/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://repost.aws/questions/QUw4SGJL79RO2SMT-LbpDRoQ/nlb-with-nginx-ingress-controller-is-overwriting-client-ip-how-to-retain-actual-client-ip" rel="noopener noreferrer"&gt;https://repost.aws/questions/QUw4SGJL79RO2SMT-LbpDRoQ/nlb-with-nginx-ingress-controller-is-overwriting-client-ip-how-to-retain-actual-client-ip&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/zenika/kubernetes-nginx-ingress-controller-10-complementary-configurations-for-web-applications-ken"&gt;https://dev.to/zenika/kubernetes-nginx-ingress-controller-10-complementary-configurations-for-web-applications-ken&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>nginx</category>
      <category>aws</category>
      <category>helm</category>
    </item>
    <item>
      <title>Your First FastApi+JWT token</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Mon, 01 Jan 2024 23:05:09 +0000</pubDate>
      <link>https://dev.to/sambo2021/3-your-first-fastapijwt-token-3fi4</link>
      <guid>https://dev.to/sambo2021/3-your-first-fastapijwt-token-3fi4</guid>
      <description>&lt;h2&gt;
  
  
  Objectives the blog will discuss
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Hashing&lt;/li&gt;
&lt;li&gt;OAuth2 with Password&lt;/li&gt;
&lt;li&gt;SQLMODEL&lt;/li&gt;
&lt;li&gt;Bearer with JWT tokens&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  GITHUB REPO: &lt;a href="https://github.com/sambo2021/python-dev/tree/master/fastapi-auth-project" rel="noopener noreferrer"&gt;https://github.com/sambo2021/python-dev/tree/master/fastapi-auth-project&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We'll start by installing the dependencies needed in this guide. Copy the requirements into requirements.txt, then run:&lt;br&gt;
&lt;code&gt;pip install -r requirements.txt&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Hashing:
&lt;/h2&gt;

&lt;p&gt;Our database will handle users signing in for now, but you do not want to store passwords as plaintext; instead, convert the original plain-text password&lt;br&gt;
&lt;code&gt;adrian123&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;into irreversible, fixed-length hash codes&lt;br&gt;
&lt;code&gt;$2b$12$fNiX.PSSs4XQg0YYC5PEF.t5.aDjEvhIVYHIN5UxLXO2.9LIRHnO6&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This way, even if the hashed data is somehow obtained by an attacker, they cannot reverse-engineer it to reveal the original passwords.&lt;/p&gt;
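&lt;p&gt;As a library-free illustration of why hashed passwords cannot be reversed (stdlib PBKDF2 is used here purely for demonstration; the article's passlib/bcrypt setup below applies the same salted one-way idea):&lt;/p&gt;

```python
import hashlib
import os

def hash_password_demo(password: str, salt: bytes) -> str:
    # 100k rounds of PBKDF2-HMAC-SHA256: one-way, salted, fixed-length output
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

salt = os.urandom(16)
print(hash_password_demo("adrian123", salt))  # 64 hex chars, not reversible
```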

&lt;p&gt;We implement it by installing the necessary dependencies.&lt;br&gt;
&lt;code&gt;pip install "passlib[bcrypt]"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The Python package to handle password hashes is passlib, and the recommended algorithm is bcrypt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from passlib.context import CryptContext
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def hash_password(password:str):
    return pwd_context.hash(password)
if __name__ == "__main__":
    print(hash_password("adrian123"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 main.py 
python3 main.py 
(trapped) error reading bcrypt version
Traceback (most recent call last):
  File "/home/sambo/.local/lib/python3.10/site-packages/passlib/handlers/bcrypt.py", line 620, in _load_backend_mixin
    version = _bcrypt.__about__.__version__
AttributeError: module 'bcrypt' has no attribute '__about__'
$2b$12$fNiX.PSSs4XQg0YYC5PEF.t5.aDjEvhIVYHIN5UxLXO2.9LIRHnO6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;note:&lt;br&gt;
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto"): This creates an instance of the CryptContext class, specifying that the bcrypt hashing algorithm should be used.&lt;br&gt;
The deprecated="auto" parameter means that if bcrypt becomes deprecated in the future, passlib will automatically choose a more secure scheme.&lt;/p&gt;
&lt;h2&gt;
  
  
  OAuth2 with Password and Bearer:
&lt;/h2&gt;

&lt;p&gt;as mentioned in the docs-&amp;gt; &lt;a href="https://fastapi.tiangolo.com/tutorial/security/simple-oauth2/" rel="noopener noreferrer"&gt;https://fastapi.tiangolo.com/tutorial/security/simple-oauth2/&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We are going to use FastAPI security utilities to get the username and password.&lt;br&gt;
OAuth2 specifies that when using the "password flow" (that we are using) the client/user must send a username and password fields as form data.&lt;br&gt;
And the spec says that the fields have to be named like that. So user-name or email wouldn't work.&lt;br&gt;
But don't worry, you can show it as you wish to your final users in the frontend.&lt;br&gt;
And your database models can use any other names you want.&lt;br&gt;
But for the login path operation, we need to use these names to be compatible with the spec (and be able to, for example, use the integrated API documentation system).&lt;br&gt;
The spec also states that the username and password must be sent as form data (so, no JSON here).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But before going into that part, let's dive into&lt;/p&gt;
&lt;h2&gt;
  
  
  Create Database and Tables on startup:
&lt;/h2&gt;

&lt;p&gt;I am using sqlmodel to connect to the database, create the table, and do the SQL operations to store users into the table and use them in other endpoints, especially /token for generating the JWT token.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;database.py&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sqlmodel import Field, SQLModel, create_engine

class User(SQLModel, table=True):
    __table_args__ = (UniqueConstraint("email"),)
    username: str = Field( primary_key=True)
    fullname: str
    email: str
    hashed_password: str
    join: datetime = Field(default_factory=datetime.utcnow)
    disabled: bool = Field(default=False)

sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
connect_args = {"check_same_thread": False}
engine = create_engine(sqlite_url, echo=True, connect_args=connect_args)
def create_db_and_tables():
    SQLModel.metadata.create_all(engine)

adrianholland = User(
        username = "adrianholland", 
        fullname = "Adrian Holland",
        email = "Adrian.Holland@gmail.com",
        hashed_password = hash_password("a1dri2an5@6holl7and"))

def initiate_admin():
    admin = get_user("adrianholland")
    if not admin:
        add_user(adrianholland)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For that, we will import SQLModel (plus other things we will also use) and create a class User that inherits from SQLModel and represents the table model for our users; on startup of the main app we will use:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;main.py&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@app.on_event("startup")
def on_startup():
    create_db_and_tables()
    initiate_admin()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But of course, we cannot do that before adding the main database functions we need, so:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;database.py&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sqlmodel import Field, SQLModel, Session, create_engine,select
from passlib.context import CryptContext
from datetime import datetime
from sqlalchemy import UniqueConstraint
from fastapi import HTTPException
from email_validator import EmailNotValidError, validate_email
from disposable_email_domains import blocklist

sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
connect_args = {"check_same_thread": False}
engine = create_engine(sqlite_url, echo=True, connect_args=connect_args)


def create_db_and_tables():
    SQLModel.metadata.create_all(engine)

class User(SQLModel, table=True):
    __table_args__ = (UniqueConstraint("email"),)
    username: str = Field( primary_key=True)
    fullname: str
    email: str
    hashed_password: str
    join: datetime = Field(default_factory=datetime.utcnow)
    disabled: bool = Field(default=False)

# to hash the password as we did before
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def hash_password(password: str):
    return pwd_context.hash(password)

##https://github.com/s-azizkhan/fastapi-email-validation-server/blob/main/main.py
def validate_email_data(email: str):
    try:
        # Validate against general email rules
        v = validate_email(email, check_deliverability=True)

        # Check if the domain is in the disposable email blocklist
        domain = email.split("@")[1]
        if domain in blocklist:
            raise HTTPException(
                status_code=400,
                detail=f"Disposable email addresses are not allowed: {email}",
            )

        return True
    except EmailNotValidError as e:
        return False
    except Exception as e:
        return False

def validate_data(user: User):
    return validate_email_data(user.email) and type(user.username) == str

def get_user(username: str):
    with Session(engine) as session:
        user = session.get(User, username)
        return user

def add_user(user: User):
    exist_user = get_user(user.username)
    if not exist_user:
        with Session(engine) as session:
            session.add(user,_warn=True)
            session.commit()
    else:
        raise HTTPException(status_code=409, detail=f"user {exist_user.fullname} exists and username is {exist_user.username}")


def get_all_users():
    with Session(engine) as session:
        statement = select(User)
        users = session.exec(statement).fetchall()
        return users

adrianholland = User(
        username = "adrianholland", 
        fullname = "Adrian Holland",
        email = "Adrian.Holland@gmail.com",
        hashed_password = hash_password("a1dri2an5@6holl7and"))

def initiate_admin():
    admin = get_user("adrianholland")
    if not admin:
        add_user(adrianholland)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So now we have our main skeleton to build our app: get_user, add_user, and get_all_users, plus helper functions like hash_password to store the password you entered as a hash, and the validator function to validate mainly the email.&lt;/p&gt;

&lt;p&gt;Now we get back to main.py to add all the endpoints we need.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;main.py&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from datetime import timedelta
from fastapi import Depends, FastAPI, HTTPException, Request, status
from fastapi.security import OAuth2PasswordRequestForm
from models import Token, User, authenticate_user, create_access_token, get_current_active_user
from database import create_db_and_tables, get_user, add_user, get_all_users, hash_password, initiate_admin, validate_data
import json


app = FastAPI()

@app.on_event("startup")
def on_startup():
    create_db_and_tables()
    initiate_admin()

@app.get("/",tags=["root"])
async def read_root(current_user: User = Depends(get_current_active_user)):
    return {"message":"Welcome inside first FastApi api",
            "owner": current_user}


# when u login u redirected to /token to generate token by username and password
# u enter the username
@app.post("/token", response_model=Token)
async def login_for_access_token(form_data: OAuth2PasswordRequestForm = Depends()):
    user = authenticate_user(form_data.username, form_data.password)
    if not user:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                            detail="Incorrect username or password", headers={"WWW-Authenticate": "Bearer"})
    access_token_expires = timedelta(minutes=30)
    access_token = create_access_token(
        data={"username" : user.username, "email": user.email, "fullname": user.fullname }, expires_delta=access_token_expires)
    return {"access_token": access_token, "token_type": "bearer"}


@app.get("/users/me", tags=["my-user-data"])
async def read_users_me(current_user: User = Depends(get_current_active_user)):
    return {
            "owner": current_user
            }


@app.get("/users/{user_name}",tags=["get-one-user"])
async def read_item(user_name: str , request: Request , current_user: User = Depends(get_current_active_user)):
    user = get_user(user_name)
    try:
        return {"user_name": user.username,
                "user_fullname": user.fullname,
                "user_email": user.email,
                "query_params": request.query_params,
                "request_headers": request.headers,
                "owner": current_user
                }
    except:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND,
                            detail=f"user {user_name} doesnot exist", headers={"WWW-Authenticate": "Bearer"})




@app.get("/users",tags=["get-all-users"])
async def read_item(request: Request , current_user: User = Depends(get_current_active_user)):
    return {
            "users": get_all_users(),
            "request_headers": request.headers,
            "owner": current_user
            }

@app.post("/users",tags=["add-new-user"])
async def read_item(request: Request , current_user: User = Depends(get_current_active_user)):
    request_body  = await request.body()
    json_str = request_body.decode('utf-8')
    json_data = json.loads(json_str)
    new_user=User(
        username = json_data["username"], 
        fullname = json_data["fullname"],
        email = json_data["email"],
        hashed_password = hash_password(json_data["password"])
    )
    if validate_data(new_user):
        add_user(new_user)
        return {
                "request_headers": request.headers,
                "owner": current_user
                }
    else:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND,
                            detail=f"{new_user.username} or {new_user.email} may be not valid", headers={"WWW-Authenticate": "Bearer"})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bearer with JWT tokens
&lt;/h2&gt;

&lt;p&gt;But you still need some functionality to generate the JWT token and verify it on each login, so have a look at:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;models.py&lt;br&gt;
and you will find everything described there, but for the SECRET_KEY used to encode the token you need to generate your own locally and use it&lt;br&gt;
&lt;code&gt;$ openssl rand -hex 32&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
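&lt;p&gt;Under the hood an HS256 JWT is just base64url(header), base64url(payload), and an HMAC-SHA256 signature joined by dots; here is a minimal stdlib sketch of what jose's jwt.encode produces (the key "demo-secret" is a placeholder, not for production):&lt;/p&gt;

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_hs256(payload: dict, key: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    signature = b64url(hmac.new(key.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"

token = encode_hs256({"username": "adrianholland"}, "demo-secret")
print(token.count("."))  # 2 -> header.payload.signature
```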

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import Depends,HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel
from datetime import datetime, timedelta
from jose import JWTError, jwt
from passlib.context import CryptContext # we use it for password hasher
from database import get_user,add_user,User


#openssl rand -hex 32
#we gonna use it to encode and decode the token

SECRET_KEY = "2cfea755f57d42cc34ce427475f5891aa92361bff9af79a360d1dafd296853d9"

# let's consider this as our database already, which has users and their hashed passwords
# what is the actual password -&amp;gt; adrian123
# how was it hashed -&amp;gt; print(get_password_hash("adrian123"))



class Token(BaseModel):
    access_token: str
    token_type: str

class TokenData(BaseModel):
    # note: "str or None" always evaluates to just str; use a real optional type (Python 3.10+)
    username: str | None = None


#CryptContext configures password hashing.
#schemes=["bcrypt"]: bcrypt is a popular and secure password hashing algorithm.
#deprecated="auto": deprecated hashing schemes are handled and upgraded automatically as needed.
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")


#OAuth2PasswordBearer: This is a class provided by the fastapi.security module for handling OAuth2 password bearer authentication. 
#OAuth2 is a protocol that allows secure authorization in a standardized way.
#tokenUrl="token": This parameter specifies the URL where clients can request a token. 
#In this case, it's set to "token," meaning that when clients want to authenticate using OAuth2 password flow, 
#they should send their credentials to the "/token" endpoint.
#With this code, you've created an instance of the OAuth2PasswordBearer class named oauth2_scheme, and you can use it as a dependency in 
#your FastAPI routes. When a route depends on oauth2_scheme, FastAPI will expect clients to include an OAuth2 token in the "Authorization" 
#header of their requests. The token will be validated and can be used to identify and authenticate the user.
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")


def authenticate_user(username: str, password: str):
    user = get_user(username)
    # the entered username does not exist
    if not user:
        return False
    # the user exists; compare the plaintext password against the stored hash
    if not pwd_context.verify(password, user.hashed_password):
        return False

    return user


def create_access_token(data: dict, expires_delta: timedelta | None = None):
    to_encode = data.copy()
    print(f"data: {to_encode}")
    if expires_delta:
        expire = datetime.utcnow() + expires_delta
    else:
        expire = datetime.utcnow() + timedelta(minutes=15)

    to_encode.update({"exp": expire})
    print(f"data: {to_encode}")

    encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm="HS256")
    print(f"jwt: {encoded_jwt}")
    return encoded_jwt


async def get_current_user(token: str = Depends(oauth2_scheme)):
    credential_exception = HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                                         detail="Could not validate credentials", headers={"WWW-Authenticate": "Bearer"})

    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        print(f"payload: {payload}")
        username = payload.get("username")
        print(f"username: {username}")
        if username is None:
            raise credential_exception

        token_data = TokenData(username=username)
        print(f"token_data: {token_data}")
    except JWTError:
        raise credential_exception

    user = get_user(username=token_data.username)
    if user is None:
        raise credential_exception

    return user


async def get_current_active_user(current_user: User = Depends(get_current_user)):
    if current_user.disabled:
        raise HTTPException(status_code=400, detail="Inactive user")

    return current_user

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
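To demystify what `jwt.encode(..., algorithm="HS256")` produces, here is a minimal standard-library sketch of HS256 signing. It is illustrative only; keep using python-jose (or PyJWT) in real code, and note that the `dev-secret` key and the payload values here are made-up examples:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_encode(payload: dict, secret: str) -> str:
    # a JWT is header.payload.signature, each part base64url-encoded
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = f"{header}.{body}".encode()
    # the signature is an HMAC-SHA256 over "header.payload" with the secret key
    signature = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"

token = hs256_encode({"username": "adrian", "exp": 1700000000}, "dev-secret")
print(token)
```

This also makes it clear why the SECRET_KEY must stay private: anyone holding it can forge a valid signature.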



&lt;h2&gt;
  
  
  Finally, your app structure should look like this
&lt;/h2&gt;

&lt;p&gt;app&lt;br&gt;
|__ database.py&lt;br&gt;
|__ models.py&lt;br&gt;
|__ main.py&lt;/p&gt;

&lt;p&gt;Once you run your app with &lt;code&gt;uvicorn main:app --port 9095 --reload&lt;/code&gt;&lt;br&gt;
you can open &lt;a href="http://localhost:9095/docs" rel="noopener noreferrer"&gt;http://localhost:9095/docs&lt;/a&gt; directly in your browser&lt;br&gt;
and you should see the login endpoint along with the remaining endpoints&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8nootk93ioqc6hopgvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8nootk93ioqc6hopgvb.png" alt="fastapi/docs" width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you go to the /token endpoint and enter the username and password,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vik1r0wxzedhbs4pc5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vik1r0wxzedhbs4pc5g.png" alt="token endpoint" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;you will get a valid token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69qe87knr47d85377or5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69qe87knr47d85377or5.png" alt="jwt token" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see a lock on each endpoint because we already used&lt;br&gt;
&lt;code&gt;current_user: User = Depends(get_current_active_user)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;so you cannot use any of them unless you have a valid token in the first place.&lt;/p&gt;
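From the client's perspective, each protected call simply carries that token in the Authorization header. A small sketch of building such a request (the URL matches the uvicorn port above; the token value is a placeholder you would replace with the real one from /token):

```python
from urllib.request import Request

token = "eyJhbGciOiJIUzI1NiJ9..."  # placeholder: paste the token returned by the /token endpoint
req = Request("http://localhost:9095/users")
req.add_header("Authorization", f"Bearer {token}")

# The request is only built here, not sent; FastAPI's OAuth2PasswordBearer
# dependency reads this header and rejects the call if the token is invalid.
print(req.get_header("Authorization"))
```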

&lt;p&gt;Next enhancements:&lt;br&gt;
1- add more endpoints: delete-user, update-user, block-user&lt;br&gt;
2- use a PostgreSQL database&lt;br&gt;
3- containerize our app and database, and make them microservices&lt;/p&gt;

</description>
      <category>fastapi</category>
      <category>python</category>
      <category>sqlmodel</category>
      <category>jwt</category>
    </item>
    <item>
      <title>Your first ARGO-CD</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Mon, 25 Dec 2023 18:32:25 +0000</pubDate>
      <link>https://dev.to/sambo2021/2-your-first-argo-cd-5cn7</link>
      <guid>https://dev.to/sambo2021/2-your-first-argo-cd-5cn7</guid>
      <description>&lt;h2&gt;
  
  
  What are we going to do in the next steps?
&lt;/h2&gt;

&lt;p&gt;We are going to set up Argo CD on the Kubernetes cluster that we initiated in the last blog &lt;a href="https://dev.to/sambo2021/your-first-k8sistio-41jh"&gt;1- Your First K8S+Istio&lt;/a&gt;.&lt;br&gt;
We will also put Argo CD behind a reverse proxy, using the Istio setup we installed earlier to reach the Argo CD UI through the browser.&lt;/p&gt;
&lt;h2&gt;
  
  
  How will we install the argo-cd at first?
&lt;/h2&gt;

&lt;p&gt;We'll install it with Helm, create an application to use the app-of-apps pattern, and set Argo CD up so that it can update itself.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Argo CD?
&lt;/h2&gt;

&lt;p&gt;Argo CD is a GitOps tool that automatically synchronizes the cluster to the desired state defined in a Git repository. Each workload is defined declaratively through a resource manifest in a YAML file. Argo CD checks whether the state defined in the Git repository matches what is running on the cluster, and synchronizes it if changes are detected.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 01: Initialize our argo-cd Helm chart
&lt;/h2&gt;

&lt;p&gt;We will use Helm to install Argo CD with the community-maintained chart from argoproj/argo-helm, because the Argo project doesn't provide an official Helm chart.&lt;br&gt;
We will render their Helm chart for Argo CD locally on our side, manipulate it, and override its default values; we can also helm lint the chart and its templating to check for errors. We are going to use chart version 5.50.0, which matches appVersion: v2.8.6; you can find all details for the &lt;a href="https://github.com/argoproj/argo-helm/tree/argo-cd-5.50.0/charts/argo-cd" rel="noopener noreferrer"&gt;chart&lt;/a&gt;&lt;br&gt;
and we will also override some values in &lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/argocd-test/generate-chart/values-default.yaml" rel="noopener noreferrer"&gt;default-values.yaml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;configs:
  params:
    server.insecure: true
    server.basehref: '/argocd'
    server.rootpath: '/argocd'
dex:
  enabled: false
notifications:
  enabled: false
applicationSet:
  enabled: false

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We start the server with the --insecure flag to serve the Web UI over HTTP. &lt;br&gt;
For this tutorial, we're using a local k8s server without a TLS setup.&lt;/p&gt;

&lt;p&gt;We also override basehref and rootpath to the subpath we are going to use to access the Argo CD UI -&amp;gt; &lt;a href="http://localhost:9080/argocd/" rel="noopener noreferrer"&gt;http://localhost:9080/argocd/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Disable the dex component (integration with external auth providers).&lt;/p&gt;

&lt;p&gt;Disable the notifications controller (notify users about changes to the application state).&lt;/p&gt;

&lt;p&gt;Disable the ApplicationSet controller (automated generation of Argo CD Applications).&lt;/p&gt;

&lt;p&gt;By the way, in the render-helm script I deleted the highly-available parts of the Argo CD deployment, so we deploy the non-HA version of Argo CD by default. If you want to run Argo CD in HA mode, please have a look at the &lt;a href="https://github.com/argoproj/argo-helm/blob/argo-cd-5.50.0/charts/argo-cd/README.md" rel="noopener noreferrer"&gt;README.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just go into &lt;a href="https://github.com/sambo2021/helm-charts/tree/master/charts/argocd-test/generate-chart" rel="noopener noreferrer"&gt;helm_render.sh&lt;/a&gt;&lt;br&gt;
and run the script; it will generate &lt;a href="https://github.com/sambo2021/helm-charts/tree/master/charts/argocd-test/argo-cd" rel="noopener noreferrer"&gt;argo-cd&lt;/a&gt; for you&lt;/p&gt;

&lt;p&gt;Whenever you want a higher version, just look at their &lt;a href="https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd" rel="noopener noreferrer"&gt;GitHub-repo&lt;/a&gt; and use the chart version you need (and don't forget the appVersion as well). You can find the chart version in the tags, for example: argo-cd-5.50.0. Add the values to &lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/argocd-test/generate-chart/Chart.yaml" rel="noopener noreferrer"&gt;Chart.yaml&lt;/a&gt; and &lt;a href="https://github.com/sambo2021/helm-charts/tree/master/charts/argocd-test/generate-chart" rel="noopener noreferrer"&gt;helm_render.sh&lt;/a&gt;, then run the helm_render.sh script again.&lt;/p&gt;

&lt;p&gt;To check whether the manifests in the templates are valid or corrupted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/helm-charts/charts/argocd-test/ $ helm lint ./argo-cd/ --debug

==&amp;gt; Linting ./argo-cd/
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 02: Installing our argo-cd Helm chart
&lt;/h2&gt;

&lt;p&gt;We have to do the initial installation manually from our local machine.&lt;br&gt;
Later we will set up Argo CD to manage itself (meaning that Argo CD will automatically detect any changes to the Helm chart and synchronize them):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/helm-charts/charts/argocd-test/ $ helm install argo-cd argo-cd/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a minute all resources should have been deployed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dwxu6l6fxbld9egqeql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dwxu6l6fxbld9egqeql.png" alt="arrgo-cd instances" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Accessing the Web UI:&lt;br&gt;
all you need now is to add the Argo CD path under the virtual service we created in the previous blog &lt;/p&gt;

&lt;p&gt;the service&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figfdyhfj3vo3ajt9dybr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figfdyhfj3vo3ajt9dybr.png" alt="argo-cd service" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;the virtual service&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7qsu81zvdie58mr1fz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7qsu81zvdie58mr1fz1.png" alt="argo-cd virtual service" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you can go directly to the UI &lt;a href="http://localhost:9080/argocd" rel="noopener noreferrer"&gt;http://localhost:9080/argocd&lt;/a&gt;&lt;br&gt;
username -&amp;gt; the default username is admin&lt;br&gt;
password -&amp;gt; auto-generated; we can get it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If something happened to the Istio deployment, or you deployed Argo CD before Istio, then to access the Web UI we have to port-forward to the argocd-server service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward svc/argo-cd-argocd-server 9081:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you can go directly to the UI &lt;a href="http://localhost:9081/argocd" rel="noopener noreferrer"&gt;http://localhost:9081/argocd&lt;/a&gt;&lt;br&gt;
After logging in, we'll see the empty Web UI.&lt;br&gt;
At this point, Argo CD applications could be added through the Web UI or CLI, but we want to manage everything in a declarative way (Infrastructure as Code). This means we need to write Application manifests in YAML and commit them to our Git repo.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 03: manage root-app
&lt;/h2&gt;

&lt;p&gt;In general, when we want to add an application to Argo CD, we need to add an Application resource in our Kubernetes cluster. The resource needs to specify where to find manifests for our application. &lt;/p&gt;

&lt;p&gt;The root-app is a Helm chart that renders Application manifests. Initially, it has to be added manually; afterwards, we will commit Application manifests with Git, and they will be deployed automatically as Argo CD apps&lt;/p&gt;

&lt;p&gt;Creating the &lt;a href="https://github.com/sambo2021/helm-charts/tree/master/charts/root-app" rel="noopener noreferrer"&gt;root-app&lt;/a&gt; Helm chart&lt;br&gt;
***note: in this first step we only add the templates/root-app.yml application, so don't add templates/argo-cd.yml yet -&amp;gt; only templates/root-app.yml &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/Chart.yaml" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/Chart.yaml&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v2
name: root-app
version: 1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and empty values.yaml -&amp;gt; &lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/values.yaml" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/values.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;then the root-app -&amp;gt; &lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/root-app.yml" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/root-app.yml&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/sambo2021/helm-charts.git
    path: charts/root-app/
    targetRevision: master
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above Application watches our root-app Helm chart (under &lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/&lt;/a&gt;), and if changes are detected, synchronizes it (meaning that it will render the Helm chart and apply the resulting manifests on the cluster).&lt;/p&gt;

&lt;p&gt;How does Argo CD know our application is a Helm chart? It looks for a Chart.yaml file under path in the Git repository.&lt;/p&gt;
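That detection rule is simple enough to sketch. This is an illustration only; the real Argo CD logic is written in Go and recognizes other config-management tools too:

```python
import tempfile
from pathlib import Path

def looks_like_helm_chart(path: str) -> bool:
    # Argo CD treats a source path as a Helm chart
    # when the directory contains a Chart.yaml file
    return (Path(path) / "Chart.yaml").is_file()

# quick check against a temporary directory standing in for a repo checkout
with tempfile.TemporaryDirectory() as repo:
    before = looks_like_helm_chart(repo)
    (Path(repo) / "Chart.yaml").write_text("apiVersion: v2\nname: root-app\nversion: 1.0.0\n")
    after = looks_like_helm_chart(repo)

print(before, after)
```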

&lt;p&gt;Argo CD will not use helm install to install charts. It will render the chart with helm template and then apply the output with kubectl. &lt;br&gt;
This means we can't run helm list on a local machine to get all installed releases.&lt;/p&gt;

&lt;p&gt;After pushing your charts to the remote repo, let's apply the manifest in our Kubernetes cluster. The first time, we have to do it manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/helm-charts/charts/ $ helm template root-app/ | kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;***note: the api-server will understand the kind of that manifest because you already provided the necessary CRDs when you deployed argo-cd &lt;/p&gt;

&lt;p&gt;Now Argo CD manages the root-app and synchronizes it automatically:&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 04: let argo-cd manage itself
&lt;/h2&gt;

&lt;p&gt;Finally, it is the moment to add the argo-cd app that refers to the Helm chart we applied before, at the same level as root-app.yml&lt;br&gt;
&lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/argo-cd.yml" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/argo-cd.yml&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-cd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/sambo2021/helm-charts.git
    path: charts/argocd-test/argo-cd/
    targetRevision: master
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push it to the remote repo and let Argo CD do the magic.&lt;br&gt;
We let the Argo CD controller watch for changes to the argo-cd Helm chart in our repo (under &lt;a href="https://github.com/sambo2021/helm-charts/tree/master/charts/argocd-test/argo-cd" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/tree/master/charts/argocd-test/argo-cd&lt;/a&gt;), render the Helm chart, and apply the resulting manifests. It's done using kubectl, asynchronously.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5m3vngh7e9is21mk9gu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5m3vngh7e9is21mk9gu.png" alt="aargocd ui" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: sometimes apps get stuck hanging while being deleted or resynced. A small tip is to remove the finalizer from one or multiple Argo CD applications,&lt;br&gt;
because if an Application or an ApplicationSet is stuck while deleting, it is waiting for a response from its "finalizers". So the solution is to remove the "finalizers" from the JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get applications  -o=jsonpath='{range .items[?(@.status.health.status=="Unknown")]}{.metadata.name}{"\n"}' | xargs -I {} kubectl patch application {}  --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 05: manage istio charts by argocd
&lt;/h2&gt;

&lt;p&gt;We deployed istio-base, istiod, and istio-ingress before with helm install; now it's time to migrate them to our Argo CD.&lt;br&gt;
Same as we did for argo-cd: for every component we pull the chart locally with the helm_render script and create the Argo CD app&lt;/p&gt;

&lt;p&gt;1- istio-base&lt;br&gt;
the chart -&amp;gt; &lt;a href="https://github.com/sambo2021/helm-charts/tree/master/charts/istio-base-test/" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/tree/master/charts/istio-base-test/&lt;/a&gt;&lt;br&gt;
the argocd-app-&amp;gt; &lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/istio-base.yml" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/istio-base.yml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- istiod&lt;br&gt;
the chart -&amp;gt; &lt;a href="https://github.com/sambo2021/helm-charts/tree/master/charts/istio-istiod-test" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/tree/master/charts/istio-istiod-test&lt;/a&gt;&lt;br&gt;
the argocd-app-&amp;gt; &lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/istio-istiod.yml" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/istio-istiod.yml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- istio-ingress&lt;br&gt;
the chart -&amp;gt; &lt;a href="https://github.com/sambo2021/helm-charts/tree/master/charts/istio-ingress-test" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/tree/master/charts/istio-ingress-test&lt;/a&gt;&lt;br&gt;
the argocd-app-&amp;gt; &lt;a href="https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/istio-ingress.yml" rel="noopener noreferrer"&gt;https://github.com/sambo2021/helm-charts/blob/master/charts/root-app/templates/istio-ingress.yml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kokaqcdjgo2is2xw3s5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kokaqcdjgo2is2xw3s5.png" alt="argo-cd apps" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An issue appeared for me with istiod-v1.20.1, specifically istiod-default-validator,&lt;br&gt;
but a quick fix is to add the ignoreDifferences parameter to the istio-base Argo CD app, as the third link mentions:&lt;br&gt;
1-&lt;a href="https://github.com/istio/istio/issues/46727" rel="noopener noreferrer"&gt;https://github.com/istio/istio/issues/46727&lt;/a&gt;&lt;br&gt;
2-&lt;a href="https://github.com/istio/istio/issues/45738" rel="noopener noreferrer"&gt;https://github.com/istio/istio/issues/45738&lt;/a&gt;&lt;br&gt;
3-&lt;a href="https://github.com/argoproj/argo-cd/issues/9323" rel="noopener noreferrer"&gt;https://github.com/argoproj/argo-cd/issues/9323&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>istio</category>
      <category>argocd</category>
      <category>gitops</category>
    </item>
    <item>
      <title>Your First K8S+Istio</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Sun, 24 Dec 2023 13:25:01 +0000</pubDate>
      <link>https://dev.to/sambo2021/your-first-k8sistio-41jh</link>
      <guid>https://dev.to/sambo2021/your-first-k8sistio-41jh</guid>
      <description>&lt;h2&gt;
  
  
  What do you need to build up a k8s cluster?
&lt;/h2&gt;

&lt;p&gt;I am using a Linux subsystem on my Windows 10 machine, so I was searching for the best way to install a quick Kubernetes cluster for dev/test purposes. Let's dive in quickly.&lt;br&gt;
We'll use the k3d tool, a lightweight wrapper that runs k3s (Rancher Lab’s minimal Kubernetes distribution) in Docker.&lt;br&gt;
Requirements:&lt;br&gt;
1- &lt;a href="https://ubuntu.com/tutorials/install-ubuntu-on-wsl2-on-windows-10#1-overview" rel="noopener noreferrer"&gt;Install Ubuntu on WSL2 on Windows 10&lt;/a&gt;&lt;br&gt;
What if I have a Linux virtual machine on my Windows host?&lt;br&gt;
Then you need to configure the network -&amp;gt; &lt;a href="https://serverfault.com/questions/225155/virtualbox-how-to-set-up-networking-so-both-host-and-guest-can-access-internet" rel="noopener noreferrer"&gt;https://serverfault.com/questions/225155/virtualbox-how-to-set-up-networking-so-both-host-and-guest-can-access-internet&lt;/a&gt; &lt;br&gt;
2- Docker, to be able to use k3d at all&lt;br&gt;
Note: k3d v5.x.x requires at least Docker v20.10.5 (runc &amp;gt;= v1.0.0-rc93) to work properly&lt;br&gt;
3- &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;: to interact with the Kubernetes cluster&lt;br&gt;
4- &lt;a href="https://helm.sh/docs/intro/install/#from-script" rel="noopener noreferrer"&gt;helm&lt;/a&gt;: used later to install the Istio Helm charts&lt;br&gt;
5- k9s: a terminal-based UI to interact with your Kubernetes clusters&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/derailed/k9s/releases/download/v0.32.7/k9s_linux_amd64.deb &amp;amp;&amp;amp; apt install ./k9s_linux_amd64.deb &amp;amp;&amp;amp; rm k9s_linux_amd64.deb

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  step 1: install the k3d tool v5.6.0, which comes with k8s v1.27 by default
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://k3d.io/v5.6.0/" rel="noopener noreferrer"&gt;k3d official page&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
$ k3d --version
  k3d version v5.6.0                                                                                                                                           
  k3s version v1.27.4-k3s1 (default)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  step 2: build up your cluster
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k3d cluster create DevOps --agents 2 --api-port 0.0.0.0:6443 -p '9080:80@loadbalancer' --k3s-arg "--disable=traefik@server:*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;cluster name: DevOps&lt;/li&gt;
&lt;li&gt;cluster master nodes: 1&lt;/li&gt;
&lt;li&gt;cluster worker nodes: 2&lt;/li&gt;
&lt;li&gt;api-server works on port: 6443

&lt;ul&gt;
&lt;li&gt;note: if you are using a firewall, allow this port through it&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;$ sudo ufw allow 6443&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;disable=traefik@server:* : k3d will not deploy the Traefik ingress controller, because we will use Istio instead&lt;/li&gt;
&lt;li&gt;9080:80@loadbalancer: the load balancer (in Docker, which is exposed) will forward requests from port 9080 in your machine's browser to port 80 in the k8s cluster; you can check this after creation by running docker ps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;$ docker ps&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryygnq7c5zjfixzvb5wd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryygnq7c5zjfixzvb5wd.png" alt="k3d cluster containers" width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;k3d uses NGINX (stream mode) as the load balancer.&lt;br&gt;
NGINX divides traffic using round-robin between:&lt;br&gt;
k3d-DevOps-agent-0:80&lt;br&gt;
k3d-DevOps-agent-1:80&lt;br&gt;
k3d-DevOps-server-0:80&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These names are Docker DNS hostnames inside the same network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sambo@DESKTOP-FQ599KT:~$ docker exec -it abbf49 sh
/ # cat /etc/nginx/nginx.conf
###################################
# Generated by confd 2025-11-21 19:43:00.997738828 +0000 UTC m=+0.010213309 #
#             #######             #
#             # k3d #             #
#             #######             #
###################################

error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {

  upstream 6443_tcp {
    server k3d-DevOps-server-0:6443 max_fails=1 fail_timeout=10s;
  }

  server {
    listen        6443;
    proxy_pass    6443_tcp;
    proxy_timeout 600;
    proxy_connect_timeout 2s;
  }

  upstream 80_tcp {
    server k3d-DevOps-agent-0:80 max_fails=1 fail_timeout=10s;
    server k3d-DevOps-agent-1:80 max_fails=1 fail_timeout=10s;
    server k3d-DevOps-server-0:80 max_fails=1 fail_timeout=10s;
  }

  server {
    listen        80;
    proxy_pass    80_tcp;
    proxy_timeout 600;
    proxy_connect_timeout 2s;
  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can check your cluster: &lt;br&gt;
&lt;code&gt;$ kubectl cluster-info&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuymstv2pcwfyzyhtd4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuymstv2pcwfyzyhtd4c.png" alt="k3d cluster info" width="800" height="48"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or by using k9s:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1ggqmwv4v3hp9953tnq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1ggqmwv4v3hp9953tnq.png" alt="k3d cluster using k9s" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But you may want to taint your master node to prevent any pods from being scheduled on it. By default k3s does not taint the control-plane node, since tainting it is not a Kubernetes requirement, especially for dev/test environments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl taint nodes k3d-devops-server-0 node-role.kubernetes.io/master:NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  step 3: installing istio using helm
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;note: I installed almost the latest version of Istio.
The Istio repository contains the necessary configurations and charts for installing Istio. The first step is to add it to Helm by running the command below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;$ helm repo add istio https://istio-release.storage.googleapis.com/charts&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, update Helm repository to get the latest charts:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ helm repo update&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install the Istio base chart. Enter the following command to install it; the base chart contains the cluster-wide Custom Resource Definitions (CRDs), which are a prerequisite for installing the Istio control plane.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm install istio-base istio/base -n istio-system --create-namespace --set defaultRevision=default
$ helm install istiod istio/istiod -n istio-system --wait
$ helm install istio-ingress istio/gateway -n istio-ingress --create-namespace --wait
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To list all Helm releases deployed in the istio-system namespace:&lt;br&gt;
&lt;code&gt;$ helm ls -n istio-system&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get the status of each release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm status istio-base -n istio-system
$ helm status istiod -n istio-system
$ helm status istio-ingress -n istio-ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get everything deployed for each release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm get all istio-base -n istio-system
$ helm get all istiod -n istio-system
$ helm get all istio-ingress -n istio-ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Label a namespace to onboard Istio&lt;br&gt;
For Istio to inject sidecars, we need to label a particular namespace. Once a namespace is onboarded (labeled) for Istio to manage, Istio will automatically inject the sidecar proxy (Envoy) into any application pods deployed into that namespace.&lt;br&gt;
Use the command below to label the &lt;code&gt;default&lt;/code&gt; namespace with the Istio injection-enabled label:&lt;br&gt;
&lt;code&gt;$ kubectl label namespace default istio-injection=enabled&lt;/code&gt;&lt;/p&gt;
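&lt;p&gt;The same label can also be applied declaratively (e.g., via &lt;code&gt;kubectl apply&lt;/code&gt; or a GitOps tool); a minimal sketch for the built-in &lt;code&gt;default&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;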

&lt;p&gt;If you are going to use Argo CD to deploy these charts, you need to fetch the repos locally on your side for more visibility:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm fetch istio/base --untar
$ helm fetch istio/istiod --untar
$ helm fetch istio/gateway --untar 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  step 4: enable istio envoy access logs
&lt;/h2&gt;

&lt;p&gt;Istio offers a few ways to enable access logs; use of the Telemetry API is recommended.&lt;br&gt;
The Telemetry API can be used to enable or disable access logs:&lt;br&gt;
&lt;a href="https://istio.io/latest/docs/tasks/observability/logs/access-log/" rel="noopener noreferrer"&gt;istio envoy access log&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
      - name: envoy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above example uses the default Envoy access log provider, and we do not configure anything other than the default settings.&lt;br&gt;
Similar configuration can also be applied to an individual namespace, or to an individual workload, to control logging at a fine-grained level.&lt;/p&gt;
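&lt;p&gt;For example, a sketch of a workload-scoped Telemetry resource; the resource name and the &lt;code&gt;app: my-app&lt;/code&gt; selector label are placeholders for your own workload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: my-app-logging        # placeholder name
  namespace: default          # scoped to this namespace, not istio-system
spec:
  selector:
    matchLabels:
      app: my-app             # placeholder workload label
  accessLogging:
    - providers:
      - name: envoy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;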
&lt;h2&gt;
  
  
  step 5: try your first apps using istio gateway
&lt;/h2&gt;

&lt;p&gt;In this example we have 2 simple apps [pod+service]. Deploy them in the default namespace, for which you enabled Istio injection above.&lt;/p&gt;

&lt;p&gt;echo app&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: echo-server
  labels:
    app: echo-server
spec:
  containers:
  - name: echoserver
    image: gcr.io/google_containers/echoserver:1.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  labels:
    app: echo-server
spec:
  selector:
    app: echo-server
  ports:
  - port: 8080
    targetPort: 8080
    name: http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;hello app&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: hello-app
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: hello-app
  name: hello-app
spec:
  containers:
  - command:
    - /agnhost
    - netexec
    - --http-port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: agnhost
    ports:
    - containerPort: 8080

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that the 2 services are up and running, you need to make the 2 applications accessible from outside of your Kubernetes cluster, e.g., from a browser. A gateway is used for this purpose, together with a virtual service to match the requested URI. We will link our gateway to istio-ingress to see traffic going into the cluster.&lt;/p&gt;

&lt;p&gt;gateway&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-ingress
spec:
  selector:
    app: istio-ingress
    istio: ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;virtualservice&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
  namespace: istio-ingress
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  http:
  - match:
    - uri:
        prefix: /echo
    route:
    - destination:
        host: echo-service.default.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: hello-service.default.svc.cluster.local
        port:
          number: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hitting the browser:&lt;br&gt;
&lt;a href="http://localhost:9080/echo" rel="noopener noreferrer"&gt;http://localhost:9080/echo&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0enmfgmx6qtno0mco2di.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0enmfgmx6qtno0mco2di.png" alt="http://localhost:9080/echo" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://localhost:9080/hello" rel="noopener noreferrer"&gt;http://localhost:9080/hello&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0aytybb3ph9ccchxur8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0aytybb3ph9ccchxur8.png" alt="http://localhost:9080/hello" width="612" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Look at the traffic going into the cluster through istio-ingress:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4jop5ygavg3qufi1gus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4jop5ygavg3qufi1gus.png" alt="istio-ingress logs" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the Istio sidecar logs on each app:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gqyl4vgqfyfcshj4wsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gqyl4vgqfyfcshj4wsw.png" alt="echo app side car container" width="800" height="116"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrfzknoflemgtx27nfwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrfzknoflemgtx27nfwg.png" alt="hello app side car container" width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, this is how your cluster looks &lt;em&gt;almost, not exact :D&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovua03e2r236me68ku17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovua03e2r236me68ku17.png" alt="k3d-cluster" width="399" height="749"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Extra tip: I also need to configure a specific domain and use a TLS connection, so we need one more port mapped through the NGINX load balancer; we can add it by editing the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k3d cluster edit DevOps --port-add '9443:443@loadbalancer'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
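&lt;p&gt;With 9443:443 mapped, an HTTPS server can then be added to the Gateway above; a sketch under assumptions (the domain and the TLS secret name &lt;code&gt;my-tls-cert&lt;/code&gt;, which must exist in the istio-ingress namespace, are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-cert   # placeholder: TLS secret in istio-ingress
    hosts:
    - "mydomain.example.com"        # placeholder domain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;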



</description>
      <category>kubernetes</category>
      <category>istio</category>
      <category>linux</category>
      <category>devops</category>
    </item>
    <item>
      <title>Your First K8S+Istio</title>
      <dc:creator>Mohamed Sambo</dc:creator>
      <pubDate>Sun, 24 Dec 2023 13:25:01 +0000</pubDate>
      <link>https://dev.to/sambo2021/your-first-k8sistio-4obj</link>
      <guid>https://dev.to/sambo2021/your-first-k8sistio-4obj</guid>
      <description>&lt;h2&gt;
  
  
  What do you need to build up a k8s cluster
&lt;/h2&gt;

&lt;p&gt;I am using a Linux subsystem on my Windows 10 machine, so I was searching for the best way to install a quick Kubernetes cluster for dev/test purposes. Let's dive in quickly.&lt;br&gt;
We will use the k3d tool, a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in Docker.&lt;br&gt;
Requirements:&lt;br&gt;
1- docker: needed to be able to use k3d at all&lt;br&gt;
Note: k3d v5.x.x requires at least Docker v20.10.5 (runc &amp;gt;= v1.0.0-rc93) to work properly (see #807)&lt;br&gt;
2- kubectl: to interact with the Kubernetes cluster&lt;br&gt;
3- helm: used later to install the Istio Helm charts&lt;br&gt;
4- k9s: a terminal-based UI to interact with your Kubernetes clusters&lt;/p&gt;

&lt;h2&gt;
  
  
  step 1: install k3d tool v5.6.0 which comes with default k8s v1.27
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://k3d.io/v5.6.0/" rel="noopener noreferrer"&gt;k3d official page&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
$ k3d --version
  k3d version v5.6.0                                                                                                                                           
  k3s version v1.27.4-k3s1 (default)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  step 2: build up your cluster
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ k3d cluster create DevOps --agents 2 --api-port 0.0.0.0:6443 -p '9080:80@loadbalancer --k3s-arg "--disable=traefik@server:*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;cluster name: DevOps&lt;/li&gt;
&lt;li&gt;cluster master nodes: 1&lt;/li&gt;
&lt;li&gt;cluster worker nodes: 2&lt;/li&gt;
&lt;li&gt;api-server works on port: 6443

&lt;ul&gt;
&lt;li&gt;note: if you are using a firewall you can allow this port through 
&lt;code&gt;$ sudo ufw allow 6443&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;disable=traefik@server:*: k3d will not deploy the Traefik ingress controller, because we will use Istio in a more modern fashion&lt;/li&gt;

&lt;li&gt;9080:80@loadbalancer: the load balancer (a Docker container, exposed on the host) forwards requests from port 9080 on your machine's browser to port 80 inside the k8s cluster; you can verify this after creation by running docker ps
&lt;code&gt;$ docker ps&lt;/code&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryygnq7c5zjfixzvb5wd.png" alt="k3d cluster containers" width="800" height="47"&gt;
&lt;/li&gt;

&lt;li&gt;now you can check your cluster
&lt;code&gt;$ kubectl cluster-info&lt;/code&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuymstv2pcwfyzyhtd4c.png" alt="k3d cluster info" width="800" height="48"&gt;
By using k9s
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1ggqmwv4v3hp9953tnq.png" alt="k3d cluster using k9s" width="800" height="105"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  step 3: installing istio using helm
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;note: I installed almost the latest version of Istio.
The Istio repository contains the necessary configurations and charts for installing Istio. The first step is to add it to Helm by running the command below.
&lt;code&gt;$ helm repo add istio https://istio-release.storage.googleapis.com/charts&lt;/code&gt;
Now, update Helm repository to get the latest charts:
&lt;code&gt;$ helm repo update&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Install the Istio base chart. Enter the following command to install it; the base chart contains the cluster-wide Custom Resource Definitions (CRDs), which are a prerequisite for installing the Istio control plane.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm install istio-base istio/base -n istio-system --create-namespace --set defaultRevision=default
$ helm install istiod istio/istiod -n istio-system --wait
$ helm install istio-ingress istio/gateway -n istio-ingress --create-namespace --wait
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To list all Helm releases deployed in the istio-system namespace:&lt;br&gt;
&lt;code&gt;$ helm ls -n istio-system&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get the status of each release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm status istio-base -n istio-system
$ helm status istiod -n istio-system
$ helm status istio-ingress -n istio-ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get everything deployed for each release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm get all istio-base -n istio-system
$ helm get all istiod -n istio-system
$ helm get all istio-ingress -n istio-ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Label a namespace to onboard Istio&lt;br&gt;
For Istio to inject sidecars, we need to label a particular namespace. Once a namespace is onboarded (labeled) for Istio to manage, Istio will automatically inject the sidecar proxy (Envoy) into any application pods deployed into that namespace.&lt;br&gt;
Use the command below to label the &lt;code&gt;default&lt;/code&gt; namespace with the Istio injection-enabled label:&lt;br&gt;
&lt;code&gt;$ kubectl label namespace default istio-injection=enabled&lt;/code&gt;&lt;/p&gt;
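&lt;p&gt;The same label can also be applied declaratively (e.g., via &lt;code&gt;kubectl apply&lt;/code&gt; or a GitOps tool); a minimal sketch for the built-in &lt;code&gt;default&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;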

&lt;p&gt;If you are going to use Argo CD to deploy these charts, you need to fetch the repos locally on your side for more visibility:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm fetch istio/base --untar
$ helm fetch istio/istiod --untar
$ helm fetch istio/gateway --untar 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  step 4: enable istio envoy access logs
&lt;/h2&gt;

&lt;p&gt;Istio offers a few ways to enable access logs; use of the Telemetry API is recommended.&lt;br&gt;
The Telemetry API can be used to enable or disable access logs:&lt;br&gt;
&lt;a href="https://istio.io/latest/docs/tasks/observability/logs/access-log/" rel="noopener noreferrer"&gt;istio envoy access log&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
      - name: envoy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above example uses the default Envoy access log provider, and we do not configure anything other than the default settings.&lt;br&gt;
Similar configuration can also be applied to an individual namespace, or to an individual workload, to control logging at a fine-grained level.&lt;/p&gt;
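&lt;p&gt;For example, a sketch of a workload-scoped Telemetry resource; the resource name and the &lt;code&gt;app: my-app&lt;/code&gt; selector label are placeholders for your own workload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: my-app-logging        # placeholder name
  namespace: default          # scoped to this namespace, not istio-system
spec:
  selector:
    matchLabels:
      app: my-app             # placeholder workload label
  accessLogging:
    - providers:
      - name: envoy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;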
&lt;h2&gt;
  
  
  step 5: try your first apps using istio gateway
&lt;/h2&gt;

&lt;p&gt;In this example we have 2 simple apps [pod+service]. Deploy them in the default namespace, for which you enabled Istio injection above.&lt;/p&gt;

&lt;p&gt;echo app&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: echo-server
  labels:
    app: echo-server
spec:
  containers:
  - name: echoserver
    image: gcr.io/google_containers/echoserver:1.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  labels:
    app: echo-server
spec:
  selector:
    app: echo-server
  ports:
  - port: 8080
    targetPort: 8080
    name: http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;hello app&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: hello-app
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: hello-app
  name: hello-app
spec:
  containers:
  - command:
    - /agnhost
    - netexec
    - --http-port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: agnhost
    ports:
    - containerPort: 8080

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that the 2 services are up and running, you need to make the 2 applications accessible from outside of your Kubernetes cluster, e.g., from a browser. A gateway is used for this purpose, together with a virtual service to match the requested URI. We will link our gateway to istio-ingress to see traffic going into the cluster.&lt;/p&gt;

&lt;p&gt;gateway&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-ingress
spec:
  selector:
    app: istio-ingress
    istio: ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;virtualservice&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
  namespace: istio-ingress
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  http:
  - match:
    - uri:
        prefix: /echo
    route:
    - destination:
        host: echo-service.default.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: hello-service.default.svc.cluster.local
        port:
          number: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hitting the browser:&lt;br&gt;
&lt;a href="http://localhost:9080/echo" rel="noopener noreferrer"&gt;http://localhost:9080/echo&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0enmfgmx6qtno0mco2di.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0enmfgmx6qtno0mco2di.png" alt="http://localhost:9080/echo" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://localhost:9080/hello" rel="noopener noreferrer"&gt;http://localhost:9080/hello&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0aytybb3ph9ccchxur8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0aytybb3ph9ccchxur8.png" alt="http://localhost:9080/hello" width="612" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Look at the traffic going into the cluster through istio-ingress:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4jop5ygavg3qufi1gus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4jop5ygavg3qufi1gus.png" alt="istio-ingress logs" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the Istio sidecar logs on each app:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gqyl4vgqfyfcshj4wsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gqyl4vgqfyfcshj4wsw.png" alt="echo app side car container" width="800" height="116"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrfzknoflemgtx27nfwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrfzknoflemgtx27nfwg.png" alt="hello app side car container" width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
