<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: maghsood esmaeili</title>
    <description>The latest articles on DEV Community by maghsood esmaeili (@maghsood_esmaeili).</description>
    <link>https://dev.to/maghsood_esmaeili</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3585246%2Fbaea4ab3-e7b6-4e14-8330-d9e178b7a251.jpg</url>
      <title>DEV Community: maghsood esmaeili</title>
      <link>https://dev.to/maghsood_esmaeili</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/maghsood_esmaeili"/>
    <language>en</language>
    <item>
      <title>How to Sync Data from an Oracle Table to Elasticsearch using Kafka Connect</title>
      <dc:creator>maghsood esmaeili</dc:creator>
      <pubDate>Tue, 16 Dec 2025 06:31:22 +0000</pubDate>
      <link>https://dev.to/maghsood_esmaeili/how-to-sync-data-from-an-oracle-table-to-elasticsearch-using-kafka-connect-2k79</link>
      <guid>https://dev.to/maghsood_esmaeili/how-to-sync-data-from-an-oracle-table-to-elasticsearch-using-kafka-connect-2k79</guid>
      <description>&lt;p&gt;In this document, I’ll walk you through the challenge I faced when fetching data from an Oracle database, streaming it into Kafka, and finally consuming and writing that data into Elasticsearch. My goal is that this guide will help other teams build a reliable data pipeline using similar components.&lt;br&gt;
Before we begin, you’ll need a working Kafka setup. In my case, I prepared a Kafka cluster and verified that all the pods were running correctly. This ensures the environment is ready before configuring Kafka Connect and the connectors for Oracle and Elasticsearch.&lt;br&gt;
The general flow we’ll cover is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prerequisite&lt;/strong&gt;: Installing and running a Kafka cluster using the Strimzi operator.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure Kafka Connect&lt;/strong&gt;: Create a Kafka Connect instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a custom Dockerfile&lt;/strong&gt;: Build a custom image and push it to the local registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure JDBC Connector&lt;/strong&gt;: Configure the JDBC source connector in the Kafka cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure Elasticsearch Connector&lt;/strong&gt;: Add a sink configuration to insert data and create an index in Elasticsearch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure ingest pipeline (optional)&lt;/strong&gt;: Convert the Oracle timestamp field to an Elasticsearch timestamp field.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of this guide, you should have a running pipeline that automatically streams changes from Oracle to Elasticsearch with minimal manual intervention.&lt;br&gt;
Before starting, ensure that Kafka is installed and running in your cluster. In my setup, I deployed a Kafka cluster and confirmed that all pods are up and running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3hewyz7mnkvo1d9diab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3hewyz7mnkvo1d9diab.png" alt="list of kafka pods" width="624" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Afterward, Kafka Connect must be installed using the Strimzi operator. This requires creating a KafkaConnect custom resource (an instance of the Strimzi CRD) on the cluster. The following is a sample manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka.strimzi.io/v1beta2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;KafkaConnect&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;strimzi.io/use-connector-resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;true'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-connect-cluster&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka-infra&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;bootstrapServers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my-cluster-kafka-bootstrap:9092'&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;config.storage.replication.factor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-1&lt;/span&gt;
    &lt;span class="na"&gt;config.storage.topic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connect-cluster-configs&lt;/span&gt;
    &lt;span class="na"&gt;group.id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connect-cluster&lt;/span&gt;
    &lt;span class="na"&gt;offset.storage.replication.factor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-1&lt;/span&gt;
    &lt;span class="na"&gt;offset.storage.topic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connect-cluster-offsets&lt;/span&gt;
    &lt;span class="na"&gt;status.storage.replication.factor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-1&lt;/span&gt;
    &lt;span class="na"&gt;status.storage.topic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connect-cluster-status&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;private-registery.com/strimzi/kafka-custom:v2'&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4.0.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
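&lt;p&gt;Once the KafkaConnect resource is applied, you can confirm that the Strimzi operator has created the Connect deployment. These commands are a suggested verification step, not part of the original setup (the manifest file name is an assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Apply the KafkaConnect manifest
kubectl apply -f kafka-connect.yaml

# Verify the Connect cluster and its pods in the kafka-infra namespace
kubectl get kafkaconnect -n kafka-infra
kubectl get pods -n kafka-infra -l strimzi.io/cluster=my-connect-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;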



&lt;p&gt;The next step, and one of the most critical components of the data pipeline, is the installation of the &lt;strong&gt;ElasticsearchSink&lt;/strong&gt; and &lt;strong&gt;JDBCConnector&lt;/strong&gt; plugins within the Kafka Connect pod. By default, Kafka Connect does not include these plugins, which means a custom image must be built. &lt;br&gt;
This process involves creating a Docker image that packages the required connector plugins and then updating the Kafka Connect custom resource to reference this new image.&lt;/p&gt;

&lt;p&gt;Doing so ensures that Kafka Connect can interact with both Elasticsearch and relational databases, enabling seamless data integration across the pipeline. The following sample Dockerfile demonstrates how to add the &lt;strong&gt;ElasticsearchSink&lt;/strong&gt; and &lt;strong&gt;JDBCConnector&lt;/strong&gt; plugins to the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM quay.io/strimzi/kafka:0.46.0-kafka-4.0.0

USER root
RUN curl -o /tmp/confluentinc-kafka-connect-elasticsearch-15.0.1.zip https://hub-downloads.confluent.io/api/plugins/confluentinc/kafka-connect-elasticsearch/versions/15.0.1/confluentinc-kafka-connect-elasticsearch-15.0.1.zip

RUN curl -o /tmp/confluentinc-kafka-connect-jdbc-10.8.4.zip https://hub-downloads.confluent.io/api/plugins/confluentinc/kafka-connect-jdbc/versions/10.8.4/confluentinc-kafka-connect-jdbc-10.8.4.zip

RUN unzip /tmp/confluentinc-kafka-connect-jdbc-10.8.4.zip -d /opt/kafka/plugins/ &amp;amp;&amp;amp; \
    rm /tmp/confluentinc-kafka-connect-jdbc-10.8.4.zip


RUN unzip /tmp/confluentinc-kafka-connect-elasticsearch-15.0.1.zip -d /opt/kafka/plugins/ &amp;amp;&amp;amp; \
    rm /tmp/confluentinc-kafka-connect-elasticsearch-15.0.1.zip

USER 1001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
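&lt;p&gt;With the Dockerfile in place, the image can be built and pushed to the registry referenced by the KafkaConnect resource (the tag below matches the &lt;code&gt;image&lt;/code&gt; field in the sample manifest):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t private-registery.com/strimzi/kafka-custom:v2 .
docker push private-registery.com/strimzi/kafka-custom:v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;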



&lt;p&gt;Next, we need to add a Kafka connector to capture data from Oracle and publish it to Kafka. The configuration below illustrates how to define the JDBC source connector:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka.strimzi.io/v1beta2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;KafkaConnector&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;strimzi.io/cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-connect-cluster&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-source-connector-jdbc-testdb2-v0&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka-infra&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;io.confluent.connect.jdbc.JdbcSourceConnector&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;poll.interval.ms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3600000&lt;/span&gt;
    &lt;span class="na"&gt;transforms.extractInt.field&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ORACLE_TIME_FIELD&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;timestamp&lt;/span&gt;
    &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;select * from schema_name.table_name&lt;/span&gt;
    &lt;span class="na"&gt;timestamp.column.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ORACLE_TIME_FIELD&lt;/span&gt;
    &lt;span class="na"&gt;connection.password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="err"&gt;*****&lt;/span&gt;
    &lt;span class="na"&gt;topic.prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oracle-testdb2-audit-table&lt;/span&gt;
    &lt;span class="na"&gt;connection.user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;thinuser&lt;/span&gt;
    &lt;span class="na"&gt;connection.url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;jdbc:oracle:driver-name:@hostname:port/db-name'&lt;/span&gt; 
  &lt;span class="na"&gt;tasksMax&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
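&lt;p&gt;After applying the KafkaConnector resource, its status can be checked through the Strimzi operator. This is a suggested verification step, not part of the original walkthrough:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# READY should become True once the connector starts successfully
kubectl get kafkaconnector my-source-connector-jdbc-testdb2-v0 -n kafka-infra

# Inspect the status block for task state and errors
kubectl describe kafkaconnector my-source-connector-jdbc-testdb2-v0 -n kafka-infra
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;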



&lt;p&gt;We can now verify in Kafka that the Oracle table topic defined in the JDBC connector configuration has been created successfully and is receiving messages.&lt;br&gt;
List the topics by running:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bin/kafka-topics.sh --list --bootstrap-server localhost:9092&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
output:&lt;br&gt;
&lt;code&gt;oracle-testdb2-audit-table&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Then consume a sample message by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-console-consumer.sh -–topic oracle-testdb2-audit-table --bootstrap-server localhost:9092 --from-beginning --max-messages 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Later, I will provide a Kafka consumer implemented in Go.&lt;br&gt;
The next section covers how to synchronize data from Kafka to Elasticsearch.&lt;br&gt;
First, we need to configure a Kafka connector that fetches data from Kafka and synchronizes it with Elasticsearch. The connector configuration is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka.strimzi.io/v1beta2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;KafkaConnector&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;strimzi.io/cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-connect-cluster&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-source-connector-oracle-v1&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka-infra&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;io.confluent.connect.elasticsearch.ElasticsearchSinkConnector&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;key.ignore&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;value.converter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;org.apache.kafka.connect.json.JsonConverter&lt;/span&gt;
    &lt;span class="na"&gt;connection.username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elastic&lt;/span&gt;
    &lt;span class="na"&gt;value.converter.schemas.enable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elasticsearch-sink-connector&lt;/span&gt;
    &lt;span class="na"&gt;connection.password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="err"&gt;******&lt;/span&gt;
    &lt;span class="na"&gt;key.converter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;org.apache.kafka.connect.storage.StringConverter&lt;/span&gt;
    &lt;span class="na"&gt;drop.invalid.message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;behavior.on.malformed.documents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ignore&lt;/span&gt;
    &lt;span class="na"&gt;retry.backoff.ms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1000&lt;/span&gt;
    &lt;span class="na"&gt;behavior.on.null.values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ignore&lt;/span&gt;
    &lt;span class="na"&gt;max.retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
    &lt;span class="na"&gt;topics.regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oracle-test.*&lt;/span&gt;
    &lt;span class="na"&gt;type.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_doc&lt;/span&gt;
    &lt;span class="na"&gt;connection.url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;http://elasticsearch-hostname:9200'&lt;/span&gt;
    &lt;span class="na"&gt;schema.ignore&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;tasksMax&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing the connector, the Elasticsearch index will be created automatically. The image below illustrates the newly created index in Elasticsearch, with the &lt;strong&gt;document count&lt;/strong&gt; field indicating the number of documents inserted into the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkqkqbtmc6sufx7luvc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkqkqbtmc6sufx7luvc8.png" alt="show created index" width="624" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image shows that the &lt;em&gt;oracle-testdb2-audit-table&lt;/em&gt; index has been created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;: After completing the steps outlined above, we encountered an additional challenge related to the timestamp field. The timestamp column defined in the Oracle database was of type &lt;strong&gt;TIMESTAMP&lt;/strong&gt;, but after processing through the data pipeline (from Oracle to Elasticsearch), Elasticsearch interpreted this field as a &lt;strong&gt;Long type&lt;/strong&gt;. The following steps should be performed to resolve this issue.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new Elasticsearch index. The image below demonstrates how to define a custom Elasticsearch index:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnq7bmt2x0k01s1m9cpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnq7bmt2x0k01s1m9cpg.png" alt="create new index with mapping" width="624" height="192"&gt;&lt;/a&gt;&lt;/p&gt;
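&lt;p&gt;As a sketch of what the screenshot shows (the index name here is an illustrative assumption; the field name comes from the JDBC connector configuration), the new index can be created with an explicit date mapping for the Oracle timestamp column:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PUT /oracle-testdb2-audit-table-v2
{
  "mappings": {
    "properties": {
      "ORACLE_TIME_FIELD": { "type": "date" }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;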

&lt;ol start="2"&gt;
&lt;li&gt;Add an ingest pipeline in Elasticsearch to convert a &lt;strong&gt;Long-type&lt;/strong&gt; field into a &lt;strong&gt;Date-type&lt;/strong&gt; field. The image below illustrates how to define the ingest pipeline:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp9kvd37jyqk0fw6z2fl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp9kvd37jyqk0fw6z2fl.png" alt="add proccessor" width="624" height="312"&gt;&lt;/a&gt;&lt;/p&gt;
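&lt;p&gt;The ingest pipeline in the screenshot can be approximated with a &lt;code&gt;date&lt;/code&gt; processor that parses the epoch-milliseconds value into a date field (the pipeline name and target field below are illustrative assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PUT _ingest/pipeline/oracle-timestamp-convert
{
  "description": "Convert a Long epoch-millis field into a date field",
  "processors": [
    {
      "date": {
        "field": "ORACLE_TIME_FIELD",
        "formats": ["UNIX_MS"],
        "target_field": "@timestamp"
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;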

&lt;ol start="3"&gt;
&lt;li&gt;Attach the ingest pipeline to the Elasticsearch index.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqop99jhd4caw7chsd14g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqop99jhd4caw7chsd14g.png" alt="attach processor to index" width="554" height="193"&gt;&lt;/a&gt;&lt;/p&gt;
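&lt;p&gt;One way to attach an ingest pipeline is to set it as the index’s default pipeline, so every document indexed into it is processed automatically (the index and pipeline names below are illustrative assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PUT /oracle-testdb2-audit-table-v2/_settings
{
  "index.default_pipeline": "oracle-timestamp-convert"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;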

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This document describes how to build a data pipeline that synchronizes data from an Oracle database to Elasticsearch using Kafka and Kafka Connect. It begins with preparing a Kafka cluster deployed via the Strimzi operator and creating a Kafka Connect instance.&lt;/p&gt;

&lt;p&gt;The pipeline first uses a JDBC Source Connector to read data from an Oracle table and publish it to Kafka topics. The setup is verified by checking topic creation and consuming sample messages from Kafka. Next, an Elasticsearch Sink Connector is configured to consume data from Kafka and automatically create and populate an Elasticsearch index.&lt;/p&gt;

</description>
      <category>database</category>
      <category>kubernetes</category>
      <category>dataengineering</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Kubernetes validating admission policy and admission binding</title>
      <dc:creator>maghsood esmaeili</dc:creator>
      <pubDate>Tue, 04 Nov 2025 07:41:30 +0000</pubDate>
      <link>https://dev.to/maghsood_esmaeili/kubernetes-validating-admission-policy-and-admission-binding-4gii</link>
      <guid>https://dev.to/maghsood_esmaeili/kubernetes-validating-admission-policy-and-admission-binding-4gii</guid>
      <description>&lt;p&gt;&lt;strong&gt;ValidatingAdmissionPolicy&lt;/strong&gt; is a new Kubernetes plugin designed to manage access to Kubernetes resources. It validates which users, groups, or service accounts are allowed to perform specific actions — such as &lt;strong&gt;CREATE&lt;/strong&gt;, &lt;strong&gt;DELETE&lt;/strong&gt;, &lt;strong&gt;UPDATE&lt;/strong&gt;, or &lt;strong&gt;CONNECT&lt;/strong&gt; — on various Kubernetes resources.&lt;/p&gt;

&lt;p&gt;It helps reduce malicious activities within the Kubernetes cluster and enhances overall security. Acting as a gatekeeper in front of Kubernetes resources, it ensures that only authenticated and authorized &lt;strong&gt;requests&lt;/strong&gt; are allowed to perform actions on the cluster.&lt;/p&gt;

&lt;p&gt;Validating admission policies use the Common Expression Language (&lt;strong&gt;CEL&lt;/strong&gt;) to declare the validation rules of a policy.&lt;/p&gt;

&lt;p&gt;When a request is sent to the API server to create or modify Kubernetes resources, the validating admission step intercepts it to reject invalid or non-compliant requests. The image below illustrates the detailed flow of the Kubernetes API server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7jv1zl58t4ezloskd52.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7jv1zl58t4ezloskd52.jpeg" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s see why we might need to use a &lt;strong&gt;Validating Admission Policy&lt;/strong&gt; in our Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; We want only the &lt;strong&gt;DevOps group&lt;/strong&gt; to have permission to create, update, or delete ArgoCD custom resources (&lt;strong&gt;argocds&lt;/strong&gt;), while preventing other groups in the cluster from performing these actions.&lt;/p&gt;

&lt;p&gt;One practical way to achieve this is by creating &lt;strong&gt;ValidatingAdmissionPolicy&lt;/strong&gt; and &lt;strong&gt;ValidatingAdmissionPolicyBinding&lt;/strong&gt; resources, both of which are cluster-scoped.&lt;/p&gt;

&lt;p&gt;Below is a sample &lt;strong&gt;Validating Admission Policy&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admissionregistration.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ValidatingAdmissionPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;check-argocds-operation&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;matchConstraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resourceRules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;argoproj.io"&lt;/span&gt;
        &lt;span class="na"&gt;apiVersions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v1alpha1"&lt;/span&gt;
        &lt;span class="na"&gt;operations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CREATE"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;UPDATE"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DELETE"&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;argocds"&lt;/span&gt;
  &lt;span class="na"&gt;failurePolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fail&lt;/span&gt;
  &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;requestedUsername&lt;/span&gt;
    &lt;span class="na"&gt;expression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;request.userInfo.username'&lt;/span&gt;


  &lt;span class="na"&gt;validations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;expression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;("devops-group"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;request.userInfo.groups)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;||&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;request.userInfo.username&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;==&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"system:serviceaccount:openshift-gitops-operator:openshift-gitops-operator-controller-manager")&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
      &lt;span class="na"&gt;messageExpression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
        &lt;span class="s"&gt;variables.requestedUsername&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Explanation of the YAML Sample
&lt;/h2&gt;

&lt;p&gt;Here’s a brief explanation of the YAML manifest I created:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;resourceRules&lt;/strong&gt;:&lt;br&gt;
This is one of the most important parts of the manifest. You must define the apiGroups and apiVersions for your target resource. (For more details about Kubernetes components, refer to the official Kubernetes documentation.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operations&lt;/strong&gt;:&lt;br&gt;
Specifies which actions (e.g., CREATE, UPDATE, DELETE) should trigger validation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;:&lt;br&gt;
Defines which Kubernetes resources the policy applies to. In this example, the resource is &lt;strong&gt;argocds&lt;/strong&gt;, which we want to validate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;failurePolicy&lt;/strong&gt; (optional):&lt;br&gt;
Determines how the policy behaves if it fails to evaluate (for example, due to a misconfigured expression). Options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fail — Reject the request.&lt;/li&gt;
&lt;li&gt;Ignore — Allow the request to proceed, skipping 
webhook validation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;variables&lt;/strong&gt; (optional):&lt;br&gt;
You can define variables to use in the messageExpression. This helps display clearer messages for users who don’t have permission to perform certain actions. It’s also useful for debugging the contents of your request object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;validations&lt;/strong&gt;:&lt;br&gt;
Contains the list of validation expressions. In this example, the policy checks the requester’s username and group. The expression used here was tested on &lt;strong&gt;OpenShift&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;message expression&lt;/strong&gt; defined earlier is used to display a clear message to users who lack sufficient permissions to act.&lt;/p&gt;

&lt;p&gt;Finally, to apply the policy to the cluster, we need to create a &lt;strong&gt;ValidatingAdmissionPolicyBinding&lt;/strong&gt; object. The YAML example below demonstrates how to use it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admissionregistration.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ValidatingAdmissionPolicyBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocds-operation-validating-binding&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;policyName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;check-argocds-operation&lt;/span&gt;
  &lt;span class="na"&gt;validationActions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Deny"&lt;/span&gt; &lt;span class="c1"&gt;# Warn, Audit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
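&lt;p&gt;Both resources are cluster-scoped and can be applied and inspected with kubectl (the manifest file names below are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f validating-admission-policy.yaml
kubectl apply -f validating-admission-policy-binding.yaml

# Confirm the policy and binding exist
kubectl get validatingadmissionpolicy check-argocds-operation
kubectl get validatingadmissionpolicybinding argocds-operation-validating-binding
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;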



&lt;p&gt;&lt;strong&gt;validationActions&lt;/strong&gt;: If the validation expression (defined in the &lt;strong&gt;ValidatingAdmissionPolicy&lt;/strong&gt;) evaluates to &lt;strong&gt;false&lt;/strong&gt;, the user does not have sufficient permission to perform the specified action. In this example, the &lt;strong&gt;Deny&lt;/strong&gt; action rejects the request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
ValidatingAdmissionPolicies in Kubernetes control access to cluster resources by determining which users, groups, or service accounts can perform actions. They act as a gatekeeper, reducing malicious activity and ensuring that only authenticated and authorized requests are allowed. We discussed the ValidatingAdmissionPolicy plugin and explored an example demonstrating its usage in a Kubernetes cluster. To enforce the policy across the cluster, a ValidatingAdmissionPolicyBinding resource must also be created.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>openshift</category>
    </item>
    <item>
      <title>From Monoliths to Microservices: The Role of Service Mesh in Modern Applications</title>
      <dc:creator>maghsood esmaeili</dc:creator>
      <pubDate>Tue, 28 Oct 2025 07:23:14 +0000</pubDate>
      <link>https://dev.to/maghsood_esmaeili/from-monoliths-to-microservices-the-role-of-service-mesh-in-modern-applications-3a1o</link>
      <guid>https://dev.to/maghsood_esmaeili/from-monoliths-to-microservices-the-role-of-service-mesh-in-modern-applications-3a1o</guid>
      <description>&lt;p&gt;A &lt;strong&gt;service mesh&lt;/strong&gt; is a dedicated infrastructure layer that manages communication between microservices within an application. It provides features such as &lt;strong&gt;traffic routing&lt;/strong&gt;, &lt;strong&gt;security&lt;/strong&gt;, &lt;strong&gt;observability&lt;/strong&gt;, and &lt;strong&gt;resilience&lt;/strong&gt;, allowing developers to offload these cross-cutting concerns from individual services. By doing so, a service mesh simplifies service-to-service interactions and improves the reliability and manageability of distributed systems.&lt;br&gt;
Let’s start by understanding what &lt;strong&gt;monoliths&lt;/strong&gt; and &lt;strong&gt;microservices&lt;/strong&gt; are, how they differ, and why a &lt;strong&gt;service mesh&lt;/strong&gt; is important when running applications in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Monoliths vs. Microservices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A monolithic application typically deploys all functionality together in a single codebase, with minimal separation between components. This tight coupling often leads to issues such as a single database acting as a performance bottleneck.&lt;br&gt;
In a monolith, each module also depends on a specific version of every other module.&lt;br&gt;&lt;br&gt;
Even minor updates require a full redeployment, making scalability and independent upgrades challenging.&lt;br&gt;
Suppose, for example, that the product team wants to test a new feature in production on a small fraction of requests: this architecture simply does not allow it.&lt;/p&gt;

&lt;p&gt;In larger enterprise applications with hundreds of modules maintained by numerous developers, loosely defined architectural rules can quickly transform the system into what is often referred to as a "big ball of mud"—an unmanageable, complex codebase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fky7n3tqfkfh3zdj5ff29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fky7n3tqfkfh3zdj5ff29.png" alt=" " width="606" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Transitioning to Microservices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Transitioning from a monolith to microservices is complex and requires cultural, technical, and organizational shifts toward cloud-native practices.&lt;br&gt;
In a microservices architecture, each module becomes an independent application and can even be implemented in a different programming language.&lt;/p&gt;

&lt;p&gt;This microservices approach offers several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Independent Scaling&lt;/li&gt;
&lt;li&gt;Faster Releases&lt;/li&gt;
&lt;li&gt;Technological Flexibility&lt;/li&gt;
&lt;li&gt;Enhanced Resilience&lt;/li&gt;
&lt;li&gt;Manageability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, moving to microservices also introduces challenges. In the monolith, functionalities such as networking, authentication, authorization, data transfer, logging, monitoring, and tracing were centrally managed. With microservices, these cross-cutting concerns are duplicated across independent teams, leading to increased complexity in managing certificates, monitoring agents, traffic rules, timeouts, and service discovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Service Mesh&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now let’s return to the idea of the service mesh and take a closer look at it. In traditional microservice architectures, each service handled its own routing, security, and observability. With a service mesh, these tasks are offloaded to sidecar proxies deployed alongside every microservice. These proxies handle all network communication between services and together form the &lt;strong&gt;data plane&lt;/strong&gt;.&lt;br&gt;
The proxies are configured by a central server-side component known as the &lt;strong&gt;control plane&lt;/strong&gt;, which oversees and directs all traffic entering and leaving the services, turning the individual proxies into a single, cohesive system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic configuration of service interactions without direct code changes.&lt;/li&gt;
&lt;li&gt;Enhanced security through mutual TLS, protecting communications between services.&lt;/li&gt;
&lt;li&gt;Comprehensive observability, enabling real-time monitoring, performance assessments, and bottleneck detection.&lt;/li&gt;
&lt;li&gt;Abstraction of networking logic into a separate infrastructure layer, improving scalability and traffic management without changes to application code.&lt;/li&gt;
&lt;/ul&gt;
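&lt;p&gt;For example, with Istio the dynamic traffic configuration described above can be expressed as a &lt;strong&gt;VirtualService&lt;/strong&gt;. The sketch below (the service name, subsets, and weights are hypothetical) shifts 10% of traffic to a new version without touching application code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-canary   # hypothetical service name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1       # stable version
      weight: 90
    - destination:
        host: reviews
        subset: v2       # canary version
      weight: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Adjusting the weights here gradually moves traffic to the new version, the canary-style rollout that a monolith cannot offer.&lt;/p&gt;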

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep7y689uvxjvfwsi8a9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep7y689uvxjvfwsi8a9q.png" alt=" " width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Implementations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Istio: Provides advanced traffic management, observability (metrics, logs, traces), and security (mTLS, policy enforcement).&lt;/li&gt;
&lt;li&gt;Linkerd: A lightweight, performance-focused mesh designed for simplicity and speed.&lt;/li&gt;
&lt;li&gt;Consul: Offers service discovery, configuration, and segmentation capabilities in addition to service mesh features.&lt;/li&gt;
&lt;li&gt;AWS App Mesh: A managed service mesh that integrates with AWS services such as ECS and EKS for consistent communication management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The evolution from monolithic to microservices architectures has greatly enhanced scalability, flexibility, and resilience, but it has also introduced new challenges in managing communication, security, and observability across distributed services. A service mesh effectively addresses these challenges by providing a dedicated layer that centralizes control over service-to-service interactions. Through features like dynamic traffic routing, mutual TLS, and real-time monitoring, it simplifies operations and strengthens the reliability of modern cloud-native systems. Technologies such as Istio, Linkerd, Consul, and AWS App Mesh exemplify how service meshes are shaping the future of microservices management.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>microservices</category>
      <category>networking</category>
    </item>
  </channel>
</rss>
