<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kaoutar</title>
    <description>The latest articles on DEV Community by Kaoutar (@chaira).</description>
    <link>https://dev.to/chaira</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1423080%2Fbcab0a86-6e09-4fe9-9eab-d18279383b25.jpg</url>
      <title>DEV Community: Kaoutar</title>
      <link>https://dev.to/chaira</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chaira"/>
    <language>en</language>
    <item>
      <title>Building a Scalable &amp; Secure ELK Stack Infrastructure – A Practical Guide</title>
      <dc:creator>Kaoutar</dc:creator>
      <pubDate>Fri, 14 Mar 2025 14:09:06 +0000</pubDate>
      <link>https://dev.to/chaira/building-a-scalable-secure-elk-stack-infrastructure-a-practical-guide-37hb</link>
      <guid>https://dev.to/chaira/building-a-scalable-secure-elk-stack-infrastructure-a-practical-guide-37hb</guid>
      <description>&lt;p&gt;Managing logs efficiently is critical for monitoring, troubleshooting, and security compliance in any modern IT environment. The ELK stack (Elasticsearch, Logstash, Kibana) provides a powerful, scalable, real-time logging solution, whether deployed on-premises or in the cloud.&lt;/p&gt;

&lt;p&gt;In this guide, I’ll walk you through how to design and deploy a centralized log management system using ELK, covering architecture, best practices, and key optimizations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Centralized Logging?
&lt;/h2&gt;

&lt;p&gt;Handling logs across multiple applications and servers can be a nightmare. A centralized logging system helps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Aggregate logs from multiple sources &lt;/li&gt;
&lt;li&gt;Ensure real-time monitoring and alerting&lt;/li&gt;
&lt;li&gt;Improve security compliance (e.g., encryption, access control)&lt;/li&gt;
&lt;li&gt;Optimize performance and storage&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Overview: Key Components
&lt;/h2&gt;

&lt;p&gt;A robust ELK architecture consists of multiple components working together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filebeat → Collects logs from various sources&lt;/li&gt;
&lt;li&gt;Logstash → Processes, filters, and enriches log data&lt;/li&gt;
&lt;li&gt;Elasticsearch → Stores and indexes logs for fast retrieval&lt;/li&gt;
&lt;li&gt;Kibana → Provides real-time dashboards and analytics&lt;/li&gt;
&lt;li&gt;Backup &amp;amp; Security Measures → Ensures compliance and disaster recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example Deployment (On-Premise or Cloud-based)
&lt;/h3&gt;

&lt;p&gt;A large-scale financial institution handling millions of transactions daily requires centralized log management to track system activity, detect fraud, and ensure compliance with regulations like PCI DSS and GDPR. The logging infrastructure must be scalable, resilient, and secure, capable of processing high volumes of structured and unstructured logs from multiple applications, security tools, and databases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faveexolk7dqwou2q95ze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faveexolk7dqwou2q95ze.png" alt="Architecture Example" width="762" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To meet these demands, the ELK stack is deployed across dedicated virtual machines (VMs) or containers with optimized resource allocation:&lt;/p&gt;

&lt;p&gt;🖥 &lt;strong&gt;Virtual Machine for Elasticsearch&lt;/strong&gt;&lt;br&gt;
Elasticsearch is the core of the ELK stack, handling indexing and search. It is resource-intensive, especially when processing a large volume of logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended Specs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vCPU: 16 vCPUs (scalable based on log volume and queries)&lt;/li&gt;
&lt;li&gt;RAM: 64 GB (Elasticsearch benefits from a large heap, though the JVM heap itself should stay below ~32 GB, leaving the rest to the filesystem cache)&lt;/li&gt;
&lt;li&gt;Storage: 2 TB SSD (adjustable based on retention and log volume)&lt;/li&gt;
&lt;li&gt;OS: RHEL, Ubuntu, or any Linux-based production-optimized OS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;-&amp;gt; &lt;strong&gt;Justification:&lt;/strong&gt;&lt;br&gt;
Elasticsearch requires high memory and fast storage for indexing and queries. Depending on data volume, adding more storage or clustering nodes can enhance fault tolerance and scalability.&lt;/p&gt;

&lt;p&gt;🖥 &lt;strong&gt;Virtual Machine for Logstash&lt;/strong&gt;&lt;br&gt;
Logstash processes, filters, and enriches logs before forwarding them to Elasticsearch. Complex pipelines can be CPU and memory-intensive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended Specs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vCPU: 8 vCPUs&lt;/li&gt;
&lt;li&gt;RAM: 16 GB&lt;/li&gt;
&lt;li&gt;Storage: 500 GB SSD (lower storage needs than Elasticsearch)&lt;/li&gt;
&lt;li&gt;OS: RHEL or Ubuntu&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;-&amp;gt; &lt;strong&gt;Justification:&lt;/strong&gt;&lt;br&gt;
CPU and RAM usage depends on log volume and pipeline complexity. Heavy filtering or data enrichment increases resource consumption.&lt;/p&gt;

&lt;p&gt;🖥 &lt;strong&gt;Virtual Machine for Kibana&lt;/strong&gt;&lt;br&gt;
Kibana provides visualization and analytics, but it requires fewer resources compared to Elasticsearch and Logstash.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended Specs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vCPU: 4 vCPUs&lt;/li&gt;
&lt;li&gt;RAM: 8 GB&lt;/li&gt;
&lt;li&gt;Storage: 100 GB SSD (minimal storage requirements)&lt;/li&gt;
&lt;li&gt;OS: RHEL, Ubuntu, or any Linux-based system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;-&amp;gt; &lt;strong&gt;Justification:&lt;/strong&gt;&lt;br&gt;
Kibana mainly handles dashboard rendering and visualization queries. Resource needs increase with more users and complex visualizations.&lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;Filebeat (Lightweight Log Shipper)&lt;/strong&gt;&lt;br&gt;
Filebeat is a lightweight agent that collects and forwards logs to Logstash or Elasticsearch.&lt;br&gt;
-&amp;gt; &lt;strong&gt;Justification:&lt;/strong&gt;&lt;br&gt;
Filebeat is resource-efficient and has minimal processing overhead. It can be deployed on multiple servers depending on the log sources.&lt;/p&gt;

&lt;p&gt;🖥 &lt;strong&gt;Virtual Machine for Backup Server&lt;/strong&gt;&lt;br&gt;
A backup server stores Elasticsearch snapshots and automated backups to ensure data integrity and recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended Specs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vCPU: 4 vCPUs&lt;/li&gt;
&lt;li&gt;RAM: 16 GB (backups are storage-intensive rather than memory-intensive, so RAM requirements are modest)&lt;/li&gt;
&lt;li&gt;Storage: 4 TB HDD (encrypted)&lt;/li&gt;
&lt;li&gt;OS: Ubuntu Server, CentOS, or RHEL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;-&amp;gt; &lt;strong&gt;Justification:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup storage does not require fast SSDs—high-capacity HDDs are sufficient. &lt;/li&gt;
&lt;li&gt;Backup strategies include snapshots, rotation policies, and periodic recovery tests.&lt;/li&gt;
&lt;/ul&gt;
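
&lt;p&gt;As a sketch, the snapshot strategy above can be implemented through the Elasticsearch snapshot API; the repository name and filesystem path here are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;PUT _snapshot/nightly_backups
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/elasticsearch" }
}

PUT _snapshot/nightly_backups/snapshot-2025.03.14?wait_for_completion=true
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The backup location must be whitelisted via the &lt;code&gt;path.repo&lt;/code&gt; setting in &lt;code&gt;elasticsearch.yml&lt;/code&gt;, and Snapshot Lifecycle Management (SLM) can automate the rotation policy.&lt;/p&gt;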

&lt;h3&gt;
  
  
  Key Benefits of This Architecture
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Resource Isolation → Each component runs on its own VM, ensuring one service’s workload doesn’t impact others.&lt;/li&gt;
&lt;li&gt;Scalability → Each component can be scaled independently (e.g., Elasticsearch can be expanded as log volume grows).&lt;/li&gt;
&lt;li&gt;High Availability &amp;amp; Fault Tolerance → Elasticsearch can run as a cluster with multiple nodes, and backups ensure data security.&lt;/li&gt;
&lt;li&gt;Security Best Practices → Separate VMs allow granular firewall rules and network policies, restricting communication between components.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Data Flow &amp;amp; Processing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Log Collection with Filebeat&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filebeat runs on source servers (applications, databases, containers).&lt;/li&gt;
&lt;li&gt;Sends logs to Logstash or directly to Elasticsearch.&lt;/li&gt;
&lt;/ul&gt;
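
&lt;p&gt;A minimal &lt;code&gt;filebeat.yml&lt;/code&gt; for this step might look like the following; the paths, host, and certificate locations are placeholders for your environment:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/app/*.log

output.logstash:
  hosts: ["logstash.internal:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
&lt;/code&gt;&lt;/pre&gt;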

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Processing with Logstash&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filters and enriches logs (e.g., adds metadata, geo-location, or obfuscates sensitive data).&lt;/li&gt;
&lt;li&gt;Outputs to Elasticsearch for storage.&lt;/li&gt;
&lt;/ul&gt;
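
&lt;p&gt;A corresponding Logstash pipeline sketch, assuming a Beats input and an Apache-style log format (hosts and index names are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;input {
  beats { port =&gt; 5044 }
}

filter {
  grok  { match =&gt; { "message" =&gt; "%{COMBINEDAPACHELOG}" } }
  geoip { source =&gt; "clientip" }
}

output {
  elasticsearch {
    hosts =&gt; ["https://elasticsearch.internal:9200"]
    index =&gt; "app-logs-%{+YYYY.MM.dd}"
  }
}
&lt;/code&gt;&lt;/pre&gt;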

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Indexing &amp;amp; Storage in Elasticsearch&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optimized for fast queries and scalable storage.&lt;/li&gt;
&lt;li&gt;Index lifecycle management ensures log retention policies.&lt;/li&gt;
&lt;/ul&gt;
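
&lt;p&gt;Retention can be enforced with an ILM policy. As a sketch, the following policy (name and thresholds are illustrative) rolls indices over at 50 GB or one day and deletes them after 30 days:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;PUT _ilm/policy/logs-retention
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;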

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Visualization &amp;amp; Alerting with Kibana&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dashboards provide real-time insights.&lt;/li&gt;
&lt;li&gt;Alerts notify teams of anomalies or system failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security &amp;amp; Compliance Considerations
&lt;/h2&gt;

&lt;p&gt;Ensuring a secure logging infrastructure is crucial for data protection and regulatory compliance (e.g., SOC 2, GDPR, PCI DSS). Key practices include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Encryption&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;In Transit:&lt;/strong&gt; All communication between Filebeat, Logstash, Elasticsearch, and Kibana is secured using SSL/TLS certificates to prevent data interception.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;At Rest:&lt;/strong&gt; Log indices in Elasticsearch are encrypted using the X-Pack security module, and physical disks on servers are also encrypted to protect stored data.&lt;/li&gt;
&lt;/ul&gt;
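
&lt;p&gt;In &lt;code&gt;elasticsearch.yml&lt;/code&gt;, transport- and HTTP-layer TLS is enabled with settings along these lines (the certificate paths are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
&lt;/code&gt;&lt;/pre&gt;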

&lt;p&gt;&lt;strong&gt;Authentication &amp;amp; Access Control&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Role-Based Access Control (RBAC):&lt;/strong&gt; Permissions are managed based on user roles, ensuring restricted access to logs only for authorized personnel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LDAP / Active Directory Integration:&lt;/strong&gt; Centralized authentication management allows seamless user provisioning and control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Factor Authentication (MFA):&lt;/strong&gt; Enforced for Elasticsearch and Kibana administrators to enhance security against credential theft.&lt;/li&gt;
&lt;/ul&gt;
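
&lt;p&gt;As an illustration of RBAC, a read-only role scoped to application log indices can be created through the security API; the role and index names here are hypothetical:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;POST _security/role/logs_reader
{
  "indices": [
    {
      "names": ["app-logs-*"],
      "privileges": ["read", "view_index_metadata"]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;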

&lt;p&gt;&lt;strong&gt;Access Logging &amp;amp; Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Audit Logging:&lt;/strong&gt; User activities and system interactions in Elasticsearch and Kibana are logged for traceability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unauthorized Access Monitoring:&lt;/strong&gt; Failed login attempts and unusual modifications are actively monitored to detect potential security threats.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Additional Security Measures&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Log Retention Policies:&lt;/strong&gt; Automatic log purging using Index Lifecycle Management (ILM) ensures compliance with data retention regulations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup &amp;amp; Disaster Recovery:&lt;/strong&gt; Regular Elasticsearch snapshots ensure data availability and protection against loss or corruption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Security &amp;amp; Isolation:&lt;/strong&gt; Strict firewall rules and network segmentation prevent unauthorized access between components.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deployment &amp;amp; Automation Best Practices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate deployment using Ansible, Terraform, or CI/CD tools.&lt;/li&gt;
&lt;/ul&gt;
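
&lt;p&gt;For instance, an Ansible play sketch that installs and starts Elasticsearch on the dedicated VM (repository setup is omitted and the package name is assumed):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;- name: Install and start Elasticsearch
  hosts: elasticsearch
  become: true
  tasks:
    - name: Install package
      ansible.builtin.package:
        name: elasticsearch
        state: present

    - name: Enable and start service
      ansible.builtin.service:
        name: elasticsearch
        state: started
        enabled: true
&lt;/code&gt;&lt;/pre&gt;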

&lt;p&gt;&lt;strong&gt;Scaling Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy Elasticsearch as a cluster for high availability.&lt;/li&gt;
&lt;li&gt;Optimize sharding &amp;amp; indexing for better performance.&lt;/li&gt;
&lt;li&gt;Use log filtering to reduce unnecessary data ingestion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Monitoring &amp;amp; Performance Tuning&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leverage Elasticsearch monitoring tools (Kibana Stack Monitoring, Grafana).&lt;/li&gt;
&lt;li&gt;Tune heap size &amp;amp; JVM settings for optimal resource allocation.&lt;/li&gt;
&lt;/ul&gt;
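
&lt;p&gt;Heap size is typically set in &lt;code&gt;jvm.options&lt;/code&gt; (or a file under &lt;code&gt;jvm.options.d/&lt;/code&gt;). A common rule of thumb is roughly half the machine's RAM, capped below ~32 GB so compressed object pointers stay enabled; for the 64 GB Elasticsearch VM above that might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-Xms31g
-Xmx31g
&lt;/code&gt;&lt;/pre&gt;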

&lt;h2&gt;
  
  
  Why Build a Scalable ELK Stack?
&lt;/h2&gt;

&lt;p&gt;A well-designed ELK stack enables organizations to streamline log management, improve security, and gain valuable insights from their data. Whether deployed on premises or in the cloud, following best practices ensures scalability, performance, and compliance.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>monitoring</category>
      <category>elkstack</category>
    </item>
    <item>
      <title>Terraform</title>
      <dc:creator>Kaoutar</dc:creator>
      <pubDate>Mon, 27 May 2024 11:27:32 +0000</pubDate>
      <link>https://dev.to/chaira/terraform-3ffh</link>
      <guid>https://dev.to/chaira/terraform-3ffh</guid>
      <description>&lt;p&gt;HashiCorp Terraform is an infrastructure-as-code (IaC) tool that lets DevOps teams automate infrastructure provisioning using reusable, shareable, human-readable configuration files, in both on-premises and cloud environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure as Code
&lt;/h3&gt;

&lt;p&gt;"Infrastructure as code" is the practice of provisioning and managing IT infrastructure through code. Instead of manual infrastructure management, where a person configures each required resource by hand, IaC enables DevOps teams to provision, manage, and monitor the resources they need programmatically and automatically.&lt;/p&gt;

&lt;p&gt;With Terraform, teams describe and provision every part of the infrastructure as code. The configuration files are easy to share, reuse, and version, and they support a standardized workflow for managing an entire cloud or data-center infrastructure and its resources across their lifespan.&lt;/p&gt;

&lt;p&gt;Terraform's configuration files are declarative: they define the desired end state of the infrastructure. Rather than requiring detailed step-by-step instructions to construct each resource, which is a laborious and time-consuming procedure, the tool handles the underlying logic itself.&lt;/p&gt;

&lt;p&gt;It is simple for DevOps teams to accomplish the following since the files codify the application programming interfaces (APIs) for cloud platforms and other services:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8knwzdgw4mefmkutcxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8knwzdgw4mefmkutcxw.png" alt="IaC Process" width="467" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Any cloud provider may be used to provision resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply compliance and security guardrails to standardize the infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use defined and dependable procedures to ensure consistency in the provisioning, sharing, and reuse of infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrate VCS, ITSM, and CI/CD with the self-service infrastructure. Terraform can manage low-level components like DNS records as well as high-level infrastructure elements like compute, storage, and networking resources. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, it may be used to automatically set up servers, databases, and firewall settings. With the Cloud Development Kit for Terraform (CDKTF), teams can manage infrastructure in their preferred programming language, including TypeScript, Python, Go, C#, and Java.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Terraform works
&lt;/h3&gt;

&lt;p&gt;Terraform's declarative configuration files are made possible by the widely used APIs available from all major cloud service providers. These providers are listed in the Terraform Registry.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuewbr67sv4kv8o438f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuewbr67sv4kv8o438f3.png" alt="Terraform Providers" width="800" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Teams may utilize the modules, policy libraries, and tasks included in the Registry to easily install standard infrastructure setups and maintain them automatically with code. The process for Terraform consists of three steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Write&lt;/strong&gt;&lt;br&gt;
A user defines the necessary resources in configuration files at this step. These resources might be spread out throughout several on-premises or cloud settings, as well as between various suppliers and services. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plan&lt;/strong&gt;&lt;br&gt;
Terraform generates an execution plan describing the steps it will take to create or update the infrastructure, which the user then examines before approving.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Apply&lt;/strong&gt;&lt;br&gt;
Once the user approves the plan, Terraform executes the proposed operations in the specified sequence, always taking resource dependencies into account before making changes. &lt;br&gt;
For example, if a user modifies a VPC (virtual private cloud) and also increases the number of virtual machines inside it, Terraform will update the VPC first before scaling up the VMs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvewk4rpvssubnehftzpe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvewk4rpvssubnehftzpe.png" alt="Terraform Process" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
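
&lt;p&gt;The three-step workflow above can be sketched with a minimal configuration; the provider, region, and AMI below are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then &lt;code&gt;terraform init&lt;/code&gt; downloads the provider, &lt;code&gt;terraform plan&lt;/code&gt; produces the execution plan, and &lt;code&gt;terraform apply&lt;/code&gt; carries it out after approval.&lt;/p&gt;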

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;IaC is Terraform's most popular use case. Terraform infrastructure deployments are simple to integrate with current CI/CD procedures. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Teams may use Terraform, for instance, to automatically update member pools for load balancing and other crucial networking activities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For provisioning across many clouds, Terraform is also helpful. Development teams may use Terraform to provide load balancers in Google Cloud, manage Active Directory (AD) resources in Microsoft Azure, and deploy serverless operations in AWS. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Manage Kubernetes clusters in any public cloud (AWS, Azure, Google).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enforce policy-as-code before infrastructure components are developed and deployed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use secrets and credentials in Terraform setups automatically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Import current infrastructure into a blank Terraform workspace to codify it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Transfer state to Terraform to protect it and make it simple for authorized collaborators to access it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>iac</category>
    </item>
    <item>
      <title>Kibana Fundamentals</title>
      <dc:creator>Kaoutar</dc:creator>
      <pubDate>Mon, 27 May 2024 11:06:27 +0000</pubDate>
      <link>https://dev.to/chaira/kibana-fundamentals-1ch</link>
      <guid>https://dev.to/chaira/kibana-fundamentals-1ch</guid>
      <description>&lt;p&gt;Kibana is a data visualization platform primarily used to analyze massive volumes of logs in the form of line graphs, bar graphs, pie charts, heat maps, region maps, coordinate maps, gauges, goals, and other visual representations. These visualizations make it easy to foresee or notice changes in trends, such as spikes in errors or other noteworthy events in the input source.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visualization&lt;/strong&gt;
Kibana offers several simple ways to view data; heat maps, pie charts, line graphs, and vertical and horizontal bar charts are among the most frequently used.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg106rg1n304xcml8ki6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg106rg1n304xcml8ki6.png" alt="Example of Kibana Visualization" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard&lt;/strong&gt;
Once the visualizations are prepared, they can all be arranged on a single board, the Dashboard. Watching many panels at once gives you a good sense of what is going on overall.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2p2a4g1xgo8th568l0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2p2a4g1xgo8th568l0z.png" alt="Example of Kibana Dashboard" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dev Tools&lt;/strong&gt;
Dev Tools lets you work with your indexes directly. Beginners can create dummy indexes; they can also add, amend, and remove data, and use the indexes to generate visualizations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l8k411yjbbh78ltup78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l8k411yjbbh78ltup78.png" alt="Kibana Dev Tools" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;
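
&lt;p&gt;For example, in the Dev Tools console you can create a dummy index, add a document, and query it with requests like these (the index and field names are made up):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;PUT /demo-index

POST /demo-index/_doc
{
  "user": "test",
  "message": "hello kibana"
}

GET /demo-index/_search
&lt;/code&gt;&lt;/pre&gt;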

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reports&lt;/strong&gt;&lt;br&gt;
You may export all of the data from dashboards and visualizations as reports (in CSV format), embed them in code, or share them with others through URLs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Search and Filter Query&lt;/strong&gt;&lt;br&gt;
You may use filters and search queries to find the information you need from a dashboard or visualization tool for a certain input.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qwq42rt65z88g65p7rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qwq42rt65z88g65p7rq.png" alt="Example of Search Query" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plugins&lt;/strong&gt;&lt;br&gt;
Third-party plugins can be added to Kibana to bring new visualizations or other UI additions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Regional and Coordinate Maps&lt;/strong&gt;&lt;br&gt;
In Kibana, a coordinate and region map aids in displaying the visualization on the geographical map while providing a truthful representation of the data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw561iwcnopbl9mvwqdr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw561iwcnopbl9mvwqdr8.png" alt="Example of Kibana Maps" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Timelion&lt;/strong&gt;
Timelion (sometimes called Timeline) is a visualization tool generally used for time-based data analysis. It uses a straightforward expression language to connect to an index and run computations on the data, and it is especially helpful for comparing data with a prior cycle, such as week over week or month over month.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxybodzcn8kpag98bh2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxybodzcn8kpag98bh2f.png" alt="Example of Kibana Timelion" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;
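
&lt;p&gt;A sketch of the expression language, plotting a metric against the same metric one week earlier (the index pattern is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;.es(index=logs-*).label("this week"),
.es(index=logs-*, offset=-1w).label("last week")
&lt;/code&gt;&lt;/pre&gt;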

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Canvas&lt;/strong&gt;
Canvas is another useful element of Kibana. It lets you present your data on a workpad using different color schemes, shapes, text, and multiple pages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeffrudh0qroawaf8bwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeffrudh0qroawaf8bwc.png" alt="Example of Kibana Canvas" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/chaira/logstash-fundamentals-h1h"&gt;Logstash Fundamentals&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/chaira/elasticsearch-fundamentals-151j"&gt;Elasticsearch Fundamentals&lt;/a&gt;&lt;/p&gt;

</description>
      <category>elk</category>
      <category>kibana</category>
    </item>
    <item>
      <title>Logstash Fundamentals</title>
      <dc:creator>Kaoutar</dc:creator>
      <pubDate>Mon, 27 May 2024 10:41:49 +0000</pubDate>
      <link>https://dev.to/chaira/logstash-fundamentals-h1h</link>
      <guid>https://dev.to/chaira/logstash-fundamentals-h1h</guid>
      <description>&lt;p&gt;Logstash is a next-generation logging framework. It functions as a centralized pipeline for log collection, processing, storage, and search, normalizing data from several sources and dynamically routing it to the destinations of your choosing.&lt;br&gt;
With a wide range of input, filter, and output plugins, plus many native codecs that simplify ingestion, Logstash enables any sort of event to be enriched and transformed. By handling more data, in both volume and diversity, Logstash broadens the insights you can extract. &lt;br&gt;
Logstash accepts many input sources, including files, Syslog, TCP/UDP, and stdin, and a wide variety of filters can be applied to transform the events in the collected logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5aqwnsox4fyzl9wnrsug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5aqwnsox4fyzl9wnrsug.png" alt="Logstash Plug-ins" width="477" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Processing is organized into one or more pipelines. In each pipeline, one or more input plug-ins receive or collect data, which is then placed on an internal queue. The queue is small by default and held only temporarily in memory, but it can be configured to be larger and persisted to disk to increase dependability and resilience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5yuj03v56s24fr3d6g3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5yuj03v56s24fr3d6g3.png" alt="Logstash Instance" width="591" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Processing threads read events from the queue in micro-batches and run them through the configured filter plug-ins in sequence. Logstash ships ready for use with a wide variety of plug-ins that focus on particular sorts of processing; this is how data is parsed, processed, and enriched.&lt;br&gt;
Once the data has been processed, the processing threads send it to the appropriate output plug-ins, which are responsible for formatting and delivering it (for example, to Elasticsearch).&lt;/p&gt;

&lt;h3&gt;
  
  
  Key features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event Object&lt;/strong&gt;&lt;br&gt;
It is Logstash's primary object and contains all of the data flow for the pipeline. This object is used by Logstash to retain the incoming data and add any new fields produced during the filtering process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pipeline&lt;/strong&gt;&lt;br&gt;
It consists of Logstash's data-flow stages from input to output. The pipeline receives the input data and processes it in the form of an event, which is then transmitted to an output destination in the format preferred by the user or end system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Input&lt;/strong&gt;&lt;br&gt;
The first stage of the Logstash pipeline, which data must pass through before it can be processed further. Logstash provides a variety of input plugins for collecting data from different systems; File, Syslog, Redis, and Beats are among the most frequently used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Filter&lt;/strong&gt;&lt;br&gt;
The intermediate stage of Logstash, where the real event processing happens. Logstash provides a variety of filter plugins to help the developer parse and transform events into the desired format; Grok, Mutate, Drop, Clone, and Geoip are among the most commonly used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;&lt;br&gt;
The last stage of the pipeline, where events are formatted into the structure required by the destination systems. When processing is finished, output plugins deliver the event to its target; Elasticsearch, File, Graphite, and Statsd are among the most frequently used.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
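
&lt;p&gt;Putting the four stages together, a minimal pipeline configuration might look like the sketch below (the grok pattern, port, hosts, and index name are placeholders to adapt to your own environment):&lt;/p&gt;

```
# Illustrative Logstash pipeline: Beats input, two filters, Elasticsearch output.
input {
  beats {
    port =&gt; 5044                 # Filebeat agents ship logs to this port
  }
}

filter {
  grok {
    match =&gt; { "message" =&gt; "%{COMBINEDAPACHELOG}" }  # parse Apache access logs
  }
  geoip {
    source =&gt; "clientip"         # enrich events with location data for the client IP
  }
}

output {
  elasticsearch {
    hosts =&gt; ["http://localhost:9200"]    # placeholder address
    index =&gt; "weblogs-%{+YYYY.MM.dd}"     # one index per day
  }
}
```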

&lt;p&gt;&lt;a href="https://dev.to/chaira/kibana-fundamentals-1ch"&gt;Kibana Fundamentals&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/chaira/elasticsearch-fundamentals-151j"&gt;Elasticsearch Fundamentals&lt;/a&gt;&lt;/p&gt;

</description>
      <category>elk</category>
      <category>logstash</category>
    </item>
    <item>
      <title>Elasticsearch Fundamentals</title>
      <dc:creator>Kaoutar</dc:creator>
      <pubDate>Mon, 27 May 2024 10:34:21 +0000</pubDate>
      <link>https://dev.to/chaira/elasticsearch-fundamentals-151j</link>
      <guid>https://dev.to/chaira/elasticsearch-fundamentals-151j</guid>
      <description>&lt;p&gt;Elasticsearch is a search engine based on Apache Lucene. It is a real-time, distributed, multitenant-capable full-text search engine that offers a RESTful API based on JSON documents, and it can be used for full-text search, structured search, analytics, or all three. One of its most important advantages is fast search, achieved by indexing the text to be searched. Search engines that look up exact values or timestamps have long been available; Elasticsearch distinguishes itself by running full-text searches, handling synonyms, and ranking results by relevance.&lt;br&gt;
Furthermore, it can provide real-time analytics and aggregations over the same data, an area in which it outperforms other search engines. &lt;br&gt;
Elasticsearch is widely used in many large corporations. Here are some examples of applications:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqzfnf4g5ctls1y13oec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqzfnf4g5ctls1y13oec.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Netflix&lt;/strong&gt; uses Elasticsearch to deliver millions of messages to customers every day through channels such as email, push notifications, text messages, phone calls, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Salesforce&lt;/strong&gt; has created a bespoke plugin on top of Elasticsearch that collects Salesforce log data, allowing for insights on organizational usage trends and user activity.&lt;/li&gt;
&lt;li&gt;Elasticsearch is used by the &lt;strong&gt;New York Times&lt;/strong&gt; to store all 15 million articles written over the previous 160 years. This allows for fantastic archival search capabilities.&lt;/li&gt;
&lt;li&gt;Elasticsearch is used by &lt;strong&gt;Microsoft&lt;/strong&gt; for search and analytics capabilities in a variety of products, including MSN, Microsoft Social Listening, and Azure Search.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;eBay&lt;/strong&gt; used Elasticsearch to build a versatile search platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Elasticsearch is not used solely by major enterprises; many startups and small businesses use it as well. Part of its appeal is that it can run on a laptop or scale up to hundreds of servers and petabytes of data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It offers real-time search and analytics for your data&lt;/li&gt;
&lt;li&gt;Elasticsearch is a distributed system that can run on anything from a basic laptop to hundreds of nodes.&lt;/li&gt;
&lt;li&gt;It may be used to deploy multitenant, highly available clusters. It automatically rearranges and rebalances data upon the addition of a new node or the failure of a node.&lt;/li&gt;
&lt;li&gt;Elasticsearch distributes the processing of queries and data storage among many data nodes. Scalability, dependability, and performance are all improved.&lt;/li&gt;
&lt;li&gt;Data in an Elasticsearch cluster is duplicated across several nodes, so even if one node fails, it is still accessible.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5gsrlvlaeawx9mf3plk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5gsrlvlaeawx9mf3plk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elasticsearch can comprehend and search natural language text since it is built on top of Lucene, a full-text search technology.&lt;/li&gt;
&lt;li&gt;Rather than storing documents as rows in a table, Elasticsearch stores them as JSON.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscak23887n8z86p5l4ly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscak23887n8z86p5l4ly.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elasticsearch uses a JSON-based query language rather than a SQL-based one&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv39qx4dt5qgw6fcakmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv39qx4dt5qgw6fcakmq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unlike relational databases, Elasticsearch does not support JOINs between tables.&lt;/li&gt;
&lt;li&gt;Word aggregations, geographic searches, and support for scripting languages are just a few of Elasticsearch's built-in analytical features.&lt;/li&gt;
&lt;li&gt;In Elasticsearch, a mapping is the equivalent of a schema in a relational database. If a field's data type is not explicitly specified, Elasticsearch automatically assigns one the first time it encounters the field.&lt;/li&gt;
&lt;/ul&gt;
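
&lt;p&gt;For illustration, a search in the JSON-based Query DSL is itself a JSON document sent to the &lt;code&gt;_search&lt;/code&gt; endpoint (the index and field names here are hypothetical):&lt;/p&gt;

```json
{
  "query": {
    "match": {
      "title": "centralized logging"
    }
  }
}
```

&lt;p&gt;Posted to &lt;code&gt;/articles/_search&lt;/code&gt;, this is roughly the Query DSL counterpart of a SQL &lt;code&gt;WHERE&lt;/code&gt; clause on the &lt;code&gt;title&lt;/code&gt; column, except that results are ranked by relevance rather than filtered exactly.&lt;/p&gt;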

&lt;h3&gt;
  
  
  Key Components
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cluster&lt;/strong&gt;&lt;br&gt;
A cluster is a grouping of one or more nodes that collectively contains all of the data and offers federated indexing and search capabilities across all nodes. Each node in a cluster should be given a distinct name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Node&lt;/strong&gt;&lt;br&gt;
A node is a server that functions as a member of a cluster. In the Elasticsearch context, a node is an instance, not a machine, which means several nodes can run on a single machine. By &lt;br&gt;
default, a node starts up when an Elasticsearch instance does.&lt;br&gt;
Each node is identified by a unique name; if no identifier is supplied, a random UUID is assigned at startup. Every node configuration includes the 'cluster.name' setting, and nodes launched with the same 'cluster.name' automatically form a cluster.&lt;br&gt;
A node must carry out several tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storing data&lt;/li&gt;
&lt;li&gt;Processing data (indexing, searching, aggregation, etc.)&lt;/li&gt;
&lt;li&gt;Preserving the cluster's health&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;In a cluster, all of these operations are available to every node, but Elasticsearch also offers the option to distribute duties among dedicated nodes, which makes scaling, optimizing, and maintaining the cluster simpler. &lt;br&gt;
The three primary ways to set up an Elasticsearch node are as follows:&lt;br&gt;
&lt;strong&gt;Elasticsearch master node&lt;/strong&gt; controls the Elasticsearch cluster by processing one cluster state at a time and broadcasting the state to all other nodes. The master node is in charge of all clusterwide operations, including the creation and deletion of indexes.&lt;br&gt;
&lt;strong&gt;Elasticsearch data node&lt;/strong&gt; contains data and the inverted index. This is the default configuration for nodes.&lt;br&gt;
&lt;strong&gt;Elasticsearch client node&lt;/strong&gt; serves as a load balancer that routes incoming requests to various cluster nodes.&lt;/p&gt;
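
&lt;p&gt;As a sketch, a node's role is chosen in its &lt;code&gt;elasticsearch.yml&lt;/code&gt;; the exact setting names have changed across Elasticsearch versions, so treat the boolean style below as illustrative:&lt;/p&gt;

```yaml
# Illustrative elasticsearch.yml fragment for a dedicated master node:
cluster.name: my-cluster   # every node in the cluster shares this name
node.name: master-1
node.master: true          # eligible to be elected master
node.data: false           # stores no data

# A data node would set node.master: false and node.data: true; a
# client (coordinating-only) node sets both to false and only routes requests.
```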

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Port 9200 and Port 9300&lt;/strong&gt;&lt;br&gt;
The Elasticsearch architecture uses two primary ports for communication:&lt;br&gt;
Port 9200 handles requests originating from outside the cluster; it serves the REST APIs used for querying, indexing, and other operations.&lt;br&gt;
Port 9300 is used for inter-node communication, which takes place at the transport layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shards of Elasticsearch&lt;/strong&gt;&lt;br&gt;
Shards are the fundamental units of indexing in the Elasticsearch architecture; they are compact and scalable.&lt;br&gt;
An index can hold any number of documents, but a single index can outgrow the storage limits of its hosting server. Sharding, dividing an index into smaller pieces, solves this problem, and spreading operations across shards also improves overall performance. You choose the number of shards when you create an index. Each shard functions as an independent Lucene index that can be hosted anywhere in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
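
&lt;p&gt;The shard count is chosen at index-creation time through the REST API on port 9200; a sketch (the index name and counts are illustrative):&lt;/p&gt;

```
PUT /weblogs
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  }
}
```

&lt;p&gt;&lt;code&gt;number_of_shards&lt;/code&gt; is fixed once the index exists, whereas &lt;code&gt;number_of_replicas&lt;/code&gt; can be updated later.&lt;/p&gt;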

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkkcq20auqvkngzfq0dm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkkcq20auqvkngzfq0dm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Elasticsearch Replicas&lt;/strong&gt;&lt;br&gt;
Replicas in Elasticsearch are copies of index shards, used as a fail-safe strategy for backup and recovery. A replica is never allocated on the node hosting its primary (original) shard; replicas are kept in several places to ensure availability. The number of replicas can be defined, and changed, after the index is created, which is why you can have more replicas than primary shards.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Index&lt;/strong&gt;&lt;br&gt;
An index is a container for storing data, similar to a database in a relational system. An index contains a collection of documents that have similar characteristics or are logically related. Using an e-commerce website as an example, there would be indexes for customers, items, and so on. &lt;br&gt;
We can create as many indexes as necessary inside a single cluster, depending on our needs. &lt;br&gt;
Elasticsearch searches an index rather than the text directly, which is what enables quick search results. Instead of scanning every word on every page of a book, you can consult the index at the back to find the pages relevant to a term. &lt;br&gt;
This form of index is called an "inverted index" because it inverts a page-centric data structure (pages-&amp;gt;words) into a word-centric one (words-&amp;gt;pages). Elasticsearch's inverted indexes are built and maintained using Apache Lucene.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
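
&lt;p&gt;The idea behind an inverted index can be sketched in a few lines of Python (a toy illustration of the words-&amp;gt;documents mapping, not how Lucene stores it internally):&lt;/p&gt;

```python
# Toy inverted index: map each word to the set of document IDs containing it.
from collections import defaultdict

docs = {
    1: "centralized logging with the elk stack",
    2: "elasticsearch is a distributed search engine",
    3: "logstash ships logs to elasticsearch",
}

inverted = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        inverted[word].add(doc_id)

# A term lookup is now a single dictionary access instead of a scan of every document.
print(sorted(inverted["elasticsearch"]))  # prints [2, 3]
```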

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06oy64xa2vgtv96uy85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06oy64xa2vgtv96uy85.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Document&lt;/strong&gt;&lt;br&gt;
A document is the unit of information indexed by Elasticsearch, expressed in JSON format. Any number of documents can be added to an index.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
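
&lt;p&gt;Indexing a document is a single REST call with a JSON body (the index name, ID, and fields below are hypothetical):&lt;/p&gt;

```
PUT /customers/_doc/1
{
  "name": "Jane Doe",
  "city": "Casablanca"
}
```

&lt;p&gt;Elasticsearch stores the document as-is and, absent an explicit mapping, infers a data type for each field the first time it appears.&lt;/p&gt;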

&lt;p&gt;&lt;a href="https://dev.to/chaira/kibana-fundamentals-1ch"&gt;Kibana Fundamentals&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.to/chaira/logstash-fundamentals-h1h"&gt;Logstash Fundamentals&lt;/a&gt;&lt;/p&gt;

</description>
      <category>elk</category>
      <category>elasticsearch</category>
    </item>
  </channel>
</rss>
