<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hored Otniel</title>
    <description>The latest articles on DEV Community by Hored Otniel (@morten12).</description>
    <link>https://dev.to/morten12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F412490%2F264b03ad-91c1-446d-9a84-4259c4e6c122.jpeg</url>
      <title>DEV Community: Hored Otniel</title>
      <link>https://dev.to/morten12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/morten12"/>
    <language>en</language>
    <item>
      <title>Introduction to PCI DSS and its contribution to FinTech companies</title>
      <dc:creator>Hored Otniel</dc:creator>
      <pubDate>Wed, 08 Mar 2023 10:42:42 +0000</pubDate>
      <link>https://dev.to/morten12/introduction-to-pci-dss-and-its-contribution-to-fintech-1mmp</link>
      <guid>https://dev.to/morten12/introduction-to-pci-dss-and-its-contribution-to-fintech-1mmp</guid>
      <description>&lt;p&gt;The emergence of multiple online payment solutions has completely transformed the online payment landscape. This revolution has not only led to the exponential growth of e-commerce but also completely revolutionized the way we conduct financial transactions. Today, businesses across industries have embraced the digital world, capitalizing on the benefits offered by online transactions. Nevertheless, it is known that many security risks exist online. Indeed, the financial system is one of the most targeted areas for cyber attacks. As a result, cybersecurity has become a major concern for the entire FinTech ecosystem, impacting every aspect of the industry.&lt;/p&gt;

&lt;p&gt;The PCI DSS is one of the main standards protecting users' credit card data. In this article, we will discuss the PCI DSS, its requirements, how it works, and how it can help online payment institutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is PCI DSS?
&lt;/h2&gt;

&lt;p&gt;PCI DSS stands for Payment Card Industry Data Security Standard. It is an information security standard for organizations that handle credit cards from the major card brands. The standard is administered by the Payment Card Industry Security Standards Council, and compliance is mandatory for companies that process, store, or transmit cardholder data (CHD) or sensitive authentication data (SAD). The standard was created to better control cardholder data and reduce credit card fraud, and it defines the minimum technical and operational requirements for data security. The current version, PCI DSS 4.0, was published on March 31, 2022. You can download it &lt;a href="https://www.pcisecuritystandards.org/document_library/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who created PCI DSS?
&lt;/h2&gt;

&lt;p&gt;The PCI DSS was created jointly in 2004 by the major credit-card companies, but the history of the standard deserves a few details. Visa was the first of the major card companies to attempt to establish a set of security standards for businesses that accepted online payments: it announced the &lt;a href="https://en.wikipedia.org/wiki/Cardholder_Information_Security_Program"&gt;Cardholder Information Security Program (CISP)&lt;/a&gt; in 1999 and implemented it in 2001. Mastercard, American Express, and Discover then launched their own security programs. &lt;/p&gt;

&lt;p&gt;Not surprisingly, a problem soon arose: merchants that accepted multiple credit card brands were faced with multiple, overlapping compliance programs, while payment fraud kept rising. To find a solution, American Express, Discover Financial Services, JCB International, Mastercard, and Visa came together under the Payment Card Industry (PCI) banner and introduced PCI DSS 1.0 in December 2004.&lt;/p&gt;

&lt;h2&gt;
  
  
  PCI DSS Requirements
&lt;/h2&gt;

&lt;p&gt;PCI DSS 4.0 is designed to further secure cardholder data by helping businesses adopt security measures and access controls. Each requirement of the standard is linked to an objective. The table below groups the requirements under the goal each one addresses.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Goal&lt;/th&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Build and Maintain a Secure Network and Systems&lt;/td&gt;
&lt;td&gt;Requirement 1: Install and Maintain Network Security Controls&lt;br&gt;&lt;br&gt;Requirement 2: Apply Secure Configurations to All System Components&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Protect Account Data&lt;/td&gt;
&lt;td&gt;Requirement 3: Protect Stored Account Data&lt;br&gt;&lt;br&gt;Requirement 4: Protect Cardholder Data with Strong Cryptography During Transmission Over Open, Public Networks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintain a Vulnerability Management Program&lt;/td&gt;
&lt;td&gt;Requirement 5: Protect All Systems and Networks from Malicious Software&lt;br&gt;&lt;br&gt;Requirement 6: Develop and Maintain Secure Systems and Software&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Implement Strong Access Control Measures&lt;/td&gt;
&lt;td&gt;Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know&lt;br&gt;&lt;br&gt;Requirement 8: Identify Users and Authenticate Access to System Components&lt;br&gt;&lt;br&gt;Requirement 9: Restrict Physical Access to Cardholder Data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Regularly Monitor and Test Networks&lt;/td&gt;
&lt;td&gt;Requirement 10: Log and Monitor All Access to System Components and Cardholder Data&lt;br&gt;&lt;br&gt;Requirement 11: Test Security of Systems and Networks Regularly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintain an Information Security Policy&lt;/td&gt;
&lt;td&gt;Requirement 12: Support Information Security with Organizational Policies and Programs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The standard also associates each requirement with defined approaches and test procedures; the test procedures are what assessors use to verify compliance.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the PCI compliance levels?
&lt;/h2&gt;

&lt;p&gt;Although the PCI DSS is a unified standard that reflects the rules of all the major players in the PCI Security Standards Council (PCI SSC), its validation requirements vary with the number of transactions a payment solution processes. A list of compliance levels therefore spells out what is expected of each responsible party.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Level 1: The highest level, for companies that process more than 6 million Visa or Mastercard transactions per year, or more than 2.5 million American Express transactions, or that have suffered a data breach. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Level 2: For companies that process between 1 and 6 million transactions per year. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Level 3: For companies that process between 20,000 and 1 million online transactions per year.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Level 4: The lowest level, for companies that process fewer than 20,000 transactions per year.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How PCI DSS compliance works
&lt;/h2&gt;

&lt;p&gt;So how does the PCI DSS certification process work? It involves an audit of the company: an assessment (whose details vary with your level), quarterly network scans, and an Attestation of Compliance. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Level 1 companies, the process involves:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The completion of an annual Report on Compliance (&lt;a href="https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Reporting%20Template%20or%20Form/PCI-DSS-v3-2-1-ROC-Reporting-Template-r2.pdf"&gt;ROC&lt;/a&gt;) by a Qualified Security Assessor (&lt;a href="https://www.pcisecuritystandards.org/assessors_and_solutions/qualified_security_assessors"&gt;QSA&lt;/a&gt;). The assessment takes place within the organization to: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Validate the scope of the assessment;&lt;/li&gt;
&lt;li&gt;Review your documentation and technical information;&lt;/li&gt;
&lt;li&gt;Determine whether the PCI DSS’s requirements are being met;&lt;/li&gt;
&lt;li&gt;Provide support and guidance during the compliance process; and&lt;/li&gt;
&lt;li&gt;Evaluate compensating controls.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Quarterly network scans are carried out by an Approved Scanning Vendor (&lt;a href="https://www.pcisecuritystandards.org/assessors_and_solutions/approved_scanning_vendors"&gt;ASV&lt;/a&gt;), and an Attestation of Compliance (&lt;a href="https://www.pcisecuritystandards.org/document_library"&gt;AOC&lt;/a&gt;) must be produced for the assessment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; For Level 2-4 companies, there is a self-assessment questionnaire (&lt;a href="https://www.pcisecuritystandards.org/pdfs/SAQs_for_PCI_DSS_v4.0_Bulletin.pdf"&gt;SAQ&lt;/a&gt;). There are nine different questionnaires, each with its own AOC form. Level 2 organizations must also complete a ROC.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why FinTech Companies Should Prioritize PCI DSS Compliance
&lt;/h2&gt;

&lt;p&gt;As explained in the sections above, the PCI DSS is aimed at the security of cardholder data: it is a standard that lets a company operating online payment services follow rules that ensure the security of the information it holds. &lt;/p&gt;

&lt;p&gt;The PCI DSS standard can serve as the foundation for the security of a FinTech company's IT infrastructure: its requirements lead companies to implement the methods needed to reach a meaningful level of protection against cyber attacks. For example, by following the standard, organizations are required to put in place measures such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;securing key physical access areas&lt;/li&gt;
&lt;li&gt;monitoring network access&lt;/li&gt;
&lt;li&gt;conducting regular penetration tests&lt;/li&gt;
&lt;li&gt;limiting access to data&lt;/li&gt;
&lt;li&gt;deploying secure hardware and software&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These measures not only protect data but also foster a relationship of trust between merchants and their customers. &lt;/p&gt;

&lt;p&gt;As a standard established by the Payment Card Industry Security Standards Council ("PCI SSC"), PCI DSS can provide businesses with a significant boost in credibility when it comes to processing credit card data. By implementing the rigorous security measures required by the standard, companies can demonstrate their commitment to protecting their customers' sensitive data. This, in turn, can help build trust and confidence in their services among end-users. Ultimately, by obtaining PCI DSS certification, businesses can differentiate themselves in a crowded marketplace and position themselves for long-term success.&lt;/p&gt;

&lt;p&gt;It is important to note, however, that PCI DSS compliance can be a complicated and costly process for businesses.&lt;/p&gt;

</description>
      <category>fintech</category>
      <category>security</category>
      <category>dataprivacy</category>
    </item>
    <item>
      <title>Log centralization and security alert with ELK (part 1)</title>
      <dc:creator>Hored Otniel</dc:creator>
      <pubDate>Sat, 20 Aug 2022 10:45:00 +0000</pubDate>
      <link>https://dev.to/morten12/log-centralization-and-security-alert-with-elk-part-1-1gll</link>
      <guid>https://dev.to/morten12/log-centralization-and-security-alert-with-elk-part-1-1gll</guid>
      <description>&lt;p&gt;As a SysAdmin, DevOps, or cybersecurity analyst, the moment will inevitably come in your work when you will need to consult the logs to investigate an incident or a bug.&lt;/p&gt;

&lt;p&gt;Imagine a scenario where one of your collaborators regularly logs in over SSH as '&lt;strong&gt;root&lt;/strong&gt;' on the servers, or one where your main database replication server is down. Wouldn't it be useful to have one place with all this information, without having to log in to each server?&lt;/p&gt;

&lt;h2&gt;
  
  
  Log aggregation
&lt;/h2&gt;

&lt;p&gt;In the previous scenarios, the best way to get all this information quickly is log aggregation. So what is log aggregation? &lt;/p&gt;

&lt;p&gt;Log aggregation is the process of collecting and standardizing log events from various sources across your IT infrastructure for faster log analysis. It is one of the early stages in the log management process: it guarantees that your logs are centralized and that you have a tool to analyze them.&lt;br&gt;
Log aggregation gives you several advantages. Indeed, you can: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perform real-time monitoring&lt;/li&gt;
&lt;li&gt;Troubleshoot production incidents&lt;/li&gt;
&lt;li&gt;Centralize logs in the same location&lt;/li&gt;
&lt;li&gt;Collaborate with others on log analysis&lt;/li&gt;
&lt;li&gt;Enhance security across your infrastructure&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Elasticsearch-Logstash-Kibana
&lt;/h2&gt;

&lt;p&gt;ELK is a log analysis suite composed of three open-source tools developed by the company Elastic: Elasticsearch, Logstash, and Kibana.&lt;/p&gt;

&lt;p&gt;Elasticsearch is a search and analytics engine that works with the JSON format. Its goal is to efficiently extract data from structured or unstructured sources in near real-time. Elasticsearch builds on Apache Lucene, which gives it some of the most powerful full-text search capabilities of any open-source product.&lt;/p&gt;

&lt;p&gt;Logstash is a tool for entering, processing, and outputting log data. Its function is to analyze, filter, and cut the logs to transform them into formatted documents for Elasticsearch.&lt;/p&gt;

&lt;p&gt;Kibana is an interactive, configurable dashboard that lets you visualize the data stored in Elasticsearch. It surfaces trends and patterns through charts and graphs, and its dashboards can be shared for quick, effective communication.&lt;/p&gt;

&lt;p&gt;For data collection, Elastic provides Beats: lightweight agents that you install on the machines you want to monitor.&lt;/p&gt;

&lt;p&gt;With this stack, you can therefore set up a centralized system (a SIEM) that offers visibility into the activity of your infrastructure and lets you react to threats in real-time. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy82w7ffzph2f9gureqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy82w7ffzph2f9gureqr.png" alt="ELK"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://subscription.packtpub.com/book/big-data-and-business-intelligence/9781788831031/1/ch01lvl1sec10/what-is-elk-stack" rel="noopener noreferrer"&gt;Image source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the rest of this article, we will run a demo: we will set up the ELK stack to collect logs from a machine and visualize the dashboards in Kibana.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Disclaimer: in this article, we will not cover Logstash; it is not essential for what we want to do in this series.&lt;/p&gt;

&lt;p&gt;For this demo, we will use one server to install ELK. The data-collection agents (Filebeat, Metricbeat, ...) will be installed on the host to monitor.&lt;/p&gt;

&lt;p&gt;The server used to install ELK is a GCP instance with the following specifications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;25GB of disk space&lt;/li&gt;
&lt;li&gt;4GB of memory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although these specifications are fairly minimal, they should suffice for this article.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Make sure you have Java installed and that ports 9200 (Elasticsearch) and 5601 (Kibana) are open on your server.&lt;/p&gt;
&lt;h3&gt;
  
  
  Elasticsearch
&lt;/h3&gt;

&lt;p&gt;We will start by importing the Elasticsearch public GPG key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command should print &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next, we add the Elastic source list to the sources.list.d directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee –a /etc/apt/sources.list.d/elastic-7.x.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
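&lt;p&gt;Note: &lt;code&gt;apt-key&lt;/code&gt; is deprecated on recent Debian/Ubuntu releases. If your system warns about it, the following sketch achieves the same result with a dedicated keyring (the path &lt;code&gt;/usr/share/keyrings/elastic-7.x.gpg&lt;/code&gt; is an arbitrary choice):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Store the Elastic key in its own keyring instead of the global apt-key store
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-7.x.gpg

# Reference that keyring in the source list entry
echo "deb [signed-by=/usr/share/keyrings/elastic-7.x.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;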



&lt;h4&gt;
  
  
  Install and configure elasticsearch
&lt;/h4&gt;

&lt;p&gt;After that, update the package index and install Elasticsearch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update

sudo apt-get install elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjrjbpk21a2rgflhy03w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjrjbpk21a2rgflhy03w.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To configure elasticsearch, we will modify the file &lt;code&gt;/etc/elasticsearch/elasticsearch.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/elasticsearch/elasticsearch.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important things to change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;network.host
http.port
discovery.type
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
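&lt;p&gt;As a sketch, a minimal single-node configuration could look like this (here Elasticsearch listens on all interfaces; replace &lt;code&gt;0.0.0.0&lt;/code&gt; with your server's IP address to be more restrictive):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/elasticsearch/elasticsearch.yml (excerpt)
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;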



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21hc2sdooqyocyzycmvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21hc2sdooqyocyzycmvq.png" alt="Config elasticsearch"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;After that, we start the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You also need to enable Elasticsearch so that it starts automatically at boot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we check that everything works with an HTTP request. Since I bound Elasticsearch to my IP address, I use it in the request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X GET "your_ip:9200"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you used &lt;code&gt;localhost&lt;/code&gt;, the request becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X GET "localhost:9200"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdvx1jsal5njdaohvyu9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdvx1jsal5njdaohvyu9.png" alt="Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Kibana
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Install and configure kibana
&lt;/h4&gt;

&lt;p&gt;To install Kibana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install kibana

sudo systemctl enable kibana

sudo systemctl start kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we configure Kibana by modifying the file &lt;code&gt;/etc/kibana/kibana.yml&lt;/code&gt;. The most important elements to modify here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server.port: 5601

server.host: "0.0.0.0"

elasticsearch.hosts: ["http://localhost:9200"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For &lt;code&gt;elasticsearch.hosts&lt;/code&gt;, replace localhost if you have configured Elasticsearch on another IP address.&lt;/p&gt;

&lt;p&gt;Let's restart the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now have access to the Kibana dashboard at &lt;code&gt;http://localhost:5601&lt;/code&gt; or &lt;code&gt;http://your-ip:5601&lt;/code&gt; :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1tzy5kadd7axf1lxdsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1tzy5kadd7axf1lxdsf.png" alt="Kibana dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you may have noticed when opening Kibana, no authentication is required. While that is fine for our demo, in a real environment we cannot leave the dashboard this accessible. So we will add authentication, using tools Elastic provides for this purpose.&lt;/p&gt;

&lt;p&gt;First, we need to stop the two services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl stop elasticsearch kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the file &lt;code&gt;/etc/elasticsearch/elasticsearch.yml&lt;/code&gt;, let's add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;xpack.security.enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To communicate with our cluster, we need to set passwords for the built-in users. The tooling for this lives in &lt;code&gt;/usr/share/elasticsearch/&lt;/code&gt;. With the service stopped, start Elasticsearch in the foreground from that folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./bin/elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, in another terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./bin/elasticsearch-setup-passwords auto

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Answer "y" when prompted to confirm this step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhnuautknumcx0mxtvsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhnuautknumcx0mxtvsk.png" alt="passwords y"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, you will get the passwords for the different users, as shown here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l4pxtzpazn2sxh9y7el.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l4pxtzpazn2sxh9y7el.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we have to go back to the Kibana configuration file to set credentials for the user &lt;strong&gt;kibana_system&lt;/strong&gt;. Otherwise, Kibana and Elasticsearch will not be able to communicate.&lt;/p&gt;

&lt;p&gt;Add the following lines to the configuration, filling in the credentials obtained earlier. The user to indicate here is &lt;strong&gt;kibana_system&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;elasticsearch.username: "kibana_system"
elasticsearch.password: "your_password_for_kibana_system"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Elasticsearch and Kibana, and voilà! 😎 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq0154k6p80gs45lk93p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq0154k6p80gs45lk93p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, Kibana now requires authentication. Log in with the credentials of the user &lt;strong&gt;elastic&lt;/strong&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  Beats
&lt;/h3&gt;

&lt;p&gt;If you browse around Kibana a bit, you will notice that we currently have no dashboards. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l9m1g9tnzu58z7g4xkd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l9m1g9tnzu58z7g4xkd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is quite normal: we have not created any dashboards and, above all, Elasticsearch holds no data for them yet.&lt;/p&gt;

&lt;p&gt;To solve this problem, we will use &lt;strong&gt;metricbeat&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Metricbeat
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/" rel="noopener noreferrer"&gt;Elastic&lt;/a&gt; define Metricbeat as a lightweight shipper that you can install on your servers to periodically collect metrics from the operating system and from services running on the server. Metricbeat takes the metrics and statistics that it collects and ships them to the output that you specify, such as Elasticsearch or Logstash.&lt;/p&gt;

&lt;p&gt;Installing Metricbeat is quite simple: install it on the host you want to monitor, then configure it to send information to Elasticsearch. We use Elasticsearch 7.17.5 in this tutorial, so we have to install the same version of Metricbeat.&lt;/p&gt;

&lt;p&gt;So we start by downloading the deb package &lt;a href="https://www.elastic.co/fr/downloads/past-releases/metricbeat-7-17-5" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we proceed with the installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dpkg -i metricbeat-7.17.5-amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you have to configure the file &lt;code&gt;/etc/metricbeat/metricbeat.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In the basic configuration we use here, we change the following elements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;setup.kibana:

  # Kibana Host
  host: "your_ip:5601"


output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  #username: "elastic"
  #password: "changeme"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Take care to adapt these values to your infrastructure. Since we enabled authentication earlier, uncomment the username and password lines and fill in the credentials of the user &lt;strong&gt;elastic&lt;/strong&gt;.&lt;/p&gt;
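&lt;p&gt;Before starting the agent, you can ask Metricbeat to validate the configuration and the connection to Elasticsearch; both subcommands are part of the standard &lt;code&gt;metricbeat&lt;/code&gt; CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check the syntax of metricbeat.yml
sudo metricbeat test config

# Check that the configured output (Elasticsearch) is reachable with these credentials
sudo metricbeat test output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;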

&lt;p&gt;What is interesting with Beats is that they ship with pre-built dashboards, which gives us a basic, global view of the data the agent sends. You can create your own dashboards later if you wish.&lt;/p&gt;

&lt;p&gt;To initialize metricbeat and load the dashboards, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo metricbeat setup -e

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command may take some time to finish loading the index templates and dashboards.&lt;/p&gt;

&lt;p&gt;After that, start the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service metricbeat start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then go to Kibana to see the change:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8greu1p7tu5mi1sm4tv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8greu1p7tu5mi1sm4tv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that we now have a list of dashboards provided by Metricbeat. Let's browse a few that concern us to see what they report. (Note that some dashboards may be empty if the monitored server does not produce the metrics they cover.)&lt;/p&gt;

&lt;p&gt;Let's take a look at the [Metricbeat System] Host overview ECS dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy978ehz1kw6jdhahk3dk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy978ehz1kw6jdhahk3dk.png" alt="[Metricbeat System] Host overview ECS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the name of the dashboard indicates, we get an overview of the host. There is more information further down:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lpf1d7eejgq40rfyaql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lpf1d7eejgq40rfyaql.png" alt="Host overview ECS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's take a look at another dashboard: [Metricbeat System] Containers overview ECS&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfvwinp5c8hjn6mguopd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfvwinp5c8hjn6mguopd.png" alt="Containers overview ECS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, there is much less information than in the previous dashboard, simply because very few containers are running on the host. The list of dashboards offered by Metricbeat is quite large, and you can also create your own.&lt;/p&gt;

&lt;h4&gt;
  
  
  Filebeat
&lt;/h4&gt;

&lt;p&gt;Filebeat is installed following the same process as Metricbeat. Download the deb package &lt;a href="https://www.elastic.co/fr/downloads/past-releases/filebeat-7-17-5" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can then apply the same basic configuration changes you made for Metricbeat (output host and credentials).&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;filebeat.yml&lt;/code&gt;, we can also indicate the folders from which Filebeat should collect the logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- type: filestream
  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id
  # Change to true to enable this input configuration.
  enabled: true
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we perform the setup and start the service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo filebeat setup -e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service filebeat start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that Metricbeat and Filebeat have modules that can be activated to collect more information. If you run tools such as Apache, Nginx or Docker, you can enable the corresponding modules to feed their dedicated dashboards. You can consult the list of available modules in the folder &lt;code&gt;/etc/filebeat/modules.d&lt;/code&gt;.&lt;/p&gt;
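&lt;p&gt;As an illustration, each module is just a small YAML file in that folder, and enabling a module simply renames its file from module.yml.disabled to module.yml. A hypothetical Nginx module file could look like this sketch (the commented paths show how you might override the defaults):&lt;/p&gt;

```yaml
# Sketch of /etc/filebeat/modules.d/nginx.yml once the module is enabled
- module: nginx
  access:
    enabled: true
    #var.paths: ["/var/log/nginx/access.log*"]  # override the default log path if needed
  error:
    enabled: true
    #var.paths: ["/var/log/nginx/error.log*"]
```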

&lt;p&gt;To enable the MySQL module for Filebeat, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo filebeat modules enable mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo filebeat modules enable apache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will then need to run the setup again and restart Filebeat.&lt;/p&gt;

&lt;p&gt;We can see that there are now dashboards for Filebeat.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwri07u8lh4937wdt6y7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwri07u8lh4937wdt6y7.png" alt="Filebeat dashboard list"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take the time to explore the dashboards. The one that will contain the most data out of the box is usually the Syslog dashboard.&lt;/p&gt;

&lt;p&gt;You first have to enable the system module with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo filebeat modules enable system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this tutorial we have covered the following points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installation and configuration of Elasticsearch&lt;/li&gt;
&lt;li&gt;Installation and configuration of Kibana&lt;/li&gt;
&lt;li&gt;Activation of authentication on Kibana&lt;/li&gt;
&lt;li&gt;Installation and configuration of Metricbeat and Filebeat&lt;/li&gt;
&lt;li&gt;Exploration of some dashboards&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What next?
&lt;/h2&gt;

&lt;p&gt;In this first tutorial, we have covered the essentials of setting up the ELK stack, but there is still a lot to discover. In the next part of the series, we will focus on setting up security alerts with a great tool, ElastAlert. It will also be an opportunity to revisit the possibilities that ELK offers us.&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>elasticsearch</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>Log centralization and security alert with ELK (Part 2)</title>
      <dc:creator>Hored Otniel</dc:creator>
      <pubDate>Sun, 24 Jul 2022 08:30:00 +0000</pubDate>
      <link>https://dev.to/morten12/security-alert-with-elastalert2-part--1b27</link>
      <guid>https://dev.to/morten12/security-alert-with-elastalert2-part--1b27</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1OPuQeTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhi92j7c8cb0q1f61pev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1OPuQeTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhi92j7c8cb0q1f61pev.png" alt="Elastalert" width="512" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assume that your information system has an enormous cloud infrastructure with lots of servers. As a cybersecurity engineer, it is your duty to put in place a set of processes to detect suspicious activity on those servers, isn't it?&lt;/p&gt;

&lt;p&gt;If you know exactly what you are looking for, it is pretty easy to head straight to the logs, inspect them and pull those things out. But when you have loads of logs spread across different files, you obviously need a log management system and, most importantly, a tool that alerts you to the slightest incident on your servers. This is where ELK and ElastAlert 2 come in.&lt;/p&gt;

&lt;h2&gt;
  
  
  ELK Stack
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/morten12/log-centralization-and-security-alert-with-elk-part-1-1gll"&gt;first part&lt;/a&gt; of this series, we deployed an ELK cluster and installed Beats on servers for monitoring. You should check that part if you haven't already, because it is important for understanding what happens in this second part.&lt;/p&gt;

&lt;h2&gt;
  
  
  Elastalert
&lt;/h2&gt;

&lt;p&gt;ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest in data stored in Elasticsearch. It lets you write your own rules for the alerts you want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://elastalert.readthedocs.io/en/latest/"&gt;ElastAlert&lt;/a&gt; queries Elasticsearch and provides an alerting mechanism with multiple output types, such as Slack, Email, JIRA, OpsGenie, among others.&lt;/p&gt;

&lt;p&gt;Here we are going to use &lt;a href="https://elastalert2.readthedocs.io/en/latest/"&gt;ElastAlert 2&lt;/a&gt;, because the initial project is no longer maintained.&lt;/p&gt;

&lt;p&gt;We will use our cluster previously set up in the &lt;a href="https://dev.to/morten12/log-centralization-and-security-alert-with-elk-part-1-1gll"&gt;first part&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Note that there are various methods to deploy ElastAlert. You can use a &lt;a href="https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#as-a-docker-container"&gt;Docker container&lt;/a&gt;, but I preferred the Python package option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/jertel/elastalert2.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install "setuptools&amp;gt;=11.3"
python setup.py install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, remember that Elasticsearch works with indexes, so we need to create the index where ElastAlert will store its metadata. The tool provides a command for exactly that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;elastalert-create-index
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will need to provide some information, such as the host, the port, the username and password to connect, and a few optional settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9GkFcoqm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ai13dh4giaktpvza2f24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9GkFcoqm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ai13dh4giaktpvza2f24.png" alt="indexe-creation" width="880" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! You have just installed a very powerful tool for your security alerts. Keep in mind, though, that this article covers just a few basic rules.&lt;/p&gt;

&lt;p&gt;Before going any further, you must choose and configure the channel through which you wish to receive alerts. Here I chose to use Slack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Slack
&lt;/h3&gt;

&lt;p&gt;You need to create an incoming webhook for your Slack channel.&lt;/p&gt;

&lt;p&gt;For this we can follow this documentation: &lt;a href="https://slack.com/help/articles/115005265063-Incoming-webhooks-for-Slack"&gt;https://slack.com/help/articles/115005265063-Incoming-webhooks-for-Slack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will first need to create an app in Slack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cklEpkoL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qfd441dyudvy1551chuj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cklEpkoL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qfd441dyudvy1551chuj.png" alt="webhook" width="502" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then add Incoming Webhooks as a feature and connect it to the workspace you want to use. After that, copy the webhook URL; you will need it later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4Dg5EiL8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m94d4pxg48cobbv2n4nq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4Dg5EiL8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m94d4pxg48cobbv2n4nq.png" alt="webhook-2" width="655" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once that is done, we can move on by writing our first rule.&lt;/p&gt;

&lt;h3&gt;
  
  
  First rule
&lt;/h3&gt;

&lt;p&gt;To test the tool we will use the rule examples/rules/example_frequency.yaml as a template.&lt;br&gt;
Before going on, let's go over the basic options you need to understand.&lt;br&gt;
As explained in the &lt;a href="https://elastalert2.readthedocs.io/en/latest/running_elastalert.html#requirements"&gt;doc&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;es_host&lt;/strong&gt; and &lt;strong&gt;es_port&lt;/strong&gt; should point to the Elasticsearch cluster we want to query.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;name&lt;/strong&gt; attribute must be unique.  ElastAlert 2 will not start if two rules share the same name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;type&lt;/strong&gt;: Each rule has a different type which may take different parameters. The frequency type means “Alert when more than num_events occur within timeframe.” &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;num_events&lt;/strong&gt;: This parameter is specific to frequency type and is the threshold for when an alert is triggered.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;timeframe&lt;/strong&gt; is the time period in which num_events must occur.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;filter&lt;/strong&gt; is a list of Elasticsearch filters used to narrow down the results. Here we have a single term filter for documents whose some_field matches some_value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;alert&lt;/strong&gt; is a list of alerts to run on each match.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the most basic information you need to understand for this first rule. &lt;/p&gt;
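&lt;p&gt;Putting these options together, a minimal frequency rule skeleton could look like the sketch below (all values are placeholders to adapt; the debug alerter simply logs the matches):&lt;/p&gt;

```yaml
# Minimal frequency rule skeleton (placeholder values)
name: my-first-rule        # must be unique across all rules
type: frequency            # alert when num_events occur within timeframe
index: filebeat-*          # indexes to query

num_events: 5              # threshold that triggers the alert
timeframe:
  minutes: 10              # window in which num_events must occur

filter:
- term:
    some_field: "some_value"   # documents to count

alert:
- debug                    # replace with slack, email, ...
```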

&lt;p&gt;What we will do is create an alert for every command executed as root.&lt;/p&gt;

&lt;p&gt;To do this, let's check some information on our dashboard.&lt;/p&gt;

&lt;p&gt;Remember that in the &lt;a href="https://dev.to/morten12/log-centralization-and-security-alert-with-elk-part-1-1gll"&gt;first part&lt;/a&gt; we deployed Filebeat to collect information such as system logs. To write our rule, let's look at this dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tl-hNW4w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qk60lhqo0yn6gxtk9av4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tl-hNW4w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qk60lhqo0yn6gxtk9av4.png" alt="dashboard" width="880" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that in the last 15 minutes a command has been executed with sudo.&lt;/p&gt;

&lt;p&gt;This first alert will therefore consist of sending a message to our Slack app as soon as a sudo command is executed. If you have understood the options above, we are going to put:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type: frequency

index: filebeat-*

num_events: 1

timeframe:
  minutes: 1

filter:
- query:
    query_string:
      query: "system.auth.sudo.command: *"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, we read from indexes prefixed with filebeat and trigger as soon as the event occurs at least once within a minute. For the event itself, we search on the fields that Filebeat provides. On the dashboard shown earlier you certainly noticed the system.auth.sudo.command field, so this filter was relatively simple to write.&lt;/p&gt;

&lt;p&gt;The other very important part is the alert formatting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;realert:
  minutes: 1

query_key:
  - host.ip

include:
  - host.hostname
  - user.name
  - host.ip

include_match_in_root: true

alert_subject: "sudo command on &amp;lt;{}&amp;gt;"
alert_subject_args:
  - host.hostname

alert_text: |-
  A command was executed as root on {}.
  Informations:
  User: {}
  IP: {}
alert_text_args:
  - host.hostname
  - user.name
  - host.ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember the importance of &lt;code&gt;realert&lt;/code&gt; here: after an alert fires, further matches are silenced for the defined period, so it controls how often alerts are sent. The other options pull fields present in the Filebeat indexes into the alert.&lt;/p&gt;

&lt;p&gt;The last part is the configuration of the alerting channel, Slack in our case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alert:
  - slack:
      slack_webhook_url: "your_url"
      slack_username_override: "your_username"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will also need to configure the config.yaml file in the examples subdirectory. Once done, we'll test our rule:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;elastalert-test-rule --config examples/config.yaml examples/rules/example_frequency.yaml --alert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And voilà! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9fXD_l0x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kzbhu0lnokrmo05blyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9fXD_l0x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kzbhu0lnokrmo05blyv.png" alt="Alerte" width="816" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we received a message in the channel. I hid the sensitive information, but you can recognize the title and content formatted as in our alert.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating
&lt;/h2&gt;

&lt;p&gt;So far we have only tested our rule with the elastalert-test-rule command. But as you can guess, to secure a real information system, alerts must reach you in real time, without you having to enter a command each time.&lt;/p&gt;

&lt;p&gt;So to automate the alert process, we'll go back to our config.yaml file.&lt;/p&gt;

&lt;p&gt;The method I use is quite simple. In this file you can indicate the folder where ElastAlert will look for rules, with the &lt;code&gt;rules_folder&lt;/code&gt; option. You can also set options that will be applied by default to all rules, for example when &lt;code&gt;es_host&lt;/code&gt;, &lt;code&gt;es_port&lt;/code&gt;, &lt;code&gt;es_username&lt;/code&gt; or &lt;code&gt;es_password&lt;/code&gt; are fixed values.&lt;/p&gt;
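&lt;p&gt;A config.yaml following this approach could look like the sketch below (placeholder values; see the ElastAlert 2 documentation for the full list of options):&lt;/p&gt;

```yaml
# Sketch of config.yaml (placeholder values)
rules_folder: /opt/elastalert/rules   # where ElastAlert looks for rule files

run_every:                            # how often Elasticsearch is queried
  minutes: 1

buffer_time:                          # how far back each query looks
  minutes: 15

# Defaults applied to every rule
es_host: 192.168.1.10
es_port: 9200
es_username: elastic
es_password: changeme

writeback_index: elastalert_status    # index created by elastalert-create-index
```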

&lt;p&gt;Once this file is prepared and your rules are written, all you have to do is create a cron job that starts ElastAlert so it sends you alerts at your convenience.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;5 4 5 10 5 /usr/local/bin/elastalert --config /opt/elastalert/config/config.yaml --verbose

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I chose the periodicity of this job quite randomly, but you get the idea 😎️. Your alerts will now reach you continuously.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use case
&lt;/h3&gt;

&lt;p&gt;We have covered sudo commands here, but here is a non-exhaustive list of other rules you can set up using the appropriate Beat indexes (filebeat, metricbeat...):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disk space alerts&lt;/li&gt;
&lt;li&gt;SSH connection alerts&lt;/li&gt;
&lt;li&gt;Alerts on uptime (Heartbeat)&lt;/li&gt;
&lt;li&gt;...&lt;/li&gt;
&lt;/ul&gt;
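&lt;p&gt;For instance, an SSH alert in the same spirit as our sudo rule could filter on the authentication fields shipped by Filebeat's system module. This is only a sketch: check the exact field names and values (such as system.auth.ssh.event) against your own indexes before using it:&lt;/p&gt;

```yaml
# Sketch: alert on repeated failed SSH logins (verify field names in your own indexes)
name: ssh-failed-logins
type: frequency
index: filebeat-*

num_events: 5              # five failures...
timeframe:
  minutes: 5               # ...within five minutes

filter:
- query:
    query_string:
      query: "system.auth.ssh.event: Failed"

alert:
- slack
slack_webhook_url: "your_url"
```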

&lt;p&gt;ElastAlert is a very rich tool in terms of possibilities. Be imaginative and have fun with it.&lt;/p&gt;

</description>
      <category>security</category>
      <category>cybersecurity</category>
      <category>monitoring</category>
    </item>
  </channel>
</rss>
