<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ibrahim Kabbash</title>
    <description>The latest articles on DEV Community by Ibrahim Kabbash (@ikabbash).</description>
    <link>https://dev.to/ikabbash</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1211571%2F8f4f3219-6d9b-49a7-a2ae-b9d58b4484cb.jpg</url>
      <title>DEV Community: Ibrahim Kabbash</title>
      <link>https://dev.to/ikabbash</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ikabbash"/>
    <language>en</language>
    <item>
      <title>Azure Sentinel in a Nutshell: A Beginner’s Overview</title>
      <dc:creator>Ibrahim Kabbash</dc:creator>
      <pubDate>Sat, 12 Apr 2025 11:44:05 +0000</pubDate>
      <link>https://dev.to/ikabbash/azure-sentinel-in-a-nutshell-a-beginners-overview-1ghg</link>
      <guid>https://dev.to/ikabbash/azure-sentinel-in-a-nutshell-a-beginners-overview-1ghg</guid>
      <description>&lt;p&gt;Security teams rely on SIEM tools to monitor and respond to threats, but managing them can be complex. &lt;a href="https://learn.microsoft.com/en-us/azure/sentinel/overview?tabs=azure-portal" rel="noopener noreferrer"&gt;Azure Sentinel&lt;/a&gt;, Microsoft’s cloud-native SIEM, not only addresses these challenges but also offers additional capabilities. This article breaks down what Azure Sentinel is and its basic components, followed by a quick demonstration of SSH brute force detection and alert creation as an example.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Table of Contents&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;
Azure Sentinel: The Basics

&lt;ul&gt;
&lt;li&gt;Components&lt;/li&gt;
&lt;li&gt;Log Analytics Workspace&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Demo: SSH Brute Force Detection

&lt;ul&gt;
&lt;li&gt;Installing Syslog from the Content Hub&lt;/li&gt;
&lt;li&gt;Collecting Logs from Azure VM&lt;/li&gt;
&lt;li&gt;Creating Alert&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Azure Portal account.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Azure Sentinel: The Basics
&lt;/h1&gt;

&lt;p&gt;Azure Sentinel is Microsoft’s cloud-native Security Information and Event Management (SIEM) tool, with built-in Security Orchestration, Automation, and Response (SOAR) capabilities. At its core, it’s designed to collect, analyze, and respond to security threats across your organization.&lt;/p&gt;

&lt;p&gt;It can pull logs from Azure Active Directory, Office 365, firewalls, Syslog, and many more, then analyze and act on them. For example, with Syslog, Azure Sentinel can detect repeated failed SSH login attempts, which often indicate a brute force attack. It can then automatically trigger a playbook to block the suspicious IP address or notify your security team for further action.&lt;/p&gt;
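&lt;p&gt;As a concrete illustration of what those Syslog events look like, failed attempts appear in the Linux auth log as &lt;code&gt;sshd&lt;/code&gt; messages. A quick local sketch (the log lines below are invented, but follow sshd’s real message format):&lt;/p&gt;

```shell
# Count brute-force-style failures in a few sample sshd log lines
# (hypothetical excerpt; the message format matches what sshd emits)
printf '%s\n' \
  'Apr 12 11:02:01 vm1 sshd[1042]: Failed password for invalid user test-user from 203.0.113.7 port 52344 ssh2' \
  'Apr 12 11:02:05 vm1 sshd[1042]: Failed password for invalid user admin from 203.0.113.7 port 52391 ssh2' \
  'Apr 12 11:03:10 vm1 sshd[1050]: Accepted publickey for azureuser from 198.51.100.2 port 60122 ssh2' \
  | grep -c 'Failed password for invalid user'
# prints 2: two failed attempts from the same source IP
```

&lt;p&gt;A burst of such lines from one IP within a short window is exactly the pattern the detection rule later in this article looks for.&lt;/p&gt;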

&lt;p&gt;Azure Sentinel’s Content Hub simplifies setup by providing pre-built solutions, including data connectors, analytics rules, and playbooks, making it easy to integrate sources like Azure Active Directory, Office 365, firewalls, and Syslog. Let’s break down some of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Components
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyieb4kkekk6l5gnqocu0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyieb4kkekk6l5gnqocu0.png" alt="sentinel-components" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are the core elements you’ll find in Azure Sentinel’s sidebar, each playing a key role:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Content Hub&lt;/strong&gt;: A centralized repository of out-of-the-box solutions and packages (plugins) for extending Sentinel’s capabilities. It includes pre-built content like data connectors, analytics rules, workbooks, playbooks, and hunting queries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Connectors&lt;/strong&gt;: Ingest data from a wide range of sources, including Azure services, AWS, on-premise systems, and third-party tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workbooks&lt;/strong&gt;: Customizable dashboards that provide interactive visualizations of your data, helping you analyze trends, investigate incidents, and respond to security threats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analytics&lt;/strong&gt;: Create and manage detection rules for real-time threat monitoring. Includes pre-built rule templates, active rules, and scheduled analytics to identify suspicious activities automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hunting&lt;/strong&gt;: Proactively search for threats using built-in or custom queries. Leverage advanced query languages like KQL (Kusto Query Language) to uncover hidden risks in your data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notebooks&lt;/strong&gt;: Automate and enrich investigations using Jupyter notebooks. These are ideal for advanced threat hunting, incident investigation, and building machine learning models to enhance security operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incidents&lt;/strong&gt;: Group and manage related alerts into incidents, providing a centralized view for tracking and resolving security issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Playbooks&lt;/strong&gt;: Automate responses to threats using workflows built in Logic Apps (an Azure service). Playbooks can trigger actions like blocking IPs, sending notifications, or escalating incidents based on predefined conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Watchlist&lt;/strong&gt;: Create custom lists of key data (e.g., high-risk IPs, privileged users) to prioritize or filter alerts during threat detection and investigation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Log Analytics Workspace
&lt;/h2&gt;

&lt;p&gt;Before we go further with Azure Sentinel, we’ll need to talk about Log Analytics first. Log Analytics is the backbone of Azure Sentinel, as it stores and organizes all the log data collected from your connected sources. Azure Sentinel cannot be created without a Log Analytics workspace because it relies on this workspace to query, analyze, and visualize the data for threat detection and response.&lt;/p&gt;

&lt;p&gt;In short, Log Analytics provides the foundation, while Azure Sentinel builds on it to deliver advanced security operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dzhwex1xa38uyuwus88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dzhwex1xa38uyuwus88.png" alt="log-analytics-architecture" width="800" height="283"&gt;&lt;/a&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-workspace-overview" rel="noopener noreferrer"&gt;Reference&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Demo: SSH Brute Force Detection
&lt;/h1&gt;

&lt;p&gt;We’ll do a demo to set up Azure Sentinel for detecting SSH brute force attacks. This includes installing the Syslog solution from the Content Hub, using the SSH rule template to create an alert, and seeing how Azure Sentinel responds to suspicious activity in real time.&lt;/p&gt;

&lt;p&gt;Before starting, ensure you’ve set up Azure Sentinel and a Log Analytics workspace; both are straightforward to create in the Azure portal, so just go through the creation process. You’ll also need to create an Azure Linux VM to collect Syslog data from for the demo.&lt;/p&gt;
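&lt;p&gt;If you prefer the CLI, the supporting resources can be created with a few commands. This is a minimal sketch with placeholder names; image aliases and defaults may differ across Azure CLI versions, and Sentinel itself is then enabled on the workspace from the portal:&lt;/p&gt;

```shell
# Placeholder names throughout; adjust region, sizes, and credentials to taste
az group create --name sentinel-demo-rg --location eastus

# Log Analytics workspace that Azure Sentinel will be attached to
az monitor log-analytics workspace create \
  --resource-group sentinel-demo-rg \
  --workspace-name sentinel-demo-law

# Small Ubuntu VM whose Syslog data we will collect
az vm create \
  --resource-group sentinel-demo-rg \
  --name syslog-demo-vm \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```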

&lt;h2&gt;
  
  
  Installing Syslog from the Content Hub
&lt;/h2&gt;

&lt;p&gt;After setting up Azure Sentinel, go to &lt;strong&gt;Content Hub&lt;/strong&gt; and search for Syslog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq8jjpjiucjd2iutcgli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq8jjpjiucjd2iutcgli.png" alt="sentinel-content-hub" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select it and click on &lt;strong&gt;Install&lt;/strong&gt;. It should install the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2 data connectors (Syslog via AMA and Syslog via Legacy Agent).&lt;/li&gt;
&lt;li&gt;1 workbook.&lt;/li&gt;
&lt;li&gt;7 analytics rules (one of which is the SSH brute force rule).&lt;/li&gt;
&lt;li&gt;9 hunting queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Collecting Logs from Azure VM
&lt;/h2&gt;

&lt;p&gt;After installing the Syslog content solution, click on &lt;strong&gt;Data connectors&lt;/strong&gt; in the sidebar. Select the &lt;strong&gt;Syslog via AMA&lt;/strong&gt; connector, click on &lt;strong&gt;Open connector page&lt;/strong&gt;, then click on &lt;strong&gt;+Create data collection rule&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flit0lnu5dz6slu2d6544.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flit0lnu5dz6slu2d6544.png" alt="data-connectors" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name the rule anything you wish; I named mine &lt;code&gt;vm-syslog&lt;/code&gt;. For resources, select the Azure VM you just created (or the one you want to collect Syslog data from). Set the minimum log level to &lt;code&gt;LOG_DEBUG&lt;/code&gt; for both &lt;code&gt;LOG_AUTH&lt;/code&gt; and &lt;code&gt;LOG_AUTHPRIV&lt;/code&gt;, then create the rule. This creates a Data Collection Rule that forwards the specified Syslog data to your Log Analytics workspace, where Azure Sentinel can query and analyze it for threat detection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1zo952dij1ugdl8nuw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1zo952dij1ugdl8nuw5.png" alt="data-collection-rules" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try SSHing into the Azure VM you created. Make sure to perform one successful login and one failed attempt with a non-existent user (&lt;code&gt;test-user&lt;/code&gt; for example). You’ll soon see why as we move forward.&lt;/p&gt;
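&lt;p&gt;For reference, the rule template we’ll use later parses the username, source IP, and port out of each failed-login message. The same parse step can be sketched locally (the sample line is invented, but follows sshd’s message format):&lt;/p&gt;

```shell
# Extract user, ip, and port from a failed-login line, mirroring the
# KQL parse statement used by the SSH brute force rule template
line='Apr 12 11:02:01 vm1 sshd[1042]: Failed password for invalid user test-user from 203.0.113.7 port 52344 ssh2'
printf '%s\n' "$line" \
  | sed -n 's/.*invalid user \([^ ]*\) from \([^ ]*\) port \([0-9]*\).*/user=\1 ip=\2 port=\3/p'
# prints: user=test-user ip=203.0.113.7 port=52344
```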

&lt;p&gt;To confirm that the logs are being collected, go to Azure Sentinel or the Log Analytics workspace and click on &lt;strong&gt;Logs&lt;/strong&gt; in the sidebar. Open an empty query in KQL (Kusto Query Language) mode and run &lt;code&gt;Syslog&lt;/code&gt; to see all the data in the Syslog table. If you don’t see anything yet, you may need to wait 5-15 minutes after creating the data collection rule.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hqsavqng9mmph47gq5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hqsavqng9mmph47gq5w.png" alt="syslog-query-output" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Alert
&lt;/h2&gt;

&lt;p&gt;Rules are the heart of Azure Sentinel: they detect security threats and generate alerts. Now that we’ve installed the Syslog content solution in Azure Sentinel, created a VM, and set up a data collection rule to gather the VM’s Syslog data, let’s create an alert. Navigate to &lt;strong&gt;Analytics&lt;/strong&gt; in the Azure Sentinel sidebar.&lt;/p&gt;

&lt;p&gt;Here, you’ll find &lt;strong&gt;Active rules&lt;/strong&gt;, which are custom or built-in analytics rules currently running in Azure Sentinel, and &lt;strong&gt;Rule templates&lt;/strong&gt;, which are pre-configured, out-of-the-box rule definitions provided by the content solution you’ve installed. These templates can be enabled or customized to create active rules. Click on &lt;strong&gt;Rule templates&lt;/strong&gt;, find the &lt;strong&gt;SSH - Potential Brute Force&lt;/strong&gt; template, and select it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxrvp60p57vtb0wafzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxrvp60p57vtb0wafzu.png" alt="rule-template-query" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each rule template comes with its own predefined rule query, and you can write your own query if you want to build a custom rule. Before we create the active rule, let’s test the query first. Open another empty query, copy the rule query from the SSH brute force rule template, and execute it, just like we did with the &lt;code&gt;Syslog&lt;/code&gt; query earlier. We won’t see any results yet, right? That’s because we haven’t attempted 15 failed SSH logins. To test the query anyway, remove the lines &lt;code&gt;let threshold = 15;&lt;/code&gt; and &lt;code&gt;| where PerHourCount &amp;gt; threshold&lt;/code&gt;, which gives the modified query below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Syslog
| where ProcessName &lt;span class="o"&gt;=&lt;/span&gt;~ &lt;span class="s2"&gt;"sshd"&lt;/span&gt;
| where SyslogMessage contains &lt;span class="s2"&gt;"Failed password for invalid user"&lt;/span&gt;
| parse &lt;span class="nv"&gt;kind&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;relaxed SyslogMessage with &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="s2"&gt;"invalid user "&lt;/span&gt; user &lt;span class="s2"&gt;" from "&lt;/span&gt; ip &lt;span class="s2"&gt;" port"&lt;/span&gt; port &lt;span class="s2"&gt;" ssh2"&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;
// using distinct below as it has been seen that Syslog can duplicate entries depending on implementation
| distinct TimeGenerated, Computer, user, ip, port, SyslogMessage, _ResourceId
| summarize EventTimes &lt;span class="o"&gt;=&lt;/span&gt; make_list&lt;span class="o"&gt;(&lt;/span&gt;TimeGenerated&lt;span class="o"&gt;)&lt;/span&gt;, PerHourCount &lt;span class="o"&gt;=&lt;/span&gt; count&lt;span class="o"&gt;()&lt;/span&gt; by bin&lt;span class="o"&gt;(&lt;/span&gt;TimeGenerated,4h&lt;span class="o"&gt;)&lt;/span&gt;, ip, Computer, user, _ResourceId
| mvexpand EventTimes
| extend EventTimes &lt;span class="o"&gt;=&lt;/span&gt; tostring&lt;span class="o"&gt;(&lt;/span&gt;EventTimes&lt;span class="o"&gt;)&lt;/span&gt;
| summarize StartTime &lt;span class="o"&gt;=&lt;/span&gt; min&lt;span class="o"&gt;(&lt;/span&gt;EventTimes&lt;span class="o"&gt;)&lt;/span&gt;, EndTime &lt;span class="o"&gt;=&lt;/span&gt; max&lt;span class="o"&gt;(&lt;/span&gt;EventTimes&lt;span class="o"&gt;)&lt;/span&gt;, UserList &lt;span class="o"&gt;=&lt;/span&gt; make_set&lt;span class="o"&gt;(&lt;/span&gt;user&lt;span class="o"&gt;)&lt;/span&gt;, ComputerList &lt;span class="o"&gt;=&lt;/span&gt; make_set&lt;span class="o"&gt;(&lt;/span&gt;Computer&lt;span class="o"&gt;)&lt;/span&gt;, ResourceIdList &lt;span class="o"&gt;=&lt;/span&gt; make_set&lt;span class="o"&gt;(&lt;/span&gt;_ResourceId&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="nb"&gt;sum&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;PerHourCount&lt;span class="o"&gt;)&lt;/span&gt; by IPAddress &lt;span class="o"&gt;=&lt;/span&gt; ip
// bringing through single computer and user &lt;span class="k"&gt;if &lt;/span&gt;array only has 1, otherwise, referencing the column and hashing the ComputerList or UserList so we don&lt;span class="s1"&gt;'t get accidental entity matches when reviewing alerts
| extend HostName = iff(array_length(ComputerList) == 1, tostring(ComputerList[0]), strcat("SeeComputerListField","_", tostring(hash(tostring(ComputerList)))))
| extend Account = iff(array_length(ComputerList) == 1, tostring(UserList[0]), strcat("SeeUserListField","_", tostring(hash(tostring(UserList)))))
| extend ResourceId = iff(array_length(ResourceIdList) == 1, tostring(ResourceIdList[0]), strcat("SeeResourceIdListField","_", tostring(hash(tostring(ResourceIdList)))))
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Execute the query, and you should see the one failed SSH login attempt with the non-existent user that you made earlier. You did it though, right? &lt;em&gt;Right?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd31i6od6621a7onolm4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd31i6od6621a7onolm4m.png" alt="ssh-brute-force-query-output" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, back to the rule template. Click on &lt;strong&gt;Create rule&lt;/strong&gt;, and consider updating the description to avoid confusion when reviewing incidents later. In the &lt;strong&gt;Set rule logic&lt;/strong&gt; section, change the threshold from 15 to 5 and set the query scheduling to run every 5 minutes. Leave everything else unchanged and create the active rule. Once created, you’ll find it under &lt;strong&gt;Active rules&lt;/strong&gt; in &lt;strong&gt;Analytics&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97re6jzcf9006l8e9gta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97re6jzcf9006l8e9gta.png" alt="active-rules" width="768" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s test the alert now. Execute the command below (if you’re using Linux); it makes 6 failed login attempts against the server. You may need to install the &lt;code&gt;sshpass&lt;/code&gt; package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1..6&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;sshpass &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'randomPassword'&lt;/span&gt; ssh &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;StrictHostKeyChecking&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no dummy-user@&amp;lt;ip-address&amp;gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After 5 minutes or more, go to Azure Sentinel and click on &lt;strong&gt;Incidents&lt;/strong&gt; in the sidebar. You should see the incident created from the alert triggered by the brute force attempt we simulated. To confirm, you can run the rule query in &lt;strong&gt;Logs&lt;/strong&gt; with the threshold lines kept in, and you should see matching results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppxzcc46blod0qwv255p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppxzcc46blod0qwv255p.png" alt="sentinel-incidents" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;We covered what Azure Sentinel is, its components, and walked through a demo to detect SSH brute force attacks. By setting up Syslog, creating alerts, and simulating threats, we saw how Azure Sentinel efficiently monitors and creates incidents.&lt;/p&gt;

&lt;p&gt;We didn’t get to cover playbooks, but they can be really handy for automated workflows. Playbooks are built on Logic Apps, a no-code/low-code Azure service, and they enable automated responses such as blocking IPs, sending notifications, and much more. They can even forward incident data to Azure Event Hub. If you’re interested in sending Event Hub data to a log shipper like Fluentd, check out my &lt;a href="https://dev.to/ikabbash/configuring-fluentd-to-collect-data-from-azure-event-hub-41g7"&gt;article&lt;/a&gt; on integrating Azure Event Hub with Fluentd.&lt;/p&gt;

&lt;p&gt;In summary, Azure Sentinel is a comprehensive solution for modern security teams, combining SIEM and SOAR functionalities to streamline threat management. It also offers advanced hunting, machine learning, and customizable workbooks for deeper insights.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>sentinel</category>
      <category>siem</category>
    </item>
    <item>
      <title>Configuring Fluentd to Collect Data from Azure Event Hub</title>
      <dc:creator>Ibrahim Kabbash</dc:creator>
      <pubDate>Sun, 30 Mar 2025 15:37:24 +0000</pubDate>
      <link>https://dev.to/ikabbash/configuring-fluentd-to-collect-data-from-azure-event-hub-41g7</link>
      <guid>https://dev.to/ikabbash/configuring-fluentd-to-collect-data-from-azure-event-hub-41g7</guid>
      <description>&lt;p&gt;This guide covers setting up Fluentd to fetch data from Azure Event Hub using Kafka Fluentd &lt;a href="https://github.com/fluent/fluent-plugin-kafka" rel="noopener noreferrer"&gt;plugin&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Understanding Azure Event Hub&lt;/li&gt;
&lt;li&gt;What is Fluentd?&lt;/li&gt;
&lt;li&gt;
Setting Up Fluentd for Azure Event Hub

&lt;ul&gt;
&lt;li&gt;Creating Event Hub&lt;/li&gt;
&lt;li&gt;Fluentd Docker Compose Setup&lt;/li&gt;
&lt;li&gt;Using the Kafka Fluentd Plugin&lt;/li&gt;
&lt;li&gt;Testing and Verifying Data Collection&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Azure Portal account.&lt;/li&gt;
&lt;li&gt;Docker.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Understanding Azure Event Hub
&lt;/h1&gt;

&lt;p&gt;Azure Event Hub is a real-time data streaming service similar to Apache Kafka. It supports the Kafka protocol, allowing Kafka applications to send and receive messages without modification. Like Kafka, it handles high-throughput event ingestion, partitioning, and consumer groups.&lt;/p&gt;

&lt;p&gt;Event streaming platforms like Azure Event Hub and Kafka follow a publish-subscribe model. Producers send messages to a specific topic (or event hub), where they are stored in partitions. Consumers, which are subscribed to the topic, read messages from these partitions, processing them in real-time or batch mode. Each consumer group tracks its offset to ensure messages are processed efficiently without duplication.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctznoze8z6539pja71x3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctznoze8z6539pja71x3.png" alt="kafka-architecture" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TL;DR: Producers generate data to a specific topic, consumers are subscribed to the topic and receive the data.&lt;/p&gt;
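&lt;p&gt;The offset mechanics can be sketched in plain shell: treat the topic as an append-only list of events, and let each consumer group remember how far it has read (the event names here are invented for illustration):&lt;/p&gt;

```shell
# A topic is effectively an append-only log of events
events='event-1
event-2
event-3'

# Consumer group A has already committed offset 1 (it processed event-1)
offset=1

# On its next poll it resumes from the line after its committed offset
printf '%s\n' "$events" | tail -n +"$((offset + 1))"
# prints event-2 and event-3; another group with its own offset would
# read the same log independently
```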

&lt;p&gt;The following table maps concepts between Kafka and Event Hubs.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Kafka Concept&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Event Hubs Concept&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cluster&lt;/td&gt;
&lt;td&gt;Namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Topic&lt;/td&gt;
&lt;td&gt;An event hub&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Partition&lt;/td&gt;
&lt;td&gt;Partition&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Consumer Group&lt;/td&gt;
&lt;td&gt;Consumer Group&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Offset&lt;/td&gt;
&lt;td&gt;Offset&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Note that you’ll need at least the Standard &lt;a href="https://learn.microsoft.com/en-us/azure/event-hubs/compare-tiers" rel="noopener noreferrer"&gt;tier&lt;/a&gt;, because the Basic tier has no Kafka support.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Fluentd?
&lt;/h1&gt;

&lt;p&gt;Fluentd is an open-source data collector that unifies logging by collecting, transforming, and routing log data across different systems. It can collect data from a wide range of sources, such as files, databases, and network protocols, and send it to an equally wide range of destinations.&lt;/p&gt;

&lt;p&gt;This Fluentd config sets up an HTTP input on port 9880 to receive data and writes it out to a file as JSON. The tag (&lt;code&gt;tag-name&lt;/code&gt; here) is used to route logs, ensuring data flows from the source to the correct output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;&lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;tag-name]&amp;gt;
  @type http
  port 9880
  &lt;span class="nb"&gt;bind &lt;/span&gt;0.0.0.0
  body_size_limit 32m
  keepalive_timeout 10s
&amp;lt;/source&amp;gt;

&amp;lt;match &lt;span class="o"&gt;[&lt;/span&gt;tag-name]&amp;gt;
  @type file
  path /output/alert-logs
  append &lt;span class="nb"&gt;true
  &lt;/span&gt;format json
&amp;lt;/match&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
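&lt;p&gt;With this config running, you can exercise it locally. Fluentd’s HTTP input takes the tag from the URL path, so posting to &lt;code&gt;/tag-name&lt;/code&gt; routes the record to the matching file output above (the JSON payload is just an example):&lt;/p&gt;

```shell
# Send one test record to the HTTP source; the URL path sets the Fluentd tag
curl -X POST -d 'json={"event":"login","status":"ok"}' http://localhost:9880/tag-name
```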



&lt;p&gt;Fluentd can also parse logs, format them, and route them to more than just files—it can send data to databases, cloud storage, or even message queues, making it great for centralized logging.&lt;/p&gt;

&lt;p&gt;We’ll follow the same pattern as the config above, except the source will be a Kafka input plugin.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting Up Fluentd for Azure Event Hub
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Creating Event Hub
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to your Azure portal, search for Event Hubs, then click on “Create event hubs namespace.”&lt;/li&gt;
&lt;li&gt;Pick or create a resource group.&lt;/li&gt;
&lt;li&gt;Pick a unique namespace name.&lt;/li&gt;
&lt;li&gt;Choose the Standard pricing tier.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Keep everything else default and click on “Review + create.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo94598zpbn54yd0h8bue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo94598zpbn54yd0h8bue.png" alt="create-ns" width="628" height="702"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to the event hub namespace resource you created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the sidebar, click on “Event Hubs” under entities and create an Event Hub with a name (I named mine test-hub).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcpm40gazq2lrxb1eppa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcpm40gazq2lrxb1eppa.png" alt="create-hub" width="599" height="482"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the event hub you just created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the sidebar, click on “Shared access policies” under settings and add a new policy named &lt;code&gt;listen-policy&lt;/code&gt; with listen access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After creation, copy one of the connection strings and keep it somewhere for later steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ymvq3ogax8ny93t4jys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ymvq3ogax8ny93t4jys.png" alt="create-sas" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
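&lt;p&gt;The same portal steps can also be scripted with the Azure CLI if you prefer. The names below are placeholders (the namespace name must be globally unique), and options may vary slightly between CLI versions:&lt;/p&gt;

```shell
# Namespace (Standard tier for Kafka support), then the event hub itself
az eventhubs namespace create \
  --resource-group demo-rg --name demo-eh-ns --sku Standard

az eventhubs eventhub create \
  --resource-group demo-rg --namespace-name demo-eh-ns --name test-hub

# Listen-only shared access policy on the event hub, not the namespace
az eventhubs eventhub authorization-rule create \
  --resource-group demo-rg --namespace-name demo-eh-ns \
  --eventhub-name test-hub --name listen-policy --rights Listen

# Print the connection strings to copy for later steps
az eventhubs eventhub authorization-rule keys list \
  --resource-group demo-rg --namespace-name demo-eh-ns \
  --eventhub-name test-hub --name listen-policy
```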

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Be sure you’ve created the shared access policy in the event hub itself, not the event hub namespace.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Fluentd Docker Compose Setup
&lt;/h2&gt;

&lt;p&gt;You can use these files from my &lt;a href="https://github.com/ikabbash/tutorials/tree/main/fluentd-azure-eventhub" rel="noopener noreferrer"&gt;tutorials repo&lt;/a&gt; or create the following files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;Dockerfile&lt;/code&gt; for Fluentd, installing the Kafka plugin.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; fluent/fluentd:v1.17-debian-1&lt;/span&gt;
&lt;span class="c"&gt;# Install Kafka plugin&lt;/span&gt;
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; root&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;fluent-gem &lt;span class="nb"&gt;install &lt;/span&gt;fluent-plugin-kafka
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; fluent&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fluentd &lt;code&gt;docker-compose.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;fluentd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./fluentd.conf:/fluentd/etc/fluentd.conf:ro&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./logs:/var/log/fluentd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;code&gt;logs&lt;/code&gt; directory and change its ownership to Fluentd’s container UID.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;logs &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo chown &lt;/span&gt;999:999 logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Using the Kafka Fluentd Plugin
&lt;/h2&gt;

&lt;p&gt;The plugin can both consume and produce data; in our case, we’ll configure it to consume data from Event Hub using &lt;a href="https://learn.microsoft.com/en-us/azure/event-hubs/azure-event-hubs-apache-kafka-overview#shared-access-signature-sas" rel="noopener noreferrer"&gt;Shared Access Signatures (SAS)&lt;/a&gt; for delegated access to Event Hubs for Kafka resources.&lt;/p&gt;

&lt;p&gt;Use the &lt;code&gt;fluentd.conf&lt;/code&gt; below and update it with your Event Hub’s name, Event Hub namespace name, and connection string. In this article’s case, &lt;code&gt;HUB_NAME&lt;/code&gt; is &lt;code&gt;test-hub&lt;/code&gt; and the namespace is &lt;code&gt;test1Mx4.servicebus.windows.net&lt;/code&gt;; replace the connection string with your own.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;&lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  @type kafka
  brokers &lt;span class="o"&gt;[&lt;/span&gt;EVENT_HUB_NAMESPACE].servicebus.windows.net:9093
  topics &lt;span class="o"&gt;[&lt;/span&gt;HUB_NAME]
  username &lt;span class="nv"&gt;$ConnectionString&lt;/span&gt;
  password &lt;span class="o"&gt;[&lt;/span&gt;EVENT_HUB_CONNECTION_STRING]
  ssl_ca_certs_from_system &lt;span class="nb"&gt;true
  &lt;/span&gt;format json
&amp;lt;/source&amp;gt;

&amp;lt;match &lt;span class="o"&gt;[&lt;/span&gt;HUB_NAME].&lt;span class="k"&gt;**&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  @type file
  path /var/log/fluentd/
  append &lt;span class="nb"&gt;true&lt;/span&gt;
  &amp;lt;format&amp;gt;
      @type json
  &amp;lt;/format&amp;gt;
&amp;lt;/match&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should look like this after updating the &lt;code&gt;fluentd.conf&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;&lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  @type kafka
  brokers test1Mx4.servicebus.windows.net:9093
  topics test-hub
  username &lt;span class="nv"&gt;$ConnectionString&lt;/span&gt;
  password &lt;span class="nv"&gt;Endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sb://test1mx4.servicebus.windows.net/&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nv"&gt;SharedAccessKeyName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;listen-policy&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nv"&gt;SharedAccessKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;secret-key&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nv"&gt;EntityPath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;test-hub
  ssl_ca_certs_from_system &lt;span class="nb"&gt;true
  &lt;/span&gt;format json
&amp;lt;/source&amp;gt;

&amp;lt;match test-hub.&lt;span class="k"&gt;**&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  @type file
  path /var/log/fluentd/
  append &lt;span class="nb"&gt;true&lt;/span&gt;
  &amp;lt;format&amp;gt;
      @type json
  &amp;lt;/format&amp;gt;
&amp;lt;/match&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
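&lt;p&gt;When filling in &lt;code&gt;fluentd.conf&lt;/code&gt;, the broker host and policy name can be read straight off the connection string. A quick sketch (the connection string below is a fabricated example, not a real key):&lt;/p&gt;

```shell
# Parse an Event Hub connection string into the pieces fluentd.conf needs.
# The string below is a made-up example for illustration only.
conn='Endpoint=sb://test1mx4.servicebus.windows.net/;SharedAccessKeyName=listen-policy;SharedAccessKey=abc123=;EntityPath=test-hub'

# Namespace host (the Kafka broker) and the SAS policy name
host=$(printf '%s' "$conn" | sed -n 's|.*Endpoint=sb://\([^/;]*\)/.*|\1|p')
policy=$(printf '%s' "$conn" | sed -n 's|.*SharedAccessKeyName=\([^;]*\);.*|\1|p')

echo "brokers ${host}:9093"   # → brokers test1mx4.servicebus.windows.net:9093
echo "policy ${policy}"       # → policy listen-policy
```

The whole connection string (not just the key) goes in as the Kafka password, with the literal username &lt;code&gt;$ConnectionString&lt;/code&gt;, as shown in the config above.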



&lt;h2&gt;
  
  
  &lt;strong&gt;Testing and Verifying Data Collection&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After setting up the files, run &lt;code&gt;docker compose up --build&lt;/code&gt; and confirm the container is running, e.g. via &lt;code&gt;docker ps&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;CONTAINER ID   IMAGE                            COMMAND                  CREATED              STATUS              PORTS                 NAMES
73654ee6e84b   fluentd-azure-eventhub-fluentd   &lt;span class="s2"&gt;"tini -- /bin/entryp…"&lt;/span&gt;   About a minute ago   Up About a minute   5140/tcp, 24224/tcp   fluentd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To test, open your Event Hub’s Data Explorer from the sidebar and click "Send Events." Choose a pre-canned dataset as an example, then click "Send" to produce it to the hub for Fluentd to consume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66j4tbofmhx8i01cs6cm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66j4tbofmhx8i01cs6cm.png" alt="data-explorer" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the data is sent to Event Hub, check your Fluentd logs directory; Fluentd should generate buffer files containing the data it consumed from Event Hub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2a9qjmcdg44p7wcoxia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2a9qjmcdg44p7wcoxia.png" alt="logs-file" width="619" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In this guide, we set up Fluentd to fetch data from Azure Event Hub using the Kafka Fluentd plugin. We covered the basics of Azure Event Hub, configured Fluentd with Docker, and verified data collection by producing events and checking Fluentd’s log directory for buffered data.&lt;/p&gt;

&lt;p&gt;With this setup, you can integrate Fluentd into your logging pipeline to collect, process, and route event data efficiently. For example, you can create a playbook in Azure Sentinel that exports incident data to Event Hub, with Fluentd acting as a log shipper to other destinations. You could apply the same approach to Azure Log Analytics, exporting its data to Event Hub for Fluentd to forward.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>fluentd</category>
      <category>kafka</category>
      <category>datapipline</category>
    </item>
    <item>
      <title>Customize Your Linux Desktop: A Beginner’s Ricing Guide</title>
      <dc:creator>Ibrahim Kabbash</dc:creator>
      <pubDate>Mon, 24 Mar 2025 14:29:24 +0000</pubDate>
      <link>https://dev.to/ikabbash/customize-your-linux-desktop-a-beginners-ricing-guide-3pp4</link>
      <guid>https://dev.to/ikabbash/customize-your-linux-desktop-a-beginners-ricing-guide-3pp4</guid>
      <description>&lt;p&gt;Ever wanted to customize your Linux desktop but felt lost in all the components and choices? You’re not alone—I’ve been there too. This guide is for beginners and anyone looking to rice their setup, covering most things you need to know, from window managers and bars to shells and more, no matter what distro you use.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Note&lt;/li&gt;
&lt;li&gt;
Desktop Environments vs Window Managers

&lt;ul&gt;
&lt;li&gt;Desktop Environments&lt;/li&gt;
&lt;li&gt;Window Managers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Ricing Components

&lt;ul&gt;
&lt;li&gt;Bar&lt;/li&gt;
&lt;li&gt;File Manager&lt;/li&gt;
&lt;li&gt;
Terminal

&lt;ul&gt;
&lt;li&gt;Shell&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Application Launcher&lt;/li&gt;

&lt;li&gt;Compositors&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Linux knowledge is preferable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Note
&lt;/h1&gt;

&lt;p&gt;Regarding the display servers (X11 and Wayland), not every component is compatible with Wayland. Some apps, window managers, and ricing tools may work differently or not at all compared to X11. Check compatibility before choosing your setup. I recommend looking them up and doing your research while ricing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Desktop Environments vs Window Managers
&lt;/h1&gt;

&lt;p&gt;Before diving in, you’ll need to decide whether to base your setup on a desktop environment or a window manager. Desktop environments come with a structured layout and built-in tools, making them easier to set up but more restrictive in customization. Window managers, on the other hand, give you full control over your system’s look and behavior, allowing you to customize everything from keybindings to window management—but they require more manual configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Desktop Environments
&lt;/h2&gt;

&lt;p&gt;When ricing a desktop environment, keep in mind that customization is often limited by its design: some environments require extensions or third-party tools, and updates may break custom themes or configurations.&lt;/p&gt;

&lt;p&gt;Some of the popular desktop environments are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GNOME: Minimalist and modern but not very customizable without extensions.&lt;/li&gt;
&lt;li&gt;KDE: Customizable and feature-rich.&lt;/li&gt;
&lt;li&gt;XFCE: Lightweight and fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Take your time looking into each of them and pick the one best suited for you if you’re going the desktop environment route.&lt;/p&gt;

&lt;h2&gt;
  
  
  Window Managers
&lt;/h2&gt;

&lt;p&gt;When ricing a window manager, you have near-complete control, but setup requires manual configuration. Many window managers lack built-in features like launchers, requiring extra tools and time to set up. Config complexity varies by window manager, but it pays off.&lt;/p&gt;

&lt;p&gt;Some of the popular window managers are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Qtile: Written in Python, highly configurable and easy to extend, but requires scripting knowledge.&lt;/li&gt;
&lt;li&gt;AwesomeWM: Highly customizable, but its Lua-based configuration has a learning curve.&lt;/li&gt;
&lt;li&gt;Hyprland: A modern Wayland compositor with smooth animations, but Wayland’s compatibility limitations can cause issues (as of writing this).&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Ricing Components&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Ricing involves customizing various components however you want. Key components include bars for system info, file managers for navigation, terminals, and so on. We’ll look into each of them and their popular options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bar
&lt;/h2&gt;

&lt;p&gt;The bar is a panel that displays system information, workspaces, open windows, and more.&lt;/p&gt;

&lt;p&gt;For bar customization, you can use the one from your desktop environment, configure your window manager’s built-in bar (if available), or install a third-party option.&lt;/p&gt;

&lt;p&gt;Some popular third-party options (compatibility varies between X11 and Wayland, so check support before choosing) include Polybar, Waybar, and Latte-dock.&lt;/p&gt;

&lt;h2&gt;
  
  
  File Manager
&lt;/h2&gt;

&lt;p&gt;The file manager is where you navigate, organize, and manage files and directories, using either a graphical interface or a terminal-based alternative. Popular options include Thunar, Dolphin, and Nautilus for GUI, or ranger for terminal-based management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terminal
&lt;/h2&gt;

&lt;p&gt;The terminal is where you run commands and manage your system. Popular options include Alacritty, which is fast and minimal with GPU acceleration, Kitty, which also uses GPU acceleration but offers extra features like tab management, and Terminator, which focuses on tiling multiple terminal panes in one window.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shell
&lt;/h3&gt;

&lt;p&gt;You can also customize your shell to improve functionality and appearance. Popular options include Bash (the default on most systems), Zsh, which supports plugins, and Fish, which has user-friendly syntax. You can change how your shell looks using customizable prompts like Starship, no matter which shell you’re using.&lt;/p&gt;
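&lt;p&gt;As an example of the last point, enabling Starship in Bash is a one-line change (assuming the &lt;code&gt;starship&lt;/code&gt; binary is already installed, per its official docs):&lt;/p&gt;

```shell
# Append to ~/.bashrc to initialize the Starship prompt on shell startup
eval "$(starship init bash)"
```

Zsh and Fish have equivalent one-liners; check Starship’s documentation for your shell.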

&lt;h2&gt;
  
  
  Application Launcher
&lt;/h2&gt;

&lt;p&gt;The application launcher lets you find and run apps instantly without navigating menus. Popular options include Rofi and dmenu.&lt;/p&gt;

&lt;h2&gt;
  
  
  Compositors
&lt;/h2&gt;

&lt;p&gt;A compositor is responsible for rendering and managing window effects, transparency, and animations. Note, though, that it’s not ideal for gaming, as its effects can cause frame drops.&lt;/p&gt;

&lt;p&gt;One of the popular compositors is Picom, which is a lightweight X11 compositor with transparency, blur, and shadow effects.&lt;/p&gt;
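&lt;p&gt;As a taste of what Picom configuration looks like, here’s a minimal sketch of a &lt;code&gt;~/.config/picom/picom.conf&lt;/code&gt; (option names from Picom’s documentation; the values are a matter of taste):&lt;/p&gt;

```conf
# Minimal picom.conf sketch
backend = "glx";          # GPU-accelerated backend
shadow = true;            # drop shadows on windows
fading = true;            # fade windows in and out
inactive-opacity = 0.9;   # slight transparency for unfocused windows
corner-radius = 10;       # rounded window corners
```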

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Ricing your Linux setup is all about making it your own, both in looks and functionality. Whether you choose a desktop environment or a window manager, understanding the key components like bars, terminals, and shells will help you build a setup that suits your workflow. Take your time to experiment, research, and tweak configurations according to your needs.&lt;/p&gt;

&lt;p&gt;You can find inspiration for ricing by exploring community setups on forums and GitHub dotfiles, helping you discover new ideas and tools.&lt;/p&gt;

&lt;p&gt;The screenshots below showcase my setup using Qtile as the window manager with a customized built-in bar, Alacritty as the terminal with Bash and Starship, and Rofi as the application launcher. Feel free to take inspiration from it, and you can find my &lt;a href="https://github.com/ikabbash/dotfiles/tree/qtile" rel="noopener noreferrer"&gt;dotfiles&lt;/a&gt; on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70s6u3xbqkoo4zsgmia5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70s6u3xbqkoo4zsgmia5.png" alt="desktop-1" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foe7f13oq7wewppzwb1sy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foe7f13oq7wewppzwb1sy.png" alt="desktop-2" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>archlinux</category>
    </item>
    <item>
      <title>Helm On Azure Container Registry: The Simple Way</title>
      <dc:creator>Ibrahim Kabbash</dc:creator>
      <pubDate>Sat, 02 Dec 2023 12:44:20 +0000</pubDate>
      <link>https://dev.to/ikabbash/helm-on-azure-container-registry-the-simple-way-5dm9</link>
      <guid>https://dev.to/ikabbash/helm-on-azure-container-registry-the-simple-way-5dm9</guid>
      <description>&lt;p&gt;This article walks through how to push and pull Helm charts on Azure Container Registry (ACR) locally. The same concept can also be used on a pipeline for automation.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Note&lt;/li&gt;
&lt;li&gt;Pushing To ACR&lt;/li&gt;
&lt;li&gt;Pulling From ACR&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Azure Portal account&lt;/li&gt;
&lt;li&gt;Helm installed&lt;/li&gt;
&lt;li&gt;Azure Container Registry created&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Note
&lt;/h1&gt;

&lt;p&gt;The Helm version I’m using is 3.10. There is an &lt;a href="https://github.com/helm/helm/issues/12423" rel="noopener noreferrer"&gt;issue&lt;/a&gt; with Helm 3.13 where it is unable to push or pull charts to ACR even after logging into the registry successfully, failing with a 401 status code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Pushing To ACR
&lt;/h1&gt;

&lt;p&gt;Navigate to your ACR (mine is named blogacr21). On the sidebar, scroll down and click Tokens (right under Repository permissions), then create a new token. Define the token name and scope map (&lt;code&gt;_repositories_push&lt;/code&gt; for the current case). Click your newly created token and generate a password for yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55mwx2amrw6q0plo0fk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55mwx2amrw6q0plo0fk8.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyr9r7wer09egfjejcz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyr9r7wer09egfjejcz4.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x32l4qa5v9hztgm0ruw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x32l4qa5v9hztgm0ruw.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log in to your ACR with your created token using the following command (blogacr21 is my acr_name, PushingToken is my token_name). You’ll be asked for a password; paste the one you generated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm registry login &amp;lt;acr_name&amp;gt;.azurecr.io/helm &lt;span class="nt"&gt;--username&lt;/span&gt; &amp;lt;token_name&amp;gt;
&lt;span class="c"&gt;# Login Succeeded&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In case you don’t have a Helm chart, create one and package it so it can be pushed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm create demo-chart
helm package demo-chart
&lt;span class="c"&gt;# Successfully packaged chart and saved it to: /path/to/demo-chart-0.1.0.tgz&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push the chart using the &lt;code&gt;helm push&lt;/code&gt; command. The push succeeded when it returns a hash digest; you should find the same digest when you check your repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm push demo-chart-0.1.0.tgz oci://blogacr21.azurecr.io/helm
&lt;span class="c"&gt;# Pushed: blogacr21.azurecr.io/helm/demo-chart:0.1.0&lt;/span&gt;
&lt;span class="c"&gt;# Digest: sha256:d6aecffcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn56v1os5vphfitwcajfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn56v1os5vphfitwcajfb.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Pulling From ACR
&lt;/h1&gt;

&lt;p&gt;Just like in the previous steps for pushing to the repository, you’ll need to create another token with pull permissions (you could use the admin scope map, but that wouldn’t be good security practice if the token got compromised). ACR lets you create a custom scope map that targets a specific repository rather than all repositories (an extra best practice!). Go to Scope maps and create a new scope map. For pull-only permissions, check just the content/read and metadata/read permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frx2sk85smmk4veoesw1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frx2sk85smmk4veoesw1a.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new token and assign it the scope map you just created for pulling the Helm chart, then generate a new password and log in to the registry with the new token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n038zewahbtyv6do96h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n038zewahbtyv6do96h.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm registry login &amp;lt;acr_name&amp;gt;.azurecr.io/helm &lt;span class="nt"&gt;--username&lt;/span&gt; &amp;lt;token_name&amp;gt;
&lt;span class="c"&gt;# Login Succeeded&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the &lt;code&gt;helm pull&lt;/code&gt; command succeeds, you’ll find the packaged chart downloaded.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm pull oci://blogacr21.azurecr.io/helm/demo-chart
&lt;span class="c"&gt;# Pulled: blogacr21.azurecr.io/helm/demo-chart:0.1.0&lt;/span&gt;
&lt;span class="c"&gt;# Digest: sha256:d6aecffcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you try pulling another Helm chart (say helm/demo-chart2) using the same pull token, which is scoped only to the demo-chart repository, the request won’t be authorized and you’ll receive a 401 status code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9abggzuf023rb1vswqmo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9abggzuf023rb1vswqmo.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm pull oci://blogacr21.azurecr.io/helm/demo-chart2
&lt;span class="c"&gt;# Error: GET "https://blogacr21.azurecr.io/v2/helm/demo-chart2/tags/list": unexpected status code 401: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pushing the same chart using the pull token isn’t authorized either, because the token lacks write permissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm push demo-chart-0.2.0.tgz oci://blogacr21.azurecr.io/helm
&lt;span class="c"&gt;# Error: server message: insufficient_scope: authorization failed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;There are other ways to use ACR for storing your Helm charts. However, tokens are often the quickest way to set things up, even in pipelines for automation, and with scope maps you avoid the risk of the entire registry being compromised.&lt;/p&gt;
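&lt;p&gt;For the pipeline scenario, the login step can be made non-interactive with &lt;code&gt;--password-stdin&lt;/code&gt; so the generated token password never appears in command history. A sketch (the environment variable name is a placeholder for whatever your CI secret store injects):&lt;/p&gt;

```shell
# Non-interactive registry login for CI; ACR_TOKEN_PASSWORD is a placeholder
# secret injected by your pipeline, PullingToken the token created earlier.
printf '%s' "$ACR_TOKEN_PASSWORD" | \
  helm registry login blogacr21.azurecr.io --username PullingToken --password-stdin
```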

</description>
      <category>azure</category>
      <category>helm</category>
      <category>tutorial</category>
      <category>acr</category>
    </item>
    <item>
      <title>Configure mTLS with Nginx and EasyRSA</title>
      <dc:creator>Ibrahim Kabbash</dc:creator>
      <pubDate>Sat, 18 Nov 2023 11:09:49 +0000</pubDate>
      <link>https://dev.to/ikabbash/configure-mtls-with-nginx-and-easyrsa-59ek</link>
      <guid>https://dev.to/ikabbash/configure-mtls-with-nginx-and-easyrsa-59ek</guid>
      <description>&lt;p&gt;The goal of this article is to explain the concept of Mutual Transport Layer Security (mTLS) protocol, how it works, the role of each component and all with an example as a proof of concept.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;mTLS Anatomy&lt;/li&gt;
&lt;li&gt;
Demo

&lt;ul&gt;
&lt;li&gt;Generate certificates and keys&lt;/li&gt;
&lt;li&gt;Setup Nginx&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Testing&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Nginx knowledge&lt;/li&gt;
&lt;li&gt;Secure Sockets Layer (SSL) / Transport Layer Security (TLS) and authentication knowledge&lt;/li&gt;
&lt;li&gt;A virtual machine with Nginx and EasyRSA installed (preferably from the repo)&lt;/li&gt;
&lt;li&gt;Linux knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  mTLS Anatomy
&lt;/h1&gt;

&lt;p&gt;Before implementing the demo, let’s first get a good idea of mTLS and how it compares to the usual SSL/TLS flow. SSL/TLS is primarily used to secure communication between a client (such as a web browser) and a server (such as a web server) and focuses on server authentication, where the client verifies the server’s identity. mTLS, on the other hand, is designed for mutual authentication: both the client and the server are required to present digital certificates to prove their identities, ensuring both parties can trust each other. mTLS use cases range from VPNs to scenarios such as IoT device authentication.&lt;/p&gt;

&lt;p&gt;In mTLS, both the client and the server are verified and authenticated using digital certificates through a trusted Certificate Authority (CA). The following figure illustrates the request flow and how trust is established.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2Fe06215d6-7d6a-4aca-bd15-24d5a8dbf5cf%2F9b1cd2c6-da2a-4a58-9a0d-747346a96951%2FFlowchart.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2Fe06215d6-7d6a-4aca-bd15-24d5a8dbf5cf%2F9b1cd2c6-da2a-4a58-9a0d-747346a96951%2FFlowchart.png" alt="Flowchart.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
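&lt;p&gt;The trust check at the heart of this flow is an ordinary certificate verification against the CA. EasyRSA wraps OpenSSL, so the idea can be sketched with OpenSSL directly (throwaway names and a one-day validity, for illustration only):&lt;/p&gt;

```shell
# Sketch: issue a client certificate from a throwaway CA, then verify it
# the way an mTLS peer would. Run in a scratch directory.
cd "$(mktemp -d)"

# 1. Self-signed CA (what EasyRSA's build-ca produces under the hood)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=demo-ca"

# 2. Client key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=demo-client"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 1

# 3. Verification step: succeeds only for certificates issued by this CA
result=$(openssl verify -CAfile ca.crt client.crt)
echo "$result"   # → client.crt: OK
```

In the demo below, Nginx performs this same verification on the client’s certificate, and the client performs it on the server’s.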

&lt;h1&gt;
  
  
  Demo
&lt;/h1&gt;

&lt;p&gt;We’ll need a server running Nginx, and we’ll generate a self-signed certificate authority. Using that certificate authority, we’ll also create server and client certificates and keys. All the certificates will be created using EasyRSA.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generate certificates and keys
&lt;/h2&gt;

&lt;p&gt;In your EasyRSA directory (where you installed it), create a public key infrastructure (PKI) if you haven’t already; if you have, skip this step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./easyrsa init-pki
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, generate the CA certificate using the following command (if you want to add a passphrase, remove the &lt;code&gt;nopass&lt;/code&gt; option).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./easyrsa build-ca nopass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that you have a self-signed CA certificate, you can begin creating your own server and client certificates. We’ll create a server certificate for Nginx using the following command so it can serve over SSL, then copy the server certificate and key to Nginx’s SSL directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./easyrsa build-server-full server nopass
&lt;span class="nb"&gt;mkdir&lt;/span&gt; /etc/nginx/ssl
&lt;span class="nb"&gt;cp&lt;/span&gt; /path/to/easyrsa/pki/issued/server.crt /etc/nginx/ssl
&lt;span class="nb"&gt;cp&lt;/span&gt; /path/to/easyrsa/pki/private/server.key /etc/nginx/ssl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server’s done; now you can create the client’s certificate and key, which will be named client (or any other name you’d like). Be sure to copy them from EasyRSA’s issued and private directories to your local machine for testing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./easyrsa build-client-full client nopass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
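&lt;p&gt;For instance, copying them down with &lt;code&gt;scp&lt;/code&gt; could look like the sketch below (the user, host, and paths are placeholders for your own setup):&lt;/p&gt;

```shell
# copy the client certificate and key from the server to your local machine
# (placeholder user/host/paths; adjust to where your EasyRSA lives)
scp user@your-server:/path/to/easyrsa/pki/issued/client.crt .
scp user@your-server:/path/to/easyrsa/pki/private/client.key .
```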



&lt;p&gt;Lastly, generate a Certificate Revocation List (CRL); why it’s needed will be explained further in the testing section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./easyrsa gen-crl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup Nginx
&lt;/h2&gt;

&lt;p&gt;To use mTLS, we’ll configure Nginx to require client authentication, ensuring that clients present a certificate issued by our CA when they connect. Make sure to specify the locations of your CA root certificate and the CRL below, as they’re the crucial components for verification.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;server &lt;span class="o"&gt;{&lt;/span&gt;
        listen 80&lt;span class="p"&gt;;&lt;/span&gt;
        server_name haha-example-server.com&lt;span class="p"&gt;;&lt;/span&gt;
        location / &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;return &lt;/span&gt;301 https://&lt;span class="nv"&gt;$host$request_uri&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

server &lt;span class="o"&gt;{&lt;/span&gt;
  listen 443 ssl&lt;span class="p"&gt;;&lt;/span&gt;
  server_name haha-example-server.com&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c"&gt;# server's certificate&lt;/span&gt;
  ssl_certificate     /etc/nginx/ssl/server.crt&lt;span class="p"&gt;;&lt;/span&gt;
  ssl_certificate_key /etc/nginx/ssl/server.key&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c"&gt;# specify path to CA&lt;/span&gt;
  ssl_client_certificate /path/to/easyrsa/pki/ca.crt&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c"&gt;# specify path to CRL&lt;/span&gt;
  ssl_crl /path/to/easyrsa/pki/crl.pem&lt;span class="p"&gt;;&lt;/span&gt;
  ssl_verify_client  on&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c"&gt;# optional in case you wanted to troubleshoot&lt;/span&gt;
  access_log /var/log/nginx/mtls.access.log&lt;span class="p"&gt;;&lt;/span&gt;
  error_log /var/log/nginx/mtls.error.log&lt;span class="p"&gt;;&lt;/span&gt;

  location / &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return &lt;/span&gt;200 OK&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;ssl_client_certificate&lt;/code&gt; directive points to the CA certificate used to verify client certificates presented during mTLS, while &lt;code&gt;ssl_crl&lt;/code&gt; lets Nginx check whether a client certificate has been revoked by the CA before accepting it. Finally, &lt;code&gt;ssl_verify_client&lt;/code&gt; determines whether the server verifies client certificates during the mTLS handshake; it can also be set to &lt;code&gt;off&lt;/code&gt; or &lt;code&gt;optional&lt;/code&gt;.&lt;/p&gt;
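&lt;p&gt;As a side note, a hypothetical variant using &lt;code&gt;ssl_verify_client optional&lt;/code&gt; would let requests in without a certificate while exposing the verification result through Nginx’s built-in &lt;code&gt;$ssl_client_verify&lt;/code&gt; variable, so you could decide per location (this is a sketch, not part of the demo config above):&lt;/p&gt;

```shell
# sketch only: inside the server block, accept requests without a client
# certificate, but gate a specific location on the verification result
ssl_verify_client optional;

location /private/ {
    # $ssl_client_verify is "SUCCESS" only when a valid client certificate was presented
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
    return 200 OK;
}
```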

&lt;h1&gt;
  
  
  Testing
&lt;/h1&gt;

&lt;p&gt;We’ll make an authenticated request with the &lt;code&gt;curl&lt;/code&gt; command using the client’s certificate and private key. Make sure to add an entry to your hosts file mapping the hostname to the IP of the machine where you configured the web server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-Lks&lt;/span&gt; &lt;span class="nt"&gt;--cert&lt;/span&gt; /path/to/client.crt &lt;span class="nt"&gt;--key&lt;/span&gt; /path/to/client.key https://haha-example-server.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
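&lt;p&gt;On Linux or macOS, the hosts entry could be added like this (the IP address below is a placeholder; use your server’s actual IP):&lt;/p&gt;

```shell
# map the demo hostname to your web server's IP (placeholder address)
echo "203.0.113.10 haha-example-server.com" | sudo tee -a /etc/hosts
```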



&lt;p&gt;The &lt;code&gt;curl&lt;/code&gt; command’s options explained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-L&lt;/code&gt; follows redirects by redoing the request at the new location&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-k&lt;/code&gt; explicitly allows "insecure" SSL connections; it’s needed here because the server’s certificate comes from our own CA, which curl doesn’t trust by default (alternatively, pass the CA with &lt;code&gt;--cacert&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-s&lt;/code&gt; silent mode, which hides the progress bar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If all is well, you should get an &lt;code&gt;OK&lt;/code&gt; response, meaning the mTLS verification succeeded! Otherwise, you’ll get the following response if the client certificate has been revoked, was issued by a different CA, or has expired.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;title&amp;gt;400 The SSL certificate error&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;400 Bad Request&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;center&amp;gt;The SSL certificate error&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx/1.24.0&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What if you don’t want that client to be able to authenticate anymore? Simply go back to your web server’s EasyRSA directory and execute the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;md5sum&lt;/span&gt; /path/to/easyrsa/pki/crl.pem
./easyrsa revoke client
./easyrsa gen-crl
nginx &lt;span class="nt"&gt;-s&lt;/span&gt; reload
&lt;span class="nb"&gt;md5sum&lt;/span&gt; /path/to/easyrsa/pki/crl.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To explain the role of the CRL file further: if you run &lt;code&gt;md5sum&lt;/code&gt; before and after revoking, you’ll notice the file’s hash is different, because the file changed. The CRL contains a list of revoked certificates, allowing Nginx to check the revocation status of a certificate when a request comes in. If you try the &lt;code&gt;curl&lt;/code&gt; command with the same certificate again, it should give you a 400 status code.&lt;/p&gt;
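&lt;p&gt;If you’re curious what actually changed inside the file, &lt;code&gt;openssl&lt;/code&gt; can print the CRL’s contents, including the serial numbers of the revoked certificates:&lt;/p&gt;

```shell
# inspect the CRL: issuer, next update time, and revoked serial numbers
openssl crl -in /path/to/easyrsa/pki/crl.pem -noout -text
```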

&lt;p&gt;If you’d like to create a certificate that expires tomorrow using EasyRSA, open the vars file in your favorite text editor and uncomment the following line, setting the value to 1 so certificates expire after one day.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;set_var EASYRSA_CERT_EXPIRE 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
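&lt;p&gt;To confirm the shortened lifetime, you can print a newly issued certificate’s expiry date with &lt;code&gt;openssl&lt;/code&gt; (the &lt;code&gt;short-lived&lt;/code&gt; name here is just an example):&lt;/p&gt;

```shell
# print the notAfter (expiry) date of a certificate issued after the change
openssl x509 -in /path/to/easyrsa/pki/issued/short-lived.crt -noout -enddate
```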



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;mTLS can be implemented in many other ways and for many other use cases: as a license checker, for identity verification, or to secure cross-service communication between microservices. You can also test with an API client instead of the &lt;code&gt;curl&lt;/code&gt; command if you’d like. Just be sure to manage your certificates properly and you’re good to go!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
