<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ivan Cvitkovic</title>
    <description>The latest articles on DEV Community by Ivan Cvitkovic (@cvitaa11).</description>
    <link>https://dev.to/cvitaa11</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F700235%2F609f7006-eba4-4325-a383-580d72becfd7.png</url>
      <title>DEV Community: Ivan Cvitkovic</title>
      <link>https://dev.to/cvitaa11</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cvitaa11"/>
    <language>en</language>
    <item>
      <title>Server Monitoring with Grafana and Prometheus</title>
      <dc:creator>Ivan Cvitkovic</dc:creator>
      <pubDate>Fri, 05 Jan 2024 15:06:36 +0000</pubDate>
      <link>https://dev.to/cvitaa11/server-monitoring-with-grafana-and-prometheus-51ab</link>
      <guid>https://dev.to/cvitaa11/server-monitoring-with-grafana-and-prometheus-51ab</guid>
      <description>&lt;p&gt;Server monitoring is crucial for maintaining the health and performance of your systems. In this blog post, we'll walk through a basic setup using Grafana and Prometheus to monitor your servers. Before diving into the configuration details, let's briefly outline the components we'll be using in this setup.&lt;/p&gt;

&lt;h2&gt;What is Grafana?&lt;/h2&gt;

&lt;p&gt;Grafana is an open-source platform for monitoring and observability. It allows you to create, explore, and share interactive dashboards, enabling you to visualize and understand your metrics, logs, and other data sources easily.&lt;/p&gt;

&lt;h2&gt;What is Prometheus?&lt;/h2&gt;

&lt;p&gt;Prometheus is an open-source monitoring and alerting toolkit designed for reliability and scalability. It collects and stores time-series data, making it an excellent choice for monitoring systems and generating alerts based on predefined conditions.&lt;/p&gt;

&lt;h2&gt;What is Node Exporter?&lt;/h2&gt;

&lt;p&gt;Node Exporter is a Prometheus exporter for hardware and OS metrics. It runs on the servers you want to monitor and collects various system-level metrics, such as CPU usage, memory usage, disk activity, and network statistics. Prometheus scrapes these metrics from Node Exporter, providing a centralized location for monitoring your infrastructure.&lt;/p&gt;

&lt;h2&gt;Requirements&lt;/h2&gt;

&lt;p&gt;Before we begin, ensure you have Docker installed on your system. Once Docker is set up, you can use the provided &lt;code&gt;docker-compose.yml&lt;/code&gt; and &lt;code&gt;prometheus.yml&lt;/code&gt; files to launch the monitoring stack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;docker-compose.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;version: '3.8'

networks:
  monitoring:
    driver: bridge

volumes:
  prometheus_data: {}
  grafana_storage: {}

services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    expose:
      - 9100
    networks:
      - monitoring

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    expose:
      - 9090
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:9.5.15-ubuntu
    container_name: grafana
    restart: unless-stopped
    ports:
      - '3000:3000'
    volumes:
      - 'grafana_storage:/var/lib/grafana'
    networks:
      - monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;prometheus.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;global:
  scrape_interval: 1m

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 1m
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['node-exporter:9100']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file serves as the configuration for Prometheus, specifying the intervals at which it scrapes metrics and the targets it monitors. In our case, it targets the Node Exporter on port 9100, since we are running that container in our Docker Compose setup.&lt;/p&gt;
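&lt;p&gt;As a quick sanity check of this configuration, you can list the scrape targets straight from the file. This is just a sketch; it recreates the &lt;code&gt;prometheus.yml&lt;/code&gt; above under &lt;code&gt;/tmp&lt;/code&gt; so the commands are self-contained:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;# Recreate the prometheus.yml shown above
cat &gt; /tmp/prometheus.yml &lt;&lt;'EOF'
global:
  scrape_interval: 1m

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 1m
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['node-exporter:9100']
EOF

# List every configured scrape target
grep -o "targets: \[.*\]" /tmp/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;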

&lt;h2&gt;Setting Up Grafana and Prometheus&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create a Directory and Copy YAML Files:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;mkdir server-monitoring
cd server-monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Launch the Stack:&lt;/strong&gt;
Run the following command to start the monitoring services.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will pull the necessary Docker images and launch the Grafana, Prometheus and Node Exporter containers.&lt;/p&gt;
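&lt;p&gt;Once the stack is up, you can confirm that the services started correctly (a quick sketch; run these from the same directory on the Docker host):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;docker-compose ps                          # all three containers should be "Up"
docker-compose logs --tail=10 prometheus   # look for "Server is ready to receive web requests."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that only Grafana publishes a port to the host; Prometheus and Node Exporter are only &lt;code&gt;expose&lt;/code&gt;d on the internal &lt;code&gt;monitoring&lt;/code&gt; network.&lt;/p&gt;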

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Access the Grafana Dashboard:&lt;/strong&gt;
Open your web browser and navigate to &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; (or the IP address of your virtual machine). Log in using the default credentials (username: &lt;code&gt;admin&lt;/code&gt;, password: &lt;code&gt;admin&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After the initial login, you will be prompted to change the password.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure Prometheus as a Data Source:&lt;/strong&gt;&lt;br&gt;
In Grafana, add Prometheus as a data source by specifying the URL &lt;a href="http://prometheus:9090" rel="noopener noreferrer"&gt;http://prometheus:9090&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Import Node Exporter Full Dashboard:&lt;/strong&gt;&lt;br&gt;
Grafana provides a rich collection of dashboards that can be imported to visualize various metrics. To import the Node Exporter Full dashboard, follow these steps:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;From the left pane menu, select "Dashboards."&lt;/li&gt;
&lt;li&gt;On the right side, click "New" and then select "Import" from the dropdown menu.&lt;/li&gt;
&lt;li&gt;In the "Grafana.com Dashboard" section, enter the dashboard ID 1860 and click "Load."&lt;/li&gt;
&lt;li&gt;Configure the Prometheus data source (if not configured already) by selecting it from the drop-down menu.&lt;/li&gt;
&lt;li&gt;Finally, click "Import" to add the Node Exporter Full dashboard to your Grafana instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This dashboard (ID 1860) is specifically designed for Node Exporter metrics and provides a comprehensive view of your server's performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqjln4i1y0hptgfgfjdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqjln4i1y0hptgfgfjdu.png" alt="Node Exporter Full Dashboard in Grafana" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;With Grafana and Prometheus, you now have a basic yet powerful server monitoring setup. Explore the imported Node Exporter Full dashboard and other Grafana features to gain insights into your system's performance. Feel free to customize the setup and dashboards to fit your specific monitoring needs.&lt;/p&gt;

&lt;p&gt;Happy monitoring! 🚀&lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>docker</category>
      <category>linux</category>
    </item>
    <item>
      <title>Azure Blob Storage as Terraform backend</title>
      <dc:creator>Ivan Cvitkovic</dc:creator>
      <pubDate>Sun, 19 Feb 2023 17:00:17 +0000</pubDate>
      <link>https://dev.to/cvitaa11/azure-blob-storage-as-terraform-backend-2bhj</link>
      <guid>https://dev.to/cvitaa11/azure-blob-storage-as-terraform-backend-2bhj</guid>
      <description>&lt;p&gt;Managing Infrastructure as Code can be challenging, especially when working within a team. Terraform  is a powerful tool for managing infrastructure resources and we briefly described it in one of the previous &lt;a href="https://dev.to/cvitaa11/introduction-to-terraform-2hfe"&gt;blog posts&lt;/a&gt;, but it can be tricky to keep track of the current state of your infrastructure when working in a team environment.&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;Terraform state&lt;/strong&gt; comes in. Terraform state is a snapshot of your infrastructure that is stored as a file on your local machine. This file contains information about the resources you've created, their dependencies, and their current configuration.&lt;/p&gt;

&lt;p&gt;The Terraform state file is essential for managing your infrastructure, as it allows Terraform to determine the changes that need to be applied to your resources. Without a proper state file, Terraform wouldn't be able to properly manage your infrastructure resources.&lt;/p&gt;

&lt;p&gt;In essence, it's just a JSON file that acts as a map telling Terraform what it has already built and what it still needs to build. It's crucial to keep the state file safe and up to date, because if Terraform doesn't know what it has already built, it might accidentally create duplicate resources or overwrite existing ones, which could cause all kinds of problems.&lt;/p&gt;
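&lt;p&gt;To make that concrete, here is a toy example of the shape of a state file (illustrative only and heavily trimmed; real state files contain far more detail) and a quick way to see which resources it tracks:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;# A heavily trimmed, illustrative state file (not real Terraform output)
cat &gt; /tmp/terraform.tfstate.example &lt;&lt;'EOF'
{
  "version": 4,
  "resources": [
    { "type": "azurerm_resource_group", "name": "tfstate" },
    { "type": "azurerm_storage_account", "name": "tfstate" }
  ]
}
EOF

# Which resource types does Terraform think it has already built?
grep -o '"type": "[^"]*"' /tmp/terraform.tfstate.example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;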

&lt;p&gt;You can store the state file locally, but there is an issue with that approach because it's not easily shareable. If you're working within a team of engineers, it's important for everyone to have access to the same state file. But if it's stored locally, it can be difficult for others to get access to it. That's why we usually store state file remotely on services like AWS S3, HashiCorp Consul or Azure Blob Storage.&lt;/p&gt;

&lt;p&gt;In this post we will demonstrate how to set up an Azure Blob Storage backend for your Terraform state file. For that we will need to create &lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal#what-is-a-resource-group" rel="noopener noreferrer"&gt;a resource group&lt;/a&gt; and &lt;a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview" rel="noopener noreferrer"&gt;storage account&lt;/a&gt;. Of course, you will need an Azure subscription. If you don't have one already, you can create a &lt;a href="https://azure.microsoft.com/free/?ref=microsoft.com&amp;amp;utm_source=microsoft.com&amp;amp;utm_medium=docs&amp;amp;utm_campaign=visualstudio" rel="noopener noreferrer"&gt;free account&lt;/a&gt;. We can create these resources via Azure portal, but since we are talking about Infrastructure as Code, let's use Terraform to create them as well. In your code editor create &lt;code&gt;main.tf&lt;/code&gt; file with the following code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.44.1"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "random_string" "resource_code" {
  length  = 5
  special = false
  upper   = false
}

resource "azurerm_resource_group" "tfstate" {
  name     = "tfstate"
  location = "West Europe"
}

resource "azurerm_storage_account" "tfstate" {
  name                     = "tfstate${random_string.resource_code.result}"
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "demo"
  }
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make sure you are logged in to your Azure account via the Azure CLI. Now that we have the Terraform configuration, run &lt;code&gt;terraform init&lt;/code&gt; followed by &lt;code&gt;terraform apply&lt;/code&gt; to create the resources. To verify that they have been provisioned, go to the Azure portal and navigate to the Resource groups section, where you should see the &lt;code&gt;tfstate&lt;/code&gt; resource group containing a storage account named &lt;code&gt;tfstate&lt;/code&gt; followed by a 5-character random string.&lt;/p&gt;
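&lt;p&gt;For intuition, the &lt;code&gt;random_string&lt;/code&gt; resource above behaves roughly like this shell one-liner (a sketch of the naming scheme, not what Terraform actually runs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;# 5 lowercase alphanumeric characters, as with special = false and upper = false
SUFFIX=$(LC_ALL=C tr -dc 'a-z0-9' &lt; /dev/urandom | head -c 5)
echo "tfstate${SUFFIX}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;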

&lt;p&gt;Let's look inside the storage account's container: it should be empty, with no blobs. Why is that?&lt;/p&gt;

&lt;p&gt;Currently, our state is stored locally in the &lt;code&gt;terraform.tfstate&lt;/code&gt; file, which keeps track of our resources. This leads to a chicken-and-egg problem when it comes to state management. So what is the solution?&lt;/p&gt;

&lt;p&gt;Let's create a &lt;code&gt;backend.tf&lt;/code&gt; file with the following code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate"
    storage_account_name = "tfstate&lt;RANDOM-STRING&gt;"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now run &lt;code&gt;terraform init&lt;/code&gt; again, and if you are using the latest Terraform version you should receive a prompt similar to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "azurerm" backend. No existing state was found in the newly
  configured "azurerm" backend. Do you want to copy this state to the new "azurerm"
  backend? Enter "yes" to copy and "no" to start with an empty state.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Type &lt;code&gt;yes&lt;/code&gt; and check &lt;code&gt;tfstate&lt;/code&gt; container in Azure portal. Your Terraform state is now successfully stored remotely in Azure Blob Storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj7g3ps7gp70rfkeqwwp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj7g3ps7gp70rfkeqwwp.jpeg" alt="Terraform state in Azure Blob Storage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you did not receive this prompt, you can push the local state manually with the command &lt;code&gt;terraform state push terraform.tfstate&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To verify that everything is set up correctly, delete the local &lt;code&gt;terraform.tfstate&lt;/code&gt; file and run &lt;code&gt;terraform state list&lt;/code&gt;. You should see output like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;azurerm_resource_group.tfstate
azurerm_storage_account.tfstate
azurerm_storage_container.tfstate
random_string.resource_code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's cover another topic before conclusion. One important aspect of Terraform state management is &lt;strong&gt;state locking&lt;/strong&gt;. State locking prevents multiple Terraform instances from modifying the state file at the same time, which can cause issues and inconsistencies in your infrastructure.&lt;/p&gt;

&lt;p&gt;So, how do we implement state locking when using Azure as a backend for our Terraform state file? The good news is that Azure Blob Storage supports state locking for Terraform using native capabilities. Azure Storage blobs are automatically locked before any operation that writes state. This pattern prevents concurrent state operations, which can cause corruption.&lt;/p&gt;

&lt;p&gt;In other words, we don't need any additional configuration; state locking comes out of the box when using Azure Blob Storage as the Terraform backend.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>azure</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Building multi-architecture Docker images</title>
      <dc:creator>Ivan Cvitkovic</dc:creator>
      <pubDate>Tue, 23 Aug 2022 10:48:04 +0000</pubDate>
      <link>https://dev.to/cvitaa11/building-multi-architecture-docker-images-2e9k</link>
      <guid>https://dev.to/cvitaa11/building-multi-architecture-docker-images-2e9k</guid>
      <description>&lt;p&gt;In the last few years, the need for multi-architectural container images has grown significantly. Let's say you develop on your local Linux or Windows machine with an amd64 processor and want to publish your work to AWS machines with a Graviton2 processor, or simply want to share your work with colleagues who use Macbooks with an M1 chip, you need to ensure that your image works on both architectures. This process is significantly facilitated by the advent of the Docker Buildx tool.&lt;/p&gt;

&lt;p&gt;But what is Buildx, actually? According to the official documentation, Docker Buildx is a CLI plugin that extends the &lt;code&gt;docker&lt;/code&gt; command with full support for the features provided by the &lt;a href="https://github.com/moby/buildkit" rel="noopener noreferrer"&gt;Moby BuildKit&lt;/a&gt; builder toolkit. It provides the same user experience as &lt;code&gt;docker build&lt;/code&gt; with many new features, like creating scoped builder instances and building against multiple nodes concurrently. Buildx also supports features that are not yet available in regular &lt;code&gt;docker build&lt;/code&gt;, such as building manifest lists, distributed caching, and exporting build results to OCI image tarballs.&lt;/p&gt;

&lt;p&gt;In our demo, we will show how to set up Buildx on a local machine and build a simple Node.js application. You can find the complete source code in &lt;a href="https://github.com/cvitaa11/docker-multi-arch" rel="noopener noreferrer"&gt;this&lt;/a&gt; GitHub repository.&lt;/p&gt;

&lt;h3&gt;Creating Node.js application&lt;/h3&gt;

&lt;p&gt;In the demo application, we create a web server using Node.js. Node.js provides extremely simple HTTP APIs, so the example is easy to understand even for non-JavaScript developers.&lt;/p&gt;

&lt;p&gt;Basically, we define the port and then invoke the &lt;code&gt;createServer()&lt;/code&gt; function on the &lt;code&gt;http&lt;/code&gt; module, creating a response with a status code of 200 (OK), setting a header, and returning a message stating which architecture the program is running on. We obtain the CPU architecture from the &lt;code&gt;arch&lt;/code&gt; property of the built-in &lt;code&gt;process&lt;/code&gt; object. At the end, we simply start the server listening for connections.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const http = require("http");

const port = 3000;

const server = http.createServer((req, res) =&amp;gt; {
  res.statusCode = 200;
  res.setHeader("Content-Type", "text/plain");
  res.end(`Hello from ${process.arch} architecture!`);
});

server.listen(port, () =&amp;gt; {
  console.log(`Server running on port ${port}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to test the app locally, open a terminal in the working directory and run the &lt;code&gt;node server.js&lt;/code&gt; command.&lt;/p&gt;
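&lt;p&gt;A quick smoke test might look like this (assuming Node.js and curl are installed and the &lt;code&gt;server.js&lt;/code&gt; from above is in the current directory):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;node server.js &amp;                # start the server in the background
SERVER_PID=$!
sleep 1                         # give it a moment to bind port 3000
curl -s http://localhost:3000   # responds with a message like "Hello from x64 architecture!"
kill $SERVER_PID                # stop the background server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;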

&lt;p&gt;In order to package the application as a container, we have to write a Dockerfile. The first thing we need to do is define what image we want to build from. Here we will use version &lt;code&gt;16.17.0-alpine&lt;/code&gt; of the official &lt;code&gt;node&lt;/code&gt; image available on Docker Hub. Right after the base image, we create a directory to hold the application code inside the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:16.17.0-alpine
WORKDIR /usr/src/app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To put the source code of our application into a Docker image, we'll use a simple copy command that will store the application code in the working directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application listens on port 3000, so we need to expose it and then finally start the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXPOSE 3000
CMD ["node", "server.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Setup Buildx and create the image&lt;/h3&gt;

&lt;p&gt;The easiest way to set up &lt;code&gt;buildx&lt;/code&gt; is to use &lt;a href="https://docs.docker.com/desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;, because the tool is already included in the application. Docker Desktop is available for Windows, Linux, and macOS, so you can use it on any platform of your choice.&lt;/p&gt;

&lt;p&gt;If you don't want to use Docker Desktop you can also download the latest binary from the &lt;a href="https://github.com/docker/buildx/releases/tag/v0.9.1" rel="noopener noreferrer"&gt;releases page&lt;/a&gt; on GitHub, rename the binary to &lt;code&gt;docker-buildx&lt;/code&gt; (&lt;code&gt;docker-buildx.exe&lt;/code&gt; for Windows) and copy it to the destination matching your OS. For Linux and macOS that is &lt;code&gt;$HOME/.docker/cli-plugins&lt;/code&gt;, for Windows that is &lt;code&gt;%USERPROFILE%\.docker\cli-plugins&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the code below you can see the setup for macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARCH=amd64 # change to 'arm64' if you have M1 chip
VERSION=v0.8.2
curl -LO https://github.com/docker/buildx/releases/download/${VERSION}/buildx-${VERSION}.darwin-${ARCH}
mkdir -p ~/.docker/cli-plugins
mv buildx-${VERSION}.darwin-${ARCH} ~/.docker/cli-plugins/docker-buildx
chmod +x ~/.docker/cli-plugins/docker-buildx
docker buildx version # verify installation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing &lt;code&gt;buildx&lt;/code&gt; we need to create a new builder instance. Builder instances are isolated environments where builds can be invoked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx create --name builder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the new builder instance is created, we need to switch to it from the default one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx use builder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's see more information about our builder instance. We will also pass the &lt;code&gt;--bootstrap&lt;/code&gt; option to ensure that the builder is running before inspecting it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx inspect --bootstrap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5xw5zpaqb5st8b76m77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5xw5zpaqb5st8b76m77.png" alt="docker buildx inspect" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we know which platforms our builder instance supports, we can start creating the container image. The &lt;code&gt;buildx build&lt;/code&gt; command is very similar to &lt;code&gt;docker build&lt;/code&gt; and takes the same arguments; we will primarily focus on &lt;code&gt;--platform&lt;/code&gt;, which sets the target platforms for the build. In the code below we sign in to a Docker account, build the image and push it to Docker Hub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login # prompts for username and password

docker buildx build \
 --platform linux/amd64,linux/arm64,linux/arm/v7 \
 -t cvitaa11/multi-arch:demo \
 --push \
 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the command completes we can go to Docker Hub and see &lt;a href="https://hub.docker.com/r/cvitaa11/multi-arch/tags" rel="noopener noreferrer"&gt;our image&lt;/a&gt; with all the supported architectures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkgn1co6ik02rrki4fky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkgn1co6ik02rrki4fky.png" alt="Docker Hub image digest" width="722" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's time to test how the image works on different machines. First we will run it on Windows (an Intel Core i5 CPU, which falls under the amd64 architecture) with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 3000:3000 cvitaa11/multi-arch:demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's navigate to &lt;code&gt;localhost:3000&lt;/code&gt; in the web browser and check the response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoc559vou1p54c188iz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoc559vou1p54c188iz4.png" alt="Docker Windows Intel" width="487" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's switch to a MacBook Pro with the M1 chip and run the same command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak4g6ykq7f34cb21f6zr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak4g6ykq7f34cb21f6zr.png" alt="Docker run macOS" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the web browser and again go to the &lt;code&gt;localhost:3000&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs7hs03i26jxikqg1q2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs7hs03i26jxikqg1q2w.png" alt="Docker macOS M1" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We see that our container image runs successfully on both processor architectures, which was our primary goal.&lt;/p&gt;
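&lt;p&gt;A quick way to verify which variant of a multi-arch image you are actually running is to compare architectures. The &lt;code&gt;uname -m&lt;/code&gt; command works on any of the machines above; the &lt;code&gt;docker run&lt;/code&gt; line is shown commented out because it assumes Docker and the image from this post are available:&lt;/p&gt;

```shell
# Print the host CPU architecture: Intel/AMD machines report x86_64,
# Apple Silicon reports arm64 (aarch64 on Linux).
uname -m

# With Docker available, the container can report its own architecture;
# it should match the host, because Docker pulls the matching variant:
#   docker run --rm cvitaa11/multi-arch:demo uname -m
```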

</description>
      <category>devops</category>
      <category>docker</category>
      <category>javascript</category>
      <category>node</category>
    </item>
    <item>
      <title>Introduction to Terraform</title>
      <dc:creator>Ivan Cvitkovic</dc:creator>
      <pubDate>Tue, 15 Feb 2022 18:57:22 +0000</pubDate>
      <link>https://dev.to/cvitaa11/introduction-to-terraform-2hfe</link>
      <guid>https://dev.to/cvitaa11/introduction-to-terraform-2hfe</guid>
      <description>&lt;h3&gt;
  
  
  Infrastructure as Code
&lt;/h3&gt;

&lt;p&gt;Almost every software engineer has experienced that provisioning servers, operating systems, storage, and other infrastructure components is a demanding and time-consuming process. Provisioning is usually done manually, and when we consider that staging and production workloads have to be deployed in addition to the development environment, the amount of work grows quickly. In such a large task human error can easily occur, and even a small configuration mistake can cause problems and prevent the application from working properly.&lt;/p&gt;

&lt;p&gt;To avoid potential outages we come to the concept of &lt;b&gt;Infrastructure as Code&lt;/b&gt;. IaC is the managing and provisioning of infrastructure through code instead of through manual processes. Infrastructure as code allows the configuration of virtual machines, disks, networks and other components to be stored together with the application code in a source control management system. Storing it in the SCM allows you to version and track changes over time. This approach helps you avoid undocumented, ad-hoc configuration changes.&lt;/p&gt;

&lt;p&gt;Deploying your infrastructure as code also means that you can divide your infrastructure into modular components that can then be combined in different ways through automation. Automating infrastructure provisioning means that there is no need for manual interventions and thus eliminates the human error factor. There are several IaC tools and one of the most popular is &lt;b&gt;Terraform&lt;/b&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform basic concepts
&lt;/h3&gt;

&lt;p&gt;Terraform is HashiCorp's infrastructure as code tool. It lets you define resources and infrastructure in human-readable, declarative configuration files, and manages your infrastructure's lifecycle. You define both cloud and on-premises resources through a configuration language called &lt;b&gt;HCL&lt;/b&gt;, which looks like simplified JSON (JavaScript Object Notation). Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.&lt;/p&gt;

&lt;p&gt;But how does it all actually work under the hood? Terraform creates resources on cloud platforms and on-premises based on their application programming interfaces (APIs), while communication with APIs takes place through &lt;b&gt;providers&lt;/b&gt;. Providers are plugins that interact with various platforms and manage their resources, serving as a logical abstraction of an upstream API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpckhikvfzydxd80kqecl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpckhikvfzydxd80kqecl.png" alt="Terraform provider" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Together with the community, Terraform has written more than 1800 providers, including those for major cloud providers like Azure, AWS and Google Cloud Platform. There are also providers for Docker, Kubernetes, Helm and more. You can find the full list at the &lt;a href="https://registry.terraform.io/" rel="noopener noreferrer"&gt;Terraform Registry&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As with everything else, the best way to learn Terraform is through practice, but keep in mind that creating resources on cloud providers can cost you money. If you no longer need certain resources after an exercise, simply delete them and clean up your environment. For practice you can also use the various free tiers on platforms like &lt;a href="https://azure.microsoft.com/en-us/pricing/free-services/" rel="noopener noreferrer"&gt;Microsoft Azure&lt;/a&gt; and &lt;a href="https://aws.amazon.com/free/" rel="noopener noreferrer"&gt;Amazon Web Services&lt;/a&gt;, or a provider that creates resources locally on your development machine. There are a lot of beginner-friendly examples, like this one, on the &lt;a href="https://learn.hashicorp.com/terraform" rel="noopener noreferrer"&gt;HashiCorp Learn&lt;/a&gt; website which can help you get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo: Terraform Docker provider
&lt;/h3&gt;

&lt;p&gt;The complete source code for this demo is on GitHub, and you can find the repository &lt;a href="https://github.com/cvitaa11/terraform-demo" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We'll start with the &lt;code&gt;main.tf&lt;/code&gt; file. Inside that file is a &lt;code&gt;terraform {}&lt;/code&gt; block which contains all the settings, including the required providers Terraform will use to provision your infrastructure components. Each provider has a &lt;code&gt;source&lt;/code&gt; attribute which defines where the provider is located. By default it will be installed from the Terraform Registry, but optionally you can pass a hostname parameter. In this example configuration, the &lt;code&gt;docker&lt;/code&gt; provider's source is defined as &lt;code&gt;kreuzwerker/docker&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;version&lt;/code&gt; attribute is optional, but it's highly recommended to use it so that Terraform does not install a version of the provider that does not work with your configuration. Each module should at least declare the minimum provider version it is known to work with, using the &lt;b&gt;&amp;gt;=&lt;/b&gt; version constraint syntax. If the version is not passed, Terraform will download and install the latest one. Although this is not the case here, keep in mind that you can define multiple different providers in your configuration block and use them together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "&amp;gt;= 2.13.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
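&lt;p&gt;For reference, the &lt;code&gt;version&lt;/code&gt; attribute accepts several constraint forms besides a plain minimum. The examples below are illustrative, not part of the demo repository:&lt;/p&gt;

```hcl
# Any provider version at or above 2.13.0 (the form used in this demo):
version = ">= 2.13.0"

# Pessimistic constraint: any 2.x release at or above 2.13, but not 3.0:
version = "~> 2.13"

# An exact version pin:
version = "2.13.0"
```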



&lt;p&gt;The &lt;code&gt;provider&lt;/code&gt; block doesn't contain much; it simply configures the specified provider, in this case &lt;code&gt;docker&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;resource&lt;/code&gt; section is the most important part of every Terraform project. Each resource block describes one or more infrastructure components, which can be a physical or virtual object. Before the configuration block there are two parameters that define the resource type and its name. In this case we declare the Docker image and Docker container where both resources are named &lt;code&gt;nginx&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = var.container_name
  ports {
    internal = 80
    external = 8080
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;code&gt;docker_container&lt;/code&gt; resource, under the &lt;code&gt;image&lt;/code&gt; parameter we reference the previously declared &lt;code&gt;docker_image&lt;/code&gt;. In the &lt;code&gt;ports&lt;/code&gt; section we publish the container's port 80, which is the default for Nginx, to port 8080 on our local machine. For the container name, instead of a hard-coded string, we decided to use the value of a variable, &lt;code&gt;container_name&lt;/code&gt;.&lt;/p&gt;
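&lt;p&gt;If you want Terraform to report details of the created resources after an apply, you can also add &lt;code&gt;output&lt;/code&gt; blocks. This is an optional addition, not part of the demo repository:&lt;/p&gt;

```hcl
# Expose the created container's ID and the image name after apply
output "container_id" {
  value = docker_container.nginx.id
}

output "image_name" {
  value = docker_image.nginx.name
}
```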

&lt;p&gt;You can name your configuration files however you want because Terraform loads all files in the current directory ending in &lt;code&gt;.tf&lt;/code&gt;, but we chose to follow the naming convention and name the file &lt;code&gt;variables.tf&lt;/code&gt;. The variables file is quite simple: each variable is declared with a keyword and a name. Inside the configuration block we define the variable type, and can optionally pass a default value or the &lt;code&gt;sensitive&lt;/code&gt; flag, which limits Terraform UI output when the variable is used in configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "container_name" {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
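&lt;p&gt;To illustrate the optional attributes mentioned above, a declaration with a default value and one marked as sensitive could look like this (hypothetical examples, not from the demo repository):&lt;/p&gt;

```hcl
variable "container_name" {
  type    = string
  default = "nginx-example" # used when no other value is supplied
}

variable "registry_password" {
  type      = string
  sensitive = true # value is redacted from Terraform's UI output
}
```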



&lt;p&gt;Once variables are declared in your configuration, they can be set in a number of ways: passed individually with the &lt;code&gt;-var&lt;/code&gt; command line option, as environment variables in the &lt;code&gt;TF_VAR_&lt;/code&gt; format, or inside variable definition files (.tfvars). We used the latter option to set a name for our Docker container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;container_name = "nginx-example"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
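&lt;p&gt;For comparison, the same variable could be set with the other two mechanisms. The &lt;code&gt;terraform&lt;/code&gt; invocation is shown as a comment for reference, since it requires Terraform to be installed:&lt;/p&gt;

```shell
# Command-line form:
#   terraform apply -var='container_name=nginx-example'

# Environment-variable form: Terraform picks up any variable whose name
# starts with TF_VAR_ followed by the variable name.
export TF_VAR_container_name="nginx-example"
echo "container_name will be: $TF_VAR_container_name"
```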



&lt;p&gt;Now we are ready to deploy our infrastructure. To do that we need to initialize the working directory containing the Terraform configuration files with the &lt;code&gt;terraform init&lt;/code&gt; command. This command performs several initialization steps to prepare the current working directory for use with Terraform, including downloading and installing the provider.&lt;/p&gt;

&lt;p&gt;After the initialization process is complete we can create a new execution plan and then apply it. The easiest way to do this is with the &lt;code&gt;terraform apply&lt;/code&gt; command. The execution plan is displayed in the command line and the user is prompted to apply it. You should get similar output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # docker_container.nginx will be created
  + resource "docker_container" "nginx" {
      + attach           = false
      + bridge           = (known after apply)
      + command          = (known after apply)
      + container_logs   = (known after apply)
      + entrypoint       = (known after apply)
      + env              = (known after apply)
      + exit_code        = (known after apply)
      + gateway          = (known after apply)
      + hostname         = (known after apply)
      + id               = (known after apply)
      + image            = (known after apply)
      + init             = (known after apply)
      + ip_address       = (known after apply)
      + ip_prefix_length = (known after apply)
      + ipc_mode         = (known after apply)
      + log_driver       = (known after apply)
      + logs             = false
      + must_run         = true
      + name             = "nginx-example"
      + network_data     = (known after apply)
      + read_only        = false
      + remove_volumes   = true
      + restart          = "no"
      + rm               = false
      + security_opts    = (known after apply)
      + shm_size         = (known after apply)
      + start            = true
      + stdin_open       = false
      + tty              = false

      + healthcheck {
          + interval     = (known after apply)
          + retries      = (known after apply)
          + start_period = (known after apply)
          + test         = (known after apply)
          + timeout      = (known after apply)
        }
    }

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

docker_image.nginx: Creating...
docker_image.nginx: Still creating... [10s elapsed]
docker_image.nginx: Still creating... [20s elapsed]
docker_image.nginx: Still creating... [30s elapsed]
docker_image.nginx: Still creating... [40s elapsed]
docker_image.nginx: Still creating... [50s elapsed]
docker_image.nginx: Still creating... [1m0s elapsed]
docker_image.nginx: Still creating... [1m10s elapsed]
docker_image.nginx: Still creating... [1m20s elapsed]
docker_image.nginx: Creation complete after 1m23s [id=sha256:c316d5a335a5cf324b0dc83b3da82d7608724769f6454f6d9a621f3ec2534a5anginx:latest]
docker_container.nginx: Creating...
docker_container.nginx: Creation complete after 1s [id=0e3fda53befd697a51a3b15a615c236c62f65469f4a02450c1899289f998f128]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now navigate to &lt;code&gt;localhost:8080&lt;/code&gt; in your web browser; you should see the Nginx web server up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gz7uphbdmyodpqryzja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gz7uphbdmyodpqryzja.png" alt="Nginx Browser" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you are done with the exercise, resources can be deleted with a simple command &lt;code&gt;terraform destroy&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>docker</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Packaging Java apps with Maven and GitHub Actions</title>
      <dc:creator>Ivan Cvitkovic</dc:creator>
      <pubDate>Sat, 20 Nov 2021 15:27:39 +0000</pubDate>
      <link>https://dev.to/cvitaa11/packaging-java-apps-with-maven-and-github-actions-4gbn</link>
      <guid>https://dev.to/cvitaa11/packaging-java-apps-with-maven-and-github-actions-4gbn</guid>
      <description>&lt;p&gt;This post shows how to create workflows that package Java application with Maven and then store it as an artifact or publish to GitHub Packages.&lt;/p&gt;

&lt;p&gt;In the previous &lt;a href="https://dev.to/cvitaa11/continuous-integration-with-github-actions-2mo5"&gt;post&lt;/a&gt; we described GitHub Actions and how they work, so if you need a quick reminder on the jobs, steps and syntax check it out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Development environment
&lt;/h3&gt;

&lt;p&gt;On the following &lt;a href="https://github.com/cvitaa11/java-spring-demo" rel="noopener noreferrer"&gt;link&lt;/a&gt; you can find the repository with the source code. It's a simple Spring Boot application that returns students from a database. The application was bootstrapped using Spring Initializr. For the dependencies we added &lt;code&gt;Spring Web&lt;/code&gt;, which is used for building web applications with Spring MVC; this package also uses Apache Tomcat as the default embedded container. We also used &lt;code&gt;Spring Data JDBC&lt;/code&gt; for persisting data in SQL with plain JDBC, and the &lt;code&gt;PostgreSQL Driver&lt;/code&gt;, which allows Java programs to connect to a Postgres database using standard, database-independent Java code.&lt;/p&gt;

&lt;p&gt;For local development, we can easily start Postgres instance with Docker by using the following command: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name postgresDB -p &amp;lt;port&amp;gt;:5432 -e POSTGRES_PASSWORD=&amp;lt;YourPassword&amp;gt; -v /postgresdata:/var/lib/postgresql/data postgres:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
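&lt;p&gt;With the database container running, Spring Boot needs matching connection settings. A minimal &lt;code&gt;application.properties&lt;/code&gt; might look like the following; the URL, database name and credentials here are assumptions and must match the flags you passed to &lt;code&gt;docker run&lt;/code&gt; above:&lt;/p&gt;

```properties
# Connection settings for the local Postgres container (illustrative values)
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=YourPassword
```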

&lt;p&gt;Spring Boot follows a layered architecture in which each layer communicates with the layer directly below or above it. We followed that practice and implemented Controller, Service and Repository inside our application and demonstrated dependency injection principles. Source code also contains &lt;code&gt;StudentConfig&lt;/code&gt; class which simply inserts student in database.&lt;/p&gt;

&lt;p&gt;After successfully setting up the development environment and writing some code, we push our work to the source control management system, in this case GitHub. Now we need to build the code and publish it as a Maven package. This process can be done manually, but we would like it to happen automatically when changes are merged into the &lt;code&gt;main&lt;/code&gt; branch. That way we avoid manual tasks when publishing a new version.&lt;/p&gt;

&lt;p&gt;Like most other things, this problem can be solved in several ways, and we will use two different approaches. First, we will publish our package as a build artifact and make it available for download; in the second approach we will publish the package to the &lt;a href="https://github.com/features/packages" rel="noopener noreferrer"&gt;GitHub Packages&lt;/a&gt; Maven repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storing workflow data as artifacts
&lt;/h3&gt;

&lt;p&gt;In the &lt;code&gt;main.yaml&lt;/code&gt; file the first couple of lines tell us which events will start the workflow. Besides push and pull request events on the main branch, we also added the &lt;code&gt;workflow_dispatch&lt;/code&gt; trigger to enable manual workflow runs.&lt;/p&gt;
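&lt;p&gt;The trigger section described above looks roughly like this; this is a sketch consistent with the description, and the actual &lt;code&gt;main.yaml&lt;/code&gt; is in the linked repository:&lt;/p&gt;

```yaml
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_dispatch:   # allows the workflow to be started manually from the UI
```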

&lt;p&gt;Under the jobs section we defined a &lt;code&gt;build&lt;/code&gt; job that will be executed on an Ubuntu runner. The first two steps check out the main branch from GitHub and set up the JDK (Java Development Kit).&lt;/p&gt;

&lt;p&gt;Next up, we build the project and set up a cache for Maven:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

- name: Build Maven project
  run: |
    mvn -B package --file pom.xml -Dmaven.test.skip
    mkdir staging &amp;amp;&amp;amp; cp target/*.jar staging

- name: Set up a cache for Maven
  uses: actions/cache@v2
  with:
    path: ~/.m2
    key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
    restore-keys: ${{ runner.os }}-m2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once the build completes, the staging directory will contain the produced .jar file. Each job in a workflow runs in a fresh virtual environment, which means that once the &lt;code&gt;build&lt;/code&gt; job is done we can no longer access that environment and our .jar file is gone. That's where the last step comes in: it uploads artifacts from the workflow, allowing you to share data between jobs and store data once a workflow is complete.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

- name: Persist workflow data as artifacts
  uses: actions/upload-artifact@v2
  with:
    name: github-actions-artifact
    path: staging


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;By default, the artifacts generated by workflows are retained for 90 days before they are automatically deleted. You can adjust the retention period, depending on the type of repository. When you customize the retention period, it only applies to new artifacts and does not retroactively apply to existing objects. &lt;/p&gt;
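&lt;p&gt;The retention period can also be set per artifact directly in the upload step, using the &lt;code&gt;retention-days&lt;/code&gt; input of &lt;code&gt;actions/upload-artifact&lt;/code&gt;:&lt;/p&gt;

```yaml
- name: Persist workflow data as artifacts
  uses: actions/upload-artifact@v2
  with:
    name: github-actions-artifact
    path: staging
    retention-days: 5   # keep this artifact for 5 days instead of the default 90
```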

&lt;p&gt;Artifacts can be found under the Actions tab when you click on the desired workflow run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpiy1wxdrrw5jyrtfc84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpiy1wxdrrw5jyrtfc84.png" alt="GitHub Actions Artifacts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Publishing to the GitHub Packages
&lt;/h3&gt;

&lt;p&gt;You can configure Apache Maven to publish packages to GitHub Packages and to use packages stored on GitHub Packages as dependencies in a Java project.&lt;/p&gt;

&lt;p&gt;Beside Maven, GitHub Packages offers different package registries for commonly used package managers like npm, NuGet, Gradle and RubyGems. It's also possible to store Docker and other OCI images. With all these features you can create end-to-end DevOps solutions and centralize your software development on GitHub.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;maven-publish.yaml&lt;/code&gt; file you can find the workflow details for publishing the package to GitHub Packages. Just like in the previous solution, we provide a name and the events that will trigger the workflow run. Next, under the jobs section we selected the Ubuntu runner as the environment for the &lt;code&gt;publish&lt;/code&gt; job and defined permissions to read contents and write packages.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

name: Publish package to GitHub Packages
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest 
    permissions: 
      contents: read
      packages: write 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the following steps we check out the main branch and set up the JDK with the &lt;code&gt;java-version&lt;/code&gt; parameter. The last step publishes the package, and it needs a personal access token for authentication. A PAT is sensitive information and we don't want to store it as plain text, so we defined a secret on the repository level and accessed it as an environment variable. For simplicity we skipped the tests during deployment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

- name: Publish package
  run: mvn --batch-mode deploy -Dmaven.test.skip
  env:
    GITHUB_TOKEN: ${{ secrets.TOKEN }}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before running the workflow we also need one configuration change in the application source code. Inside &lt;code&gt;pom.xml&lt;/code&gt; file we need to pass information about package distribution management.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;project ...&amp;gt;
  ...
  &amp;lt;distributionManagement&amp;gt;
    &amp;lt;repository&amp;gt;
      &amp;lt;id&amp;gt;github&amp;lt;/id&amp;gt;
      &amp;lt;name&amp;gt;GitHub Packages&amp;lt;/name&amp;gt;
      &amp;lt;url&amp;gt;https://maven.pkg.github.com/cvitaa11/java-spring-demo&amp;lt;/url&amp;gt;
    &amp;lt;/repository&amp;gt;
  &amp;lt;/distributionManagement&amp;gt;
&amp;lt;/project&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After pushing changes to the main branch, the workflow starts automatically and we can follow the log output under the Actions tab on the repository page. When all steps are completed we can see the Maven package, ready for use, in our code repository under the Packages section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbino2ic60brye9iv39fj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbino2ic60brye9iv39fj.png" alt="Java repository"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgpisr4w0c4lnvtg7uke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgpisr4w0c4lnvtg7uke.png" alt="Maven package"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With these two approaches, we have successfully solved the problem of automatically publishing Java applications as Maven packages and demonstrated the use of GitHub Actions and GitHub Packages to create a complete end-to-end development solution in one place.&lt;/p&gt;

</description>
      <category>java</category>
      <category>maven</category>
      <category>github</category>
      <category>devops</category>
    </item>
    <item>
      <title>Continuous integration with GitHub Actions</title>
      <dc:creator>Ivan Cvitkovic</dc:creator>
      <pubDate>Tue, 12 Oct 2021 10:20:11 +0000</pubDate>
      <link>https://dev.to/cvitaa11/continuous-integration-with-github-actions-2mo5</link>
      <guid>https://dev.to/cvitaa11/continuous-integration-with-github-actions-2mo5</guid>
      <description>&lt;p&gt;Continuous integration is cheap, but not integrating continuously can be very expensive. Nowdays, it's hard to imagine software development lifecycle without automated solutions that take care of all those repetitive tasks. There is a ton of different technologies that can solve our problems and in this post we will focus on GitHub Actions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding GitHub Actions
&lt;/h3&gt;

&lt;p&gt;GitHub Actions are event-driven workflows that live right in your code repository. Event-driven means that you can execute a series of commands when a specific event occurs. For example, every time someone creates a pull request for a repository, you can automatically run a command that executes a software testing script. Aside from event triggers, a procedure can also run on a schedule.&lt;/p&gt;

&lt;p&gt;Procedures are defined in a YAML file called a &lt;code&gt;workflow&lt;/code&gt;. A workflow consists of one or more &lt;code&gt;jobs&lt;/code&gt; that can be executed in parallel or sequentially. Each job then uses &lt;code&gt;steps&lt;/code&gt; to control the order in which &lt;code&gt;actions&lt;/code&gt; are run. These actions are the commands that automate your software testing, building and so on.&lt;/p&gt;

&lt;p&gt;An event is the starting point in this cycle. It's an activity that triggers workflows, and it can originate from GitHub when someone creates an issue, opens a pull request or merges changes to a specific branch. It's also possible to use a &lt;code&gt;repository dispatch webhook&lt;/code&gt; to trigger a workflow when an external event occurs.&lt;/p&gt;

&lt;p&gt;A job is a collection of steps that are executed on the same &lt;code&gt;runner&lt;/code&gt;. By default all jobs in a workflow run in parallel, but in some situations a job depends on the result of the previous one, so we can configure them to execute sequentially. For simple tasks we can create workflows with only one job, but for more complex scenarios that is not the recommended approach; there, multi-job workflows are the way to go.&lt;/p&gt;

&lt;p&gt;Each step in a job represents an individual task that can be a shell command or an action. Since all steps in a job are executed on the same runner, they can share data with each other. For example, the first step of a job can build a container image, and in the second step that image can be pushed to a container registry. Data can also be shared between jobs by storing workflow data as artifacts. Artifacts allow you to persist data after a job has completed and share that data with another job in the same workflow. To use the data in another job you just need to download the artifacts. Common artifacts include log outputs, test results, binary or compressed files and code coverage results.&lt;/p&gt;
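&lt;p&gt;Sharing data between jobs is typically done with the official &lt;code&gt;actions/upload-artifact&lt;/code&gt; and &lt;code&gt;actions/download-artifact&lt;/code&gt; actions. A minimal sketch (the job, artifact and file names here are made up):&lt;/p&gt;

```yaml
jobs:
  produce:
    runs-on: ubuntu-latest
    steps:
      # Create a file and persist it as a workflow artifact
      - run: echo "build finished" > output.log
      - uses: actions/upload-artifact@v2
        with:
          name: logs
          path: output.log

  consume:
    needs: produce
    runs-on: ubuntu-latest
    steps:
      # Download the artifact produced by the previous job
      - uses: actions/download-artifact@v2
        with:
          name: logs
      - run: cat output.log
```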

&lt;p&gt;Per official documentation, actions are standalone commands that are combined into steps to create a job. Actions are the smallest portable building block of a workflow and you can create your own actions, or use actions created by the GitHub community. To use an action in a workflow, you must include it as a step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5sa5msje5z1jvzdovyco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5sa5msje5z1jvzdovyco.png" alt="Alt Text" width="352" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another important concept we have to mention is &lt;code&gt;runners&lt;/code&gt;. In essence, a runner is a server with the GitHub Actions runner application installed on it. A runner listens for available jobs, runs one job at a time, and reports the progress, logs, and results back to GitHub. Hosted runners, provided by GitHub, are based on Ubuntu, Windows, and macOS, and each job in a workflow runs in a fresh virtual environment. Through the workflow you can install additional tools and binaries on a runner. If for some reason you need a different operating system or require a specific configuration, you can use self-hosted runners, where you have full control of the entire environment. However, GitHub does not recommend self-hosted runners for public repositories. &lt;br&gt;
Windows and Ubuntu runners are hosted in Azure and consequently have the same IP address ranges as the Azure datacenters. macOS runners are hosted in GitHub's own macOS cloud. Each Windows or Ubuntu runner has a 2-core CPU, 7 GB of RAM and 14 GB of SSD disk space. macOS runners have a 3-core CPU, 14 GB of RAM and 14 GB of SSD disk space.&lt;br&gt;
In terms of pricing, the free plan includes 2,000 automation minutes per month, which is enough for learning and smaller projects. You can find more about pricing plans at the following &lt;a href="https://github.com/pricing" rel="noopener noreferrer"&gt;link&lt;/a&gt;. &lt;/p&gt;
&lt;h3&gt;
  
  
  Demo
&lt;/h3&gt;

&lt;p&gt;In the following demo we have a .NET REST API application. Our task is to run unit tests against the solution and, if they succeed, package the application as a container image and push it to a container registry. Of course, we want to do all of that in an automated way whenever a push or pull request on the &lt;code&gt;master&lt;/code&gt; branch occurs.&lt;/p&gt;

&lt;p&gt;The source code can be found &lt;a href="https://github.com/cvitaa11/dotnet-ci" rel="noopener noreferrer"&gt;here&lt;/a&gt;. As you can see, the workflow is placed inside the &lt;code&gt;.github/workflows&lt;/code&gt; directory as a YAML file.&lt;/p&gt;

&lt;p&gt;The first couple of lines define the workflow name and specify the events that will trigger a workflow run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI master

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, the &lt;code&gt;workflow_dispatch&lt;/code&gt; section allows us to run the workflow manually from the Actions tab on GitHub or through the CLI. &lt;/p&gt;
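&lt;p&gt;The trigger section shown earlier only lists &lt;code&gt;push&lt;/code&gt; and &lt;code&gt;pull_request&lt;/code&gt;; enabling manual runs is just one more key under &lt;code&gt;on&lt;/code&gt;, roughly like this:&lt;/p&gt;

```yaml
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
  # Adds the "Run workflow" button in the Actions tab
  # and allows triggering via the GitHub CLI (gh workflow run)
  workflow_dispatch:
```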

&lt;p&gt;The &lt;code&gt;jobs&lt;/code&gt; section is the main part which defines tasks and controls their order of execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: Test the solution 
        run: dotnet test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first attribute is the name of the job, followed by the &lt;code&gt;runs-on&lt;/code&gt; parameter, which specifies the infrastructure environment, in this case &lt;code&gt;ubuntu-latest&lt;/code&gt;. Under the &lt;code&gt;steps&lt;/code&gt; segment we define the actual tasks. The &lt;code&gt;dotnet test&lt;/code&gt; command builds the solution and runs a test host application for each test project in the solution. The test host executes the tests in the given project using a test framework, for example MSTest, NUnit or xUnit, and reports the success or failure of each test. If all tests are successful, the test runner returns 0 as an exit code; otherwise, if any test fails, it returns 1 and the workflow is stopped.&lt;/p&gt;

&lt;p&gt;Obviously, if any test fails we don't want to build and package our application, so the second job depends on the result of the first one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  build:
    needs: test
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: Build container image
        run: |
          docker build -f ./dotnet-ci/Dockerfile -t ${{secrets.registry}}/${{github.repository}}:${{ github.run_number }} .
      - name: Container registry login
        uses: docker/login-action@v1.10.0
        with:
          registry: ${{secrets.registry}}
          username: ${{secrets.username}}
          password: ${{secrets.password}}

      - name: Push image to container registry
        run: |
          docker push ${{secrets.registry}}/${{github.repository}}:${{ github.run_number }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;needs&lt;/code&gt; attribute makes this job wait for &lt;code&gt;test&lt;/code&gt; to finish, and only if &lt;code&gt;test&lt;/code&gt; succeeds will the workflow start the &lt;code&gt;build&lt;/code&gt; job on a new runner.&lt;/p&gt;

&lt;p&gt;The workflow needs access to some sensitive information, like login credentials, and we definitely don't want to store it as plain text. That's where secrets come into play: they are defined at the repository level and can only be created by the repository owner. A secret's value is never displayed once saved; it can only be overwritten or deleted. For the container registry we will use &lt;a href="https://github.com/features/packages" rel="noopener noreferrer"&gt;GitHub Packages&lt;/a&gt; and authenticate with a username and a PAT (Personal Access Token) as the password. This way we have all the assets in one place: our code repository.&lt;/p&gt;

&lt;p&gt;Each container image needs to have a tag, and we want that value to be unique. Thankfully, GitHub provides a number of environment variables, and we decided to tag the image with &lt;code&gt;run_number&lt;/code&gt;. That is a unique number for each run of a particular workflow in a repository; it begins at 1 for the workflow's first run, increments with each new run, and does not change if you re-run the workflow. Alternative options would be to tag the image with the commit SHA, &lt;code&gt;run_id&lt;/code&gt; or a semantic version, but for simplicity we chose &lt;code&gt;run_number&lt;/code&gt;.&lt;/p&gt;
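&lt;p&gt;Switching to one of the alternative tagging schemes is a one-line change in the build step; for example, tagging with the commit SHA would look roughly like this (a sketch, not the workflow from the repository):&lt;/p&gt;

```yaml
      - name: Build container image
        run: |
          # github.sha is the commit that triggered the workflow,
          # so the tag stays the same across re-runs of the workflow
          docker build -f ./dotnet-ci/Dockerfile -t ${{secrets.registry}}/${{github.repository}}:${{ github.sha }} .
```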

&lt;p&gt;Now we can take a look at GitHub web console under Actions tab.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkljuygg603vqyny6tx6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkljuygg603vqyny6tx6i.png" alt="Workflow summary" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, under Summary we have a nice overview of all the jobs in our workflow and their interrelationship.&lt;/p&gt;

&lt;p&gt;Clicking on a single job gives us a list of all the steps it is composed of. We can also click on any step and get the entire log output, which is really helpful when it comes to debugging. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq0hdclyeqhnlg8h73v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq0hdclyeqhnlg8h73v0.png" alt="Log output" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Ever since Microsoft acquired GitHub they have been adding new features and functionalities that have made GitHub not only a great source control management tool, but also a very mature DevOps platform. GitHub Actions are very practical and easy to use, which makes them a great choice for smaller projects and for getting started with continuous integration. However, for projects at a large scale they are still not as powerful as Azure Pipelines or other enterprise solutions from cloud providers.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>devops</category>
      <category>github</category>
      <category>docker</category>
    </item>
    <item>
      <title>Running Dapr on Kubernetes</title>
      <dc:creator>Ivan Cvitkovic</dc:creator>
      <pubDate>Tue, 07 Sep 2021 10:53:46 +0000</pubDate>
      <link>https://dev.to/cvitaa11/running-dapr-on-kubernetes-89g</link>
      <guid>https://dev.to/cvitaa11/running-dapr-on-kubernetes-89g</guid>
      <description>&lt;p&gt;The distributed application runtime, Dapr, is a portable, event-driven runtime that can run on the cloud or any edge infrastructure. It puts together the best practices for building microservice applications into components called building blocks.&lt;/p&gt;

&lt;p&gt;Each building block is completely independent so you can use one, some, or all of them in your application. Building blocks are extensible, so you can also write your own.&lt;/p&gt;

&lt;p&gt;Dapr supports a wide range of programming languages and frameworks such as .NET, Java, Node.js, Go and Python. That means you can write microservice apps using your favorite tools and deploy them literally anywhere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.dapr.io%2Fimages%2Foverview.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.dapr.io%2Fimages%2Foverview.png" alt="Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Basically, building blocks are just HTTP or gRPC APIs that can be called from application code and use one or more Dapr components. They abstract some of the major challenges during development, such as service-to-service communication, state management, pub/sub, observability and more. Building blocks do not depend on the underlying technology. This means that if you need to implement, for example, pub/sub functionality, you can use Apache Kafka, RabbitMQ, Redis Streams, Azure Service Bus or any other supported broker that interfaces with Dapr.&lt;/p&gt;

&lt;p&gt;In this example we will show how to run Dapr on a Kubernetes cluster with two .NET applications. The first one will send messages to Apache Kafka, while the second one will read those messages and store them in Redis. Communication with Kafka and Redis will be handled through the Dapr client, which means that we will not have any dependencies on NuGet packages like &lt;code&gt;Confluent.Kafka&lt;/code&gt; or &lt;code&gt;StackExchange.Redis&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Architecture diagram
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcvitaa11%2Fdapr-demo%2Fmain%2FArchitecture_diagram.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fcvitaa11%2Fdapr-demo%2Fmain%2FArchitecture_diagram.jpeg" alt="Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;This demo requires you to have the following installed on your machine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes CLI &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes cluster, such as &lt;a href="https://docs.dapr.io/operations/hosting/kubernetes/cluster/setup-minikube/" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt; or &lt;a href="https://www.docker.com/products/docker-desktop" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, clone the repository and &lt;code&gt;cd&lt;/code&gt; into the right directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/cvitaa11/dapr-demo
cd dapr-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 1 - Setup Dapr on your Kubernetes cluster
&lt;/h4&gt;

&lt;p&gt;The first thing you need is an RBAC enabled Kubernetes cluster. This could be running on your machine using Minikube/Docker Desktop, or it could be a fully-fledged cluster in Azure using &lt;a href="https://azure.microsoft.com/en-us/services/kubernetes-service/" rel="noopener noreferrer"&gt;AKS&lt;/a&gt; or some other managed Kubernetes instance from a different cloud vendor.&lt;/p&gt;

&lt;p&gt;Once you have a cluster, follow the steps below to deploy Dapr to it. For more details, look &lt;a href="https://docs.dapr.io/getting-started/install-dapr/#install-dapr-on-a-kubernetes-cluster" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dapr init -k
⌛  Making the jump to hyperspace...
ℹ️  Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced

✅  Deploying the Dapr control plane to your cluster...
✅  Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;dapr&lt;/code&gt; CLI will exit as soon as the Kubernetes deployments are created. Kubernetes deployments are asynchronous, so you will need to make sure that the Dapr deployments have actually completed before continuing.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2 - Setup Apache Kafka
&lt;/h4&gt;

&lt;p&gt;The easiest way to set up Apache Kafka on your Kubernetes cluster is with the &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; package manager. To install Helm on your development machine, follow this &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;guide&lt;/a&gt;. &lt;br&gt;
We will use the &lt;a href="https://github.com/bitnami/charts" rel="noopener noreferrer"&gt;Bitnami Library for Kubernetes&lt;/a&gt; to launch ZooKeeper and the Kafka message broker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3 - Setup Redis
&lt;/h4&gt;

&lt;p&gt;Just like with Apache Kafka, an easy way to spin up Redis on your Kubernetes cluster is by using Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify the installation of Kafka and Redis, run &lt;code&gt;kubectl get all&lt;/code&gt; and you should see similar output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                             READY   STATUS        RESTARTS   AGE
pod/my-release-kafka-0           1/1     Running       0          18m
pod/my-release-zookeeper-0       1/1     Running       0          18m
pod/redis-master-0               1/1     Running       1          11m
pod/redis-slave-0                1/1     Running       1          11m
pod/redis-slave-1                1/1     Running       1          11m

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes                      ClusterIP   10.96.0.1        &amp;lt;none&amp;gt;        443/TCP                      15d
service/my-release-kafka                ClusterIP   10.110.225.238   &amp;lt;none&amp;gt;        9092/TCP                     18m
service/my-release-kafka-headless       ClusterIP   None             &amp;lt;none&amp;gt;        9092/TCP,9093/TCP            18m
service/my-release-zookeeper            ClusterIP   10.99.95.252     &amp;lt;none&amp;gt;        2181/TCP,2888/TCP,3888/TCP   18m
service/my-release-zookeeper-headless   ClusterIP   None             &amp;lt;none&amp;gt;        2181/TCP,2888/TCP,3888/TCP   18m
service/redis-headless                  ClusterIP   None             &amp;lt;none&amp;gt;        6379/TCP                     11m
service/redis-master                    ClusterIP   10.111.109.148   &amp;lt;none&amp;gt;        6379/TCP                     11m
service/redis-slave                     ClusterIP   10.111.66.85     &amp;lt;none&amp;gt;        6379/TCP                     11m

NAME                                    READY   AGE
statefulset.apps/my-release-kafka       1/1     18m
statefulset.apps/my-release-zookeeper   1/1     18m
statefulset.apps/redis-master           1/1     11m
statefulset.apps/redis-slave            2/2     11m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4 - Create Dapr components in Kubernetes cluster
&lt;/h4&gt;

&lt;p&gt;To deploy pub/sub and state store components make sure you are positioned in the right directory and then apply Dapr YAML manifests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd dapr-components
kubectl apply -f .\kafka.yaml
kubectl apply -f .\redis.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
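&lt;p&gt;For orientation, a Dapr pub/sub &lt;code&gt;Component&lt;/code&gt; manifest for Kafka looks roughly like this; the component name and broker address below are assumptions based on the Bitnami release name used above, so check the repository for the actual &lt;code&gt;kafka.yaml&lt;/code&gt;:&lt;/p&gt;

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus          # hypothetical component name
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      # Service created by the Bitnami chart in the default namespace
      value: "my-release-kafka.default.svc.cluster.local:9092"
    - name: authRequired
      value: "false"
```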



&lt;h4&gt;
  
  
  Step 5 - Deploy .NET Core applications
&lt;/h4&gt;

&lt;p&gt;Now that all the prerequisites are ready, we can deploy our apps. To deploy the .NET Core publisher and consumer applications, make sure you are positioned in the right directory and then apply the Kubernetes manifests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd k8s
kubectl apply -f .\publisher.yaml
kubectl apply -f .\consumer.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each manifest contains a &lt;code&gt;Deployment&lt;/code&gt; object for the application and a &lt;code&gt;Service&lt;/code&gt; object for accessing the application through a browser.&lt;/p&gt;

&lt;p&gt;Navigate to &lt;code&gt;localhost:8081/swagger&lt;/code&gt; and you will see our publisher app with a POST method on &lt;code&gt;MessageController&lt;/code&gt;. This action sends a message to the &lt;em&gt;newMessage&lt;/em&gt; topic on the Kafka pub/sub component. Communication between the application and the message broker is not performed directly: Dapr runs as a sidecar container inside the publisher pod and handles the entire process of sending the message.&lt;/p&gt;

&lt;p&gt;Our consumer application is running on &lt;code&gt;localhost:9091&lt;/code&gt; and is subscribed to the &lt;em&gt;newMessage&lt;/em&gt; topic on the Kafka pub/sub component. When a new message arrives, it reads the content and, through the Dapr client, saves it to the Redis state store under the key &lt;em&gt;message&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;To test the entire process we can run a Redis client pod and check whether the content is stored. First we will export the password to the REDIS_PASSWORD variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export REDIS_PASSWORD=$(kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
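&lt;p&gt;The value stored in a Kubernetes secret is base64-encoded, which is why the command above pipes it through &lt;code&gt;base64 --decode&lt;/code&gt;. A standalone illustration with a made-up password:&lt;/p&gt;

```shell
# Kubernetes stores secret values base64-encoded;
# decoding recovers the original string
ENCODED=$(printf 's3cretPass' | base64)
echo "$ENCODED"                            # czNjcmV0UGFzcw==
printf '%s' "$ENCODED" | base64 --decode   # s3cretPass
```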



&lt;p&gt;Then run the client with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run --namespace default redis-client --rm --tty -i --restart='Never' \
    --env REDIS_PASSWORD=$REDIS_PASSWORD \
   --image docker.io/bitnami/redis:6.0.12-debian-10-r3 -- bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and connect using Redis CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-cli -h redis-master -a $REDIS_PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that you are connected to Redis, you can use the command &lt;code&gt;HGETALL message&lt;/code&gt;, which will return the content of the message we sent to Kafka. With this we have confirmed that the whole process works.&lt;/p&gt;

&lt;p&gt;If you want to find out more about Dapr, the best place to start is the official &lt;a href="https://dapr.io/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>dotnet</category>
      <category>microservices</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
