<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Siri Varma Vegiraju</title>
    <description>The latest articles on DEV Community by Siri Varma Vegiraju (@sirivarma).</description>
    <link>https://dev.to/sirivarma</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2969517%2F0b2892c6-84fc-44dc-b76f-83ba6aa6a52a.jpeg</url>
      <title>DEV Community: Siri Varma Vegiraju</title>
      <link>https://dev.to/sirivarma</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sirivarma"/>
    <language>en</language>
    <item>
      <title>Docker Scout Commands</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Wed, 20 Aug 2025 05:03:01 +0000</pubDate>
      <link>https://dev.to/sirivarma/docker-scout-commands-4p9d</link>
      <guid>https://dev.to/sirivarma/docker-scout-commands-4p9d</guid>
      <description>&lt;h2&gt;
  
  
  Docker Scout Overview
&lt;/h2&gt;

&lt;p&gt;Docker Scout is a solution for proactively enhancing your software supply chain security. By analyzing your images, Docker Scout compiles an inventory of components, also known as a Software Bill of Materials (SBOM). The SBOM is matched against a continuously updated vulnerability database to identify security vulnerabilities and provide actionable insights for improving your container security posture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Scout Commands
&lt;/h2&gt;

&lt;p&gt;Docker Scout provides 18 subcommands for various security analysis and management tasks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Analysis Commands:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout quickview&lt;/code&gt;&lt;/strong&gt; - Displays a quick overview of an image, summarizing its vulnerabilities and those inherited from the base image. If available, it also shows base image refresh and update recommendations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout cves&lt;/code&gt;&lt;/strong&gt; - Analyzes a software artifact for vulnerabilities. If no image is specified, the most recently built image is used.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout compare&lt;/code&gt;&lt;/strong&gt; - Compares two images and displays the differences in vulnerabilities and components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout sbom&lt;/code&gt;&lt;/strong&gt; - Generates or analyzes the Software Bill of Materials for an image.&lt;/li&gt;
&lt;/ul&gt;
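
&lt;p&gt;As a quick sketch, the core analysis commands can be combined like this (&lt;code&gt;nginx:latest&lt;/code&gt; and the comparison tag are placeholder choices):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Summarize vulnerabilities in an image and its base image
docker scout quickview nginx:latest

# List CVEs, filtered to the most severe findings
docker scout cves --only-severity critical,high nginx:latest

# Compare a candidate image against a previously released tag
docker scout compare --to nginx:1.25 nginx:latest

# Print the SBOM as a human-readable package list
docker scout sbom --format list nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;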

&lt;p&gt;&lt;strong&gt;Management and Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout config&lt;/code&gt;&lt;/strong&gt; - Configure Docker Scout settings and organization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout enroll&lt;/code&gt;&lt;/strong&gt; - Enroll repositories for Docker Scout analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout push&lt;/code&gt;&lt;/strong&gt; - Push analysis results to Docker Scout&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout cache&lt;/code&gt;&lt;/strong&gt; - Manage local analysis cache&lt;/li&gt;
&lt;/ul&gt;
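
&lt;p&gt;A typical onboarding flow with these commands might look like the following (the organization and repository names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Point the CLI at your Docker organization
docker scout config organization my-org

# Enroll the organization with Docker Scout
docker scout enroll my-org

# Enable analysis for a specific repository
docker scout repo enable --org my-org my-org/my-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;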

&lt;p&gt;&lt;strong&gt;Advanced Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout policy&lt;/code&gt;&lt;/strong&gt; - Manage and evaluate security policies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout recommendations&lt;/code&gt;&lt;/strong&gt; - Get actionable recommendations for improving security&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout attestation&lt;/code&gt;&lt;/strong&gt; - Work with supply chain attestations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout environment&lt;/code&gt;&lt;/strong&gt; - Manage environments for policy evaluation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout integration&lt;/code&gt;&lt;/strong&gt; - Manage third-party integrations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout repo&lt;/code&gt;&lt;/strong&gt; - Repository management commands&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout stream&lt;/code&gt;&lt;/strong&gt; - Stream analysis data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout watch&lt;/code&gt;&lt;/strong&gt; - Monitor repositories for changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker scout version&lt;/code&gt;&lt;/strong&gt; - Display version information&lt;/li&gt;
&lt;/ul&gt;
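
&lt;p&gt;For example, the recommendation and policy commands can be run directly against an image (the image and organization names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Suggest base image refreshes and updates
docker scout recommendations nginx:latest

# Evaluate the image against your organization's policies
docker scout policy --org my-org nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;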

&lt;p&gt;The CLI provides both local analysis capabilities and integration with Docker's cloud-based Scout service for comprehensive vulnerability management across your container ecosystem.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Docker 4.44 is here</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Tue, 19 Aug 2025 04:35:01 +0000</pubDate>
      <link>https://dev.to/sirivarma/docker-444-is-here-25gb</link>
      <guid>https://dev.to/sirivarma/docker-444-is-here-25gb</guid>
      <description>&lt;h2&gt;
  
  
  AI/ML Enhancements
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Docker Model Runner improvements&lt;/strong&gt; include an inspector for AI inference requests and responses, allowing developers to debug model behavior by examining HTTP payloads, prompts, and outputs. The update also adds real-time resource checks to prevent system freezes when running multiple models concurrently, with warnings for GPU and memory constraints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expanded MCP (Model Context Protocol) support&lt;/strong&gt; now includes Goose and Gemini CLI as clients, providing one-click access to over 140 MCP servers like GitHub, Postgres, and Neo4j through the Docker MCP Catalog.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Experience
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;New Kubernetes CLI command&lt;/strong&gt; (&lt;code&gt;docker desktop kubernetes&lt;/code&gt;) allows managing Kubernetes clusters directly from the Docker Desktop CLI without switching between tools or UI screens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improved Settings search&lt;/strong&gt; helps users find configurations faster without navigating through multiple menus.&lt;/p&gt;

&lt;h2&gt;
  
  
  Platform Performance
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Apple Virtualization&lt;/strong&gt; is now the default backend on macOS (QEMU support fully removed), delivering better performance with faster cold starts and more efficient memory management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WSL2 improvements&lt;/strong&gt; on Windows include reduced memory consumption, smarter CPU throttling for idle containers, and better stability for graphics-heavy workloads.&lt;/p&gt;

&lt;p&gt;The release focuses on enhanced reliability, better AI development tools, and streamlined workflows to help developers build and test applications more efficiently, particularly those working with AI/ML models.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Docker AI and MCP commands</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Mon, 11 Aug 2025 04:43:26 +0000</pubDate>
      <link>https://dev.to/sirivarma/docker-ai-and-mcp-commands-2n3l</link>
      <guid>https://dev.to/sirivarma/docker-ai-and-mcp-commands-2n3l</guid>
      <description>&lt;h2&gt;
  
  
  Core Docker AI Commands
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;docker ai&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The primary command to interact with Gordon (Docker's AI agent) from the terminal. Gordon looks for a &lt;code&gt;gordon-mcp.yml&lt;/code&gt; file in your working directory to determine which MCP servers to use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Usage:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ai &lt;span class="s2"&gt;"your question or request"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  MCP Configuration File
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;gordon-mcp.yml&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;A Docker Compose configuration file that defines MCP servers for Gordon to use. This file should be placed in your working directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example structure:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mcp-time&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mcp/time&lt;/span&gt;
    &lt;span class="c1"&gt;# Additional MCP server configurations&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Docker MCP Toolkit Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  MCP Toolkit Extension
&lt;/h3&gt;

&lt;p&gt;An extension available in Docker Desktop Dashboard that lets you run MCP-enabled tools as containers behind an MCP Server proxy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browse and install MCP servers from the Docker MCP Catalog&lt;/li&gt;
&lt;li&gt;Configure MCP servers with necessary credentials&lt;/li&gt;
&lt;li&gt;Enable/disable MCP tools for your AI workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Available MCP Servers
&lt;/h3&gt;

&lt;p&gt;Some notable MCP servers available include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Hub MCP Server&lt;/strong&gt; - Allows interaction with Docker Hub, requires Docker Hub username and personal access token for configuration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PostgreSQL MCP Server&lt;/strong&gt; (&lt;code&gt;mcp/postgres&lt;/code&gt;) - Enables AI assistants to interact directly with PostgreSQL databases, allowing database-aware conversations, queries, and schema modifications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Time MCP Server&lt;/strong&gt; (&lt;code&gt;mcp/time&lt;/code&gt;) - Provides time-related functionality, as shown in the example where Gordon can answer time queries for different locations&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
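
&lt;p&gt;A &lt;code&gt;gordon-mcp.yml&lt;/code&gt; that wires up two of these servers might look like this (the Postgres connection string is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  time:
    image: mcp/time
  postgres:
    image: mcp/postgres
    command: postgresql://user:password@host.docker.internal:5432/mydb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;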

&lt;h2&gt;
  
  
  How MCP Works with Docker AI
&lt;/h2&gt;

&lt;p&gt;MCP functions as a client-server protocol where the client (like Gordon) sends requests, and the server processes those requests to deliver necessary context to the AI. When Gordon uses MCP, you'll see output indicating it's calling the MCP server's tools, such as "Calling get_current_time ✔️".&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation and Setup
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Ensure Docker Desktop is installed and updated&lt;/li&gt;
&lt;li&gt;Add the MCP Toolkit Extension to the Docker Desktop Dashboard&lt;/li&gt;
&lt;li&gt;Browse available MCP servers in the Catalog tab&lt;/li&gt;
&lt;li&gt;Configure servers with necessary credentials&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;gordon-mcp.yml&lt;/code&gt; file in your project directory&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;docker ai&lt;/code&gt; commands to interact with Gordon&lt;/li&gt;
&lt;/ol&gt;
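
&lt;p&gt;With the file in place, the final step is just a matter of asking Gordon a question from the same directory (the project name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cd my-project   # directory containing gordon-mcp.yml
docker ai "what time is it in Tokyo?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;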

</description>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Dapr support with Postgres</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Fri, 08 Aug 2025 05:47:11 +0000</pubDate>
      <link>https://dev.to/sirivarma/dapr-support-with-postgres-1e3b</link>
      <guid>https://dev.to/sirivarma/dapr-support-with-postgres-1e3b</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Dapr supports using &lt;strong&gt;PostgreSQL&lt;/strong&gt; as a &lt;strong&gt;configuration store&lt;/strong&gt; component of type &lt;code&gt;configuration.postgresql&lt;/code&gt;, with a &lt;strong&gt;stable API (v1)&lt;/strong&gt; since Dapr runtime version &lt;strong&gt;1.11&lt;/strong&gt; ([Dapr Docs][1]).&lt;/p&gt;




&lt;h2&gt;
  
  
  Component Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Component Definition
&lt;/h3&gt;

&lt;p&gt;You create a Dapr component manifest like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dapr.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Component&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_NAME&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;configuration.postgresql&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
  &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connectionString&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;connection&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;string&amp;gt;"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;table&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;your_configuration_table_name&amp;gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# Optional metadata fields include:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;timeout&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;30s"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;maxConns&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connectionMaxIdleTime&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5m"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;queryExecMode&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;simple_protocol"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key metadata options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;connectionString&lt;/code&gt;&lt;/strong&gt; (required): Standard PostgreSQL connection string, allowing pool configuration parameters ([Dapr Docs][2]).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;table&lt;/code&gt;&lt;/strong&gt; (required): Name of the table to store configuration entries ([Dapr Docs][2]).&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Optional tuning metadata:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;timeout&lt;/code&gt; (database operation timeout)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;maxConns&lt;/code&gt; (connection pool size)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;connectionMaxIdleTime&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;queryExecMode&lt;/code&gt;: useful for compatibility with certain proxies like PgBouncer ([Dapr Docs][2]).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Database Schema &amp;amp; Triggers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Table Schema
&lt;/h3&gt;

&lt;p&gt;You must create a table with the following structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;VALUE&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;VERSION&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;METADATA&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;KEY&lt;/code&gt;: configuration attribute key&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VALUE&lt;/code&gt;: associated value&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VERSION&lt;/code&gt;: version identifier&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;METADATA&lt;/code&gt;: optional JSON metadata ([Dapr Docs][2])&lt;/li&gt;
&lt;/ul&gt;
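
&lt;p&gt;For instance, a single configuration entry could be seeded like this (the table name, key, and metadata are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;INSERT INTO configtable (KEY, VALUE, VERSION, METADATA)
VALUES ('maxRetries', '5', '1', '{"source": "ops-team"}');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;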

&lt;h3&gt;
  
  
  Notification Trigger
&lt;/h3&gt;

&lt;p&gt;To enable subscription notifications, define a trigger function and create a trigger:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;OR&lt;/span&gt; &lt;span class="k"&gt;REPLACE&lt;/span&gt; &lt;span class="k"&gt;FUNCTION&lt;/span&gt; &lt;span class="n"&gt;notify_event&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;RETURNS&lt;/span&gt; &lt;span class="k"&gt;TRIGGER&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="err"&gt;$$&lt;/span&gt;
&lt;span class="k"&gt;DECLARE&lt;/span&gt; 
    &lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;notification&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;BEGIN&lt;/span&gt;
    &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TG_OP&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'DELETE'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;THEN&lt;/span&gt;
        &lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;row_to_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;OLD&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;ELSE&lt;/span&gt;
        &lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;row_to_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;NEW&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;END&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;notification&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json_build_object&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                      &lt;span class="s1"&gt;'table'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TG_TABLE_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                      &lt;span class="s1"&gt;'action'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TG_OP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                      &lt;span class="s1"&gt;'data'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;PERFORM&lt;/span&gt; &lt;span class="n"&gt;pg_notify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'config'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;RETURN&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;END&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt;$$&lt;/span&gt; &lt;span class="k"&gt;LANGUAGE&lt;/span&gt; &lt;span class="n"&gt;plpgsql&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TRIGGER&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;
&lt;span class="k"&gt;AFTER&lt;/span&gt; &lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;OR&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="k"&gt;OR&lt;/span&gt; &lt;span class="k"&gt;DELETE&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;configtable&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;EACH&lt;/span&gt; &lt;span class="k"&gt;ROW&lt;/span&gt; &lt;span class="k"&gt;EXECUTE&lt;/span&gt; &lt;span class="k"&gt;PROCEDURE&lt;/span&gt; &lt;span class="n"&gt;notify_event&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will then use the same channel name (e.g., &lt;code&gt;"config"&lt;/code&gt;) in &lt;code&gt;pg_notify&lt;/code&gt; when subscribing ([Dapr Docs][2]).&lt;/p&gt;




&lt;h2&gt;
  
  
  Dapr Configuration API Integration
&lt;/h2&gt;

&lt;p&gt;Dapr provides REST APIs to interact with the configuration store:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Get Configuration&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;GET http://localhost:&amp;lt;daprPort&amp;gt;/v1.0/configuration/&amp;lt;storeName&amp;gt;?key=...&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Returns key-value pairs as JSON ([Dapr Docs][3]).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Subscribe to Changes&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;GET http://localhost:&amp;lt;daprPort&amp;gt;/v1.0/configuration/&amp;lt;storeName&amp;gt;/subscribe?...&amp;amp;metadata.pgNotifyChannel=&amp;lt;channel&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;metadata.pgNotifyChannel&lt;/code&gt; must match the channel used in your PostgreSQL trigger (&lt;code&gt;"config"&lt;/code&gt; in the example) ([Dapr Docs][3]).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Unsubscribe&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;GET http://localhost:&amp;lt;daprPort&amp;gt;/v1.0/configuration/&amp;lt;storeName&amp;gt;/&amp;lt;subscription-id&amp;gt;/unsubscribe&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the subscription ID returned earlier to cancel listening ([Dapr Docs][3]).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
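
&lt;p&gt;Assuming the default Dapr HTTP port (3500), a component named &lt;code&gt;configstore&lt;/code&gt;, and the key seeded above, these calls look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Fetch a single key
curl "http://localhost:3500/v1.0/configuration/configstore?key=maxRetries"

# Subscribe to changes on the channel used by the PostgreSQL trigger
curl "http://localhost:3500/v1.0/configuration/configstore/subscribe?key=maxRetries&amp;amp;metadata.pgNotifyChannel=config"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;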

</description>
      <category>postgres</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Policy as code with Open Policy Agent</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Wed, 06 Aug 2025 05:22:53 +0000</pubDate>
      <link>https://dev.to/sirivarma/policy-as-code-with-open-policy-agent-3fo7</link>
      <guid>https://dev.to/sirivarma/policy-as-code-with-open-policy-agent-3fo7</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Policy as Code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Policy as Code&lt;/strong&gt; is the practice of writing and managing &lt;strong&gt;security, compliance, and operational rules&lt;/strong&gt; in code—just like you manage application code. It allows policies to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated&lt;/strong&gt;: Integrated into CI/CD pipelines and runtime systems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versioned&lt;/strong&gt;: Stored in source control systems (e.g., Git)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tested&lt;/strong&gt;: With unit/integration tests, reducing human error&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audited&lt;/strong&gt;: For traceability and accountability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach promotes &lt;strong&gt;consistency, repeatability&lt;/strong&gt;, and &lt;strong&gt;scalability&lt;/strong&gt; in enforcing rules across infrastructure, Kubernetes, APIs, IAM, and more.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Open Policy Agent (OPA)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;OPA&lt;/strong&gt; is a &lt;strong&gt;general-purpose policy engine&lt;/strong&gt; that lets you enforce fine-grained policies across a wide range of systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses a high-level declarative language called &lt;strong&gt;Rego&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Decouples &lt;strong&gt;policy decisions&lt;/strong&gt; from &lt;strong&gt;policy enforcement&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Can be embedded in services (e.g., microservices, Kubernetes admission controllers, CI/CD pipelines)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Common Use Cases&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes Admission Control (via Gatekeeper)&lt;/li&gt;
&lt;li&gt;API access authorization&lt;/li&gt;
&lt;li&gt;Cloud infrastructure policies (Terraform, CI/CD)&lt;/li&gt;
&lt;li&gt;Data filtering and masking&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Example (OPA Rego Policy):
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rego"&gt;&lt;code&gt;&lt;span class="ow"&gt;package&lt;/span&gt; &lt;span class="n"&gt;httpapi&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;authz&lt;/span&gt;

&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"admin"&lt;/span&gt;
  &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"DELETE"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy allows only &lt;code&gt;admin&lt;/code&gt; users to perform &lt;code&gt;DELETE&lt;/code&gt; operations.&lt;/p&gt;
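
&lt;p&gt;You can exercise the policy locally with the OPA CLI (file names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;echo '{"user": "admin", "method": "DELETE"}' &amp;gt; input.json
opa eval --data policy.rego --input input.json "data.httpapi.authz.allow"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;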




&lt;h3&gt;
  
  
  Why it Matters
&lt;/h3&gt;

&lt;p&gt;OPA and Policy as Code are central to &lt;strong&gt;Cloud Native Security&lt;/strong&gt;, &lt;strong&gt;Zero Trust Architecture&lt;/strong&gt;, and &lt;strong&gt;automated compliance&lt;/strong&gt; in modern DevSecOps environments.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>programming</category>
    </item>
    <item>
      <title>How Docker MCP helps discover containerized MCP servers</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Fri, 01 Aug 2025 04:27:37 +0000</pubDate>
      <link>https://dev.to/sirivarma/how-docker-mcp-helps-discover-containerized-mcp-serviers-394e</link>
      <guid>https://dev.to/sirivarma/how-docker-mcp-helps-discover-containerized-mcp-serviers-394e</guid>
      <description>&lt;p&gt;Docker has launched the &lt;strong&gt;Docker MCP Catalog and Toolkit&lt;/strong&gt; in beta to address key challenges in the Model Context Protocol (MCP) ecosystem for AI agents. Here's what's new:&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker MCP Catalog
&lt;/h3&gt;

&lt;p&gt;The Docker MCP Catalog is now integrated into Docker Hub and serves as a centralized discovery platform for MCP servers. It features over 100 curated, containerized MCP servers from trusted partners including Stripe, Elastic, Heroku, Pulumi, Grafana Labs, Kong Inc., Neo4j, New Relic, and Continue.dev. The catalog makes it easy to find official, trustworthy MCP tools in one place rather than searching across fragmented registries and community lists.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container MCP Approach
&lt;/h3&gt;

&lt;p&gt;By packaging MCP servers as Docker containers, the solution eliminates common deployment challenges like runtime setup, dependency conflicts, and environment inconsistencies. Containers provide built-in security features, isolation, and sandboxing that traditional MCP installations lack. This containerized approach also enables better protection against emerging threats specific to MCP servers, such as Tool Poisoning and Tool Rug Pull attacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP Toolkit Features
&lt;/h3&gt;

&lt;p&gt;The MCP Toolkit, available as a Docker Desktop extension, provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple Installation&lt;/strong&gt;: One-click setup and management of MCP servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Authentication&lt;/strong&gt;: Built-in OAuth support and secure credential storage, eliminating the need for plaintext environment variables&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Integration&lt;/strong&gt;: Seamless connection to popular MCP clients like Gordon (Docker AI Agent), Claude, Cursor, VSCode, Windsurf, Continue.dev, and Goose&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Features&lt;/strong&gt;: Access control, policy enforcement, and audit capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Benefits
&lt;/h3&gt;

&lt;p&gt;The containerized MCP approach solves three major pain points in the current MCP ecosystem: fragmented discovery of trustworthy tools, complex installations with dependency conflicts, and inadequate security with full host access. Docker's solution provides a secure, scalable foundation that inherits Docker's trusted container security model while simplifying the developer experience for building AI agents with external tool integrations.&lt;/p&gt;

&lt;p&gt;The beta is available now through Docker Desktop's extensions menu, with plans to expand enterprise features and enable custom MCP sharing through Docker Hub.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Docker Offload is Simply Amazing</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Tue, 29 Jul 2025 05:30:11 +0000</pubDate>
      <link>https://dev.to/sirivarma/docker-offload-is-simply-amazing-15l0</link>
      <guid>https://dev.to/sirivarma/docker-offload-is-simply-amazing-15l0</guid>
      <description>&lt;h2&gt;
  
  
  🚀 What Is Docker Offload?
&lt;/h2&gt;

&lt;p&gt;Docker Offload is a &lt;strong&gt;beta feature&lt;/strong&gt; of Docker Desktop (version 4.43+), allowing you to &lt;strong&gt;offload Docker image builds and container execution to remote cloud infrastructure&lt;/strong&gt; while maintaining your familiar local development workflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔑 Core Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Remote builds &amp;amp; runs&lt;/strong&gt;: Your &lt;code&gt;docker build&lt;/code&gt; and &lt;code&gt;docker run&lt;/code&gt; commands execute on cloud-based BuildKit instances or managed container environments. Builds and GPU workloads run remotely, not on your local machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU support&lt;/strong&gt;: Optionally run on NVIDIA L4 GPU instances—ideal for machine learning, AI inferencing, or media processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ephemeral environments&lt;/strong&gt;: A fresh cloud session is provisioned for each user session and automatically torn down after ~5 minutes of inactivity, cleaning up containers, images, and volumes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared build cache&lt;/strong&gt;: Persistent cache across builds and team members speeds up build performance and avoids redundant downloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local-like UX&lt;/strong&gt;: Port forwarding, bind mounts, and accessing containers via &lt;code&gt;localhost&lt;/code&gt; work just like local Docker. No change to Dockerfiles or commands is required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure communication&lt;/strong&gt;: Docker Desktop connects to cloud builders over encrypted tunnels, using credentials and secure access flows.&lt;/li&gt;
&lt;/ul&gt;
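
To see the local-like UX in practice, here is a minimal sketch, assuming an active Offload session (started with docker offload start) and the name offload-demo chosen for illustration: the container runs in the cloud, yet its published port answers on your machine's localhost.

```shell
# Runs remotely, but the published port is forwarded to your machine
docker run --rm -d -p 8080:80 --name offload-demo nginx

# The cloud container responds on localhost, just like a local one
curl -s http://localhost:8080 | head -n 4

# Clean up the demo container
docker rm -f offload-demo
```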




&lt;h2&gt;
  
  
  ⚙ How to Get Started (Quickstart)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Install &lt;strong&gt;Docker Desktop 4.43 or later&lt;/strong&gt; and sign up for the Offload beta.&lt;/li&gt;
&lt;li&gt;Run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker offload start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow prompts to select your account and enable optional GPU support.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Check status with:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker offload status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And verify context using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker context &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see a cloud-based context—often named &lt;code&gt;docker-cloud&lt;/code&gt;—active once offloading is enabled.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Run containers normally, for example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; hello-world
   docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--gpus&lt;/span&gt; all hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The containers execute in the cloud but behave just like local ones (even ports map to &lt;code&gt;localhost&lt;/code&gt;).&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;When finished, stop the session via:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker offload stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Builds and runs revert to local execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  📊 Why Use Docker Offload?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed and scale&lt;/strong&gt;: Ideal for resource-intensive builds and workloads that would overwhelm a local machine (e.g. monorepos, large Node dependencies).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: Ensures identical builds across developers and CI environments without managing custom infrastructure or Docker-in-Docker setups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost flexibility&lt;/strong&gt;: You get 300 minutes of free usage to start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardware offloading&lt;/strong&gt;: Keep your laptop cool and responsive while heavy workloads run remotely.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  ✅ Summary
&lt;/h3&gt;

&lt;p&gt;Docker Offload is a powerful way to run container workloads and builds in the cloud—with GPU support, managed infrastructure, and a consistent interface—while preserving your local development habits. It’s ideal for heavy-duty workflows, complex builds, or low-power environments, offering major performance gains with minimal setup changes.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Setting up Azure Container Apps and Dapr</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Mon, 28 Jul 2025 04:38:49 +0000</pubDate>
      <link>https://dev.to/sirivarma/setting-up-azure-container-apps-and-dapr-2pjh</link>
      <guid>https://dev.to/sirivarma/setting-up-azure-container-apps-and-dapr-2pjh</guid>
      <description>&lt;h1&gt;
  
  
  Getting Started with Azure Container Apps and Dapr
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Azure Container Apps&lt;/strong&gt; is a serverless container platform that enables you to deploy microservices without managing complex infrastructure. When combined with &lt;strong&gt;Dapr (Distributed Application Runtime)&lt;/strong&gt;, it unlocks powerful capabilities like service invocation, pub/sub messaging, state management, and more—ideal for building resilient, cloud-native apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" rel="noopener noreferrer"&gt;Azure CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.dapr.io/getting-started/install-dapr-cli/" rel="noopener noreferrer"&gt;Dapr CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A GitHub repo or container image (e.g., from Docker Hub)&lt;/li&gt;
&lt;li&gt;Azure Subscription&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Enable Azure CLI Extensions
&lt;/h2&gt;

&lt;p&gt;Install the required extensions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az extension add &lt;span class="nt"&gt;--name&lt;/span&gt; containerapp &lt;span class="nt"&gt;--upgrade&lt;/span&gt;
az provider register &lt;span class="nt"&gt;--namespace&lt;/span&gt; Microsoft.App
az provider register &lt;span class="nt"&gt;--namespace&lt;/span&gt; Microsoft.OperationalInsights
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 2: Create a Resource Group and Environment
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az group create &lt;span class="nt"&gt;--name&lt;/span&gt; dapr-app-rg &lt;span class="nt"&gt;--location&lt;/span&gt; westus

az containerapp &lt;span class="nb"&gt;env &lt;/span&gt;create &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; dapr-env &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resource-group&lt;/span&gt; dapr-app-rg &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--location&lt;/span&gt; westus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 3: Deploy a Container App with Dapr Enabled
&lt;/h2&gt;

&lt;p&gt;Deploy a sample app with Dapr sidecar:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az containerapp create &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; dapr-service &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resource-group&lt;/span&gt; dapr-app-rg &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--environment&lt;/span&gt; dapr-env &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image&lt;/span&gt; ghcr.io/dapr/samples/hello-k8s-node:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target-port&lt;/span&gt; 3000 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ingress&lt;/span&gt; external &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--enable-dapr&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dapr-app-id&lt;/span&gt; nodeapp &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dapr-app-port&lt;/span&gt; 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;--enable-dapr&lt;/code&gt; attaches the Dapr sidecar to the app, &lt;code&gt;--dapr-app-id&lt;/code&gt; sets the identifier other services use for invocation, and &lt;code&gt;--dapr-app-port&lt;/code&gt; tells Dapr which port your app listens on.&lt;/p&gt;


&lt;h2&gt;
  
  
  Step 4: Test Dapr Service Invocation
&lt;/h2&gt;

&lt;p&gt;To invoke the service from another app or tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://&amp;lt;container-app-url&amp;gt;/ &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"dapr-app-id: nodeapp"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 5: Add Pub/Sub or State Store (Optional)
&lt;/h2&gt;

&lt;p&gt;Attach a pub/sub component or state store by uploading a Dapr component YAML file to your Container App environment via the Azure Portal or Azure CLI. For example, use Azure Storage, Service Bus, or Redis as backends.&lt;/p&gt;
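
As a sketch, a state store component file for Azure Container Apps might look like the YAML below. Note that Container Apps uses its own component schema (not the standalone Dapr Component CRD), the account and container names are placeholders, and credentials are omitted:

```yaml
# statestore.yaml - Azure Container Apps Dapr component (illustrative values)
componentType: state.azure.blobstorage
version: v1
metadata:
  - name: accountName
    value: mystorageaccount   # placeholder storage account
  - name: containerName
    value: mycontainer        # placeholder blob container
scopes:
  - nodeapp                   # only this Dapr app can use the component
```

It can then be attached with az containerapp env dapr-component set --name dapr-env --resource-group dapr-app-rg --dapr-component-name statestore --yaml statestore.yaml (verify the exact flags against your containerapp CLI extension version).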




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;By enabling Dapr in Azure Container Apps, you can focus on building scalable microservices without worrying about infrastructure. You get out-of-the-box service discovery, retries, pub/sub, and state—making your app more robust and cloud-native.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>dapr</category>
    </item>
    <item>
      <title>Dapr and Grafana for Metrics</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Fri, 25 Jul 2025 06:27:57 +0000</pubDate>
      <link>https://dev.to/sirivarma/dapr-and-grafana-for-metrics-4jk7</link>
      <guid>https://dev.to/sirivarma/dapr-and-grafana-for-metrics-4jk7</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus&lt;/strong&gt; must be set up and scraping Dapr metrics before using Grafana (&lt;a href="https://docs.dapr.io/operations/observability/metrics/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup on Kubernetes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Install Grafana
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add the Helm repo:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  helm repo add grafana https://grafana.github.io/helm-charts
  helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install Grafana in the &lt;code&gt;dapr-monitoring&lt;/code&gt; namespace:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  helm &lt;span class="nb"&gt;install &lt;/span&gt;grafana grafana/grafana &lt;span class="nt"&gt;-n&lt;/span&gt; dapr-monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Optional:&lt;/em&gt; For development or Minikube, disable persistent volumes:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nt"&gt;--set&lt;/span&gt; persistence.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Retrieve the Grafana admin password from its Kubernetes secret and base64-decode it; drop any trailing &lt;code&gt;%&lt;/code&gt; your shell appends to the output (&lt;a href="https://docs.dapr.io/operations/observability/metrics/grafana/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Confirm the Grafana and Prometheus pods are running via &lt;code&gt;kubectl get pods -n dapr-monitoring&lt;/code&gt; (&lt;a href="https://docs.dapr.io/operations/observability/metrics/grafana/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
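
The password lookup above can be sketched as follows, assuming the Helm release (and therefore the secret) is named grafana:

```shell
# Read the admin password from the Kubernetes secret and base64-decode it
kubectl get secret -n dapr-monitoring grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode
```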

&lt;h3&gt;
  
  
  2. Configure Prometheus Data Source in Grafana
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Port-forward the Grafana service to &lt;code&gt;localhost:8080&lt;/code&gt;, then navigate to &lt;code&gt;http://localhost:8080&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl port-forward svc/grafana 8080:80 &lt;span class="nt"&gt;-n&lt;/span&gt; dapr-monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Log in with user &lt;code&gt;admin&lt;/code&gt; and the decoded password&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Configuration → Data Sources&lt;/strong&gt;, add &lt;strong&gt;Prometheus&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Set the &lt;strong&gt;HTTP URL&lt;/strong&gt; to the Prometheus server endpoint, e.g. &lt;code&gt;http://dapr-prom-prometheus-server.dapr-monitoring&lt;/code&gt;, based on the service name and namespace (&lt;a href="https://docs.dapr.io/operations/observability/metrics/grafana/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Mark it as the default data source and enable &lt;strong&gt;Skip TLS Verify&lt;/strong&gt; so the connection test can succeed&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save &amp;amp; Test&lt;/strong&gt; to confirm the data source is connected (&lt;a href="https://docs.dapr.io/operations/observability/metrics/grafana/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Import Dashboards
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;From Grafana home screen, click &lt;strong&gt;"+ → Import"&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Upload the &lt;code&gt;.json&lt;/code&gt; dashboard file(s) corresponding to your Dapr version&lt;/li&gt;
&lt;li&gt;Available dashboard templates include:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System Service&lt;/strong&gt;: shows control-plane components like operator, injector, sentry, placement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sidecars&lt;/strong&gt;: shows health, resource use, throughput/latency (HTTP, gRPC), mTLS, Actor metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actors&lt;/strong&gt;: shows actor invocation metrics, timers, reminders, concurrency usage (&lt;a href="https://v1-14.docs.dapr.io/zh-hans/operations/observability/metrics/grafana/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;After importing, locate and open the dashboard(s) to begin visualizing metrics&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Observability Insights
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Dapr sidecars and control-plane services expose &lt;strong&gt;Prometheus-formatted metrics&lt;/strong&gt;, which Prometheus scrapes on default ports (&lt;code&gt;9090&lt;/code&gt; for sidecar, &lt;code&gt;9091&lt;/code&gt; for control-plane) (&lt;a href="https://dev.to/sirivarma/using-dapr-and-opentelemetry-for-metrics-1jj"&gt;DEV Community&lt;/a&gt;, &lt;a href="https://docs.dapr.io/operations/observability/metrics/metrics-overview/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Sidecar metrics include latency for service invocation, state/store calls, pub-sub, memory usage, error rates&lt;/li&gt;
&lt;li&gt;Control-plane metrics include CPU usage, actor placements, injection failures, etc. (&lt;a href="https://dev.to/sirivarma/using-dapr-and-opentelemetry-for-metrics-1jj"&gt;DEV Community&lt;/a&gt;, &lt;a href="https://docs.dapr.io/concepts/observability-concept/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary &amp;amp; Tips
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Install Prometheus and Grafana&lt;/strong&gt; (via Helm)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connect Grafana to Prometheus&lt;/strong&gt; as a data source&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Import pre-built dashboards&lt;/strong&gt; for system services, sidecars, and actor workloads&lt;/li&gt;
&lt;li&gt;Use Grafana to visualize key metrics: latency, throughput, resource usage, failures, actor behavior&lt;/li&gt;
&lt;li&gt;Hover over the "i" icons inside Grafana charts for descriptions of what each metric means (&lt;a href="https://docs.dapr.io/operations/observability/metrics/grafana/" rel="noopener noreferrer"&gt;Dapr Docs&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloud</category>
      <category>dapr</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Intro to OpenTelemetry Weaver</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Tue, 22 Jul 2025 06:15:50 +0000</pubDate>
      <link>https://dev.to/sirivarma/intro-to-opentelemetry-weaver-4om1</link>
      <guid>https://dev.to/sirivarma/intro-to-opentelemetry-weaver-4om1</guid>
      <description>&lt;h3&gt;
  
  
  📌 TL;DR
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;OpenTelemetry Weaver&lt;/strong&gt; is a powerful tool that brings &lt;strong&gt;observability-by-design&lt;/strong&gt; into practice. It empowers teams to standardize, automate, and maintain telemetry data through semantic conventions—offering type safety, validation, documentation, and deployment in one package.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why Weaver Matters
&lt;/h3&gt;

&lt;p&gt;Weaver addresses common issues in telemetry reliability and consistency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Broken alerts&lt;/strong&gt; due to metric name changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hard-to-read queries&lt;/strong&gt; from inconsistent naming&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Undocumented telemetry&lt;/strong&gt; leading to confusion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missing instrumentation&lt;/strong&gt;, only discovered in production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By treating telemetry signals (traces, metrics, logs) like code APIs, Weaver ensures they are versioned, consistent, and documented upfront.&lt;/p&gt;




&lt;h3&gt;
  
  
  Core Features
&lt;/h3&gt;

&lt;p&gt;OTel Weaver supports:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Schema Definition&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Define telemetry schemas via semantic conventions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Validation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Validate telemetry against defined schemas manually or in CI.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Type-safe Code Gen&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Generate idiomatic client SDKs (Go, Rust, etc.) from those schemas.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Documentation Gen&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Auto-generate markdown docs for your signals.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Live Checking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Integrate in CI or at runtime to ensure emitted telemetry conforms.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;At its core, Weaver is a CLI and platform tool that uses a schema-first workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Search&lt;/strong&gt; or resolve semantic convention registries and schemas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate&lt;/strong&gt; schemas via the CLI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate client SDKs&lt;/strong&gt;, docs, or dashboards using its built‑in template engine or custom WASM plugins.&lt;/li&gt;
&lt;/ol&gt;
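
The schema-first workflow above maps onto the CLI roughly as follows; the registry path is illustrative, so check weaver --help for the exact flags in your installed version:

```shell
# Check a semantic convention registry for errors
weaver registry check -r ./my-registry

# Resolve the registry into a self-contained schema
weaver registry resolve -r ./my-registry

# Generate artifacts (e.g. markdown docs) from the registry via a template target
weaver registry generate -r ./my-registry markdown
```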




&lt;h3&gt;
  
  
  Current Maturity &amp;amp; Roadmap
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Versioned as CLI v0.x, Weaver is &lt;strong&gt;production-ready&lt;/strong&gt; with active releases (v0.16.1 as of July 4, 2025)&lt;/li&gt;
&lt;li&gt;Future roadmap includes expanding language support, integrating OTel Arrow Protocol, adding SDK masking, obfuscation, and a richer ecosystem with WASM plugins for data catalogs, privacy compliance, dashboards, and more.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Learn More
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=ReZzjR8Anrs&amp;amp;utm_source=chatgpt.com" rel="noopener noreferrer"&gt;Get Better OpenTelemetry with Weaver&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check out the CNCF 2025 presentation &lt;strong&gt;“Observability by Design”&lt;/strong&gt; and the SRECon talk &lt;strong&gt;“OpenTelemetry Semantic Conventions and How to Avoid Broken Observability”&lt;/strong&gt;, both linked in the repo.&lt;/p&gt;




&lt;h3&gt;
  
  
  💡 Quick Take
&lt;/h3&gt;

&lt;p&gt;OTel Weaver elevates telemetry from artistry to engineering discipline. It helps teams define telemetry as a "public API," bringing consistency, traceability, and automation. Ideal for organizations prioritizing high-quality, maintainable observability at scale.&lt;/p&gt;




&lt;p&gt;Source: &lt;a href="https://opentelemetry.io/blog/2025/otel-weaver/" rel="noopener noreferrer"&gt;https://opentelemetry.io/blog/2025/otel-weaver/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudnative</category>
      <category>observability</category>
    </item>
    <item>
      <title>How Dapr Bindings Work?</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Mon, 21 Jul 2025 06:04:11 +0000</pubDate>
      <link>https://dev.to/sirivarma/how-dapr-binding-works--4np9</link>
      <guid>https://dev.to/sirivarma/how-dapr-binding-works--4np9</guid>
      <description>&lt;h3&gt;
  
  
  🔗 &lt;strong&gt;Dapr Bindings Overview Summary&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Bindings&lt;/strong&gt; in Dapr provide a way for applications to interact with external systems—both to &lt;em&gt;ingest events&lt;/em&gt; (input bindings) and to &lt;em&gt;invoke external systems&lt;/em&gt; (output bindings)—using a simple, consistent API.&lt;/p&gt;




&lt;h3&gt;
  
  
  🧩 &lt;strong&gt;Types of Bindings&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Input Bindings&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Trigger the application by receiving events from external systems (e.g., message queues, databases, cloud services).&lt;/li&gt;
&lt;li&gt;Dapr invokes a specified endpoint in the app when a new event arrives.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Output Bindings&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Allow the app to send data to external systems.&lt;/li&gt;
&lt;li&gt;Can be invoked using Dapr SDKs or HTTP/gRPC APIs.&lt;/li&gt;
&lt;/ul&gt;
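
For example, an output binding can be invoked through the sidecar's HTTP API; the binding name myqueue is hypothetical, and 3500 is the default Dapr HTTP port:

```shell
# Send data to an external system via an output binding named "myqueue"
curl -X POST http://localhost:3500/v1.0/bindings/myqueue \
  -H "Content-Type: application/json" \
  -d '{"operation": "create", "data": {"message": "hello"}}'
```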




&lt;h3&gt;
  
  
  🔄 &lt;strong&gt;How Bindings Work&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input&lt;/strong&gt;: External system → Dapr → App&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output&lt;/strong&gt;: App → Dapr → External system&lt;/li&gt;
&lt;li&gt;Bindings are configured via a component YAML file defining the type (e.g., Kafka, HTTP, Cron) and metadata.&lt;/li&gt;
&lt;/ul&gt;
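
A minimal component file, using the cron input binding as an example (the component name and schedule are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: schedule          # Dapr POSTs to /schedule on the app at each trigger
spec:
  type: bindings.cron
  version: v1
  metadata:
    - name: schedule
      value: "@every 30s"  # trigger every 30 seconds
```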




&lt;h3&gt;
  
  
  🧰 &lt;strong&gt;Use Cases&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Trigger functions on a schedule (e.g., &lt;code&gt;cron&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Send messages to systems like Kafka, MQTT&lt;/li&gt;
&lt;li&gt;Respond to cloud events from AWS, Azure, GCP&lt;/li&gt;
&lt;li&gt;Connect with databases, queues, or custom services&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  ✅ &lt;strong&gt;Benefits&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Abstracts complex integrations&lt;/li&gt;
&lt;li&gt;Uniform API across different services&lt;/li&gt;
&lt;li&gt;Event-driven programming with minimal boilerplate&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>dapr</category>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Generating CycloneDX and SPDX format SBOMs using Docker Scout</title>
      <dc:creator>Siri Varma Vegiraju</dc:creator>
      <pubDate>Fri, 18 Jul 2025 05:56:21 +0000</pubDate>
      <link>https://dev.to/sirivarma/generating-cylconedx-and-spdx-format-sboms-using-docker-scout-nh2</link>
      <guid>https://dev.to/sirivarma/generating-cylconedx-and-spdx-format-sboms-using-docker-scout-nh2</guid>
      <description>&lt;h1&gt;
  
  
  Software Bill of Materials (SBOM) Guide with Docker Scout
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What is an SBOM?
&lt;/h2&gt;

&lt;p&gt;A Software Bill of Materials (SBOM) is a comprehensive inventory of all software components, libraries, dependencies, and packages that make up a software application or system. Think of it as an "ingredients list" for your software - just like food labels list ingredients, an SBOM lists all the software components used in your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are SBOMs Important?
&lt;/h2&gt;

&lt;p&gt;SBOMs have become critical for modern software development and security for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security &amp;amp; Vulnerability Management&lt;/strong&gt;: Quickly identify if your software contains vulnerable components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance &amp;amp; Regulatory Requirements&lt;/strong&gt;: Many industries and government contracts now require SBOMs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supply Chain Transparency&lt;/strong&gt;: Understand what third-party code you're using and its origins&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License Compliance&lt;/strong&gt;: Track open-source licenses and ensure compliance with terms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk Assessment&lt;/strong&gt;: Evaluate the security posture of your software stack&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incident Response&lt;/strong&gt;: Rapidly determine if security incidents affect your applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key SBOM Formats
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. SPDX (Software Package Data Exchange)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Industry Standard&lt;/strong&gt;: Developed by the Linux Foundation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Format&lt;/strong&gt;: Available in JSON, YAML, RDF, and tag-value formats&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Case&lt;/strong&gt;: Widely adopted, especially in open-source communities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths&lt;/strong&gt;: Comprehensive license information, mature specification&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. CycloneDX
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Modern Format&lt;/strong&gt;: Designed specifically for application security use cases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Format&lt;/strong&gt;: Available in JSON and XML formats&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Case&lt;/strong&gt;: Popular in DevSecOps and vulnerability management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths&lt;/strong&gt;: Rich vulnerability data, component relationships, build metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. SWID (Software Identification Tags)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Legacy Format&lt;/strong&gt;: Older standard, less commonly used for modern applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Case&lt;/strong&gt;: Primarily for software asset management&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using Docker Scout to Generate SBOMs
&lt;/h2&gt;

&lt;p&gt;Docker Scout is Docker's built-in security and supply chain tool that can generate SBOMs for container images. Here's how to use it:&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic SBOM Generation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate SPDX SBOM (JSON format)&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; spdx my-app:latest

&lt;span class="c"&gt;# Generate CycloneDX SBOM (JSON format)&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; cyclonedx my-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Save SBOMs to Files
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Save SPDX SBOM to file&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; spdx &lt;span class="nt"&gt;--output&lt;/span&gt; my-app-sbom.spdx.json my-app:latest

&lt;span class="c"&gt;# Save CycloneDX SBOM to file&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; cyclonedx &lt;span class="nt"&gt;--output&lt;/span&gt; my-app-sbom.cyclonedx.json my-app:latest

&lt;span class="c"&gt;# Generate XML format for CycloneDX&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; cyclonedx &lt;span class="nt"&gt;--output&lt;/span&gt; my-app-sbom.cyclonedx.xml my-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Advanced Options
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate SBOM for specific platform&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; spdx &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/amd64 my-app:latest

&lt;span class="c"&gt;# Generate SBOM for remote image&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; cyclonedx nginx:alpine

&lt;span class="c"&gt;# Generate SBOM with organization context&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; spdx &lt;span class="nt"&gt;--org&lt;/span&gt; my-org my-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Practical Workflow Example
&lt;/h2&gt;

&lt;p&gt;Here's a typical workflow for integrating SBOM generation into your CI/CD pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Build your Docker image&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; my-app:v1.0.0 &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# 2. Generate SBOMs in both formats&lt;/span&gt;
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; spdx &lt;span class="nt"&gt;--output&lt;/span&gt; artifacts/sbom.spdx.json my-app:v1.0.0
docker scout sbom &lt;span class="nt"&gt;--format&lt;/span&gt; cyclonedx &lt;span class="nt"&gt;--output&lt;/span&gt; artifacts/sbom.cyclonedx.json my-app:v1.0.0

&lt;span class="c"&gt;# 3. Store SBOMs with your artifacts for compliance and security tracking&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What Information is Included?
&lt;/h2&gt;

&lt;p&gt;Docker Scout-generated SBOMs typically include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Package Information&lt;/strong&gt;: Name, version, type (npm, pip, apt, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies&lt;/strong&gt;: Direct and transitive dependencies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File Locations&lt;/strong&gt;: Where components are installed in the container&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Licenses&lt;/strong&gt;: License information for each component&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Checksums&lt;/strong&gt;: File integrity information&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata&lt;/strong&gt;: Build information, timestamps, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Integration with Security Tools
&lt;/h2&gt;

&lt;p&gt;SBOMs generated by Docker Scout can be consumed by various security and compliance tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vulnerability Scanners&lt;/strong&gt;: Import SBOMs to identify known vulnerabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance Tools&lt;/strong&gt;: Verify license compliance and policy adherence&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supply Chain Security&lt;/strong&gt;: Track component provenance and integrity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk Management&lt;/strong&gt;: Assess overall security posture of applications&lt;/li&gt;
&lt;/ul&gt;
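
For instance, a scanner such as Anchore's grype can consume a saved SBOM directly; this sketch assumes grype is installed and the SPDX file from the workflow above exists:

```shell
# Scan a saved SPDX SBOM for known vulnerabilities
grype sbom:./artifacts/sbom.spdx.json
```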

</description>
      <category>docker</category>
      <category>sbom</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
