<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Marques</title>
    <description>The latest articles on DEV Community by Daniel Marques (@dmo2000).</description>
    <link>https://dev.to/dmo2000</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1499363%2F2a1eff26-cc5b-49ff-993c-822c6e81a2d0.jpeg</url>
      <title>DEV Community: Daniel Marques</title>
      <link>https://dev.to/dmo2000</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dmo2000"/>
    <language>en</language>
    <item>
      <title>Using AutoGen to automate wiki content review</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Mon, 26 Jan 2026 00:38:49 +0000</pubDate>
      <link>https://dev.to/dmo2000/using-autogen-to-automate-wiki-content-g8l</link>
      <guid>https://dev.to/dmo2000/using-autogen-to-automate-wiki-content-g8l</guid>
      <description>&lt;p&gt;Using AI to review documentation wikis, identify inconsistencies, and suggest structural improvements. This post explains how to perform this analysis locally without needing API keys. We’ll use AutoGen and Ollama to analyze a documentation wiki, examining both its content and hierarchy, and then ask AI agents to propose improvements.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are AutoGen and Ollama?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AutoGen&lt;/strong&gt; is an open-source, multi-agent framework developed by Microsoft designed to simplify the creation and orchestration of applications powered by Large Language Models (LLMs). It enables developers to create AI agent systems where multiple, specialized agents communicate with each other, use tools, and incorporate human feedback to solve complex tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ollama&lt;/strong&gt; is an open-source tool designed to simplify running and managing Large Language Models (LLMs) directly on your local machine (computer or server). It acts as a bridge between powerful open-source models (such as Llama, Mistral, and Gemma) and your hardware, making it easy to use AI without needing deep technical expertise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;To follow this tutorial, you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ollama&lt;/strong&gt; installed on your local machine. You can download it from &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama's official website&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A sample documentation repository&lt;/strong&gt; (or use your own). In my case, I used the &lt;a href="https://github.com/kubernetes/website" rel="noopener noreferrer"&gt;Kubernetes official documentation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python 3.13&lt;/strong&gt; (note: Python 3.14 may not yet be fully supported by all dependencies)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have these prerequisites, proceed to set up your Python environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;autogen ag2[openai]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;ag2[openai]&lt;/code&gt; extra is required: without it, autogen raises runtime errors when the agents run.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Starting Ollama
&lt;/h2&gt;

&lt;p&gt;First, download and install a model in Ollama. For this tutorial, we'll use the &lt;code&gt;gemma3:4b&lt;/code&gt; model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull gemma3:4b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, &lt;strong&gt;start the Ollama server&lt;/strong&gt;. This step is essential—the Python script will connect to this server at &lt;code&gt;http://localhost:11434/v1&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Ensure the Ollama server is running before executing your Python script. You should see output confirming the server is listening.&lt;/p&gt;
&lt;/blockquote&gt;
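&lt;p&gt;Before running the agents, you can sanity-check that the endpoint is reachable. The sketch below uses only the Python standard library and assumes Ollama's default OpenAI-compatible base URL:&lt;/p&gt;

```python
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://localhost:11434/v1") -> bool:
    """Return True if the Ollama OpenAI-compatible endpoint answers."""
    try:
        with urllib.request.urlopen(base_url + "/models", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if ollama_is_up():
    print("Ollama server is up")
else:
    print("Ollama server is not reachable - run 'ollama serve' first")
```

&lt;p&gt;Running this before the main script gives a clearer error than a connection failure buried in an AutoGen stack trace.&lt;/p&gt;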

&lt;h2&gt;
  
  
  Setting Up the AutoGen Agents
&lt;/h2&gt;

&lt;p&gt;Now, let's create a Python script to set up the AutoGen agents that will analyze the documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Configure the LLM
&lt;/h3&gt;

&lt;p&gt;First, configure the LLM settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;OLLAMA_MODEL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemma3:4b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;OLLAMA_BASE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:11434/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;llm_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;OLLAMA_MODEL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;base_url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;OLLAMA_BASE_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ollama&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Set to 0 for deterministic output
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting &lt;code&gt;temperature&lt;/code&gt; to 0 ensures deterministic, consistent responses from the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create the Content Evaluation Agent
&lt;/h3&gt;

&lt;p&gt;Next, create an agent to evaluate the quality of individual documentation files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AssistantAgent&lt;/span&gt;

&lt;span class="n"&gt;DOC_TYPE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;setup guide&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;DOC_LANGUAGE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;English&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;content_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AssistantAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ContentAgent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm_config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm_config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
You evaluate individual markdown files as follows:
- document type is &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;DOC_TYPE&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
- language is &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;DOC_LANGUAGE&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
- the evaluation should return a score between 0 and 1, where 1 is best
- this is an evaluation task; do not suggest rewrites
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The system prompt makes it clear that this agent should evaluate content, not rewrite it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Execute the Evaluation Prompt
&lt;/h3&gt;

&lt;p&gt;Now, execute the evaluation prompt for each file. The prompt explicitly requires JSON output, which makes it easy to parse results programmatically.&lt;/p&gt;

&lt;p&gt;The following code shows how to do that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="bp"&gt;...&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;encoding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;content_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
You are a documentation-quality evaluator. Evaluate this markdown file and return ONLY valid JSON (either a raw JSON object or a fenced ```json block). Do NOT include any extra text, commentary, or explanations.

Output requirements (MANDATORY):
- Reply with exactly one JSON object with these top-level keys and types:
  - path (string): must equal the provided path.
  - score (number): 0.00 to 1.00 (float). Holistic quality combining clarity, correctness, and completeness. Round to two decimal places.
  - status (string): one of &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WARN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, or &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FAIL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; determined by score as follows:
      - score &amp;gt;= 0.70 -&amp;gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;
      - 0.50 &amp;lt;= score &amp;lt; 0.70 -&amp;gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WARN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;
      - score &amp;lt; 0.50 -&amp;gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FAIL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;
  - notes (string, optional): up to 300 characters with concise diagnostic observations (do NOT include rewritten text or long examples).

Validation rules:
- The &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;path&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; value must exactly match the provided path.
- Numeric fields must be within [0.00, 1.00] and formatted with two decimal places.
- Do not include any additional top-level keys beyond path, score, status, notes.

Example valid response:
{{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;score&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:0.65,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WARN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;notes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Clear structure but missing prerequisites section.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;}}

Input (do not modify):
- path: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
- content: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="n"&gt;reply&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;content_agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate_reply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content_prompt&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="bp"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
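&lt;p&gt;Since the prompt demands JSON, each reply can be parsed right inside the loop. The sketch below tolerates both a raw JSON object and a fenced json code block; it assumes the reply is a string (if your AutoGen version returns a message dict, take its content field first):&lt;/p&gt;

```python
import json
import re

FENCE = "`" * 3  # a literal triple backtick, built this way to avoid nesting fences here

# Matches an optional ```json ... ``` wrapper around a single JSON object.
FENCED_JSON = re.compile(FENCE + r"(?:json)?\s*(\{.*\})\s*" + FENCE, re.DOTALL)

def parse_evaluation(reply: str) -> dict:
    """Extract the JSON object from a model reply, tolerating a fenced code block."""
    text = reply.strip()
    match = FENCED_JSON.search(text)
    if match:
        text = match.group(1)
    return json.loads(text)

reply = FENCE + 'json\n{"path": "docs/setup.md", "score": 0.78, "status": "WARN"}\n' + FENCE
result = parse_evaluation(reply)
print(result["path"])
```

&lt;p&gt;Wrapping the &lt;code&gt;json.loads&lt;/code&gt; call in a try/except lets you mark unparseable replies as failures instead of crashing the whole run.&lt;/p&gt;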

&lt;p&gt;&lt;strong&gt;Note on large files:&lt;/strong&gt; If you're evaluating large documentation files, consider truncating the content to avoid exceeding token limits. Add this before sending the prompt:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
python
MAX_CONTENT_LENGTH = 4000
if len(content) &amp;gt; MAX_CONTENT_LENGTH:
    content = content[:MAX_CONTENT_LENGTH] + "\n... [content truncated] ..."
    # Note this in your prompt so the evaluator knows


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Analyzing the Results
&lt;/h2&gt;

&lt;p&gt;The full code that processes results and generates a markdown report is available in my GitHub repository: &lt;a href="https://github.com/dmo2000/documentation-advises" rel="noopener noreferrer"&gt;documentation-advises&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can find the complete implementation in &lt;a href="https://github.com/dmo2000/documentation-advises/blob/main/doc_review_agents.py" rel="noopener noreferrer"&gt;&lt;code&gt;doc_review_agents.py&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The script generates a markdown report with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Folder &amp;amp; File Moves:&lt;/strong&gt; Structural improvements recommended by the AI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Quality Scores:&lt;/strong&gt; Individual file assessments with status (OK/WARN/FAIL)&lt;/li&gt;
&lt;/ul&gt;
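&lt;p&gt;As a rough idea of how the quality-score section of such a report can be assembled from the parsed evaluations, here is a sketch (the field names match the JSON schema from the evaluation prompt; the table layout is illustrative, not the repository's exact format):&lt;/p&gt;

```python
def build_report(evaluations: list[dict]) -> str:
    """Render evaluation results as a markdown table, worst scores first."""
    lines = ["| Path | Score | Status | Notes |", "|---|---|---|---|"]
    for ev in sorted(evaluations, key=lambda e: e["score"]):
        lines.append(
            f"| {ev['path']} | {ev['score']:.2f} | {ev['status']} | {ev.get('notes', '')} |"
        )
    return "\n".join(lines)

report = build_report([
    {"path": "docs/a.md", "score": 0.91, "status": "OK"},
    {"path": "docs/b.md", "score": 0.42, "status": "FAIL", "notes": "No examples."},
])
print(report)
```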

&lt;p&gt;Example report output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4o7bn0dertznhe6uolt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4o7bn0dertznhe6uolt.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Problems Encountered and Solutions
&lt;/h2&gt;

&lt;p&gt;During my tests, I faced several challenges:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Prompt Debugging Difficulty&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; There's no easy way to debug prompts sent to the LLM. If output is unexpected, testing becomes tedious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Ollama's desktop app to test prompts interactively before integrating them&lt;/li&gt;
&lt;li&gt;Log all prompts and responses to a file for analysis&lt;/li&gt;
&lt;li&gt;Start with simple, single-purpose prompts before adding complexity&lt;/li&gt;
&lt;/ul&gt;
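&lt;p&gt;The second bullet can be as simple as appending every exchange to a JSON-lines file; a minimal sketch (the log path is arbitrary):&lt;/p&gt;

```python
import datetime
import json
import os
import tempfile

LOG_PATH = os.path.join(tempfile.gettempdir(), "agent_exchanges.jsonl")

def log_exchange(prompt: str, reply: str, log_path: str = LOG_PATH) -> None:
    """Append one prompt/reply pair as a JSON line, timestamped for later analysis."""
    record = {
        "ts": datetime.datetime.now().isoformat(),
        "prompt": prompt,
        "reply": reply,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_exchange("Evaluate docs/setup.md ...", '{"score": 0.78}')
```

&lt;p&gt;Replaying a logged prompt in the Ollama desktop app makes it easy to reproduce an unexpected answer.&lt;/p&gt;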

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Unreliable JSON Output&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; The LLM sometimes returns invalid JSON or mixes JSON with explanatory text, despite the prompt's explicit instructions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement validation: check for required fields before processing&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;temperature: 0&lt;/code&gt; for deterministic output&lt;/li&gt;
&lt;/ul&gt;
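&lt;p&gt;A sketch of that validation step, enforcing the rules stated in the evaluation prompt (required keys, score range, and the score-to-status mapping):&lt;/p&gt;

```python
def validate_evaluation(result: dict, expected_path: str) -> list[str]:
    """Return a list of problems; an empty list means the result is usable."""
    problems = []
    for key in ("path", "score", "status"):
        if key not in result:
            problems.append(f"missing required field: {key}")
    if problems:
        return problems
    if result["path"] != expected_path:
        problems.append("path does not match the evaluated file")
    score = result["score"]
    if not (score >= 0.0 and 1.0 >= score):
        problems.append("score out of range [0, 1]")
    else:
        # Recompute the status the prompt's thresholds imply and compare.
        expected = "OK" if score >= 0.70 else "WARN" if score >= 0.50 else "FAIL"
        if result["status"] != expected:
            problems.append(f"status should be {expected} for score {score}")
    return problems

print(validate_evaluation({"path": "a.md", "score": 0.65, "status": "WARN"}, "a.md"))  # []
```

&lt;p&gt;Files whose results fail validation can be retried once or flagged in the report instead of silently skewing the scores.&lt;/p&gt;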

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Conditional Instructions Fail&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; Conditional instructions like "if content is truncated, do X, else do Y" are often ignored by LLMs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid conditionals; use explicit, imperative instructions instead&lt;/li&gt;
&lt;li&gt;Pre-process data before sending (truncate files yourself rather than asking the model to)&lt;/li&gt;
&lt;li&gt;Keep prompts focused on a single task&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Future Improvements
&lt;/h2&gt;

&lt;p&gt;Here are some enhancements I'm considering for this approach:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Vector Database Integration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Store document embeddings in a local vector database (e.g. ChromaDB) to enable semantic comparison across files (without using the LLM). This would help detect duplicate content or similar documentation that could be consolidated.&lt;/p&gt;
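&lt;p&gt;To illustrate the duplicate-detection idea without a vector database, here is a stand-in that scores pairwise similarity with a bag-of-words cosine; a real implementation would store embeddings in ChromaDB (or similar) and query nearest neighbours instead:&lt;/p&gt;

```python
import math
import re
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity - a crude stand-in for embedding similarity."""
    va = Counter(re.findall(r"\w+", a.lower()))
    vb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# Two near-duplicate instructions score high; unrelated text scores near zero.
sim = cosine_similarity(
    "Install the operator with helm and verify the pods are running.",
    "Use helm to install the operator, then verify that the pods are running.",
)
print(f"{sim:.2f}")
```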

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Purpose-Aware Evaluation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create evaluation prompts that understand document purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;index.md&lt;/code&gt; files should provide an overview of the folder's documentation&lt;/li&gt;
&lt;li&gt;Setup guides should explain installation and initial configuration&lt;/li&gt;
&lt;li&gt;Tutorial pages should include step-by-step instructions with expected outputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This would improve the accuracy of the quality assessments.&lt;/p&gt;
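&lt;p&gt;One lightweight way to do this is to pick the evaluation criteria from the file's name or location before building the prompt; a sketch (the criteria strings and patterns are illustrative):&lt;/p&gt;

```python
from pathlib import Path

# Hypothetical purpose-specific criteria, keyed by filename pattern.
PURPOSE_CRITERIA = {
    "index.md": "Must give an overview of the folder's documentation with links to each page.",
    "setup": "Must explain installation steps and initial configuration.",
    "tutorial": "Must give step-by-step instructions with expected outputs.",
}
DEFAULT_CRITERIA = "General documentation quality: clarity, correctness, completeness."

def criteria_for(path: str) -> str:
    """Choose evaluation criteria based on the file's name."""
    name = Path(path).name.lower()
    if name == "index.md":
        return PURPOSE_CRITERIA["index.md"]
    for keyword in ("setup", "tutorial"):
        if keyword in name:
            return PURPOSE_CRITERIA[keyword]
    return DEFAULT_CRITERIA

print(criteria_for("docs/install/setup-guide.md"))
```

&lt;p&gt;The chosen string can then be interpolated into the evaluation prompt in place of a single generic instruction.&lt;/p&gt;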

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using AutoGen and Ollama provides a practical way to automate documentation quality checks and structural analysis. While LLMs have limitations (non-deterministic output, occasional errors), these can be mitigated with careful prompt design, validation, and error handling.&lt;/p&gt;

&lt;p&gt;The approach is particularly valuable for teams maintaining large documentation repositories where manual review is impractical. Start small, validate results, and gradually expand the scope of automation.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://microsoft.github.io/autogen/" rel="noopener noreferrer"&gt;AutoGen Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama Official Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/dmo2000/documentation-advises" rel="noopener noreferrer"&gt;Full Code Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/dmo2000/documentation-advises/blob/main/doc_review_agents.py" rel="noopener noreferrer"&gt;Complete Implementation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>llm</category>
      <category>autogen</category>
      <category>python</category>
    </item>
    <item>
      <title>Manual version bumps using semantic release with Azure DevOps</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Sun, 28 Dec 2025 21:43:54 +0000</pubDate>
      <link>https://dev.to/dmo2000/manual-version-bumps-using-semantic-release-with-azure-devops-2pj1</link>
      <guid>https://dev.to/dmo2000/manual-version-bumps-using-semantic-release-with-azure-devops-2pj1</guid>
      <description>&lt;p&gt;The semantic-release package (&lt;a href="https://github.com/semantic-release/semantic-release" rel="noopener noreferrer"&gt;https://github.com/semantic-release/semantic-release&lt;/a&gt;) automates the version management and release process of a project. It determines the next version number based on the commit messages that adhere to the Conventional Commits specification, generates release notes, and publishes the release automatically.&lt;/p&gt;

&lt;p&gt;However, some developers prefer more control over when to increment major and minor versions. Companies like JetBrains use the year as a major version and an auto-incrementing integer as a minor version as illustrated in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb466nrztrxka14etreoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb466nrztrxka14etreoc.png" alt="JetBrain versioning" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since semantic-release offers various built-in tools, such as a release notes generator, it is worth keeping it and changing only the version-bumping logic. That is what I will show in this post.&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure DevOps Pipeline for Manual Bumping
&lt;/h1&gt;

&lt;p&gt;My approach involves creating an Azure DevOps pipeline that runs semantic-release with the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The pipeline prompts the user to specify the bump type (major, minor, or patch) and the desired version number.&lt;/li&gt;
&lt;li&gt;Once the user confirms the choice, if the user opted for a major or minor version, the pipeline transitions to an approval state.&lt;/li&gt;
&lt;li&gt;Upon approval, the pipeline increments the version according to the chosen bump type.&lt;/li&gt;
&lt;li&gt;The pipeline verifies that the bump aligns with the desired version number.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The code snippet below shows the Azure DevOps pipeline YAML implementing the steps above. The parameters section prompts the user for the bump type (defaulting to patch) and the desired version number. Their values are exposed as environment variables that the semantic-release configuration reads.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bumpType&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;patch"&lt;/span&gt;
    &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;major&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;minor&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;patch&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bumpNumber&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0"&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;approval&lt;/span&gt;
  &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;server&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ManualValidation@1&lt;/span&gt;
      &lt;span class="s"&gt;...&lt;/span&gt;
      &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ne('${{ parameters.bumpType }}', 'patch')&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;create_tag&lt;/span&gt;
  &lt;span class="na"&gt;dependsOn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;approval&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

   &lt;span class="s"&gt;...&lt;/span&gt;

    &lt;span class="s"&gt;- script&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;npx semantic-release&lt;/span&gt;
      &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run semantic-release setting ${{ parameters.bumpType }} version to ${{ parameters.bumpNumber }}&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;SEMANTIC_RELEASE_BUMP_TYPE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ parameters.bumpType }}&lt;/span&gt;
        &lt;span class="na"&gt;SEMANTIC_RELEASE_BUMP_NUMBER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ parameters.bumpNumber }&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! With this setup, you can manually bump the versions of your project without having to worry about commit messages.&lt;/p&gt;

&lt;p&gt;The following code snippet shows the semantic-release plugin configuration in &lt;strong&gt;release.config.cjs&lt;/strong&gt;. The &lt;strong&gt;@semantic-release/commit-analyzer&lt;/strong&gt; plugin (which normally bumps the version according to commit messages) is configured to always increase the version according to the bump type chosen by the user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// file release.config.cjs&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@semantic-release/commit-analyzer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;releaseRules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SEMANTIC_RELEASE_BUMP_TYPE&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./verify-release.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;verify-release.js&lt;/strong&gt; plugin verifies that the new version is incremented as expected. This ensures that if the pipeline is executed a second time with the same input, it will fail, because the bump would move to an undesired value (taking the JetBrains example, setting the major version to the next year). You can see the verify-release.js code in the next snippet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// file verify-release.js&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;verifyRelease&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pluginConfig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;lastRelease&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="nx"&gt;nextRelease&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bumpType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SEMANTIC_RELEASE_BUMP_TYPE&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bumpNumber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SEMANTIC_RELEASE_BUMP_NUMBER&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Verifying expected release.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;bumpType&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SEMANTIC_RELEASE_BUMP_TYPE not set — skipping version verification.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bumpType&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;patch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Bump type set to patch. Nothing to verify.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;actual&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;nextRelease&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;nextRelease&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;version&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;actual&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/^&lt;/span&gt;&lt;span class="se"&gt;(\d&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)\.(\d&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)\.\d&lt;/span&gt;&lt;span class="sr"&gt;+$/&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Invalid tag format: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;lastRelease&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;actualMajor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;actualMinor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bumpType&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;major&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;actualMajor&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nx"&gt;bumpNumber&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Major version mismatch: expected &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;bumpNumber&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; but will publish &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;actualMajor&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Version verification failed: expected major version &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;bumpNumber&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, got &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;actualMajor&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bumpType&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;minor&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;actualMinor&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nx"&gt;bumpNumber&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Minor version mismatch: expected &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;bumpNumber&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; but will publish &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;actualMinor&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Version verification failed: expected minor version &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;bumpNumber&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, got &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;actualMinor&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Version verification passed: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;actual&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
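&lt;p&gt;To exercise this check in isolation, here is a minimal standalone sketch of the same verification logic. Note that &lt;code&gt;verifyBump&lt;/code&gt; is a hypothetical helper extracted for illustration, not part of the actual plugin:&lt;/p&gt;

```javascript
// Standalone sketch of the verification logic from verify-release.js,
// so it can be run without semantic-release. verifyBump is a made-up
// helper name for this illustration.
function verifyBump(version, bumpType, bumpNumber) {
  // Same semver pattern as the plugin: major.minor.patch
  const match = version.match(/^(\d+)\.(\d+)\.\d+$/);
  if (!match) {
    throw new Error("Invalid tag format: " + version);
  }
  const actualMajor = Number(match[1]);
  const actualMinor = Number(match[2]);
  if (bumpType === "major") {
    if (actualMajor !== Number(bumpNumber)) {
      throw new Error("expected major " + bumpNumber + ", got " + actualMajor);
    }
  }
  if (bumpType === "minor") {
    if (actualMinor !== Number(bumpNumber)) {
      throw new Error("expected minor " + bumpNumber + ", got " + actualMinor);
    }
  }
  return true;
}

// Taking the calendar-versioning example: a major bump to 2026 passes,
// while re-running with a stale version would throw.
verifyBump("2026.1.0", "major", "2026");
```

&lt;p&gt;Re-running the pipeline with the same &lt;code&gt;SEMANTIC_RELEASE_BUMP_NUMBER&lt;/code&gt; would make the next computed version overshoot, so the mismatch check above is what aborts the duplicate release.&lt;/p&gt;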



&lt;p&gt;The picture below shows an example of a pipeline execution that bumped the major version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2grjz8ru5nv2v1awde0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2grjz8ru5nv2v1awde0h.png" alt=" " width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Building the new version
&lt;/h1&gt;

&lt;p&gt;Once the release is ready, you can begin building your application. For this example, I’ve created a simple hello world CLI in Go.&lt;/p&gt;

&lt;p&gt;The pipeline’s steps are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check out the code, including tags&lt;/li&gt;
&lt;li&gt;Get a tag-based description of the current commit; if the commit itself lacks a tag, &lt;code&gt;git describe&lt;/code&gt; derives one from the latest tag plus the commit distance and hash&lt;/li&gt;
&lt;li&gt;Set the Azure DevOps build number to the description from the previous step&lt;/li&gt;
&lt;li&gt;Generate the CLI executable and publish it as a build artifact&lt;/li&gt;
&lt;/ol&gt;
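&lt;p&gt;Step 2 relies on &lt;code&gt;git describe&lt;/code&gt;. As a quick hypothetical demo in a throwaway repository (assuming a tag &lt;code&gt;1.2.3&lt;/code&gt;), this is the kind of output it produces:&lt;/p&gt;

```shell
# Throwaway repo just to show what `git describe --tags` prints.
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "init"
git tag 1.2.3
git describe --tags   # prints "1.2.3" (the commit is tagged)
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "more work"
git describe --tags   # prints e.g. "1.2.3-1-gabc1234" (1 commit past the tag)
```

&lt;p&gt;So a tagged release build gets a clean version number, while intermediate builds get a traceable, unique description.&lt;/p&gt;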

&lt;p&gt;Here’s a code snippet that shows the above pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;appName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
  &lt;span class="na"&gt;buildDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;checkout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;self&lt;/span&gt;
    &lt;span class="na"&gt;fetchTags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;fetchDepth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;export VERSION=$(git describe --tags)&lt;/span&gt;
      &lt;span class="s"&gt;echo "##vso[build.updatebuildnumber]${VERSION}"&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Set&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;number"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GoTool@0&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.25"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;mkdir -p $(buildDir)&lt;/span&gt;
      &lt;span class="s"&gt;go build -o $(buildDir)/$(appName) ./cmd&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Build&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Go&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;binary"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;cd $(buildDir)&lt;/span&gt;
      &lt;span class="s"&gt;zip $(appName)-$(Build.BuildNumber).zip $(appName)&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Create&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ZIP&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;with&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;version"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PublishBuildArtifacts@1&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;PathtoPublish&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$(buildDir)"&lt;/span&gt;
      &lt;span class="na"&gt;ArtifactName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;release"&lt;/span&gt;
      &lt;span class="na"&gt;publishLocation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Container"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following image illustrates a successful build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zq4gljkfeyhfauoc1o6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zq4gljkfeyhfauoc1o6.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code used in this post is available in the GitHub repository &lt;a href="https://github.com/dmo2000/semantic-release-manual" rel="noopener noreferrer"&gt;https://github.com/dmo2000/semantic-release-manual&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>semver</category>
      <category>semanticrelease</category>
      <category>version</category>
      <category>product</category>
    </item>
    <item>
      <title>Real-Time Data with gRPC Streaming: .NET &amp; React with Connect RPC</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Sat, 05 Jul 2025 20:53:48 +0000</pubDate>
      <link>https://dev.to/dmo2000/real-time-data-with-grpc-streaming-net-react-with-connect-rpc-20i8</link>
      <guid>https://dev.to/dmo2000/real-time-data-with-grpc-streaming-net-react-with-connect-rpc-20i8</guid>
      <description>&lt;p&gt;gRPC streaming is a powerful technology for building real-time, high-performance applications. Unlike traditional request/response APIs, gRPC streaming enables continuous data exchange between client and server. This makes it ideal for scenarios such as live dashboards or IoT telemetry, where clients need to receive updates as soon as they are available, reducing latency and greatly improving user experience.&lt;/p&gt;

&lt;p&gt;To get started, you first define your service in a protobuf file. In this example, I created a service called &lt;code&gt;PatientMonitor&lt;/code&gt; that retrieves a patient's heartbeat, temperature, and SpO2 levels. The server returns a stream, allowing the client to receive multiple responses over time. Here’s the protobuf definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;message VitalRequest {
  string patientId = 1;
}

message VitalResponse {
  string timestamp = 1;
  double heartRate = 2;
  double spo2 = 3;
  double temperature = 4;
}

service PatientMonitorService {
  rpc StreamVitals (VitalRequest) returns (stream VitalResponse);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this definition, you can generate the code needed to implement both the server and client.&lt;/p&gt;

&lt;h2&gt;
  
  
  Server-side Implementation
&lt;/h2&gt;

&lt;p&gt;On the server side, I used C# to implement the gRPC streaming endpoint. The .NET gRPC library makes it straightforward to define streaming services and handle asynchronous data flows. By adding &lt;code&gt;Grpc.Tools&lt;/code&gt; to your &lt;code&gt;.csproj&lt;/code&gt; file, the necessary client and server code is generated automatically via the &lt;code&gt;&amp;lt;Protobuf /&amp;gt;&lt;/code&gt; elements. Here’s an example project file snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk.Web"&amp;gt;

  ...

  &amp;lt;ItemGroup&amp;gt;
    &amp;lt;Protobuf Include="../protos/patient_monitor.proto" GrpcServices="Server" /&amp;gt;
  &amp;lt;/ItemGroup&amp;gt;

  &amp;lt;ItemGroup&amp;gt;
    ...
    &amp;lt;PackageReference Include="Grpc.Tools" Version="2.54.0"&amp;gt;
      &amp;lt;PrivateAssets&amp;gt;All&amp;lt;/PrivateAssets&amp;gt;
    &amp;lt;/PackageReference&amp;gt;
  &amp;lt;/ItemGroup&amp;gt;

&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the base implementation is generated, you can override the methods to provide your custom logic. Below is an example implementation for the &lt;code&gt;StreamVitals&lt;/code&gt; endpoint. Here, a loop sends updates of the patient's vitals every 5 seconds, and the stream ends only when the client cancels the request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Grpc.Core;
using PatientMonitoring;

public class PatientMonitorServiceImpl : PatientMonitorService.PatientMonitorServiceBase
{
    private readonly Random _rand = new();

    public override async Task StreamVitals(VitalRequest request,
        IServerStreamWriter&amp;lt;VitalResponse&amp;gt; responseStream,
        ServerCallContext context)
    {
        var HEALTHY_HEARTBEAT = new List&amp;lt;int&amp;gt;([
            72, 73, 71, 74, 72, 70, 71, 73, 74, 72, 73, 72
        ]);
        int count = 0;
        while (!context.CancellationToken.IsCancellationRequested)
        {
            var vitals = new VitalResponse
            {
                Timestamp = DateTimeOffset.UtcNow.ToString(),
                HeartRate = HEALTHY_HEARTBEAT[count % HEALTHY_HEARTBEAT.Count],
                Spo2 = Math.Round(95 + _rand.NextDouble() * 5, 1),
                Temperature = Math.Round(36 + _rand.NextDouble() * 1.5, 1)
            };

            await responseStream.WriteAsync(vitals);
            await Task.Delay(5000);
            count++;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Frontend Visualization
&lt;/h2&gt;

&lt;p&gt;For the frontend, I chose JavaScript and React to build a dynamic, real-time UI.&lt;/p&gt;

&lt;p&gt;Instead of using the standard gRPC JavaScript library (&lt;a href="https://github.com/grpc/grpc-web" rel="noopener noreferrer"&gt;grpc-web&lt;/a&gt;), which hasn’t been updated in almost two years, I opted for the &lt;a href="https://github.com/connectrpc/connect-es" rel="noopener noreferrer"&gt;Connect RPC&lt;/a&gt; implementation. Connect RPC offers a modern, robust, and developer-friendly experience for gRPC in the browser, making it easy to consume streaming endpoints and integrate them seamlessly into React components.&lt;/p&gt;

&lt;p&gt;First, install the following npm packages. With them, you can generate both client and server code with the &lt;code&gt;buf&lt;/code&gt; command, which is similar to gRPC’s &lt;code&gt;protoc&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -D @bufbuild/buf @bufbuild/protobuf @bufbuild/protoc-gen-es
npm install -D @connectrpc/connect @connectrpc/connect-web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a &lt;code&gt;buf.gen.yaml&lt;/code&gt; file to configure code generation (for example, generating TypeScript for the frontend).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: v2
plugins:
- local: ./node_modules/.bin/protoc-gen-es
  out: src/gen
  opt: target=ts
inputs:
  - proto_file: '../protos/patient_monitor.proto'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run &lt;code&gt;buf generate&lt;/code&gt; from the folder containing &lt;code&gt;buf.gen.yaml&lt;/code&gt;. This command creates the TypeScript client implementation in the local &lt;code&gt;src/gen&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;You can set up the gRPC client using the &lt;code&gt;createClient&lt;/code&gt; function, providing the generated service definition and a transport. Connect RPC supports both standard gRPC and its own protocol. It is worth mentioning that Connect RPC does not provide a .NET server implementation of its own protocol.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { createClient } from "@connectrpc/connect";
import { createGrpcWebTransport } from "@connectrpc/connect-web";
import { PatientMonitorService } from "./gen/patient_monitor_pb";

...

const transport = createGrpcWebTransport({
    baseUrl: '',
});
return createClient(
    PatientMonitorService,
    transport,
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the client is configured, you can call the streaming endpoint. This returns an async generator that yields values as they arrive. You can iterate over this stream using &lt;code&gt;for await&lt;/code&gt;. Here’s an example integrated with React state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default function App() {
  const [vitals, setVitals] = useState&amp;lt;VitalResponse[]&amp;gt;([]);
  const [latestVital, setLatestVital] = useState&amp;lt;VitalResponse | null&amp;gt;(null);

  ...

  useEffect(() =&amp;gt; {
    const streamVitals = async () =&amp;gt; {
      const req = { patientId: "12345" } as VitalRequest;
      const stream = client.streamVitals(req);
      for await (const item of stream) {
        setVitals(prev =&amp;gt; {
          if (prev.length &amp;gt;= config.heartTrendLength) {
            return [...prev.slice(1), item];
          }
          return [...prev, item];
        });
        setLatestVital(item);
      }
    };

    streamVitals().catch(console.error);
  }, []);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
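&lt;p&gt;If async iteration is new to you, the consumption pattern above is plain JavaScript rather than anything Connect-specific. A minimal standalone sketch, where &lt;code&gt;fakeStream&lt;/code&gt; is a made-up stand-in for the real gRPC stream:&lt;/p&gt;

```javascript
// fakeStream stands in for client.streamVitals(): an async generator
// that yields values over time instead of returning them all at once.
async function* fakeStream() {
  for (const hr of [72, 73, 71]) {
    // simulate a small network delay before each reading
    await new Promise(function (resolve) { setTimeout(resolve, 10); });
    yield { heartRate: hr };
  }
}

// `for await` pulls each value as soon as it is yielded, exactly like
// the loop over the vitals stream in the React component above.
async function main() {
  const readings = [];
  for await (const item of fakeStream()) {
    readings.push(item.heartRate);
  }
  return readings; // [72, 73, 71]
}
```

&lt;p&gt;In the component, each iteration updates React state instead of pushing to an array, which is what drives the live UI.&lt;/p&gt;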



&lt;p&gt;With this setup, you can visualize the patient's vital signals in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lucq019qu2vjga91peo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lucq019qu2vjga91peo.png" alt="Image description" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more details, check out the code at &lt;a href="https://github.com/dmo2000/grpc-streaming-dot-net" rel="noopener noreferrer"&gt;https://github.com/dmo2000/grpc-streaming-dot-net&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/grpc/grpc-dotnet" rel="noopener noreferrer"&gt;https://github.com/grpc/grpc-dotnet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://connectrpc.com/docs/introduction/" rel="noopener noreferrer"&gt;https://connectrpc.com/docs/introduction/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://recharts.org/" rel="noopener noreferrer"&gt;https://recharts.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://antoniolago.github.io/react-gauge-component/" rel="noopener noreferrer"&gt;https://antoniolago.github.io/react-gauge-component/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>grpc</category>
      <category>stream</category>
      <category>connectrpc</category>
      <category>realtime</category>
    </item>
    <item>
      <title>Review Qodana static code analysis and SCA/SBOM license audit</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Thu, 12 Jun 2025 00:51:05 +0000</pubDate>
      <link>https://dev.to/dmo2000/review-qodana-static-code-analysis-and-scasbom-license-audit-ehf</link>
      <guid>https://dev.to/dmo2000/review-qodana-static-code-analysis-and-scasbom-license-audit-ehf</guid>
      <description>&lt;p&gt;I was on the hunt for a tool that could give me a clear picture of my system’s SBOM (software bill of materials). I wanted to check license info and see which parts are used in all my microservices. That’s when I stumbled upon Qodana, which has a feature called SCA (Software Component Analysis). In this post, I’ll share my thoughts on this tool.&lt;/p&gt;

&lt;p&gt;I requested a trial account on &lt;a href="https://qodana.cloud/" rel="noopener noreferrer"&gt;https://qodana.cloud/&lt;/a&gt; so I could test out all the features. Then I looked over public repositories for the technologies I usually work with. This resulted in the following table.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Language&lt;/th&gt;
&lt;th&gt;Repository&lt;/th&gt;
&lt;th&gt;# lines(1)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GO&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/jackc/pgx" rel="noopener noreferrer"&gt;https://github.com/jackc/pgx&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;91K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Javascript/Typescript&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/ngxs/store" rel="noopener noreferrer"&gt;https://github.com/ngxs/store&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;100K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Java&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/liquibase/liquibase-hibernate" rel="noopener noreferrer"&gt;https://github.com/liquibase/liquibase-hibernate&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C#&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/FluentValidation/FluentValidation" rel="noopener noreferrer"&gt;https://github.com/FluentValidation/FluentValidation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;32K&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;(1) - To get the number of lines, I ran &lt;code&gt;git ls-files | xargs wc -l&lt;/code&gt; inside the cloned repository
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;To begin, you’ll need to create a project. Once you’ve done that, you’ll need to run Qodana; for simplicity, I chose the Qodana CLI. Here’s a visual guide that shows all the steps involved in executing the analysis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim4xoqc8lnfh9x0e9igo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim4xoqc8lnfh9x0e9igo.png" alt="qodana configuration" width="800" height="876"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the hood, Qodana executes the analysis in Docker containers. On my personal MacBook, I had to install Docker Desktop because it doesn’t work with Rancher Desktop. I also noticed that the Docker images are quite large (starting from 4GB), as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feog9zeuit8v2hjx57ynh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feog9zeuit8v2hjx57ynh.png" alt="qodana docker image sizes" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Analysis overview
&lt;/h2&gt;

&lt;p&gt;After analyzing the data, you can check the problems in the first tab. One cool feature is that you can mark problems you won’t solve in the short term and move them to the baseline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7omhu0b36j0zkyvh6zeo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7omhu0b36j0zkyvh6zeo.png" alt="problem tab" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The gadget that gives an overview of the problems is visually appealing, but it’s not very user-friendly because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You have to click in several dropdowns, which makes it hard to drill down on the problems because you have to keep clicking. On the other hand, the dropdowns allow you to make multiple selections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The text is oriented around the circle of the gadget, which makes it hard to read.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7es8707sb60tjh6tetb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7es8707sb60tjh6tetb.png" alt="problems gadget" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also configure which code inspection rules are enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqg859pmidenoxt02wq2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqg859pmidenoxt02wq2k.png" alt="inspection rules" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final tab shows the license audit results, which is the main reason I wanted to test this tool. You can easily navigate through the dependency tree.&lt;/p&gt;

&lt;p&gt;You can also download the SBOM license list in different formats, including CSV. This is useful because you do not have to set up separate SBOM generation for every language in your projects. However, the export does not indicate the dependency type or package manager (such as npm or NuGet), which matters because some packages have the same name but come from different repositories (for instance, the Azure SDKs for &lt;a href="https://github.com/Azure/azure-sdk-for-python/tree/main/sdk" rel="noopener noreferrer"&gt;Python&lt;/a&gt; and &lt;a href="https://github.com/Azure/azure-sdk-for-java/tree/main/sdk" rel="noopener noreferrer"&gt;Java&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo27iz3oui0bgnhykzcv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo27iz3oui0bgnhykzcv.png" alt="SBOM export" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tested languages
&lt;/h2&gt;

&lt;p&gt;Of the languages I tested, the only major issue I encountered was that the license audit could not be produced for the NGXS repository (a Yarn-based project). Even though the official documentation states that Yarn is supported, this was the only sticking point so far.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgrxv89h6poc7jic3y38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgrxv89h6poc7jic3y38.png" alt="license audit" width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>qodana</category>
      <category>sca</category>
      <category>sbom</category>
      <category>analysis</category>
    </item>
    <item>
      <title>How to create custom Azure DevOps Pipelines that autoscale with Virtual Machine Scale Sets (VMSS)</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Thu, 29 May 2025 00:07:51 +0000</pubDate>
      <link>https://dev.to/dmo2000/how-to-create-custom-azure-devops-pipelines-that-autoscale-with-virtual-machine-scale-sets-vmss-1k16</link>
      <guid>https://dev.to/dmo2000/how-to-create-custom-azure-devops-pipelines-that-autoscale-with-virtual-machine-scale-sets-vmss-1k16</guid>
      <description>&lt;p&gt;Microsoft-hosted Azure DevOps pipelines have some limitations, such as not being able to access Azure resources in private networks or having a disk size limit of 10GB. Fortunately, you can work around these by using custom pipelines. One effective approach is to use Virtual Machine Scale Sets (VMSS), which I’ll explain in detail in this post. The source code is available on &lt;a href="https://github.com/dmo2000/azuredevops-pipelines" rel="noopener noreferrer"&gt;my GitHub&lt;/a&gt;. You can also read the official comparison between VMSS and Microsoft-hosted agents &lt;a href="https://learn.microsoft.com/en-us/azure/devops/managed-devops-pools/migrate-from-scale-set-agents?view=azure-devops" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A VMSS is an Azure compute resource that lets you deploy and manage a group of identical virtual machines at scale. In Azure Pipelines, VMSS can host custom agents and automatically scale the number of build and deployment agents based on workload. This provides efficient resource usage, access to private networks, and more control over your build environment compared to Microsoft-hosted agents.&lt;/p&gt;

&lt;p&gt;For this post, we’ll consider a scenario where you need to access an Azure Key Vault that is visible only to specific networks.&lt;/p&gt;

&lt;p&gt;The picture below illustrates this scenario.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F115hyedcpdr9jtf931tn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F115hyedcpdr9jtf931tn.png" alt="Image description" width="752" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’ll achieve this by following these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Packer and Azure CLI&lt;/li&gt;
&lt;li&gt;Create a VM image that will be used in the VMSS&lt;/li&gt;
&lt;li&gt;Create the VMSS&lt;/li&gt;
&lt;li&gt;Create a service principal&lt;/li&gt;
&lt;li&gt;Configure the VMSS in Azure DevOps&lt;/li&gt;
&lt;li&gt;Allow VMSS to access Azure Key Vault&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Install Packer and Azure CLI
&lt;/h2&gt;

&lt;p&gt;First, install Packer and the Azure CLI using the official guides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux" rel="noopener noreferrer"&gt;Install Azure CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/packer/install#linux" rel="noopener noreferrer"&gt;Install Packer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, install the Packer Azure plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;packer plugins &lt;span class="nb"&gt;install &lt;/span&gt;github.com/hashicorp/azure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are running on Windows, you can execute Packer in WSL.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Create a VM image that will be used in the VMSS
&lt;/h2&gt;

&lt;p&gt;Azure provides several base VM images, but they may not include all the tools you need (for example, Ubuntu images do not come with the Azure CLI pre-installed). To create a custom image, we will use &lt;a href="https://developer.hashicorp.com/packer" rel="noopener noreferrer"&gt;Packer&lt;/a&gt; to automate the process.&lt;/p&gt;

&lt;p&gt;You’ll need a Packer configuration file and two scripts: one to install your required software (such as the Azure CLI) and another to deprovision the VM, removing machine-specific data. Packer will output a reusable VM image in your chosen region and resource group.&lt;/p&gt;

&lt;p&gt;After preparing your files, build the image (replace &lt;code&gt;my-image&lt;/code&gt; as needed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;packer build &lt;span class="nt"&gt;-var&lt;/span&gt; &lt;span class="s1"&gt;'output_image_name=my-image'&lt;/span&gt; modified-ubuntu-image.pkr.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key points about these files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;azure-arm&lt;/code&gt; / &lt;code&gt;source_image&lt;/code&gt; section defines the temporary VM details used by Packer.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;output_image_name&lt;/code&gt; variable makes the image name configurable.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;use_azure_cli_auth&lt;/code&gt; option uses authentication from the &lt;code&gt;az login&lt;/code&gt; command.&lt;/li&gt;
&lt;li&gt;Choose the architecture (ARM or Intel) based on your workload; ARM VMs are often cheaper if you don't need x64 compatibility (such as running x64 Docker images).&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/dmo2000/azuredevops-pipelines/blob/main/vmss-image/install-requirements.sh" rel="noopener noreferrer"&gt;&lt;code&gt;install-requirements.sh&lt;/code&gt;&lt;/a&gt; script installs required tools, such as the &lt;a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux?view=azure-cli-latest&amp;amp;pivots=apt" rel="noopener noreferrer"&gt;Azure CLI&lt;/a&gt;, without using sudo commands.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/dmo2000/azuredevops-pipelines/blob/main/vmss-image/deprovision.sh" rel="noopener noreferrer"&gt;&lt;code&gt;deprovision.sh&lt;/code&gt;&lt;/a&gt; script removes machine-specific data and credentials, ensuring each VM created from the image starts in a clean, secure state.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;execute_command&lt;/code&gt; tells Packer to run scripts as root (&lt;code&gt;sudo -E sh&lt;/code&gt;), so you don't need to add sudo commands inside your install script.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;File: modified-ubuntu-image.pkr.hcl&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"output_image_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="s2"&gt;"azure-arm"&lt;/span&gt; &lt;span class="s2"&gt;"source_image"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;azure_tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;dept&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Engineering"&lt;/span&gt;
    &lt;span class="nx"&gt;task&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Image deployment"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;use_azure_cli_auth&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;image_offer&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu-24_04-lts"&lt;/span&gt;
  &lt;span class="nx"&gt;image_publisher&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Canonical"&lt;/span&gt;
  &lt;span class="nx"&gt;image_sku&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;                          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"South Central US"&lt;/span&gt;
  &lt;span class="nx"&gt;managed_image_name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_image_name&lt;/span&gt;
  &lt;span class="nx"&gt;managed_image_resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-resource-group"&lt;/span&gt;
  &lt;span class="nx"&gt;os_type&lt;/span&gt;                           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Linux"&lt;/span&gt;
  &lt;span class="nx"&gt;vm_size&lt;/span&gt;                           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_D2s_v6"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;build&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;sources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"source.azure-arm.source_image"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;execute_command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'"&lt;/span&gt;
    &lt;span class="nx"&gt;scripts&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"./install-requirements.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"./deprovision.sh"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; 
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;File: deprovision.sh&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/sbin/waagent &lt;span class="nt"&gt;-force&lt;/span&gt; &lt;span class="nt"&gt;-deprovision&lt;/span&gt;+user &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HISTSIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also automate this process using an Azure DevOps pipeline with the file &lt;a href="https://github.com/dmo2000/azuredevops-pipelines/blob/main/vmss-image/create-vmss-image.yaml" rel="noopener noreferrer"&gt;create-vmss-image.yaml&lt;/a&gt;. The picture below shows the pipeline execution result followed by the resource created in Azure Portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqactfcrfgn9bfexckju0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqactfcrfgn9bfexckju0.png" alt="Image description" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam6l9jba65ya98f0so82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam6l9jba65ya98f0so82.png" alt="Image description" width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Create the VMSS
&lt;/h2&gt;

&lt;p&gt;Create the VMSS using the Azure CLI command below. Update the variables as needed for your environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;vmss_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-azure-devops-pool
&lt;span class="nv"&gt;vmss_image_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-image
&lt;span class="nv"&gt;resource_group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-resource-group
&lt;span class="nv"&gt;vm_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Standard_DC1s_v3
&lt;span class="nv"&gt;vm_storage&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Standard_LRS

az vmss create &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;vmss_name&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--resource-group&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;resource_group&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--image&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;vmss_image_name&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--vm-sku&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;vm_size&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--storage-sku&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;vm_storage&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--authentication-type&lt;/span&gt; SSH &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--generate-ssh-keys&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--instance-count&lt;/span&gt; 0 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--disable-overprovision&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--upgrade-policy-mode&lt;/span&gt; manual &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--single-placement-group&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--platform-fault-domain-count&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--load-balancer&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--orchestration-mode&lt;/span&gt; Uniform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tips:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set the initial scaling to 0 (&lt;code&gt;--instance-count 0&lt;/code&gt;); Azure DevOps will scale the VMSS based on pipeline demand.&lt;/li&gt;
&lt;li&gt;Choosing the right VM size is important for cost and performance. You may want to start with any available size and adjust it later in the Azure Portal. The image below shows VM sizes ordered by cost; make sure you select a size with the same architecture as your custom image.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1j7n8nbdk5ozz5g9r8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1j7n8nbdk5ozz5g9r8p.png" alt="Image description" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also automate this process using an Azure DevOps pipeline with the file &lt;a href="https://github.com/dmo2000/azuredevops-pipelines/blob/main/vmss/create-vmss.yaml" rel="noopener noreferrer"&gt;create-vmss.yaml&lt;/a&gt;. The picture below shows the pipeline execution result followed by the VMSS and its private network created in Azure Portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6crpm88vnwj8j0cxj01h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6crpm88vnwj8j0cxj01h.png" alt="Image description" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56lddboz4n34cegwsxrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56lddboz4n34cegwsxrl.png" alt="Image description" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Create a service principal
&lt;/h2&gt;

&lt;p&gt;To enable Azure DevOps to manage your VMSS, you need a technical user known as a service principal. In many organizations, this is created by the corporate IT administrator team. If that's your case, you can skip to the next section.&lt;/p&gt;

&lt;p&gt;If you need to create it yourself, follow the &lt;a href="https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app" rel="noopener noreferrer"&gt;official guide to register an app&lt;/a&gt;. For this setup, register with the option &lt;strong&gt;&lt;em&gt;Accounts in this organizational directory only&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;After registration, create credentials for the service principal by generating a client secret as described &lt;a href="https://learn.microsoft.com/en-us/entra/identity-platform/how-to-add-credentials?tabs=client-secret" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Configure the VMSS in Azure DevOps
&lt;/h2&gt;

&lt;p&gt;With your service principal ready, assign it the Contributor role on your VMSS. This allows Azure DevOps to scale the VMSS up and down as needed for your pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgyjmh8gp6ikqd27rgiw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgyjmh8gp6ikqd27rgiw6.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, in Azure DevOps, create a service connection using &lt;strong&gt;&lt;em&gt;Azure Resource Manager&lt;/em&gt;&lt;/strong&gt; and enter the service principal details. Follow the steps in the &lt;a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#create-a-service-connection-for-an-existing-user-assigned-managed-identity" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once the service connection is set up, create an agent pool in Azure DevOps. This pool name will be referenced in your pipeline YAML files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvavzbwo1h9f5likiprb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvavzbwo1h9f5likiprb5.png" alt="Image description" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To verify your setup, create a simple pipeline that runs a command on the VMSS agent. For example, use &lt;code&gt;az --version&lt;/code&gt; to confirm the Azure CLI is installed. Make sure the job uses the VMSS agent pool you created. The screenshot below shows a successful test run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogx4rkmnxqhjr8rtn7o4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogx4rkmnxqhjr8rtn7o4.png" alt="Image description" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Allow VMSS to access Azure Key Vault
&lt;/h2&gt;

&lt;p&gt;First, configure the VMSS to use a system-assigned identity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famux6m3q4ns8uwzvvmbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famux6m3q4ns8uwzvvmbn.png" alt="Image description" width="800" height="714"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then assign the &lt;strong&gt;&lt;em&gt;Key Vault Administrator&lt;/em&gt;&lt;/strong&gt; role to your user and the system-assigned identity on your Key Vault.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f8m7f9sez6h762v4x0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f8m7f9sez6h762v4x0o.png" alt="Image description" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, make sure to disable public access and attach the Key Vault to the same network as the VMSS, as illustrated below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z02n4hb0uo7rsmzb74d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z02n4hb0uo7rsmzb74d.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, create a pipeline using the VMSS agent pool and paste the following command line code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az login &lt;span class="nt"&gt;--identity&lt;/span&gt;

az keyvault key create &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--vault-name&lt;/span&gt; my-private-key-vault &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; my-pipeline-key &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--protection&lt;/span&gt; software &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--kty&lt;/span&gt; RSA &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--size&lt;/span&gt; 2048
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then execute the pipeline and check that the key was created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faczqyapu14adm6k594mj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faczqyapu14adm6k594mj.png" alt="Image description" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Building VM images with Packer - &lt;a href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/build-image-with-packer" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/virtual-machines/linux/build-image-with-packer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Installing the Azure CLI - &lt;a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Description of Azure VM size notations - &lt;a href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/overview" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/overview&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Registering an App - &lt;a href="https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create secrets for an App - &lt;a href="https://learn.microsoft.com/en-us/entra/identity-platform/how-to-add-credentials?tabs=client-secret" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/entra/identity-platform/how-to-add-credentials?tabs=client-secret&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Creating a service connection with Service Principal - &lt;a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#create-a-service-connection-for-an-existing-user-assigned-managed-identity" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#create-a-service-connection-for-an-existing-user-assigned-managed-identity&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create Scale Set Agents - &lt;a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>virtualmachine</category>
      <category>devops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Shading Removal of Illustrated Documents</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Sat, 17 May 2025 16:17:10 +0000</pubDate>
      <link>https://dev.to/dmo2000/shading-removal-of-illustrated-documents-1aa0</link>
      <guid>https://dev.to/dmo2000/shading-removal-of-illustrated-documents-1aa0</guid>
      <description>&lt;p&gt;During my PhD, I developed and published an algorithm — together with my advisor and a colleague — to enhance photos of documents captured with digital cameras. These images often suffered from uneven lighting across the page. Our algorithm corrects this by normalizing the illumination, similar to what &lt;a href="https://www.camscanner.com/" rel="noopener noreferrer"&gt;CamScanner&lt;/a&gt; does. The implementation is written in C++ and is available on my GitHub: &lt;a href="https://github.com/dmo2000/shading-removal-illustrated-docs" rel="noopener noreferrer"&gt;https://github.com/dmo2000/shading-removal-illustrated-docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The method works in four steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Detection of low-variation regions&lt;/strong&gt;: The algorithm first identifies areas in the image with low color variation, which are likely to correspond to blank regions of the paper (i.e., without text or illustrations).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Background clustering&lt;/strong&gt;: From these low-variation regions, it extracts the cluster most likely to represent the paper's true background.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Shading estimation&lt;/strong&gt;: Once the paper background is identified, the algorithm estimates the shading pattern for the rest of the document, including non-blank areas.&lt;/p&gt;

&lt;p&gt;To perform this step, I used a technique known as Natural Neighbor Interpolation (also called Voronoi interpolation). This method is especially useful when data points (in this case, known shading values) are unevenly distributed — a situation similar to how weather data is interpolated from scattered measurement stations.&lt;/p&gt;

&lt;p&gt;For the interpolation and related geometric computations, I used the &lt;a href="https://www.cgal.org/" rel="noopener noreferrer"&gt;CGAL&lt;/a&gt; library. I also relied on data structures from the &lt;a href="https://www.boost.org/" rel="noopener noreferrer"&gt;Boost&lt;/a&gt; C++ libraries to support the implementation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Final enhancement&lt;/strong&gt;: Finally, the image is enhanced by removing the shading estimated in the previous step.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
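&lt;p&gt;The final enhancement step can be sketched in code. Below is a minimal Go illustration (the actual implementation is in C++), assuming the shading has already been estimated as a per-pixel background image: each pixel is divided by its background estimate and rescaled so that paper becomes white while ink stays dark. The pixel values and the clamped divide-and-rescale rule are simplifications of the published method.&lt;/p&gt;

```go
package main

import "fmt"

// correctShading normalizes each pixel by its per-pixel background estimate:
// out = in / background * white, clamped to white. This sketches only the
// final enhancement step; the real method first estimates the background
// via natural neighbor interpolation over the detected blank regions.
func correctShading(pixels, background []float64, white float64) []float64 {
	out := make([]float64, len(pixels))
	for i := range pixels {
		if background[i] > 0 {
			v := pixels[i] / background[i] * white
			if v > white {
				v = white
			}
			out[i] = v
		}
	}
	return out
}

func main() {
	// Uniform paper (intensity 200) with one ink pixel (40), darkened by a
	// left-to-right shading gradient; the background estimate follows the
	// same gradient.
	pixels := []float64{200, 180, 40, 140}
	background := []float64{200, 180, 160, 140}
	fmt.Println(correctShading(pixels, background, 255))
	// Prints: [255 255 63.75 255]
	// Paper pixels normalize to white; the ink pixel stays dark.
}
```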

&lt;p&gt;The images below show an example of the processing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Original Image&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzma5amrivracctl3wsgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzma5amrivracctl3wsgw.png" alt="Original image" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster with detected background (steps 1 and 2)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcycqwodwhe15kcjo1f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcycqwodwhe15kcjo1f3.png" alt="Detected background" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Background estimation (step 3)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4ndxq1dr2hh3sec16b0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4ndxq1dr2hh3sec16b0.png" alt="Background estimation" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final enhancement (step 4)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2loeotv5uavmzogoi6s7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2loeotv5uavmzogoi6s7.png" alt="Final enhancement" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The complete source code is publicly available on GitHub:&lt;br&gt;
👉 &lt;a href="https://github.com/dmo2000/shading-removal-illustrated-docs" rel="noopener noreferrer"&gt;https://github.com/dmo2000/shading-removal-illustrated-docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full paper is available at: &lt;a href="https://www.researchgate.net/publication/300028589_Shading_Removal_of_Illustrated_Documents" rel="noopener noreferrer"&gt;https://www.researchgate.net/publication/300028589_Shading_Removal_of_Illustrated_Documents&lt;/a&gt;&lt;/p&gt;

</description>
      <category>digitalization</category>
      <category>enhancement</category>
      <category>shading</category>
      <category>cpp</category>
    </item>
    <item>
      <title>How to Call gRPC Methods Dynamically in Go</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Sat, 10 May 2025 20:49:23 +0000</pubDate>
      <link>https://dev.to/dmo2000/how-to-call-grpc-methods-dynamically-in-go-2h7f</link>
      <guid>https://dev.to/dmo2000/how-to-call-grpc-methods-dynamically-in-go-2h7f</guid>
      <description>&lt;p&gt;gRPC (Google Remote Procedure Call) is a high-performance, open-source framework for making remote procedure calls. Unlike traditional REST APIs that use HTTP/1.1 and JSON, gRPC leverages HTTP/2 and Protocol Buffers for more efficient communication between distributed systems. This modern approach offers significant advantages in terms of performance, type safety, and code generation capabilities.&lt;/p&gt;

&lt;p&gt;When working with gRPC, you need to understand the fundamental components and workflow that differentiate it from other API communication methods. The process begins with defining your service interface using Protocol Buffers (&lt;strong&gt;protobuf&lt;/strong&gt;) in a &lt;strong&gt;proto file&lt;/strong&gt;, which serves as a &lt;strong&gt;contract between client and server&lt;/strong&gt;. This definition is then &lt;strong&gt;compiled into language-specific code&lt;/strong&gt; that handles all the underlying communication complexities.&lt;/p&gt;

&lt;p&gt;You can make gRPC calls dynamically in &lt;strong&gt;two ways&lt;/strong&gt;: using &lt;strong&gt;server reflection&lt;/strong&gt; or providing uncompiled &lt;strong&gt;proto files&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic Calls Using gRPC Server Reflection
&lt;/h2&gt;

&lt;p&gt;Typically, services are registered with a gRPC server immediately after creation, as shown in the &lt;a href="https://github.com/grpc/grpc-go/blob/master/examples/helloworld/greeter_server/main.go" rel="noopener noreferrer"&gt;helloworld&lt;/a&gt; example provided by the gRPC team:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"google.golang.org/grpc"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;helloworld&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnimplementedGreeterServer&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// ...&lt;/span&gt;
    &lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;helloworld&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RegisterGreeterServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
    &lt;span class="c"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enable server reflection, register the reflection service by calling the &lt;code&gt;Register&lt;/code&gt; function from the reflection package as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"google.golang.org/grpc"&lt;/span&gt;
    &lt;span class="s"&gt;"google.golang.org/grpc/reflection"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;helloworld&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnimplementedGreeterServer&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// ...&lt;/span&gt;
    &lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;helloworld&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RegisterGreeterServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
    &lt;span class="n"&gt;reflection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After starting the server, you can inspect the new service using Kreya's (&lt;a href="https://kreya.app/" rel="noopener noreferrer"&gt;https://kreya.app/&lt;/a&gt;) server reflection import. The picture below shows the result obtained via the &lt;strong&gt;server reflection service&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8cxzlwojukpxhq6kfdp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8cxzlwojukpxhq6kfdp.jpeg" alt="Kreya Server reflection" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Calling gRPC dynamically with reflection involves two steps: first, a request to the &lt;code&gt;ServerReflectionInfo&lt;/code&gt; stream retrieves the &lt;strong&gt;proto file&lt;/strong&gt; descriptors (FileDescriptors); then the target method is invoked using those descriptors. The sequence diagram below illustrates this process. Note that every service call requires an extra call to the reflection service to fetch the proto file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzivzzgvfmeappe27wcr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzivzzgvfmeappe27wcr.jpeg" alt="server reflection sequence diagram" width="772" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following code implements the process shown in the diagram above. The &lt;code&gt;grpcurl.DescriptorSourceFromServer&lt;/code&gt; function retrieves the file descriptors from the reflection service, and the &lt;code&gt;grpcurl.InvokeRPC&lt;/code&gt; function calls the target method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"bytes"&lt;/span&gt;
    &lt;span class="s"&gt;"context"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"strings"&lt;/span&gt;

    &lt;span class="s"&gt;"github.com/fullstorydev/grpcurl"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/jhump/protoreflect/grpcreflect"&lt;/span&gt;

    &lt;span class="s"&gt;"google.golang.org/grpc"&lt;/span&gt;
    &lt;span class="s"&gt;"google.golang.org/grpc/credentials/insecure"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Inputs&lt;/span&gt;
    &lt;span class="n"&gt;serverAddr&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"localhost:50051"&lt;/span&gt;
    &lt;span class="n"&gt;methodFullName&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"helloworld.Greeter/SayHello"&lt;/span&gt;
    &lt;span class="n"&gt;jsonRequest&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;`{ "name": "goodbye, hello goodbye, you say stop and I say go go..." }`&lt;/span&gt;

    &lt;span class="c"&gt;// Output buffer&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Buffer&lt;/span&gt;

    &lt;span class="c"&gt;// Create gRPC channel&lt;/span&gt;
    &lt;span class="n"&gt;grpcChannel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Dial&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serverAddr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithTransportCredentials&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;insecure&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewCredentials&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to dial server: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;grpcChannel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c"&gt;// Create reflection client&lt;/span&gt;
    &lt;span class="n"&gt;reflectionClient&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpcreflect&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;grpcreflect&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewClientV1&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grpcChannel&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;reflectionClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Reset&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c"&gt;// Use grpcurl to get the method descriptor&lt;/span&gt;
    &lt;span class="n"&gt;descriptorSource&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpcurl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DescriptorSourceFromServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;reflectionClient&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// Prepare formatter for the response&lt;/span&gt;
    &lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpcurl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FormatOptions&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;EmitJSONDefaultFields&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;jsonRequestReader&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewReader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;rf&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;formatter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;grpcurl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RequestParserAndFormatter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grpcurl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"json"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;descriptorSource&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsonRequestReader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to construct request parser and formatter: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;eventHandler&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;grpcurl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DefaultEventHandler&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Out&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;            &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Formatter&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;      &lt;span class="n"&gt;formatter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;VerbosityLevel&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grpcurl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;InvokeRPC&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;descriptorSource&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;grpcChannel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methodFullName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;eventHandler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rf&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"RPC call failed: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Received output:"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The images below show the console output from the client and server runs, respectively. You can see in the log output one call to the reflection stream and another to the target service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnrs0rqheb4fsvq9e4pi.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnrs0rqheb4fsvq9e4pi.jpeg" alt="server reflection grpc call" width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic Calls Using Proto Files
&lt;/h2&gt;

&lt;p&gt;If you have access to the proto file, you can call the target service directly by replacing the call to &lt;code&gt;grpcurl.DescriptorSourceFromServer&lt;/code&gt; with &lt;code&gt;grpcurl.DescriptorSourceFromProtoFiles&lt;/code&gt;. This removes the reflection round trip and simplifies the steps needed to call the service, as shown in the sequence diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q9g8f2f6mvgznyzp5mp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q9g8f2f6mvgznyzp5mp.jpeg" alt="proto call sequence diagram" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results from the server and client can be seen below; note that the reflection stream is no longer called.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw6zqbf9j345csme6z3y.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw6zqbf9j345csme6z3y.jpeg" alt="proto file call output" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are two effective approaches for making dynamic gRPC calls. Server reflection offers several advantages: it eliminates the need to maintain proto files on the client side, enables dynamic discovery of services, and simplifies integration testing and debugging tools. With reflection, clients can automatically discover and interact with services without prior knowledge of their interfaces.&lt;/p&gt;

&lt;p&gt;However, server reflection does have drawbacks. Each service call requires an additional reflection call, which adds network overhead; you can mitigate this by caching file descriptors after the first reflection call. Reflection may also expose additional server details that could potentially reduce security.&lt;/p&gt;

&lt;p&gt;Alternatively, using proto files directly provides a more efficient approach with fewer network calls, but requires keeping the client's proto files synchronized with the server.&lt;/p&gt;

&lt;p&gt;The complete code examples from this article are available at &lt;a href="https://github.com/dmo2000/grpc-dynamic-calls" rel="noopener noreferrer"&gt;https://github.com/dmo2000/grpc-dynamic-calls&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>grpc</category>
      <category>go</category>
      <category>dynamic</category>
      <category>apitesting</category>
    </item>
    <item>
      <title>Solving Bugs</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Fri, 18 Apr 2025 22:16:19 +0000</pubDate>
      <link>https://dev.to/dmo2000/solving-bugs-49ac</link>
      <guid>https://dev.to/dmo2000/solving-bugs-49ac</guid>
      <description>&lt;p&gt;In the world of software development, bugs are inevitable. No matter how experienced the team or how mature the process, every software system will eventually encounter issues that cause unexpected behaviors.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Software Bug?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;software bug&lt;/strong&gt; is a flaw or error in a program that produces incorrect results or unintended behavior. Bugs vary widely in severity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minor visual glitches&lt;/li&gt;
&lt;li&gt;Serious system crashes&lt;/li&gt;
&lt;li&gt;Security vulnerabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where Do Bugs Come From?
&lt;/h2&gt;

&lt;p&gt;Bugs emerge from various sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mistakes in code logic&lt;/li&gt;
&lt;li&gt;Misunderstandings in requirements&lt;/li&gt;
&lt;li&gt;Integration problems&lt;/li&gt;
&lt;li&gt;Differences between development and production environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some bugs are easy to spot and fix, while others remain deeply hidden and difficult to reproduce.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a Strategic Approach Matters
&lt;/h2&gt;

&lt;p&gt;Having a systematic approach to bug fixing, especially when handling multiple bugs, is crucial for maintaining stable, reliable, and user-friendly software.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bug-Fixing Process
&lt;/h2&gt;

&lt;p&gt;The bug-fixing process follows three clear steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Organize the bugs&lt;/li&gt;
&lt;li&gt;Choose bugs strategically&lt;/li&gt;
&lt;li&gt;Fix the bugs&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 1: Organize the Bugs
&lt;/h3&gt;

&lt;p&gt;Before diving in, align with your team to organize bugs using these key criteria:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Criterion&lt;/th&gt;
&lt;th&gt;Key Question&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Criticality&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;How severely does this bug impact users or system functionality?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Priority&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;What's the business importance of fixing this issue?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Expected resolution date&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;When does this need to be fixed?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Software component&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Which part of the system is affected?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ease of resolution&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;How complex will the fix likely be?&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This structured organization makes it easier to make informed decisions about what to tackle first.&lt;/p&gt;
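&lt;p&gt;As a toy illustration of this organization (the field names and the 1-5 scales are invented for the sketch, not taken from any particular tracker), the criteria can be encoded and used to order the backlog:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sort"
)

// Bug carries the triage criteria from the table above. The field names and
// 1-5 scales are illustrative; use whatever your issue tracker provides.
type Bug struct {
	Title       string
	Criticality int // 5 = most severe user/system impact
	Priority    int // 5 = most important to the business
	Ease        int // 5 = easiest to fix
}

// triage orders the backlog: most critical first, then highest business
// priority, then easiest to fix as a tie-breaker.
func triage(bugs []Bug) {
	sort.Slice(bugs, func(i, j int) bool {
		if bugs[i].Criticality != bugs[j].Criticality {
			return bugs[i].Criticality > bugs[j].Criticality
		}
		if bugs[i].Priority != bugs[j].Priority {
			return bugs[i].Priority > bugs[j].Priority
		}
		return bugs[i].Ease > bugs[j].Ease
	})
}

func main() {
	backlog := []Bug{
		{"Typo in footer", 1, 1, 5},
		{"Crash on login", 5, 5, 2},
		{"Slow report export", 3, 4, 3},
	}
	triage(backlog)
	for _, b := range backlog {
		fmt.Println(b.Title)
	}
	// Prints:
	// Crash on login
	// Slow report export
	// Typo in footer
}
```

&lt;p&gt;Sorting by criticality first mirrors the table: user impact dominates, with business priority and ease of resolution acting as tie-breakers.&lt;/p&gt;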

&lt;h3&gt;
  
  
  Step 2: Choose Bugs Strategically
&lt;/h3&gt;

&lt;p&gt;Once organized, select bugs to fix using these effective strategies:&lt;/p&gt;

&lt;h3&gt;
  
  
  For Unfamiliar Components
&lt;/h3&gt;

&lt;p&gt;🔍 &lt;strong&gt;Start small&lt;/strong&gt;: Choose an &lt;strong&gt;easy bug&lt;/strong&gt; in a component you're &lt;strong&gt;not familiar with&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This builds knowledge without overwhelming you&lt;/li&gt;
&lt;li&gt;Provides foundation for tackling more complex issues later&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For Related Issues
&lt;/h3&gt;

&lt;p&gt;🔗 &lt;strong&gt;Batch similar bugs&lt;/strong&gt;: When &lt;strong&gt;several related bugs&lt;/strong&gt; exist in the same component and they're &lt;strong&gt;easy to fix&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduces backlog more quickly&lt;/li&gt;
&lt;li&gt;Makes the remaining backlog more manageable&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When Blocked
&lt;/h3&gt;

&lt;p&gt;⏳ &lt;strong&gt;Work in parallel&lt;/strong&gt;: If one bug investigation is time-consuming&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switch to an easier bug while waiting&lt;/li&gt;
&lt;li&gt;Maintains productivity despite blockers&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Fix the Bugs
&lt;/h3&gt;

&lt;p&gt;When addressing each bug, first assess the solution path:&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution Assessment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Straightforward solution&lt;/strong&gt;: Proceed directly to implementation&lt;/li&gt;
&lt;li&gt;🧩 &lt;strong&gt;Complex problem&lt;/strong&gt;: Apply the methodology from my post on &lt;a href="https://dev.to/dmo2000/solving-software-system-problems-3f44"&gt;Solving software system problems&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Time Management Warning Signs
&lt;/h3&gt;

&lt;p&gt;Most bugs should be resolved within a few days. If a fix takes longer, it might indicate:&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Potential Issues&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Environment problems&lt;/strong&gt;: Slow development cycle for making changes and verifying fixes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architectural issues&lt;/strong&gt;: Underlying design problems making fixes difficult&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Misclassification&lt;/strong&gt;: What seems like a bug might actually be:

&lt;ul&gt;
&lt;li&gt;A feature request&lt;/li&gt;
&lt;li&gt;Documentation issue&lt;/li&gt;
&lt;li&gt;User misunderstanding&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;When facing these issues, document the real problem in your backlog and schedule it for proper resolution later.&lt;/p&gt;

&lt;h2&gt;
  
  
  After Fixing a Bug
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Testing &amp;amp; Verification
&lt;/h3&gt;

&lt;p&gt;Once you've implemented a fix, determine what testing is necessary for verification. Consider which manual tests need to be performed and identify any unit tests that require updates to properly validate your solution. After establishing your test plan, execute all identified tests thoroughly and document the results to maintain a clear record of verification.&lt;/p&gt;
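&lt;p&gt;One lightweight way to lock in a fix is a regression test named after the bug, so the original failing input stays covered. The function and bug number below are hypothetical examples, not from this post:&lt;/p&gt;

```python
# Hypothetical fix: parse_price("1,99") used to raise ValueError because
# a comma was not accepted as the decimal separator.
def parse_price(text: str) -> float:
    return float(text.replace(",", "."))

def test_bug_1234_comma_decimal_separator():
    # Regression test: the input from the bug report, plus the old happy path.
    assert parse_price("1,99") == 1.99
    assert parse_price("2.50") == 2.50

test_bug_1234_comma_decimal_separator()
```

&lt;p&gt;Keeping the bug id in the test name documents why the test exists and makes the verification record easy to find later.&lt;/p&gt;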

&lt;h3&gt;
  
  
  Code Integration
&lt;/h3&gt;

&lt;p&gt;After testing, submit your code for review by opening a Pull Request (PR). Address any feedback received from reviewers promptly to ensure the solution meets quality standards. Once your PR is merged, wait for the test environment to update with your changes, then perform a final verification to confirm the bug is truly resolved in the integrated environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continue the Cycle
&lt;/h3&gt;

&lt;p&gt;With one bug successfully resolved, return to the "Choose Bugs Strategically" guidelines above to select your next target, and repeat the process. This methodical cycle ensures consistent quality, maintains momentum in your development process, and steadily reduces your bug backlog over time.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>softwaredevelopment</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Solving software system problems</title>
      <dc:creator>Daniel Marques</dc:creator>
      <pubDate>Sun, 16 Feb 2025 01:08:14 +0000</pubDate>
      <link>https://dev.to/dmo2000/solving-software-system-problems-3f44</link>
      <guid>https://dev.to/dmo2000/solving-software-system-problems-3f44</guid>
      <description>&lt;p&gt;Some software system problems are difficult to solve due to various reasons, such as conflicting requirements or complex code structures. To ensure a solution achieves the expected outcome, I follow some steps that are inspired by the Six Thinking Hats and Parkinson’s Law.&lt;/p&gt;

&lt;p&gt;Edward de Bono developed the Six Thinking Hats as a problem-solving and decision-making method that encourages diverse perspectives. Each “hat” represents a different way of thinking: the White Hat focuses on facts and data, the Red Hat on emotions and intuition, the Black Hat on risks and caution, the Yellow Hat on benefits and optimism, the Green Hat on creativity and new ideas, and the Blue Hat on organization and control of the thinking process. By systematically using each hat, individuals and teams can explore issues more comprehensively, leading to well-rounded and effective decisions.&lt;/p&gt;

&lt;p&gt;Cyril Northcote Parkinson formulated Parkinson’s Law, which states that “work expands to fill the time available for its completion.” This means that if a task has a longer deadline than necessary, it will likely take up all that time, even if it could be completed sooner. The law highlights inefficiencies in time management and productivity, often seen in workplaces and bureaucracies. To counteract this, setting shorter, more focused deadlines helps improve efficiency and prevent unnecessary procrastination.&lt;/p&gt;

&lt;p&gt;To verify whether a solution achieves the expected outcome, I follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set a short deadline (1–2 days).&lt;/li&gt;
&lt;li&gt;Narrow the scope based on the deadline.&lt;/li&gt;
&lt;li&gt;Create a Proof of Concept (POC).&lt;/li&gt;
&lt;li&gt;If the POC meets the expected outcome, list the software requirements based on it.&lt;/li&gt;
&lt;li&gt;Analyze implementation options.&lt;/li&gt;
&lt;li&gt;Plan the necessary changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For steps 1 and 2, Parkinson’s Law helps me focus on identifying small, high-impact changes within a constrained timeframe.&lt;/p&gt;

&lt;p&gt;For step 3, I use the Yellow, Red, and Green Hats to solve the problem on a small scale, even if that compromises quality. However, I keep an eye on the deadline to stay focused on the goal.&lt;/p&gt;

&lt;p&gt;For step 4, I proceed only if the POC meets the target goal. At this stage, I switch to the White, Black, and Blue Hats to list the functional and nonfunctional requirements based on the POC implementation.&lt;/p&gt;
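&lt;p&gt;The time-boxed gate in steps 1 through 4 can be sketched in a few lines. All names here are illustrative; the point is that work continues past the POC only when the expected outcome is met:&lt;/p&gt;

```python
from datetime import date, timedelta

def run_poc_cycle(build_poc, meets_expected_outcome, max_days=2):
    """Steps 1-4: time-box a POC and only proceed if it meets the goal."""
    deadline = date.today() + timedelta(days=max_days)  # step 1: short deadline
    poc = build_poc(deadline)   # steps 2-3: narrow the scope, build the POC
    if not meets_expected_outcome(poc):  # step 4: the gate
        return None             # stop here and rethink before investing more
    return poc                  # continue to steps 5-6 (options, planning)

# A trivial "POC" that records whether the outcome was met
result = run_poc_cycle(lambda deadline: {"ok": True}, lambda poc: poc["ok"])
# result is the POC itself because the outcome was met
```

&lt;p&gt;Treating the step 4 check as an explicit gate keeps a failed POC cheap: the short deadline bounds the loss, and nothing downstream is built on it.&lt;/p&gt;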

&lt;p&gt;For step 5, with the requirements defined, I consider at least two implementation options with their pros and cons, then share them with other stakeholders to gather feedback. Below are examples of questions I ask during this phase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the change fulfill its objective?&lt;/li&gt;
&lt;li&gt;How can negative side effects be mitigated?&lt;/li&gt;
&lt;li&gt;How can the benefits be measured?&lt;/li&gt;
&lt;li&gt;How can the change remain backward compatible?&lt;/li&gt;
&lt;li&gt;Are the changes important to the user?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, at step 6, I decide on an implementation option with the team and document the requirements in a ticket in the software’s release tracking system.&lt;/p&gt;

&lt;p&gt;Using this approach, I have solved several software problems throughout my career. I hope this methodology proves helpful to others as well.&lt;/p&gt;

</description>
      <category>problemsolving</category>
      <category>softwaredevelopment</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
