<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 🇷|🇺|🇧|🇪|🇳</title>
    <description>The latest articles on DEV Community by 🇷|🇺|🇧|🇪|🇳 (@rubenoostinga).</description>
    <link>https://dev.to/rubenoostinga</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F102155%2Fe2d2c6a6-1abf-4dcf-a65e-7fe69511981c.jpg</url>
      <title>DEV Community: 🇷|🇺|🇧|🇪|🇳</title>
      <link>https://dev.to/rubenoostinga</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rubenoostinga"/>
    <language>en</language>
    <item>
      <title>AI Agents with MCP: Practical Takeaways from n8n and GitHub Copilot</title>
      <dc:creator>🇷|🇺|🇧|🇪|🇳</dc:creator>
      <pubDate>Mon, 07 Apr 2025 07:59:22 +0000</pubDate>
      <link>https://dev.to/rubenoostinga/ai-agents-with-mcp-practical-takeaways-from-n8n-and-github-copilot-cd4</link>
      <guid>https://dev.to/rubenoostinga/ai-agents-with-mcp-practical-takeaways-from-n8n-and-github-copilot-cd4</guid>
      <description>&lt;h2&gt;
  
  
  Key Takeaways for Developers
&lt;/h2&gt;

&lt;p&gt;✅ &lt;strong&gt;Successful Approaches&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Breaking down tasks into focused, single operations&lt;/li&gt;
&lt;li&gt;Maintaining control over the workflow instead of letting AI decide&lt;/li&gt;
&lt;li&gt;Using AI for specific steps rather than complex control flows&lt;/li&gt;
&lt;li&gt;Version controlling AI-assisted content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ &lt;strong&gt;Common Pitfalls&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Letting AI handle long-running operations (leads to timeouts and context loss)&lt;/li&gt;
&lt;li&gt;Assuming consistent behavior between different AI models and platforms&lt;/li&gt;
&lt;li&gt;Letting AI figure out control flow independently&lt;/li&gt;
&lt;li&gt;Making AI think about too many things at once&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AI Agents - Insights and Observations
&lt;/h2&gt;

&lt;p&gt;The more I work with AI, the more I discover what is generic and predictable versus what constitutes actual original insights. There is also a third category: technical challenges. The expectation is that as technology develops, technical issues will decrease, and AI will handle more of the generic work. This means the only work that will remain is original thinking and insights. This is the optimistic outlook. Note this also means that you will actually have to think a lot harder during your knowledge work.&lt;/p&gt;

&lt;p&gt;If you let an AI agent perform many actions in a row, like reorganizing a filesystem, it needs many slow iterations and eventually runs into timeouts. You also notice that it starts to forget what it is doing while controlling a browser. For example, Playwright tries to click links from the previous page that are stale and unclickable, whereas earlier in the run it still remembered to navigate back before clicking the links.&lt;/p&gt;

&lt;p&gt;Let it do one thing at a time to maintain reliability.&lt;/p&gt;

&lt;p&gt;If you know in which order actions should be performed, you don't need non-deterministic control flow to handle the task.&lt;br&gt;
Agentic AI and Copilot Chat edits in VS Code are removing the copy-paste steps from the development process.&lt;/p&gt;

&lt;p&gt;Speed is an advantage that might be especially beneficial for Gemini, as specialized hardware can make it a lot faster than other models.&lt;/p&gt;

&lt;p&gt;If you know the control flow (i.e., what the agent should do), don't let the AI figure this out on its own. Instead, program the flow yourself and use AI function calling or something like Instructor to do the language processing. Letting the AI figure out the control flow is slow and only makes sense if the flow depends on intelligent decisions. It feels like you lose some intelligence by letting the AI think about too many things at once. The more focused you can make your task, the better output you can expect.&lt;/p&gt;

&lt;p&gt;Examples where you know the control flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are scraping LinkedIn profiles to gather leads&lt;/li&gt;
&lt;li&gt;You do a code review where you know you want to first look at the code then add comments&lt;/li&gt;
&lt;li&gt;For file organization, you first scan files, then categorize, then move the files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key point is that in these cases, you already know the sequence of actions. The AI's role is to handle individual steps (like understanding file content or formatting text) rather than figuring out what to do next.&lt;/p&gt;
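
&lt;p&gt;To make this concrete, here is a minimal TypeScript sketch of the file organization case: the scan-categorize-move sequence is hard-coded, and only the categorize step would go to a model. &lt;code&gt;categorizeWithModel&lt;/code&gt; is a hypothetical stand-in for one focused LLM call per file (via function calling or a library like Instructor), not code from the article:&lt;/p&gt;

```typescript
// Hard-coded control flow for the file organization example:
// scan, categorize, move. Only the categorize step involves a model;
// categorizeWithModel is a hypothetical stand-in for one focused LLM
// call per file, so the sketch stays self-contained and runnable.
type FilePlan = { file: string; folder: string };

function categorizeWithModel(fileName: string): string {
  // A real implementation would send one small prompt per file and
  // parse a structured response; this stub mimics that behavior.
  if (fileName.endsWith('.pdf')) return 'documents';
  if (fileName.endsWith('.png')) return 'images';
  return 'misc';
}

function planMoves(files: string[]): FilePlan[] {
  // Steps 1 (scan) and 3 (move) stay deterministic application code;
  // only step 2 (categorize) is delegated, one file at a time.
  return files.map((file) => ({ file, folder: categorizeWithModel(file) }));
}

console.log(planMoves(['report.pdf', 'photo.png', 'notes.txt']));
```

&lt;p&gt;Because the loop is ordinary code, a failure is retryable per file, and the model never has to reason about sequencing.&lt;/p&gt;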

&lt;p&gt;Nice cases where you don't know what to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Programming with AI where it's unclear which files need to be edited and how&lt;/li&gt;
&lt;li&gt;Search engines like Perplexity where the query path isn't predetermined&lt;/li&gt;
&lt;li&gt;Data analysis where patterns aren't known beforehand&lt;/li&gt;
&lt;li&gt;Customer support where each query requires a unique investigation path&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The common factor in all these cases is that the input is a user query or problem where there is no clear algorithm that could be hardcoded. The AI needs to make decisions based on context and intermediate findings rather than following a predetermined path.&lt;/p&gt;
&lt;h2&gt;
  
  
  Understanding n8n and GitHub Copilot Agents
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What is n8n?
&lt;/h3&gt;

&lt;p&gt;n8n is an open-source workflow automation tool that allows users to connect various applications, APIs, and data sources to automate tasks. It provides a visual interface where you can build workflows by connecting nodes that represent different services or actions. Think of it as an alternative to tools like Zapier or Make (formerly Integromat), but with more flexibility and the ability to self-host.&lt;/p&gt;

&lt;p&gt;Key features of n8n include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visual workflow builder&lt;/li&gt;
&lt;li&gt;200+ pre-built integrations&lt;/li&gt;
&lt;li&gt;Ability to run custom JavaScript code&lt;/li&gt;
&lt;li&gt;Self-hosting option for complete control over your data&lt;/li&gt;
&lt;li&gt;Community-contributed nodes to extend functionality&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  What are GitHub Copilot Agents?
&lt;/h3&gt;

&lt;p&gt;GitHub Copilot agents are an extension of GitHub Copilot that goes beyond code completion. These agents can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have conversations with developers about code&lt;/li&gt;
&lt;li&gt;Execute tools and perform actions on behalf of users&lt;/li&gt;
&lt;li&gt;Interact with external services through MCP&lt;/li&gt;
&lt;li&gt;Help with complex development tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To access Copilot agents:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up for GitHub Copilot (requires a subscription)&lt;/li&gt;
&lt;li&gt;Join the GitHub Copilot Insider program&lt;/li&gt;
&lt;li&gt;Install the latest version of VS Code and the GitHub Copilot extension&lt;/li&gt;
&lt;li&gt;Enable Copilot agent features in settings&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  What is Model Context Protocol (MCP)?
&lt;/h3&gt;

&lt;p&gt;The Model Context Protocol (MCP) is a standardized way for AI models to interact with external tools and services. It acts as a "universal translator" that enables seamless communication between different systems, allowing AI agents to perform actions in the real world.&lt;/p&gt;
&lt;h3&gt;
  
  
  n8n with MCP Integration
&lt;/h3&gt;

&lt;p&gt;The n8n-nodes-mcp community node allows users to connect MCP servers within their workflows. This integration supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Listing available tools on connected MCP servers&lt;/li&gt;
&lt;li&gt;Executing tools through AI agents&lt;/li&gt;
&lt;li&gt;Sending prompts and retrieving structured responses&lt;/li&gt;
&lt;li&gt;Accessing resources from connected servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this integration, you can create AI-powered workflows that automate complex tasks like data analysis, web scraping, or file management.&lt;/p&gt;
&lt;h2&gt;
  
  
  Installing MCP Tools
&lt;/h2&gt;

&lt;p&gt;Installing MCP tools is remarkably straightforward, typically requiring just a simple npx command to download and run a server. Most tools can be installed and started with a single command line instruction. While you can add tools directly through VS Code's command line interface, there are other installation methods available, such as using configuration files or installing the servers globally via npm. Each tool may need some basic configuration, like specifying allowed directories for filesystem access or setting browser preferences, but the overall process remains simple and makes it easy to enhance your AI agents with new capabilities.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sh
code-insiders --add-mcp '{"name":"playwright","command":"npx","args":["@playwright/mcp@latest"]}'

code-insiders --add-mcp '{"name":"filesystem","command":"npx","args":["-y","@modelcontextprotocol/server-filesystem","~/Downloads"]}'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Comparison: n8n vs GitHub Copilot Agents
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;n8n with MCP&lt;/th&gt;
&lt;th&gt;GitHub Copilot Agents&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ease of use&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;More complex setup, steeper learning curve&lt;/td&gt;
&lt;td&gt;Easier to work with, integrated into VS Code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Workflow complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supports complex, multi-step workflows&lt;/td&gt;
&lt;td&gt;Better for one-off tasks and development assistance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Persistence&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Workflows can be saved and run repeatedly&lt;/td&gt;
&lt;td&gt;Sessions are temporary, better for interactive use&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MCP Playwright performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Less stable, context issues between actions&lt;/td&gt;
&lt;td&gt;Better performance, maintains browser context between actions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Debugging&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Error messages can be cryptic and difficult to debug&lt;/td&gt;
&lt;td&gt;More transparent, easier to see what's happening&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Prompt engineering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Requires more explicit prompting&lt;/td&gt;
&lt;td&gt;Handles vague instructions better&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Automation frequency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Better for repeated automations&lt;/td&gt;
&lt;td&gt;Better for ad-hoc automations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tool behavior&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;More predictable tool usage patterns&lt;/td&gt;
&lt;td&gt;Behavior can vary between models (e.g., Claude 3.5 vs 3.7)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning curve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Requires learning workflow concepts&lt;/td&gt;
&lt;td&gt;Feels more like conversation with occasional "programming in a prompt"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The choice between these platforms depends on your specific needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose n8n for recurring automations, complex workflows, and when you need a visual representation of your process&lt;/li&gt;
&lt;li&gt;Choose GitHub Copilot agents for development assistance, ad-hoc automations, and when you prefer a conversational interface&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  My Insights and Findings
&lt;/h2&gt;

&lt;p&gt;MCP is useful for quickly adding tools. I tried out MCP Playwright and MCP FileSystem. When comparing the platforms, Playwright works better in GitHub Copilot than in n8n. In n8n it ran into errors because the browser context was not preserved between actions. I suspect n8n reruns the Playwright MCP server for every action, while Copilot keeps it open so it can keep connecting to the existing tab and browser context.&lt;/p&gt;

&lt;p&gt;In n8n you can make more complex workflows and define more steps, which is beneficial for certain use cases. It does work, but you get error messages that are difficult to debug, many due to data being different than expected.&lt;/p&gt;

&lt;p&gt;Overall GitHub Copilot agents was easier to work with, but n8n is more powerful when you want to run automations many times.&lt;/p&gt;

&lt;h3&gt;
  
  
  File Organization Experiment
&lt;/h3&gt;

&lt;p&gt;I experimented with using AI agents to organize files in a filesystem, which revealed interesting differences between AI models. When tasked with file organization, Claude 3.7 took a more cautious approach by writing a shell script to move files. In contrast, Claude 3.5 was more proactive, directly executing the script in the terminal. This experiment highlighted that VS Code provides terminal access to agents by default, allowing for direct interaction with the filesystem. Through multiple iterations, the directory organization process showed continuous improvement.&lt;/p&gt;

&lt;p&gt;Testing different MCP tools revealed consistent patterns in AI behavior. Initially, agents would attempt to access directories without checking permissions. After encountering errors, they learned to first inquire about which directories they could even access. This adaptive behavior demonstrates how agents can learn from system responses and adjust their approach accordingly. The process became more efficient when agents were explicitly instructed to check available directories before attempting operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  LinkedIn Lead Generation Experiment
&lt;/h3&gt;

&lt;p&gt;I conducted an experiment using AI to find decision-makers on LinkedIn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up a SQLite database to store contact information and post summaries&lt;/li&gt;
&lt;li&gt;The AI showed unexpected intelligence by focusing on director-level positions automatically&lt;/li&gt;
&lt;li&gt;Without being explicitly told, it knew to filter out consultants and engineers&lt;/li&gt;
&lt;li&gt;Used MCP Playwright for browser automation and data collection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key observations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different AI models were not equally reliable at storing data&lt;/li&gt;
&lt;li&gt;ChatGPT 4.0 was better at remembering to store data while browsing&lt;/li&gt;
&lt;li&gt;We learned that storing data immediately works better than saving it all at once&lt;/li&gt;
&lt;/ul&gt;
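
&lt;p&gt;The last observation can be expressed as a simple pattern: persist each record the moment it is extracted, instead of batching everything for one save at the end, where a timeout or lost context loses all of it. A sketch with an in-memory array standing in for the SQLite database; &lt;code&gt;saveContact&lt;/code&gt; and the contact shape are illustrative assumptions:&lt;/p&gt;

```typescript
// Store-immediately pattern: each contact is persisted as soon as it
// is extracted. The array is an in-memory stand-in for the SQLite
// contacts table; a real implementation would INSERT a row instead.
type Contact = { name: string; title: string };

const saved: Contact[] = [];

function saveContact(contact: Contact): void {
  saved.push(contact);
}

function storeLeads(contacts: Contact[]): number {
  for (const contact of contacts) {
    // If the agent times out or loses context after this point,
    // everything stored so far is already safe.
    saveContact(contact);
  }
  return saved.length;
}

storeLeads([
  { name: 'Jane Doe', title: 'Director of Engineering' },
  { name: 'John Roe', title: 'Director of Operations' },
]);
```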

&lt;h3&gt;
  
  
  Writing this Blogpost Using an AI Agent
&lt;/h3&gt;

&lt;p&gt;AI agents are helpful when writing. You can focus on your ideas instead of worrying about perfect sentences, grammar, and spelling. The AI helps make your writing flow better.&lt;/p&gt;

&lt;p&gt;What matters most is including your own insights. If you let the AI write everything from scratch, you'll end up with generic content that misses the important points. That's why you should regularly save your blogpost to version control - this way, you can always go back if text gets lost or changed too much.&lt;/p&gt;

&lt;p&gt;One useful approach is using tools like Perplexity to find documentation or copying READMEs from the internet yourself. Add these to your workspace, and then you can ask the AI agent to write about MCP using this documentation as context.&lt;/p&gt;

&lt;p&gt;Some examples of challenges I ran into while writing this post:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The AI often rewrote whole sections when I just wanted small improvements&lt;/li&gt;
&lt;li&gt;When writing about MCP setup, it gave generic, copy-pasted information instead of useful details&lt;/li&gt;
&lt;li&gt;While AI agents seem like they'll save time, you actually spend extra time learning how to work with them effectively&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI particularly excelled at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improving sentence structure and readability&lt;/li&gt;
&lt;li&gt;Adding transition sentences between paragraphs&lt;/li&gt;
&lt;li&gt;Catching grammar and spelling issues&lt;/li&gt;
&lt;li&gt;Suggesting better ways to organize information&lt;/li&gt;
&lt;li&gt;Maintaining consistent tone throughout the post&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI agents take the copy-paste steps out of your AI workflow. However, they are not a silver bullet yet. Just because AI agents could theoretically take certain actions because they have the tools doesn't make them the best approach. Right now, the models often don't behave as expected and execute slowly. If you have to do a lot of prompt engineering to control the agent, it is better to add tighter controls and ask smaller questions so you can better steer the outcome.&lt;/p&gt;

&lt;p&gt;I'm optimistic about the future because of a saying you hear more and more: what you see now is the worst it's going to be.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>aiops</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Taking Frontend Architecture Serious with dependency-cruiser</title>
      <dc:creator>🇷|🇺|🇧|🇪|🇳</dc:creator>
      <pubDate>Mon, 25 Sep 2023 09:10:00 +0000</pubDate>
      <link>https://dev.to/rubenoostinga/taking-frontend-architecture-serious-with-dependency-cruiser-5fc2</link>
      <guid>https://dev.to/rubenoostinga/taking-frontend-architecture-serious-with-dependency-cruiser-5fc2</guid>
      <description>&lt;p&gt;With &lt;a href="https://github.com/sverweij/dependency-cruiser"&gt;dependency-cruiser&lt;/a&gt;, you can enforce which imports are allowed. This enables you to create an architecture fitness function that ensures your code continues to adhere to the initial design. You can also visualize your dependencies to gain a clearer understanding of your code's actual structure, allowing you to compare it with your mental model and make improvements where necessary.&lt;/p&gt;

&lt;p&gt;An application architecture design defines a folder structure and specifies which files can import from other files. On the backend, you have design patterns like layered architecture, hexagonal (or ports and adapters) architecture. On the frontend, there's the classical Model-View-Controller architecture. Modern component-based frameworks also offer their own approaches like having page, feature, or technical folders. Often there is also a shared or common folder.&lt;/p&gt;

&lt;p&gt;Applying any architecture pattern or folder structure has an impact on how code should be imported by the rest of the codebase. Verifying the correctness of imports manually can be challenging and is often overlooked during code reviews. An automated check can benefit not only the current team but also future developers. Even when you design the application architecture and implement the code, it's easy to unintentionally deviate from your own guidelines. One auto-import from an editor can compromise your design. Collaborating with others amplifies this challenge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep Page-Specific Folders Isolated
&lt;/h2&gt;

&lt;p&gt;You don't want modules from one page-specific folder importing modules from another page-specific folder. If there's a cross-page dependency, it likely means that some modules are actually shared modules placed in a page-specific folder by accident. The solution is to move the module to a shared or common folder.&lt;/p&gt;

&lt;p&gt;In our codebase, we faced this issue because we built one page and later constructed a similar page, realizing we could reuse some components. We forgot to move the components to the shared folder, and this oversight wasn't caught during code review. With dependency-cruiser, the issue was easily discovered and will be avoided in the future.&lt;br&gt;
See &lt;a href="https://github.com/sverweij/dependency-cruiser/blob/main/doc/rules-tutorial.md#isolating-peer-folders-from-each-other"&gt;Isolating peer folders from each other&lt;/a&gt;&lt;/p&gt;
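
&lt;p&gt;A rule for this can look roughly like the following in &lt;code&gt;.dependency-cruiser.js&lt;/code&gt;; the &lt;code&gt;src/pages&lt;/code&gt; folder name is an assumption, so adapt the paths to your own structure. The capture group in &lt;code&gt;from.path&lt;/code&gt; is reused as &lt;code&gt;$1&lt;/code&gt; in &lt;code&gt;to.pathNot&lt;/code&gt;, so a page may import from itself but not from a sibling page:&lt;/p&gt;

```javascript
// .dependency-cruiser.js sketch: keep page folders isolated from each
// other. The capture group in from.path is reused as $1 in to.pathNot,
// so a page may import from itself but not from a sibling page.
// The src/pages folder name is an assumption; adapt it to your layout.
const config = {
  forbidden: [
    {
      name: 'pages-not-to-other-pages',
      severity: 'error',
      comment:
        'A page should not depend on another page; move shared code to a shared folder instead',
      from: { path: '^src/pages/([^/]+)/' },
      to: { path: '^src/pages/', pathNot: '^src/pages/$1/' },
    },
  ],
};

module.exports = config;
```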

&lt;h2&gt;
  
  
  Finding Orphaned Code
&lt;/h2&gt;

&lt;p&gt;If unused code is not removed as soon as it's no longer needed, detecting it later can be difficult. Components that are imported only by unit tests or Storybook stories may seem to be in use, but are not actually included anywhere in the real application.&lt;/p&gt;

&lt;p&gt;Dependency-cruiser can identify whether code is used by actual production code or not. Of course, exceptions can be made for configuration files that aren't supposed to be part of the application in the first place.&lt;/p&gt;

&lt;p&gt;We found an unused component, complete with tests and Storybook stories. By removing it, we no longer need to maintain the code, and it eliminates confusion about why the code was there in the first place.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://github.com/sverweij/dependency-cruiser/blob/main/doc/rules-tutorial.md#is-a-module-actually-used"&gt;Is a module actually used?&lt;/a&gt;&lt;/p&gt;
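
&lt;p&gt;A sketch of such an orphan rule, with &lt;code&gt;pathNot&lt;/code&gt; exceptions for TypeScript declaration files and dot-file configuration (patterns along the lines of dependency-cruiser's default configuration; adjust to taste):&lt;/p&gt;

```javascript
// .dependency-cruiser.js sketch: flag modules that nothing imports.
// The pathNot entries exempt TypeScript declaration files and dot-file
// configuration, so only real application code is reported as orphaned.
const config = {
  forbidden: [
    {
      name: 'no-orphans',
      severity: 'warn',
      comment:
        'This module is not imported by anything; remove it, or wire it up if it should be used',
      from: {
        orphan: true,
        pathNot: ['\\.d\\.ts$', '(^|/)\\.[^/]+\\.(js|cjs|mjs|ts|json)$'],
      },
      to: {},
    },
  ],
};

module.exports = config;
```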

&lt;h2&gt;
  
  
  Shared Code
&lt;/h2&gt;

&lt;p&gt;It's helpful to distinguish code that is genuinely shared between different parts from code that could be shared but isn't currently. Often, the way code is written makes assumptions about its usage.&lt;/p&gt;

&lt;p&gt;By counting how many files depend on a module, you can determine whether a module should be considered shared. You can use this information to enforce that shared modules are placed in a &lt;code&gt;common/&lt;/code&gt; or &lt;code&gt;shared/&lt;/code&gt; folder. Conversely, you can ensure that all code in the shared folders is actually shared. When you remove an import to a shared module, you'll receive an error, allowing you to move the module to a page-specific folder.&lt;/p&gt;

&lt;p&gt;Here, we also found code that could be, or had been, shared but, in reality, was specific to a single page. So, we moved this code from the shared folder to the page folder. This shift made the page code more cohesive and decreased its coupling to the shared code—both important architectural qualities.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://github.com/sverweij/dependency-cruiser/blob/main/doc/rules-tutorial.md#is-a-utility-module-shared"&gt;Is a utility module shared?&lt;/a&gt;&lt;/p&gt;
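
&lt;p&gt;Assuming a &lt;code&gt;src/shared&lt;/code&gt; folder, a rule along these lines flags shared modules with fewer than two dependents; the &lt;code&gt;numberOfDependentsLessThan&lt;/code&gt; attribute is the mechanism dependency-cruiser's tutorial uses for this check, and both the folder name and the threshold are assumptions to tune for your codebase:&lt;/p&gt;

```javascript
// .dependency-cruiser.js sketch: everything under src/shared should
// actually be shared, i.e. have at least two dependents. The folder
// name and the threshold are assumptions; tune them to your codebase.
const config = {
  forbidden: [
    {
      name: 'no-unshared-in-shared',
      severity: 'warn',
      comment:
        'This shared module has fewer than two dependents; consider moving it next to its only user',
      from: {},
      module: { path: '^src/shared/', numberOfDependentsLessThan: 2 },
    },
  ],
};

module.exports = config;
```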

&lt;h2&gt;
  
  
  Visualization
&lt;/h2&gt;

&lt;p&gt;Dependency-cruiser has &lt;a href="https://github.com/sverweij/dependency-cruiser/blob/main/doc/real-world-samples.md"&gt;advanced visualization capabilities&lt;/a&gt; built-in.&lt;/p&gt;
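
&lt;p&gt;Generating such a graph is a one-liner on the command line, assuming you have Graphviz's &lt;code&gt;dot&lt;/code&gt; executable installed:&lt;/p&gt;

```shell
# Cruise everything under src and render the dependency graph with
# Graphviz (requires the dot executable to be installed).
npx depcruise src --include-only "^src" --output-type dot | dot -T svg > dependency-graph.svg
```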

&lt;p&gt;When you visualize the dependencies between different folders, you can see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dependencies that shouldn’t be there&lt;/li&gt;
&lt;li&gt;Whether a dependency is a dynamic import (for lazy loading) or not&lt;/li&gt;
&lt;li&gt;Circular dependencies&lt;/li&gt;
&lt;li&gt;Which folders have shared code&lt;/li&gt;
&lt;li&gt;If 3rd party code is kept separate or spread everywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is also handy as documentation so everyone has the same understanding.&lt;/p&gt;

&lt;p&gt;For the best visualization, you need to tweak the &lt;a href="https://github.com/sverweij/dependency-cruiser/blob/main/doc/options-reference.md#reporteroptions"&gt;configuration&lt;/a&gt; to get the right amount of detail.&lt;/p&gt;

&lt;p&gt;In the visualization below, we want to show that pages don’t depend on other pages but only on the shared and service layers. The service layer itself doesn’t rely on any view-specific folders. What isn’t shown, but is enforced by dependency-cruiser, is that the services folder doesn’t have any React-specific dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo65xwy6f3siml7vhkra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo65xwy6f3siml7vhkra.png" alt="Real World React codebase visualized with dependency-cruiser" width="800" height="821"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison to eslint-plugin-import
&lt;/h2&gt;

&lt;p&gt;When comparing dependency-cruiser to eslint-plugin-import, a noticeable overlap between the two becomes evident. Both tools can identify circular dependencies, impose limits on certain imports, and assist in keeping test code separate from production code. However, dependency-cruiser offers additional features. It can identify orphaned files even if they are imported by tests. It facilitates the separation of sibling folders to maintain organized code. Additionally, dependency-cruiser displays the usage frequency of a component, enabling teams to discern shared parts of the codebase.&lt;/p&gt;

&lt;p&gt;A distinct advantage of eslint is its superior editor integration. Unlike dependency-cruiser, eslint displays red lines beneath invalid imports, providing immediate feedback. If in-editor feedback is valuable, eslint, with its rule &lt;code&gt;no-restricted-imports&lt;/code&gt;, serves well in ensuring proper encapsulation of third parties. While dependency-cruiser can perform similarly, it usually provides feedback after a failed CI build, whereas eslint aids developers instantly.&lt;/p&gt;
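
&lt;p&gt;For illustration, an eslint sketch of that encapsulation rule; &lt;code&gt;axios&lt;/code&gt; and the wrapper path are placeholders for whatever third party you want to fence off, not examples from the codebase discussed here:&lt;/p&gt;

```javascript
// .eslintrc.js sketch: forbid importing a third-party client directly,
// so all usage goes through one wrapper module. The module name axios
// and the wrapper path are placeholders, not taken from the article.
const config = {
  rules: {
    'no-restricted-imports': [
      'error',
      {
        paths: [
          {
            name: 'axios',
            message: 'Import the http client from src/services/http instead.',
          },
        ],
      },
    ],
  },
};

module.exports = config;
```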

&lt;p&gt;The visualization provided by dependency-cruiser is undeniably a unique feature absent in a typical linter.&lt;/p&gt;

&lt;p&gt;I advise utilizing a combination of both tools to attain a developer experience that aligns with your team’s needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real World Experience
&lt;/h2&gt;

&lt;p&gt;Our nearly three-month experience with dependency-cruiser was overwhelmingly positive. Upon activating the rules, we uncovered numerous incorrect imports and misplaced files, even in our high-quality, relatively new React codebase. These revelations prompted enhancements in architectural qualities like separation of concerns, encapsulation, and modularity. The fixes primarily involved relocating files and shifting code between modules. Not enforcing all rules from the outset allowed for incremental improvements, especially beneficial for larger codebases.&lt;/p&gt;

&lt;p&gt;Once we began enforcing the rules, dependency errors were scarce, thanks to our consistent code structure and established patterns. Encountered errors were accurate, with straightforward solutions, reinforcing that our design remains intact, averting the risk of incremental degradation, or, more starkly, death by a thousand cuts.&lt;/p&gt;

&lt;p&gt;The visualization is invaluable, reflecting current realities over idealized mental models. It aids in onboarding new members and, more importantly, allows existing team members to discover architectural improvements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Vigilance in software architecture is crucial throughout the development cycle to counter incremental degradation; this is applicable to both frontend and backend codebases. Dependency-cruiser simplifies the design review process and ensures design enforcement during development.&lt;/p&gt;

&lt;p&gt;Unfortunately, the production codebase where I applied dependency-cruiser is not public. However, I did conduct some experiments during a Xebia Innovation Day. &lt;a href="https://github.com/0xR/fitness-functions-experiments"&gt;That code&lt;/a&gt; can be found on GitHub.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>react</category>
      <category>programming</category>
    </item>
    <item>
      <title>Monitoring consumer lag in Azure Event Hub</title>
      <dc:creator>🇷|🇺|🇧|🇪|🇳</dc:creator>
      <pubDate>Thu, 30 Sep 2021 15:25:37 +0000</pubDate>
      <link>https://dev.to/rubenoostinga/monitoring-consumer-lag-in-azure-event-hub-dp5</link>
      <guid>https://dev.to/rubenoostinga/monitoring-consumer-lag-in-azure-event-hub-dp5</guid>
      <description>&lt;h2&gt;
  
  
  Why
&lt;/h2&gt;

&lt;p&gt;Consumer lag is the most important metric to monitor when working with event streams. However, it is not available as a default metric in Azure Insights. Want to have this metric available as part of your monitoring solution? You can set it up with some custom code. In this blog we show you how. &lt;/p&gt;

&lt;h2&gt;
  
  
  What
&lt;/h2&gt;

&lt;p&gt;Consumer lag refers to the number of events that still need to be processed by the consumers of a stream. Consumer lag will be 0 most of the time, as every event is consumed immediately. However, a few situations can cause that number to rise. When a consumer runs into errors, like a functional issue caused by an event or a technical issue such as network connectivity, it stops consuming events, and the consumer lag increases.&lt;/p&gt;

&lt;p&gt;The lag will also increase if events are published faster than the consumer can process them. In that case, the problem will resolve itself when events are published at a lower rate, and the consumer catches up again. &lt;/p&gt;

&lt;p&gt;You can trigger an alert when the consumer lag exceeds 0 for an extended period, like 10 minutes. What the best alert trigger configuration is for you depends on your situation. Before we continue to the solution, let's clarify some terms: &lt;/p&gt;

&lt;h2&gt;
  
  
  Definitions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#consumer-groups"&gt;Consumer groups&lt;/a&gt; enable multiple consumers to subscribe to the same event stream. Typically, a consumer group consists of multiple instances of the same application, that can be used for high availability and horizontal scaling.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#partitions"&gt;Partitions&lt;/a&gt; enable events to be processed in parallel. All events within a partition have a fixed order. Events in different partitions can be received out of order because they are processed in parallel. A consuming application can have multiple instances that can each read from multiple partitions.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#namespace"&gt;Namespace&lt;/a&gt; is a collection of event hubs/topics that can be managed together.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#checkpointing"&gt;Checkpoints&lt;/a&gt; records the sequence number of the last consumed event. This value is used to ensure that, in the event of a restart, only the events that have not been consumed yet are resent. Typically, checkpoints are stored as a file in BlobStorage. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How
&lt;/h2&gt;

&lt;p&gt;The Azure SDK can &lt;a href="https://docs.microsoft.com/en-us/javascript/api/@azure/event-hubs/eventhubconsumerclient?view=azure-node-latest#getPartitionProperties_string__GetPartitionPropertiesOptions_"&gt;retrieve the sequence number of the last enqueued event of a partition&lt;/a&gt;. With the &lt;a href="https://docs.microsoft.com/en-us/javascript/api/@azure/event-hubs/checkpointstore?view=azure-node-latest"&gt;CheckpointStore&lt;/a&gt; you can &lt;a href="https://docs.microsoft.com/en-us/javascript/api/@azure/event-hubs/checkpointstore?view=azure-node-latest#listCheckpoints_string__string__string__OperationOptions_"&gt;retrieve the sequence number of the checkpoint&lt;/a&gt;. Since both are simple counters, you can calculate the difference and &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/nodejs#telemetryclient-api"&gt;publish this as a custom metric in Azure Insights&lt;/a&gt;. To make it a metric you can monitor, collect it periodically, say every minute.&lt;/p&gt;

&lt;p&gt;There are two ways to collect the consumer lag metric:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the consumer application itself. It already has the Event Hub credentials, namespace, and consumer group. The downside: if the consuming application crashes, the metric is no longer collected, so the lag stops being reported exactly when it matters most. To compensate, also alert on the application failing its health check or on the metric going missing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In a separate monitoring process. This keeps reporting the lag even while the consuming application is down.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code examples below are in TypeScript for conciseness, but the same approach works with the other Event Hub SDKs, such as those for C#, Java, Python, and Go.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collecting the consumer lag
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// initialize checkpointStore and eventHubClient&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;consumerGroup&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my consumer group&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;checkpointStore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;eventHubClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;

&lt;span class="c1"&gt;// Send the consumer lag every minute&lt;/span&gt;
&lt;span class="nf"&gt;setInterval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;measureConsumerLag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;consumerGroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;eventHubClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;checkpointStore&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;The Event Hub Consumer Lag could not be sent to Application Insights&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;60000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;measureConsumerLag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;consumerGroup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;eventHubClient&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;EventHubConsumerClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;checkpointStore&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;BlobCheckpointStore&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;partitionIds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;eventHubClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getPartitionIds&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// Should return either 0 or 1 checkpoint per partition&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;checkpoints&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;checkpointStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listCheckpoints&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;eventHubClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullyQualifiedNamespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;eventHubClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventHubName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;consumerGroup&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;checkpointSequenceNumberByPartitionId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromEntries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;checkpoints&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(({&lt;/span&gt; &lt;span class="nx"&gt;partitionId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sequenceNumber&lt;/span&gt; &lt;span class="p"&gt;}):&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;partitionId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sequenceNumber&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;partitionIds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;partitionId&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lastKnownSequenceNumber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;checkpointSequenceNumberByPartitionId&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;partitionId&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;lastEnqueuedSequenceNumber&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;eventHubClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getPartitionProperties&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;partitionId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;consumerLageMetric&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;eventHubName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;eventHubClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventHubName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;consumerGroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;partitionId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;eventHubClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fullyQualifiedNamespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="c1"&gt;// The consumerLag calculation&lt;/span&gt;
        &lt;span class="na"&gt;consumerLag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lastEnqueuedSequenceNumber&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;lastKnownSequenceNumber&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;

      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;trackEventHubConsumerLag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;consumerLageMetric&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Sending the custom metric
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;defaultClient&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;appInsightsClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;applicationinsights&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;ConsumerLagMetric&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;eventHubName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;consumerGroup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;partitionId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;consumerLag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;trackEventHubConsumerLag&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;eventHubName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;consumerGroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;partitionId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;consumerLag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;ConsumerLagMetric&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;trackMetric&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Event Hub Consumer Lag&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;consumerLag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;// Format property keys with a space, for readability in the Application Insights metrics dashboard&lt;/span&gt;
    &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Event Hub&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;eventHubName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Partition Id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;partitionId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Consumer Group&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;consumerGroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;Namespace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Viewing the custom metric
&lt;/h2&gt;

&lt;p&gt;In the Application Insights console, you’ll find your custom metric. Split the chart by "Consumer Group", which represents an application. Depending on the zoom level, the chart shows multiple measurements per data point; use the "Max" aggregation to get the clearest line. &lt;/p&gt;

&lt;p&gt;This chart shows three microservices, one of which is stuck processing an event. Whenever new events are published, its consumer lag increases. Because the events are published in bursts, the lag rises in distinct increments. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkjx6cc8d2183vu11vmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkjx6cc8d2183vu11vmw.png" alt="A chart with a climbing line" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the issue is resolved, the consumer lag drops quickly. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxt55f0e8rddst9n7vrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxt55f0e8rddst9n7vrm.png" alt="A chart with a line that quickly reaches a plateau and then drops" width="800" height="609"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Consumer lag will quickly reveal any functional or technical issue with your event stream. The code examples from this blog post save you from having to dive into the SDKs yourself. Of course, you can adjust the metric collection to send the metric to the logs, or to another metrics system such as &lt;a href="https://github.com/siimon/prom-client"&gt;Prometheus&lt;/a&gt;, &lt;a href="https://docs.datadoghq.com/api/latest/metrics/#submit-metrics"&gt;Datadog&lt;/a&gt;, or &lt;a href="https://github.com/open-telemetry/opentelemetry-js/tree/main/packages/opentelemetry-sdk-metrics-base"&gt;OpenTelemetry&lt;/a&gt;. &lt;/p&gt;
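
&lt;p&gt;For Prometheus, for example, the metric maps naturally onto a labelled gauge. As a hypothetical sketch (the metric and label names are my own, and a real setup would register a Gauge through prom-client rather than format text by hand), this is roughly what one sample looks like in the Prometheus text exposition format:&lt;/p&gt;

```typescript
// Hypothetical sketch: render one consumer-lag sample in the Prometheus
// text exposition format. Metric and label names are illustrative.
interface ConsumerLagSample {
  eventHubName: string;
  consumerGroup: string;
  partitionId: string;
  consumerLag: number;
}

function toPrometheusLine(sample: ConsumerLagSample): string {
  const labels = [
    `event_hub="${sample.eventHubName}"`,
    `consumer_group="${sample.consumerGroup}"`,
    `partition_id="${sample.partitionId}"`,
  ].join(',');
  return `event_hub_consumer_lag{${labels}} ${sample.consumerLag}`;
}
```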

&lt;p&gt;After collecting the metric, the next step is to create &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-metric"&gt;metric-based alerts&lt;/a&gt; to make sure you detect issues before your customers do! &lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
