<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ali</title>
    <description>The latest articles on DEV Community by Ali (@alikim).</description>
    <link>https://dev.to/alikim</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3603948%2F4b9a312f-dfa2-4067-a3c0-eb275cc17544.png</url>
      <title>DEV Community: Ali</title>
      <link>https://dev.to/alikim</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alikim"/>
    <language>en</language>
    <item>
      <title>Webpack Fast Refresh vs Vite</title>
      <dc:creator>Ali</dc:creator>
      <pubDate>Thu, 18 Dec 2025 19:54:37 +0000</pubDate>
      <link>https://dev.to/alikim/webpack-fast-refresh-vs-vite-4ffk</link>
      <guid>https://dev.to/alikim/webpack-fast-refresh-vs-vite-4ffk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb9g32f7sv0t0kz4c0k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb9g32f7sv0t0kz4c0k1.png" alt="Decorative image" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article shares what felt fastest in the day‑to‑day development of ilert‑ui, a large React + TypeScript app with many lazy routes. We first moved off Create React App (CRA) toward modern tooling, trialed Vite for local development, and ultimately landed on webpack‑dev‑server + React Fast Refresh. &lt;/p&gt;

&lt;p&gt;This article was first published on the ilert blog, and you can find the full version &lt;a href="https://www.ilert.com/blog/webpack-fast-refresh-vs-vite?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=devto" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scope&lt;/strong&gt;: Local development only. Our production builds remain on Webpack. For context, the React team officially sunset CRA on February 14, 2025, and recommends migrating to a framework or a modern build tool such as Vite, Parcel, or RSBuild.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Qualitative field notes from ilert‑ui&lt;/strong&gt;: We didn’t run formal benchmarks; this is our day‑to‑day experience in a large route‑split app.&lt;/p&gt;

&lt;h2&gt;Mini‑glossary&lt;/h2&gt;

&lt;p&gt;Here are a few terms you will encounter in this article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ESM&lt;/strong&gt;: The native JavaScript module system browsers understand.&lt;br&gt;
&lt;strong&gt;HMR&lt;/strong&gt;: Swaps changed code into a running app without a full reload.&lt;br&gt;
&lt;strong&gt;React Fast Refresh&lt;/strong&gt;: React’s HMR experience that preserves component state when possible.&lt;br&gt;
&lt;strong&gt;Lazy route / code‑splitting&lt;/strong&gt;: Loading route code only when the route is visited.&lt;br&gt;
&lt;strong&gt;Vendor chunk&lt;/strong&gt;: A bundle of shared third‑party deps cached across routes.&lt;br&gt;
&lt;strong&gt;Eager pre‑bundling&lt;/strong&gt;: Bundling common deps up front to avoid many small requests later.&lt;br&gt;
&lt;strong&gt;Dependency optimizer (Vite)&lt;/strong&gt;: Pre‑bundles bare imports; may re‑run if new deps are discovered at runtime.&lt;br&gt;
&lt;strong&gt;Type‑aware ESLint&lt;/strong&gt;: ESLint that uses TypeScript type info – more accurate, but heavier.&lt;/p&gt;
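As a toy illustration of the lazy‑route idea (the names and loader registry here are purely illustrative, not ilert‑ui code), route code can be produced on first visit and then served from a cache:

```javascript
// Toy sketch of lazy loading with a cache (illustrative, not ilert-ui code).
const cache = new Map();

function loadRoute(name, loaders) {
  if (!cache.has(name)) {
    cache.set(name, loaders[name]()); // route code is built on first visit only
  }
  return cache.get(name);
}

// Stand-ins for dynamic import() factories:
let builds = 0;
const loaders = {
  dashboard: () => {
    builds += 1;
    return { title: "Dashboard" };
  },
};

loadRoute("dashboard", loaders);
loadRoute("dashboard", loaders); // second visit hits the cache; builds stays at 1
```

In a real app the loader would be a dynamic `import()` call, and the bundler would turn each one into a separate chunk.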

&lt;h2&gt;Why we left CRA&lt;/h2&gt;

&lt;p&gt;ilert‑ui outgrew CRA’s convenience defaults as the app matured.&lt;/p&gt;

&lt;p&gt;Here are the reasons that pushed us away from CRA:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Customization friction: Advanced webpack tweaks (custom loaders, tighter split‑chunks strategy, Babel settings for react-refresh) required ejecting or patching. That slowed iteration on a production‑scale app.&lt;/li&gt;
&lt;li&gt;Large dependency surface: react-scripts brought many transitive packages. Installs got slower, and security noise grew over time without clear benefits for us.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Goals for the next steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Keep React + TS.&lt;/li&gt;
&lt;li&gt;Improve time‑to‑interactive after server start.&lt;/li&gt;
&lt;li&gt;Preserve state on edits (Fast Refresh behavior) and keep HMR snappy.&lt;/li&gt;
&lt;li&gt;Maintain predictable first‑visit latency when navigating across many lazy routes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Why Vite looked like a better solution&lt;/h2&gt;

&lt;p&gt;During development, Vite serves your source as native ESM and pre‑bundles bare imports from node_modules using esbuild. This usually yields very fast cold starts and responsive HMR.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What we loved immediately&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cold starts: Noticeably faster than our CRA baseline.&lt;/li&gt;
&lt;li&gt;Minimal config, clean DX: Sensible defaults and readable errors.&lt;/li&gt;
&lt;li&gt;Great HMR in touched areas: Editing within routes already visited felt excellent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where the model rubbed against our size&lt;/strong&gt;&lt;br&gt;
In codebases with many lazy routes, first‑time visits can trigger bursts of ESM requests, and when new deps are discovered at runtime, dependency‑optimizer re‑runs that reload the page. This is expected behavior, but it made cross‑route exploration feel uneven for us.&lt;/p&gt;

&lt;h2&gt;Qualitative field notes from ilert‑ui&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Methodology:&lt;/strong&gt; qualitative observations from daily development in ilert‑ui.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our repo’s shape&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dozens of lazy routes, several heavy sections pulling in many modules.&lt;/li&gt;
&lt;li&gt;Hundreds of shared files and deep store imports across features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What we noticed&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First‑time heavy routes: Opening a dependency‑rich route often triggered many ESM requests and sometimes a dep‑optimizer re‑run. Cross‑route exploration across untouched routes felt slower than our webpack setup that eagerly pre‑bundles shared vendors.&lt;/li&gt;
&lt;li&gt;Typed ESLint overhead: Running type‑aware ESLint (with parserOptions.project or projectService) in‑process with the dev server added latency during typing. Moving linting out‑of‑process helped, but didn’t fully offset the cost at our scale – an expected trade‑off with typed linting.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;TL;DR for our codebase: Vite was fantastic once a route had been touched in the session, but the first visits across many lazy routes were less predictable.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Why we pivoted to webpack‑dev‑server + React Fast Refresh&lt;/h2&gt;

&lt;p&gt;What we run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;webpack‑dev‑server with HMR.&lt;/li&gt;
&lt;li&gt;React Fast Refresh via &lt;code&gt;@pmmmwh/react-refresh-webpack-plugin&lt;/code&gt; and &lt;code&gt;react-refresh&lt;/code&gt; in Babel.&lt;/li&gt;
&lt;li&gt;Webpack &lt;code&gt;SplitChunks&lt;/code&gt; for common vendor bundles; filesystem caching; source maps; error overlays; ESLint out‑of‑process.&lt;/li&gt;
&lt;/ul&gt;
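For reference, the dev side of such a setup can be sketched roughly like this. The plugin name comes from the list above; the devtool choice and other values are illustrative, not our exact config:

```javascript
// webpack.dev.js - rough sketch, not our exact config
const ReactRefreshWebpackPlugin = require("@pmmmwh/react-refresh-webpack-plugin");

module.exports = {
  mode: "development",
  devtool: "eval-cheap-module-source-map", // fast rebuilds, usable stack traces
  cache: { type: "filesystem" },           // persistent build cache across restarts
  devServer: {
    hot: true,                             // HMR, which Fast Refresh builds on
    client: { overlay: true },             // error overlay in the browser
  },
  plugins: [new ReactRefreshWebpackPlugin()],
};
```

On the Babel side, this pairs with the `react-refresh/babel` plugin enabled in development builds only.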

&lt;p&gt;Why it &lt;em&gt;felt&lt;/em&gt; faster end‑to‑end for our team:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Eager vendor pre‑bundling: We explicitly pre‑bundle vendor chunks (React, MUI, MobX, charts, editor, calendar, etc.). The very first load is a bit heavier, but first‑time visits to other routes are faster because shared deps are already cached. SplitChunks makes this predictable.&lt;/li&gt;
&lt;li&gt;React Fast Refresh ergonomics: Solid state preservation on edits, reliable error recovery, and overlays we like.&lt;/li&gt;
&lt;li&gt;Non‑blocking linting: Typed ESLint runs outside the dev server process, so HMR stays responsive even during large type checks.&lt;/li&gt;
&lt;/ol&gt;
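In practice, point 3 can be as simple as running the linter as its own process next to the dev server. A sketch of the idea (script names and flags are illustrative):

```json
{
  "scripts": {
    "dev": "webpack serve --mode development",
    "lint": "eslint src --ext .ts,.tsx"
  }
}
```

The dev server never waits on the linter, so typed lint runs can take their time without freezing HMR.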

&lt;h2&gt;Receipts – the knobs we turned&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1// webpack.config.js
2module.exports = {
3  optimization: {
4    minimize: false,
5    runtimeChunk: "single",
6    splitChunks: {
7      chunks: "all",
8      cacheGroups: {
9        "react-vendor": {
10             test: /[\/\]node_modules[\/\](react|react-dom|react-router-dom)[\/\]/,
11          name: "react-vendor",
12          chunks: "all",
13          priority: 30,
14        },
15        "mui-vendor": {
16          test: /[\/\]node_modules[\/\](@mui\/material|@mui\/icons-material|@mui\/lab|@mui\/x-date-pickers)[\/\]/,
17          name: "mui-vendor",
18          chunks: "all",
19          priority: 25,
20        },
21        "mobx-vendor": {
22          test: /[\/\]node_modules[\/\](mobx|mobx-react|mobx-utils)[\/\]/,
23          name: "mobx-vendor",
24          chunks: "all",
25          priority: 24,
26        },
27        "utils-vendor": {
28          test: /[\/\]node_modules[\/\](axios|moment|lodash\.debounce|lodash\.isequal)[\/\]/,
29          name: "utils-vendor",
30          chunks: "all",
31          priority: 23,
32        },
33        "ui-vendor": {
34          test: /[\/\]node_modules[\/\](@loadable\/component|react-transition-group|react-window)[\/\]/,
35          name: "ui-vendor",
36          chunks: "all",
37          priority: 22,
38        },
39        "charts-vendor": {
40          test: /[\/\]node_modules[\/\](recharts|reactflow)[\/\]/,
41          name: "charts-vendor",
42          chunks: "all",
43          priority: 21,
44        },
45        "editor-vendor": {
46 test: /[\/\]node_modules[\/\](@monaco-editor\/react|monaco-editor)[\/\]/,
47          name: "editor-vendor",
48          chunks: "all",
49          priority: 20,
50        },
51        "calendar-vendor": {
52          test: /[\/\]node_modules[\/\](@fullcalendar\/core|@fullcalendar\/react|@fullcalendar\/daygrid)[\/\]/,
53          name: "calendar-vendor",
54          chunks: "all",
55          priority: 19,
56        },
57        "vendor": {
58          test: /[\/\]node_modules[\/\]/,
59          name: "vendor",
60          chunks: "all",
61          priority: 10,
62        },
63      },
64    },
65  },
66};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
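One detail worth calling out: when several cacheGroups match the same module, webpack picks the group with the highest priority – which is why the catch‑all vendor group sits at priority 10. A toy model of that selection (illustrative, not webpack internals):

```javascript
// Toy model of splitChunks cacheGroup selection: every group whose test
// matches the module path competes, and the highest priority wins.
function pickCacheGroup(modulePath, groups) {
  let best = null;
  for (const g of groups) {
    if (g.test.test(modulePath)) {
      if (best === null || g.priority > best.priority) {
        best = g;
      }
    }
  }
  return best === null ? null : best.name;
}

const groups = [
  { name: "react-vendor", test: /[\\/]node_modules[\\/]react(-dom)?[\\/]/, priority: 30 },
  { name: "vendor", test: /[\\/]node_modules[\\/]/, priority: 10 },
];

pickCacheGroup("proj/node_modules/react/index.js", groups); // "react-vendor"
pickCacheGroup("proj/node_modules/axios/index.js", groups); // "vendor"
```

So react ends up in `react-vendor` even though the generic `vendor` test also matches it.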





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1// vite.config.ts - Vite optimizeDeps includes we tried
2export default defineConfig({
3  optimizeDeps: {
4    include: [
5      "react",
6      "react-dom",
7      "react-router-dom",
8      "@mui/material",
9      "@mui/icons-material",
10      "@mui/lab",
11      "@mui/x-date-pickers",
12      "mobx",
13      "mobx-react",
14      "mobx-utils",
15      "axios",
16   "moment",
17      "lodash.debounce",
18      "lodash.isequal",
19      "@loadable/component",
20      "react-transition-group",
21      "react-window",
22      "recharts",
23      "reactflow",
24      "@monaco-editor/react",
25      "monaco-editor",
26      "@fullcalendar/core",
27      "@fullcalendar/react",
28      "@fullcalendar/daygrid",
29    ],
30    // Force pre-bundling of these dependencies
31    force: true,
32  },
33});
34
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Pros and cons (in our context)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Vite – pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blazing cold starts and lightweight config.&lt;/li&gt;
&lt;li&gt;Excellent HMR within already‑touched routes.&lt;/li&gt;
&lt;li&gt;Strong plugin ecosystem and modern ESM defaults.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Vite – cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dep optimizer re‑runs can interrupt flow during first‑time navigation across many lazy routes.&lt;/li&gt;
&lt;li&gt;Requires careful setup in large monorepos and with linked packages.&lt;/li&gt;
&lt;li&gt;Typed ESLint in‑process can hurt responsiveness on large projects; better out‑of‑process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Webpack + Fast Refresh – pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Predictable first‑visit latency across many routes via eager vendor chunks.&lt;/li&gt;
&lt;li&gt;Fine‑grained control over loaders, plugins, and output.&lt;/li&gt;
&lt;li&gt;Fast Refresh preserves state and has mature error overlays.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Webpack + Fast Refresh – cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Heavier initial load than Vite’s cold start.&lt;/li&gt;
&lt;li&gt;More configuration surface to maintain.&lt;/li&gt;
&lt;li&gt;Historical complexity (mitigated by modern config patterns and caching).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;When we would pick each&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Choose Vite if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cold starts dominate your workflow.&lt;/li&gt;
&lt;li&gt;Your module graph isn’t huge or fragmented into many lazy routes.&lt;/li&gt;
&lt;li&gt;Plugins – especially typed ESLint – are light or run out‑of‑process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Choose webpack + Fast Refresh if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your app benefits from eager vendor pre‑bundling and predictable first‑visit latency across many routes.&lt;/li&gt;
&lt;li&gt;You want precise control over loaders/plugins and build output.&lt;/li&gt;
&lt;li&gt;You like Fast Refresh’s state preservation and overlays.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Learn more in &lt;a href="https://www.ilert.com/blog/webpack-fast-refresh-vs-vite?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=devto" rel="noopener noreferrer"&gt;the ilert blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>react</category>
      <category>learning</category>
    </item>
    <item>
      <title>AI-driven incident response with ilert’s MCP server</title>
      <dc:creator>Ali</dc:creator>
      <pubDate>Mon, 08 Dec 2025 11:07:15 +0000</pubDate>
      <link>https://dev.to/alikim/ai-driven-incident-response-with-ilerts-mcp-server-hfm</link>
      <guid>https://dev.to/alikim/ai-driven-incident-response-with-ilerts-mcp-server-hfm</guid>
      <description>&lt;p&gt;Model Context Protocol (MCP) is quickly becoming the standard way to connect AI assistants to other tools. To help ops teams take advantage of that shift, ilert has built an open MCP server that lets assistants like Claude and Cursor securely interact with ilert, from checking who’s on call to creating and managing incidents.&lt;br&gt;
This post explains MCP in simple terms, why ilert invested in it, how the server is designed, and how you can connect your favorite MCP client today.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is a shorter version of the article first &lt;a href="https://www.ilert.com/blog/bring-incident-response-to-ai-stack-with-ilerts-mcp-server?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=devto" rel="noopener noreferrer"&gt;published&lt;/a&gt; by ilert engineer Tim Gühnemann.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7msu3ruzg4e24lyjdffw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7msu3ruzg4e24lyjdffw.png" alt="ilert’s MCP server" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;MCP Basics&lt;/h2&gt;

&lt;p&gt;The Model Context Protocol is an &lt;a href="https://modelcontextprotocol.io/docs/getting-started/intro" rel="noopener noreferrer"&gt;open standard&lt;/a&gt; that gives AI assistants a consistent way to access external tools and data. Instead of building one-off plugins for every assistant, MCP defines common interfaces for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tools (actions an assistant can take)&lt;/li&gt;
&lt;li&gt;Resources (data it can read)&lt;/li&gt;
&lt;li&gt;Transports (how communication happens)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, MCP is like a “USB-C for AI apps”: one interface that works across many clients. With MCP, assistants can read data, run operations, and stream results in a way that supports permissions and audit trails — much more reliable than UI automation or brittle scripts.&lt;/p&gt;

&lt;p&gt;Many clients already support MCP. &lt;a href="https://www.anthropic.com/engineering/desktop-extensions" rel="noopener noreferrer"&gt;Claude Desktop&lt;/a&gt; can connect to MCP servers through extensions and connectors, while Cursor lets you add them in Tools &amp;amp; MCP and use them directly in the IDE chat.&lt;/p&gt;

&lt;p&gt;For ops teams, this means assistants can read incidents, alerts, and schedules and then act on them (acknowledge, escalate, resolve, create incidents) through permissioned API calls rather than screen scraping.&lt;/p&gt;

&lt;h2&gt;Reasons we introduced an open MCP server for ilert&lt;/h2&gt;

&lt;p&gt;AI agents are increasingly used where teams already collaborate: chats, terminals, and IDEs. ilert’s focus is to bring incident response into those environments while keeping access safe, scoped, and fully auditable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem:&lt;/strong&gt; AI assistants need a secure, consistent way to work with alerts and incidents across different clients — without custom integrations for each one.&lt;br&gt;
&lt;strong&gt;Outcome:&lt;/strong&gt; MCP lets ilert expose capabilities once and make them instantly usable in any MCP-compatible assistant. Less context switching, fewer handoffs.&lt;/p&gt;

&lt;h2&gt;Architecture of the ilert MCP server&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tech stack&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runtime: Deno + TypeScript&lt;/li&gt;
&lt;li&gt;SDK: &lt;a href="https://github.com/modelcontextprotocol/typescript-sdk" rel="noopener noreferrer"&gt;Official MCP TypeScript SDK&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Transport: Streamable HTTP (recommended in the MCP spec)&lt;/li&gt;
&lt;li&gt;Hosting: Remote MCP server at &lt;a href="https://mcp.ilert.com/mcp" rel="noopener noreferrer"&gt;https://mcp.ilert.com/mcp&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The server uses the official SDK to provide MCP-compliant tools, resources, and prompts. We expose it remotely over Streamable HTTP, which supports streaming responses, resumable sessions, and simple authentication headers — a good fit for enterprise environments. (stdio is supported too, but Streamable HTTP is the default recommendation.)&lt;/p&gt;
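To try the remote server from an MCP client such as Cursor, an entry along these lines in its MCP config should be enough. The shape below follows Cursor's mcp.json format for remote servers; authentication headers are omitted here, so check the ilert docs for what the server expects:

```json
{
  "mcpServers": {
    "ilert": {
      "url": "https://mcp.ilert.com/mcp"
    }
  }
}
```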

&lt;h2&gt;How we map ilert to MCP&lt;/h2&gt;

&lt;p&gt;ilert’s MCP server exposes tool actions that map directly to the ilert API and common DevOps/SRE workflows. Assistants can safely read context and take action on Alerts and Incidents without relying on UI scripting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you can do&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manage alerts: list, inspect, comment, acknowledge, resolve, escalate, reroute, add responders, and run predefined actions.&lt;/li&gt;
&lt;li&gt;Open incidents: create incidents with severity, service, and responders right from your assistant.&lt;/li&gt;
&lt;li&gt;Look up context: find users, services, alert sources, escalation policies, schedules, and your own profile.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Typical flow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use read tools to gather context (e.g., services → alerts → alert details).&lt;/li&gt;
&lt;li&gt;Suggest and confirm a write action (e.g., acknowledge or resolve an alert, create an incident, run an alert action).&lt;/li&gt;
&lt;li&gt;Execute through scoped ilert API keys so everything stays permissioned and auditable.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Real-world scenarios&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: Create an alert in ilert&lt;/strong&gt;&lt;br&gt;
Use your assistant to locate the right service, confirm severity, and create an alert through MCP tools — all without leaving your chat or IDE.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbrccqcniokkp64o9sbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbrccqcniokkp64o9sbn.png" alt="Cursor interface: Create an alert in ilert. Step 1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsli3ohoifnhgg0ylfeaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsli3ohoifnhgg0ylfeaz.png" alt="Cursor interface: Create an alert in ilert. Step 2" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: Comment on an incident and resolve it&lt;/strong&gt;&lt;br&gt;
Ask the assistant to fetch the incident context, add an update, and resolve or escalate once confirmed — with every step logged via API scopes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqth2udutuxzm488fbc4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqth2udutuxzm488fbc4n.png" alt="Cursor interface: Comment on an incident and resolve it in ilert. Step 1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpv62gwy12srx82p2816.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpv62gwy12srx82p2816.png" alt="Cursor interface: Comment on an incident and resolve it in ilert. Step 2" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn more about ilert AI capabilities &lt;a href="https://www.ilert.com/product/ilert-ai?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=devto" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>mcp</category>
      <category>automation</category>
    </item>
    <item>
      <title>How to reduce on-call friction using AI Voice Agent</title>
      <dc:creator>Ali</dc:creator>
      <pubDate>Fri, 28 Nov 2025 15:22:29 +0000</pubDate>
      <link>https://dev.to/alikim/how-to-reduce-on-call-friction-using-ai-voice-agent-583a</link>
      <guid>https://dev.to/alikim/how-to-reduce-on-call-friction-using-ai-voice-agent-583a</guid>
      <description>&lt;p&gt;*&lt;em&gt;See how we at ilert use our AI Voice Agent to make on-call calls way smoother. It grabs incident context up front and plugs right into your call flows.&lt;br&gt;
*&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Even with great automation and observability, on-call still has one very human pain point: the phone rings, you wake up, and you have basically no context. In the first minutes of a critical call, you’re not fixing anything yet; you’re just trying to understand what’s going on.&lt;/p&gt;

&lt;p&gt;At ilert, we built the AI Voice Agent to change that. Instead of connecting callers straight to a sleepy engineer, the agent speaks to the caller first, collects the essential details, and then routes the call intelligently using up-to-date incident context. That way, when an engineer does get pulled in, they’re starting with real information — not guesswork.&lt;/p&gt;

&lt;p&gt;The full version of this post was first published on &lt;a href="https://www.ilert.com/blog?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=devto" rel="noopener noreferrer"&gt;the ilert Engineering Blog&lt;/a&gt; by my colleague, ilert engineer Jan. &lt;/p&gt;

&lt;h2&gt;What problem are we solving?&lt;/h2&gt;

&lt;p&gt;On-call engineers often receive urgent calls with minimal or messy context. The result is predictable: they have to ask the same qualifying questions over and over before they can even begin to help. In a high-pressure situation, those minutes matter.&lt;/p&gt;

&lt;p&gt;The AI Voice Agent takes that initial burden off the engineer. It gathers the key facts before escalation, so engineers can jump directly into troubleshooting. It can also reduce unnecessary wake-ups by checking for open incidents and letting callers know when an issue is already being handled. And because the agent lives inside ilert’s Call Flow Builder, it fits into your existing routing logic instead of forcing you to bolt on a separate system. You decide which information it should collect: names, contact details, incident descriptions, affected services, or custom fields that align with your workflow.&lt;/p&gt;

&lt;h2&gt;How it fits into our Call Flow Builder&lt;/h2&gt;

&lt;p&gt;If you’ve used ilert’s Call Flow Builder, think of the AI Voice Agent as one more node you can place wherever it makes sense. It looks something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yqbg6f8q1slht56o7x4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yqbg6f8q1slht56o7x4.png" alt="An ilert call flow looks like a tree: nodes, like AI Voice Agent, create an alert, and route calls are descending" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The builder is a visual tool where each node represents a step in call handling. The AI node can greet callers, ask structured questions, enrich context, and then route or escalate based on what it learns. &lt;/p&gt;

&lt;h2&gt;Architecture overview&lt;/h2&gt;

&lt;p&gt;Under the hood, the agent is designed for fast, modular conversations with low latency. Twilio handles real-time audio streaming to and from callers, while a WebSocket channel connects ilert to OpenAI for conversational turns. The Call Flow Builder provides the configuration layer, letting you tune behavior without touching code.&lt;/p&gt;

&lt;p&gt;Inside Call Flow Builder, the AI Voice Agent is just one of the nodes we at ilert provide. The builder is visual: you connect nodes to shape what should happen during a call, step by step. Since the AI is a node too, you can drop it exactly where it makes sense in your flow. Maybe right at the start to collect context, or later to handle a specific part of the conversation.&lt;/p&gt;

&lt;p&gt;What the agent does in that spot is pretty simple: it talks to the caller naturally, figures out what they’re calling about, and collects the key details you want before anyone gets escalated. If you enable context enrichment, it can also look at live ilert data like open incidents, service states, or maintenance windows. That way, it doesn’t just follow a script – it reacts to what’s actually going on right now and routes the call accordingly.&lt;/p&gt;

&lt;h2&gt;Hard parts we had to solve&lt;/h2&gt;

&lt;p&gt;Making a voice agent feel natural and reliable in production comes with some real technical headaches.&lt;/p&gt;

&lt;p&gt;One of the first was speaker tracking. Both &lt;a href="https://www.twilio.com/en-us" rel="noopener noreferrer"&gt;Twilio&lt;/a&gt; and &lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; emit events about who is speaking, but interpreting those signals consistently in real time is tricky. We needed to know precisely whether the bot or the caller was talking at any given moment; otherwise, the AI might interrupt the user or miss what they said.&lt;/p&gt;
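At its core, the problem is folding start/stop events from both legs into one answer to “who is talking right now”. A toy version (event names and the tie-breaking rule are made up for illustration – this is not ilert’s implementation):

```javascript
// Toy speaker tracker: merge speech start/stop events from the caller leg
// (Twilio) and the bot leg (OpenAI) into one current-speaker state.
function createSpeakerTracker() {
  const active = new Set();
  return {
    onEvent(source, type) {
      if (type === "speech_started") active.add(source);
      if (type === "speech_stopped") active.delete(source);
    },
    current() {
      if (active.has("caller")) return "caller"; // caller wins ties (barge-in)
      if (active.has("bot")) return "bot";
      return "none";
    },
  };
}

const tracker = createSpeakerTracker();
tracker.onEvent("bot", "speech_started");
tracker.onEvent("caller", "speech_started"); // caller interrupts the bot
tracker.current(); // "caller"
```

Letting the caller win ties is what allows the bot to yield mid-sentence instead of talking over the person who interrupted it.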

&lt;p&gt;Conversation flow was another big challenge. A voice interface that sounds robotic is a fast way to frustrate callers, so we invested heavily in prompt engineering and tuning cadence, tone, and responsiveness. We wanted it to feel like a helpful conversation, not a phone menu.&lt;/p&gt;

&lt;p&gt;Finally, we had to keep multiple live connections synchronized. Twilio streams, OpenAI responses, and ilert backend state all need to stay aligned. If any part drifts, context gets messy and the agent starts acting confused. Tight orchestration and careful state management were essential.&lt;/p&gt;

&lt;h2&gt;Context-aware conversations in practice&lt;/h2&gt;

&lt;p&gt;What makes the Voice Agent different from traditional IVR systems is that it combines intent recognition with optional context enrichment. At call start, it receives possible intents, gathers follow-up paths, and captures the caller’s number. If enrichment is enabled, it also learns what’s happening in ilert right now: whether there are open incidents, degraded services, or maintenance windows. That lets it respond based on reality instead of reading a static script, and route callers to the right path much faster.&lt;/p&gt;

&lt;h2&gt;Lessons learned so far&lt;/h2&gt;

&lt;p&gt;Beta testing taught us that interruption isn’t an edge case – it’s how people naturally talk on the phone. Letting callers interrupt the AI makes the experience smoother, but it also makes accurate speaker tracking even more important. The same tracking helps detect long silences so calls don’t run forever when nobody is speaking. We also reaffirmed that prompt engineering is essentially part of product design here: the voice needs to sound human while staying inside clear operational boundaries. And, unsurprisingly, multi-stream synchronization remains a core reliability requirement in any real-time voice system.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>automation</category>
      <category>sre</category>
    </item>
    <item>
      <title>Building AI SRE: Our journey</title>
      <dc:creator>Ali</dc:creator>
      <pubDate>Tue, 11 Nov 2025 15:03:20 +0000</pubDate>
      <link>https://dev.to/alikim/building-ai-sre-our-journey-2n40</link>
      <guid>https://dev.to/alikim/building-ai-sre-our-journey-2n40</guid>
      <description>&lt;p&gt;Even with automation and observability, most on-call workflows still rely on human responders juggling dashboards, logs, and chat threads under pressure. &lt;em&gt;It’s reactive, fragmented, and cognitively draining.&lt;/em&gt; At &lt;a href="https://www.ilert.com/?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=devto" rel="noopener noreferrer"&gt;ilert&lt;/a&gt;, we set out to change that. Not by adding another dashboard, but by making incident response more agentic: intelligent systems that understand, recommend, and act safely in real-time.&lt;/p&gt;

&lt;p&gt;In this article, we share how we're building agentic incident response, what we've learned along the way, and what comes next.&lt;/p&gt;

&lt;p&gt;By the way, if you haven’t heard of us — ilert is an AI-first incident management platform that helps engineering teams reduce downtime and improve MTTR.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was written by ilert engineer Tim. You can find the full version of it in &lt;a href="https://www.ilert.com/blog?utm_source=devto&amp;amp;utm_medium=organic&amp;amp;utm_campaign=devto" rel="noopener noreferrer"&gt;the Engineering blog&lt;/a&gt;, where we regularly share our journey of building AI SRE.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We began our path to agentic incident response by designing an architecture focused on flexibility, scalability, and intelligent automation throughout the entire incident lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39uev6x6ez48o2rutuwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39uev6x6ez48o2rutuwl.png" alt="ilert AI SRE Architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Laying the groundwork
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hive: LLM Orchestration Layer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hive is ilert’s backbone for AI-driven operations — a secure orchestration layer that manages multiple large language models for incident analysis, summaries, and contextual recommendations. It lets us route workloads to the best model for the job, maintain data privacy, and integrate new LLMs effortlessly as they emerge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Voice Agent: Hands-free response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When responders can’t type, our AI voice agent becomes the interface — capturing spoken issue updates, turning them into structured alerts, and pulling fresh data from multiple sources. It bridges natural communication with automated precision.&lt;/p&gt;

&lt;h2&gt;
  
  
  The core: Model Context Protocol
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://modelcontextprotocol.io/docs/getting-started/intro" rel="noopener noreferrer"&gt;The Model Context Protocol (MCP)&lt;/a&gt;, originally developed by &lt;a href="https://www.anthropic.com/" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt;, is a real-time system that connects operational data to the ilert AI SRE. It provides the structured context our agents need to act intelligently during incidents.&lt;/p&gt;

&lt;p&gt;Why MCP? Traditional integrations often leave systems disconnected, forcing teams to manually correlate telemetry, logs, and infrastructure data. MCP eliminates these silos by automatically aggregating and structuring incident-relevant information in real time.&lt;/p&gt;

&lt;p&gt;MCP collects data from monitoring tools, log aggregators, deployment pipelines, and infrastructure platforms, processes it within a secure, EU-compliant, multi-tenant architecture, and delivers only the essential insights to our agentic responders.&lt;/p&gt;

&lt;p&gt;This ensures that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agents have real-time, granular incident awareness;&lt;/li&gt;
&lt;li&gt;Data remains isolated, secure, and compliant;&lt;/li&gt;
&lt;li&gt;Manual correlation and cognitive load are minimized;&lt;/li&gt;
&lt;li&gt;Interactions with the ilert AI SRE agent are low-latency and context-rich.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, MCP acts as a neural layer connecting your observability stack, codebase, and infrastructure to our AI systems — keeping every action contextually accurate, relevant, and safe.&lt;/p&gt;
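&lt;p&gt;To make the aggregation idea concrete, here is a minimal sketch of pulling data from several sources and keeping only the essentials. The source names and field choices are made up for illustration; this is not ilert’s implementation of MCP:&lt;/p&gt;

```python
from typing import Callable

def build_incident_context(alert: dict, sources: dict[str, Callable]) -> dict:
    """Toy sketch of MCP-style context aggregation: query each
    connected source and return a compact, structured payload
    instead of raw telemetry dumps. All names are illustrative."""
    context = {"alert": {"id": alert["id"], "summary": alert["summary"]}}
    for name, fetch in sources.items():
        try:
            context[name] = fetch(alert)  # e.g. recent deploys, log excerpts
        except Exception as err:
            # A slow or failing source degrades the context but must
            # never block the responder-facing pipeline.
            context[name] = {"error": str(err)}
    return context
```

&lt;p&gt;Keeping per-source failures inside the payload, rather than raising, is one way to honor the "low-latency and context-rich" requirement above: the agent always gets an answer, annotated with what is missing.&lt;/p&gt;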

&lt;h2&gt;
  
  
  The ilert AI SRE: Turning alerts into agent-proposed actions
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fod1r3p470re0jas1bd8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fod1r3p470re0jas1bd8z.png" alt="ilert AI SRE recommebded actions" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We built an end-to-end pipeline that turns monitoring signals into intelligent, actionable workflows to speed up incident resolution. When an alert is triggered, Event Flow (ilert’s automated workflows for processing, routing, and escalating events) applies rules and thresholds to notify the right teams instantly — cutting noise and delay.&lt;/p&gt;
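&lt;p&gt;A threshold-based routing step of this kind can be sketched as follows. The rule shape, field names, and team names are invented for the example and do not reflect ilert’s Event Flow configuration format:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One illustrative routing rule: if the alert's field meets
    the threshold, notify the given team."""
    field: str
    threshold: float
    team: str

def route_alert(alert: dict, rules: list[Rule], default_team: str = "on-call") -> str:
    # First matching rule wins; unmatched alerts fall through
    # to the default on-call rotation.
    for rule in rules:
        if alert.get(rule.field, 0) >= rule.threshold:
            return rule.team
    return default_team
```

&lt;p&gt;Evaluating rules in order, with an explicit default, keeps the routing deterministic and easy to audit after an incident.&lt;/p&gt;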

&lt;p&gt;At the same moment, the MCP enriches the alert by gathering and structuring telemetry, logs, deployment data, and infrastructure status from tools like Prometheus, Grafana, GitHub, and Kubernetes. This gives the ilert agent full situational awareness without any manual correlation.&lt;/p&gt;

&lt;p&gt;With this context in place, the ilert AI SRE becomes an active participant in the incident, not just a notifier. It analyzes data in real time to propose root causes, remediation steps, and escalation paths, all surfaced in an interactive chat interface where responders can review, adjust, or safely execute actions on the spot.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we learned along the way
&lt;/h2&gt;

&lt;p&gt;Building and running agentic systems for real-world, mission-critical incident response has been an insightful journey. Here are a few things we’ve learned along the way:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Transparency builds trust.&lt;/strong&gt; When agents act autonomously — collecting data, correlating signals, or even executing predefined actions — human responders need to see what’s happening and why. Full visibility builds confidence. For high-impact actions, we let teams add approval steps, striking the right balance between speed and safety.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context is everything.&lt;/strong&gt; To avoid hallucinations or half-baked suggestions, we feed our agents rich, structured data through the MCP. This keeps every insight grounded in reality — and makes the agent feel more like a reliable teammate than a guessing machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low latency matters.&lt;/strong&gt; In an incident, seconds matter. We’ve optimized for speed with speculative tool calls and efficient data paths so responders get insights almost instantly. Less waiting, faster recovery.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedback makes it better.&lt;/strong&gt; Every incident teaches something new. Built-in feedback loops help the system learn what works (and what doesn’t), so it becomes sharper and more helpful over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safety first, always.&lt;/strong&gt; Autonomous doesn’t mean out of control. By defining safe, scoped actions, the agent can fix certain issues on its own — with full rollback options if needed. That way, automation accelerates recovery without ever compromising reliability.&lt;/li&gt;
&lt;/ol&gt;
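&lt;p&gt;The speculative tool calls mentioned in point 3 can be illustrated with plain asyncio: start the lookups an agent is likely to need concurrently instead of one after another. The tool functions below are stand-ins, not real ilert tools:&lt;/p&gt;

```python
import asyncio

async def fetch_logs() -> str:
    # Stand-in for a real log-aggregator lookup.
    await asyncio.sleep(0.05)
    return "recent error logs"

async def fetch_deploys() -> str:
    # Stand-in for a deployment-history lookup.
    await asyncio.sleep(0.05)
    return "last 3 deploys"

async def speculative_context() -> dict[str, str]:
    # Launch the likely tool calls at once: total latency is roughly
    # the slowest call, not the sum of all of them.
    logs, deploys = await asyncio.gather(fetch_logs(), fetch_deploys())
    return {"logs": logs, "deploys": deploys}
```

&lt;p&gt;If the model never asks for a speculative result, it is simply discarded; the trade-off is a little wasted work in exchange for near-instant answers when the result is needed.&lt;/p&gt;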

</description>
      <category>ai</category>
      <category>devops</category>
      <category>mcp</category>
      <category>sre</category>
    </item>
  </channel>
</rss>
