<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yaroslav</title>
    <description>The latest articles on DEV Community by Yaroslav (@gorohov).</description>
    <link>https://dev.to/gorohov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3791289%2F704d39d2-cb03-4917-bac6-535d3e9dbc95.png</url>
      <title>DEV Community: Yaroslav</title>
      <link>https://dev.to/gorohov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gorohov"/>
    <language>en</language>
    <item>
      <title>A programming language for AI on top of C# and Roslyn</title>
      <dc:creator>Yaroslav</dc:creator>
      <pubDate>Wed, 25 Mar 2026 08:45:21 +0000</pubDate>
      <link>https://dev.to/gorohov/a-programming-language-for-ai-on-top-of-c-and-roslyn-32hd</link>
      <guid>https://dev.to/gorohov/a-programming-language-for-ai-on-top-of-c-and-roslyn-32hd</guid>
      <description>&lt;p&gt;Honestly — making AI read source files and count brackets to edit code feels insane to me. Imagine having full access to a building's blueprints — every wall, every pipe, every wire mapped out — but instead of using them, you hand the builder a photo of the building and say "figure it out." That's what we're doing when AI edits raw text while the compiler already has the complete structured model of the code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvwdu125xw5qmjvzw3le.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvwdu125xw5qmjvzw3le.jpeg" alt="Visual Studio AI coding" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What AI gets access to&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;VisualStudioWorkspace — same Roslyn semantic model that powers
IntelliSense&lt;/li&gt;
&lt;li&gt;DTE2 — VS IDE control: build, debug, breakpoints, locals&lt;/li&gt;
&lt;li&gt;System.Windows.Automation — desktop UI automation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Code navigation&lt;/h2&gt;

&lt;p&gt;Roslyn indexes the entire solution on load. AI finds any class instantly and can trace all dependencies — up (base types, interfaces, callers) and down (derived types, implementations, callees). Every search is semantic, resolved by the compiler — not text grep.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8jwv6q3hp50r7l3gdyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8jwv6q3hp50r7l3gdyj.png" alt="Visual Studio coding with RoslynMCP" width="475" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;How code editing works&lt;/h2&gt;

&lt;p&gt;AI doesn't edit text. It requests class structure as JSON — fields, methods, types, modifiers. Then it targets specific methods by name. Roslyn generates the syntax, formats it, and returns compiler diagnostics in the same response.&lt;/p&gt;

&lt;p&gt;Block-level navigation: AI can address any nested block by path — &lt;code&gt;TaskService.AddTask.if[0].else&lt;/code&gt; — and modify just that block without touching anything else.&lt;/p&gt;
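&lt;p&gt;The block-path idea can be sketched with a toy resolver. The JSON shape and the &lt;code&gt;resolve&lt;/code&gt; helper below are invented for illustration only; the extension's actual response format may differ.&lt;/p&gt;

```python
# Toy resolver for a block path such as "TaskService.AddTask.if[0].else".
# The structure below is a hypothetical class-structure-as-JSON response.
import re

structure = {
    "TaskService": {
        "AddTask": {
            # list of if-statements inside the method body
            "if": [
                {
                    "condition": "task == null",
                    "then": "throw new ArgumentNullException(nameof(task));",
                    "else": "_tasks.Add(task);",
                },
            ],
        },
    },
}

def resolve(path, node):
    """Walk a dotted block path; a segment may carry an index like if[0]."""
    for segment in path.split("."):
        indexed = re.fullmatch(r"(\w+)\[(\d+)\]", segment)
        if indexed:
            node = node[indexed.group(1)][int(indexed.group(2))]
        else:
            node = node[segment]
    return node

print(resolve("TaskService.AddTask.if[0].else", structure))
```

&lt;p&gt;Because each segment resolves structurally rather than by text search, an edit addressed this way cannot accidentally touch a similarly named block elsewhere in the file.&lt;/p&gt;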

&lt;h2&gt;Debugger&lt;/h2&gt;

&lt;p&gt;AI sets breakpoints, starts a debug session, steps through code, and reads locals — all through the DTE API. Full runtime inspection, not just static analysis.&lt;/p&gt;

&lt;h2&gt;Bigger picture&lt;/h2&gt;

&lt;p&gt;I think eventually there will be programming languages designed specifically for AI — not for humans to type, but for AI to manipulate as structured objects. This is an early experiment in that direction, built on top of C# and Roslyn. And it already works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gl5fil0951v4kkbebeh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gl5fil0951v4kkbebeh.png" alt="Roslyn AI coding" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Note on skills&lt;/h2&gt;

&lt;p&gt;Just as a person reads the manual before using a tool, AI needs instructions. That's what skills are. They teach AI how to use each Roslyn tool correctly: parameters, workflows, and what to do when something fails. For Claude Code, the skills are available on GitHub and can be copied to your project's &lt;code&gt;.claude/skills/&lt;/code&gt; directory. They are also bundled inside the extension at &lt;code&gt;Skills/.claude/skills/&lt;/code&gt; for the built-in Claude Chat panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdxcm8huz1gbrqfyjyux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdxcm8huz1gbrqfyjyux.png" alt="Roslyn logo" width="517" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Demo Video&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=skvnHbm2lpk" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=skvnHbm2lpk&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=6d6Kx-MnXOc" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=6d6Kx-MnXOc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Marketplace&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=YaroslavHorokhov.RoslynMcp" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=YaroslavHorokhov.RoslynMcp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/yarhoroh/RoslynMCP-Public" rel="noopener noreferrer"&gt;https://github.com/yarhoroh/RoslynMCP-Public&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>csharp</category>
      <category>dotnet</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Free voice to text software review - MurMur VT</title>
      <dc:creator>Yaroslav</dc:creator>
      <pubDate>Wed, 25 Feb 2026 11:46:54 +0000</pubDate>
      <link>https://dev.to/gorohov/free-voice-to-text-software-review-murmur-vt-5c6j</link>
      <guid>https://dev.to/gorohov/free-voice-to-text-software-review-murmur-vt-5c6j</guid>
      <description>&lt;h2&gt;Murmur: The Privacy-First Voice-to-Text App That Works Everywhere on Windows&lt;/h2&gt;

&lt;p&gt;In a world where we spend countless hours typing away at keyboards, voice-to-text technology promises a faster, more natural way to get words on screen. But most solutions come with a catch: your voice recordings are sent to the cloud, processed on someone else's servers, and potentially stored indefinitely. &lt;strong&gt;Murmur&lt;/strong&gt; takes a different approach — and it's turning heads among professionals who care as much about privacy as they do about productivity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06qcz59xgsf2b2vflw9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06qcz59xgsf2b2vflw9a.png" alt="MurMur website" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What Is Murmur?&lt;/h2&gt;

&lt;p&gt;Murmur (available at &lt;a href="https://murmurvt.com" rel="noopener noreferrer"&gt;murmurvt.com&lt;/a&gt;) is a Windows desktop application that converts your speech into text in real time — entirely on your own device. No internet connection required. No voice data leaving your computer. It's built on OpenAI's Whisper AI model, which delivers over 95% transcription accuracy and supports more than 90 languages with automatic language detection.&lt;/p&gt;

&lt;p&gt;The pitch is simple: press and hold a hotkey, say what you need, let go — and the text appears right where your cursor is. That's it.&lt;/p&gt;

&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;p&gt;The workflow Murmur uses is elegantly friction-free. You click wherever you want text to appear — a Word document, a Slack message, a ChatGPT prompt, an email draft, a line of code in VS Code — then hold &lt;code&gt;Ctrl + Win + Alt&lt;/code&gt; while you speak. Release the keys, and your words are transcribed directly at the cursor position. There's no copying, no pasting, no switching between apps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcv1ko242qfmzpsyc3aw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcv1ko242qfmzpsyc3aw.png" alt="MurMur screenshot 1" width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For users who want to grab their last transcription without typing it somewhere specific, a second shortcut (&lt;code&gt;Ctrl + Win + Shift&lt;/code&gt;) copies it to the clipboard.&lt;/p&gt;

&lt;p&gt;Because the processing happens locally using GPU acceleration (CUDA or Vulkan supported), transcription is fast even on consumer hardware. The experience is described as "release the hotkey and text appears instantly" — and for most users, that's exactly what it delivers.&lt;/p&gt;

&lt;h2&gt;The Privacy Angle&lt;/h2&gt;

&lt;p&gt;This is where Murmur genuinely stands out from competitors like Google's voice typing, Microsoft's built-in speech recognition, and other cloud-dependent tools. When your voice never leaves your machine, an entire category of risk disappears.&lt;/p&gt;

&lt;p&gt;The implications are significant for several professional groups:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developers&lt;/strong&gt; can dictate code comments, documentation, or commit messages without worrying about NDA violations — since nothing is transmitted externally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legal professionals&lt;/strong&gt; can dictate client notes and case details knowing that sensitive information stays local, which simplifies GDPR compliance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Medical staff&lt;/strong&gt; can record voice notes securely, with no cloud infrastructure introducing compliance headaches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Journalists&lt;/strong&gt; benefit from what Murmur calls "source protection built-in" — a compelling promise for anyone dealing with sensitive contacts or unpublished information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Writers&lt;/strong&gt; can work offline anywhere without sacrificing dictation capability.&lt;/p&gt;

&lt;h2&gt;Features at a Glance&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;OpenAI Whisper AI engine&lt;/strong&gt; with 95%+ accuracy&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;90+ languages&lt;/strong&gt; supported with automatic detection&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Works in any application&lt;/strong&gt; — text is inserted at cursor position system-wide&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;100% local processing&lt;/strong&gt; — no internet required after setup&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GPU acceleration&lt;/strong&gt; for fast transcription&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Smart audio processing&lt;/strong&gt; to enhance voice clarity&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Notebook feature (Pro)&lt;/strong&gt; for transcribing long recordings and audio files&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Technology Behind Voice-to-Text Recognition&lt;/h2&gt;

&lt;p&gt;To appreciate what Murmur achieves, it helps to understand how modern speech recognition actually works — and why running it locally is no small feat.&lt;/p&gt;

&lt;h3&gt;From Sound Waves to Words&lt;/h3&gt;

&lt;p&gt;Voice-to-text recognition begins the moment you speak. Your microphone captures sound as analog waves, which are digitized into a stream of audio data. This raw audio is then broken into short overlapping segments — typically 25–30 milliseconds long — and transformed into a visual representation called a spectrogram, which maps frequency and energy over time. It's these spectrograms, not the raw audio, that a neural network "reads."&lt;/p&gt;
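&lt;p&gt;The framing step can be sketched in a few lines of code. This stdlib-only toy uses a naive DFT and skips the mel filter banks and log scaling that real pipelines apply; it only shows how audio becomes the time-by-frequency grid the network reads.&lt;/p&gt;

```python
# Toy spectrogram: slice audio into short overlapping windows and take a
# magnitude spectrum per frame. Illustrative only; real systems use FFTs
# and mel-scaled filter banks.
import cmath
import math

def spectrogram(samples, sample_rate, frame_ms=25, hop_ms=10):
    frame = int(sample_rate * frame_ms / 1000)   # e.g. a 25 ms window
    hop = int(sample_rate * hop_ms / 1000)       # hop smaller than frame = overlap
    frames = []
    for start in range(0, len(samples) - frame + 1, hop):
        chunk = samples[start:start + frame]
        # naive DFT magnitude per frequency bin (O(n^2); real code uses an FFT)
        mags = []
        for k in range(frame // 2):
            s = sum(x * cmath.exp(-2j * math.pi * k * n / frame)
                    for n, x in enumerate(chunk))
            mags.append(abs(s))
        frames.append(mags)
    return frames  # time x frequency grid

# 0.1 s of a 440 Hz tone sampled at 8 kHz
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 10)]
spec = spectrogram(tone, sr)
print(len(spec), len(spec[0]))
```

&lt;p&gt;For the 440 Hz tone, the energy in each frame concentrates in the frequency bin corresponding to 440 Hz, which is the kind of pattern the downstream network learns to map to phonemes.&lt;/p&gt;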

&lt;h3&gt;The Role of Deep Learning&lt;/h3&gt;

&lt;p&gt;Modern speech recognition systems are built on deep neural networks, particularly transformer-based architectures. These models are trained on thousands of hours of labeled speech data, learning to recognize patterns in sound that correspond to phonemes (the smallest units of sound), then words, then full phrases. The model doesn't just match sounds to dictionary entries — it uses context from surrounding words to resolve ambiguities, which is why it handles natural, flowing speech far better than older rule-based systems ever could.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ro8t46vyszbmtw4c2ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ro8t46vyszbmtw4c2ad.png" alt="MurMur screenshot 2" width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;OpenAI Whisper: The Engine Under the Hood&lt;/h3&gt;

&lt;p&gt;Murmur is powered by &lt;strong&gt;OpenAI Whisper&lt;/strong&gt;, one of the most capable open-source speech recognition models available today. Whisper was trained on 680,000 hours of multilingual audio scraped from the web, making it remarkably robust across accents, speaking styles, background noise, and languages. Its transformer architecture allows it to process audio holistically rather than word-by-word, which contributes to its 95%+ accuracy and its ability to detect language automatically without being told what to expect.&lt;/p&gt;

&lt;p&gt;Crucially, Whisper is designed to run as a standalone model — it doesn't require a cloud API call to function. This is what makes Murmur's local-processing promise technically credible rather than just a marketing claim.&lt;/p&gt;

&lt;h3&gt;GPU Acceleration and Real-Time Performance&lt;/h3&gt;

&lt;p&gt;Running a large neural network locally would have been impractically slow on consumer hardware just a few years ago. Murmur solves this by leveraging GPU acceleration through &lt;strong&gt;CUDA&lt;/strong&gt; (for NVIDIA graphics cards) and &lt;strong&gt;Vulkan&lt;/strong&gt; (a cross-platform graphics API that opens acceleration to a wider range of hardware). By offloading the heavy matrix computations of the Whisper model to the GPU, Murmur achieves transcription speeds fast enough to feel instantaneous — processing your speech in the moment you release the hotkey.&lt;/p&gt;

&lt;h3&gt;Smart Audio Processing&lt;/h3&gt;

&lt;p&gt;Before audio even reaches the Whisper model, Murmur applies preprocessing filters to enhance clarity. Background noise reduction, volume normalization, and signal filtering all work to give the neural network the cleanest possible input — which directly translates to more accurate output, even in imperfect recording environments.&lt;/p&gt;
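&lt;p&gt;Murmur's actual filters are not published, so the following is a generic sketch of two such steps: peak volume normalization, plus a crude noise gate standing in for real noise reduction.&lt;/p&gt;

```python
# Generic preprocessing sketch, not Murmur's actual implementation.

def normalize_peak(samples, target=0.9):
    """Scale so the loudest sample hits `target` of full scale."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s * target / peak for s in samples]

def noise_gate(samples, threshold=0.02):
    """Zero out samples below a floor; a blunt stand-in for noise reduction."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# A quiet recording: faint hiss around a real signal
quiet = [0.01, -0.005, 0.3, -0.25, 0.015]
cleaned = noise_gate(normalize_peak(quiet))
print(cleaned)
```

&lt;p&gt;Normalizing first matters: gating a too-quiet recording would wipe out the speech along with the hiss, whereas gating after normalization removes only what stays near the floor.&lt;/p&gt;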

&lt;h3&gt;Why Local Processing Matters Technically&lt;/h3&gt;

&lt;p&gt;Cloud-based speech recognition services work by streaming your audio to remote servers, running the model in a data center, and returning a text result. This introduces latency dependent on network speed, creates a dependency on internet connectivity, and — most significantly — means your voice data passes through infrastructure you don't control. Local processing eliminates all three concerns. Murmur's use of Whisper running natively on your machine means the entire recognition pipeline, from audio capture to text output, happens within your own hardware.&lt;/p&gt;

&lt;h2&gt;System Requirements&lt;/h2&gt;

&lt;p&gt;Murmur runs on Windows 10 (version 1809) or later. A minimum of 4 GB RAM is required, with 8 GB recommended. A GPU with CUDA or Vulkan support is optional but enables the fastest transcription speeds.&lt;/p&gt;

&lt;h2&gt;Who Should Try Murmur?&lt;/h2&gt;

&lt;p&gt;The honest answer is: anyone who types a lot on Windows and wants a faster, more private alternative. The use cases are broad — writers battling blank pages, developers documenting their code, students taking lecture notes, professionals managing heavy email loads, or anyone who simply finds speaking faster than typing.&lt;/p&gt;

&lt;p&gt;What makes Murmur particularly compelling isn't just the speed or accuracy — it's the combination of both with genuine, verifiable privacy. In an era where "private" often just means "we promise not to misuse your data," Murmur's local-processing architecture makes that promise unnecessary. The data never leaves in the first place.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Murmur is available as a free download from the Microsoft Store.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>privacy</category>
      <category>productivity</category>
      <category>writing</category>
    </item>
  </channel>
</rss>
