
Ladislav Sopko
I've been coding for 30+ years. All I ever wanted was to stop typing.

I started coding in the early 90s. Assembly first, then C, then the C++ years where you'd fight the compiler more than you'd write features.

By 2007 I was deep into enterprise systems — building a document engine with a colleague. My part was the fun stuff: a plugin system, a parallel B+Tree block file system, Lua scripting integration. I even wrote a custom virtual memory manager to fix LuaJIT's 1GB RAM limit on 64-bit — patching memory pages at the OS level, Windows and Linux, portable. The kind of thing where you're staring at hex dumps at 2am wondering if you've gone too far.

My colleague handled the C core. We shipped it. Enterprise clients used it for years.

But here's the thing nobody tells you about decades of systems programming: your hands pay the price. Thousands of hours of typing. Millions of keystrokes. I started dreaming about a world where I could just say what I wanted and watch the code appear.

The voice coding rabbit hole (spoiler: it sucked for 20 years)

I tried everything. Dragon NaturallySpeaking with custom macros. Voice Attack. Talon. Various VSCode extensions that promised voice-to-code and delivered voice-to-frustration.

The problem was always the same: programming isn't dictation. You can't just say "open curly brace semicolon" and expect to be productive. The overhead of translating your intent into dictation commands was worse than just typing. Every time I tried, I'd give up within a week and go back to the keyboard.

But the dream never died. I kept trying, year after year. My colleagues thought I was obsessed. They were right.

Then the wave hit

When Cline came out, something clicked. Not because it was voice — it wasn't. But because for the first time, the AI wasn't just autocompleting tokens. It was understanding intent. You could describe what you wanted at a high level and watch it navigate files, read code, make changes.

I jumped from Cline to Cursor to Claude Code in maybe 6 months. Each one better than the last. Claude Code was the inflection point — a real terminal-based agent that understood my codebase, my project structure, my intent.

But I still had to type everything. The irony.

Voice ...

I think human progress actually happens because we're naturally lazy. We need to simplify our lives — and I'm no different. I'm an eternal optimiser. So I built VoiceCC.

VoiceCC is what happens when a guy who's been dreaming about voice coding for 20 years finally has the right foundation to build on.

I have a small USB keypad on my desk. Each button is mapped to a different terminal session. I press a button, speak naturally — Italian, English, Slovak, doesn't matter — and Whisper transcribes it locally. No cloud, all offline, about 600ms. The hub figures out which terminal to target and injects the right keystrokes into that specific session.

What this means in practice: I run 3-4 Claude Code instances in parallel, each working on a different task. Press button 1: "yes approve that". Press button 3: "try the alternative approach with dependency injection". Press button 2: "show me the test results". Each going to a different Claude Code working on a different part of the project.
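The article doesn't publish VoiceCC's internals, but the routing it describes — button pressed, speech transcribed locally, text injected into the matching terminal session — can be sketched in a few lines. This is a minimal illustration assuming tmux as the terminal multiplexer; the session names and the `BUTTON_TO_SESSION` mapping are my own placeholders, not the actual VoiceCC implementation.

```python
import subprocess

# Hypothetical mapping: each USB keypad button targets one terminal
# session running its own Claude Code instance. Names are illustrative.
BUTTON_TO_SESSION = {
    1: "cc-backend",
    2: "cc-tests",
    3: "cc-refactor",
}

def build_dispatch(button: int, transcript: str) -> list[str]:
    """Build the tmux command that types the transcribed speech into
    the session bound to the pressed button."""
    session = BUTTON_TO_SESSION[button]
    # `tmux send-keys -t <session> <text> Enter` injects the text into
    # that session and submits it, as if it had been typed there.
    return ["tmux", "send-keys", "-t", session, transcript, "Enter"]

def dispatch(button: int, transcript: str) -> None:
    """Actually inject the keystrokes (requires running tmux sessions)."""
    subprocess.run(build_dispatch(button, transcript), check=True)
```

In this sketch, a Whisper transcription callback would call `dispatch(pressed_button, text)`; everything stays local, and the hub's only job is choosing which session receives the keystrokes.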

I built it in about 40 days. Co-authored most commits with Claude Code itself. Which felt... appropriate.

For the first time in 30 years, I could sit at my desk with a little keypad and direct multiple AI agents with my voice. Not "open curly brace" — actual intent.

The dream, somehow, actually worked.

The plot twist

Here's where it gets funny.

I spent 30 years wanting to stop typing. I built the tools to make it happen. And now I sit here, talking to my terminal, watching AI write code that sometimes scares me with how good it is.

I went from "I wish I could dictate code" to "I wish the AI would slow down and let me think."

Once you have an AI that writes code, you realize the bottleneck shifts. The AI is fast, but it's also blind — it doesn't know your libraries, it hallucinates APIs, it greps through thousands of files burning tokens. So I kept building. A server to give it access to Visual Studio's compiler intelligence. A language server for AI agents, covering 9 languages. A service that indexes 890+ open-source libraries so it stops making things up. Each one solving a problem the previous tool exposed.

But those are details for another article. What I want to say here is something else.

What I actually learned

I wanted voice coding in 2005. I got it in 2026. The version I got is wildly better than what I imagined — and also stranger.

Building B+Trees, memory managers, and plugin systems by hand for a decade is exactly why I could build the rest. I knew what "real code intelligence" meant because I'd lived without it. Systems programming didn't become obsolete when the AI showed up. It became the thing that let me know what to ask for.

The keyboard isn't dead, but it's optional now. I still type when I'm in deep flow. But for the 80% of coding that's orchestration, navigation, and review, voice plus AI is just... better.

At 20 I imagined dictating code like a typist imagines dictating a novel. At 50 I find myself orchestrating agents that are sometimes better than me. It's not what I wanted. It's better, and it makes me a little uncomfortable. Twenty years of waiting to discover that the dream was wrong — and the right dream was something I couldn't even have formulated back then.

I'm starting to think I might end up leading the resistance against Skynet. Only half joking.


Stuff I built along the way

If you want to try any of the tools:

  • VoiceCC (github.com/LadislavSopko/VoiceCC) — the multi-channel voice command center described above. Currently a proof of concept. If you'd like to build a community around it, reach out — I'd be happy to open the sources fully, I just don't have the bandwidth to maintain it alone.
  • vs-mcp (VS Marketplace) — Visual Studio's Roslyn intelligence exposed via MCP. The first piece of software I ever built without writing a single line of code by hand.
  • LSAI (github.com/0ics-srls/lsai-xmp4.public) — LSP-style intelligence for AI agents, 9 languages, 14 tools.
  • xmp4 — add https://mcp.example4.ai/mcp to your MCP config — 890+ pre-indexed libraries, 17 tools, free.
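
For the xmp4 entry, the exact config shape depends on your MCP client; for Claude Code it would look roughly like this in a project's `.mcp.json` (the `"xmp4"` server name is my own choice, only the URL comes from the list above):

```json
{
  "mcpServers": {
    "xmp4": {
      "type": "http",
      "url": "https://mcp.example4.ai/mcp"
    }
  }
}
```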

30 years of typing. Now I just talk. What a time.


Disclosure: I built all of these. The code is real, the benchmarks are published. AMA in the comments.
