We Have AI in Every Editor, But We Still Type Like It's 2015
It's 2026.
Our IDEs autocomplete entire functions. AI agents generate boilerplate, explain stack traces, and refactor code for us.
Yet somehow, we're still manually typing three-paragraph PR descriptions like it's a civic duty.
I realized a huge chunk of my "coding" time wasn't coding at all. It was:
- review comments
- documentation
- Slack explanations
- meeting notes
- architecture writeups
- PR descriptions
The keyboard wasn't the bottleneck anymore.
I was.
The Moment Everything Clicked
A colleague was doing a code review and, instead of typing a comment, just started talking into a mic.
The words appeared instantly — coherent, formatted, and technically accurate.
They said:
"the observable chain here has a race condition because the switchMap isn't cancelling the previous subscription"
Perfect transcription.
My immediate reaction:
"Sure, in English maybe. But no way this handles Polish technical language."
I was very wrong.
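For context, the bug that comment points at is the classic stale-response race: each new event fires a request, and if an older request resolves after a newer one, stale data overwrites fresh data. switchMap-style cancellation is the standard fix. Here is a minimal TypeScript sketch (all names hypothetical, no RxJS dependency) simulating both behaviours:

```typescript
// Simulated search request: latency is passed in so we can force
// out-of-order responses, the setup that triggers the race.
function search(query: string, latencyMs: number): Promise<string> {
  return new Promise(resolve =>
    setTimeout(() => resolve(`results for "${query}"`), latencyMs)
  );
}

// mergeMap-like behaviour: every response is applied, even stale ones.
// The slow response for "a" lands last and clobbers the newer "ab" result.
async function withoutCancellation(): Promise<string> {
  let shown = "";
  const first = search("a", 30).then(r => { shown = r; });   // slow, arrives last
  const second = search("ab", 10).then(r => { shown = r; }); // fast, arrives first
  await Promise.all([first, second]);
  return shown; // stale: "results for \"a\""
}

// switchMap-like behaviour: starting a new request cancels the previous
// in-flight one, so only the latest query's result is ever shown.
async function withCancellation(): Promise<string> {
  let shown = "";
  let current = { cancelled: false };
  const run = (query: string, latencyMs: number) => {
    current.cancelled = true;            // cancel the previous request
    const token = { cancelled: false };  // token for this request
    current = token;
    return search(query, latencyMs).then(r => {
      if (!token.cancelled) shown = r;   // ignore responses from cancelled requests
    });
  };
  const first = run("a", 30);
  const second = run("ab", 10);
  await Promise.all([first, second]);
  return shown; // fresh: "results for \"ab\""
}
```

The point of the reviewer's comment: without that cancellation step, request ordering on the wire decides what the user sees.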
The Real Magic Is Code-Switching
The breakthrough wasn't English dictation.
It was realizing modern voice recognition understands the weird Polish-English hybrid most developers actually speak every day.
| What I said | Result |
|---|---|
| 🇵🇱 "Refaktoryzacja serwisu do obsługi płatności" ("refactoring the payment-handling service") | ✅ Perfect |
| 🇵🇱 "Wstrzykiwanie zależności przez konstruktor" ("constructor-based dependency injection") | ✅ Nailed it |
| 🇬🇧 "Add a circuit breaker pattern to the external API calls" | ✅ Clean |
| 🇵🇱🇬🇧 "Ten endpoint powinien zwracać paginated response" ("this endpoint should return a paginated response") | ✅ Handled perfectly |
That last example completely sold me.
Because that's how a lot of us actually talk about software:
- Polish grammar
- English technical vocabulary
- random architecture buzzwords
- three abstractions in one sentence
And somehow modern transcription tools just... understand it.
Where Voice Actually Beats Typing
I'm not dictating code.
Voice works best for reasoning, explanations, and communication — not syntax-heavy precision work.
But for everything around coding?
It's ridiculously effective.
Code Review Comments
What used to take 3 minutes of typing now takes 30 seconds of speaking.
And the comments are usually better because talking is much closer to thinking.
PR Descriptions
Right after finishing a feature, I just narrate:
- what changed
- why I changed it
- edge cases
- migration concerns
- tradeoffs
Two minutes later I have a proper PR description instead of:
"fixes stuff"
Brain Dumps Before Meetings
This one surprised me the most.
Opening a blank document and typing structured thoughts feels mentally expensive.
Talking doesn't.
I can dump five minutes of unfiltered thoughts into a note and then clean it up afterward.
Documentation & ADRs
Architecture decisions are easier to explain out loud than to type from scratch.
It feels less like "writing documentation" and more like explaining your reasoning to another engineer.
My Current Setup
Right now I'm mostly using Willow Voice.
That's it.
The workflow is basically:
talk → transcript → quick cleanup → done
Simple, but surprisingly effective.
The Honest Downsides
It's not perfect.
- Open offices are awkward
- Some tools still require saying things like "comma" and "period"
- Editing is still faster with a keyboard
- The initial cringe factor is very real
The sweet spot is:
dictate first → edit second
Not:
fully voice-controlled programming
That sounds exhausting.
It's Not About Replacing the Keyboard
I'm obviously not throwing my keyboard away.
But I've stopped using it for tasks where speaking is 5x faster than typing.
We don't use a screwdriver to hammer nails.
So why are we still typing long explanations when we could just say them?
Have other multilingual developers noticed the same thing with code-switching?
Or are you still faster on a keyboard for everything?