I've been developing software for almost 3 decades, and I've lost count of how many brilliant ideas vanished between conception and keyboard. You know the feeling – that perfect solution hits you in the shower, during a run, or right as you're falling asleep.
Last month, I started dogfooding my own tool – VoiceCommit – to create GitHub issues directly from voice. The results surprised me: I've created 20+ actionable GitHub issues from ideas that would otherwise have disappeared within minutes.
Here's the truth: It's not about saving time. It's about saving ideas.
1. The Ideas That Actually Make It To GitHub
The Old Reality:
- Have idea while walking → "I'll remember this" → Open laptop 2 hours later → What was that idea again?
- Success rate: Maybe 1 in 10 ideas survived
The New Reality:
- Have idea → Pull out phone → 30-second voice note → GitHub issue exists
- Success rate: 10 out of 10 ideas captured
Last Saturday night at 9:57 PM, I had a flash of inspiration about adding social sharing to VoiceCommit. In bed, no laptop nearby. I grabbed my phone and said:
"I want to add a social piece to VoiceCommit that allows users to share what they said and the PR or Issue that was created. The output should be text for the voice entry and an image of the PR."
By Sunday morning, this detailed GitHub issue was waiting for me. Without voice input, this idea had a 0% chance of survival. I know because I've lost hundreds just like it.
2. Bug Reports With Actual Context (Not "Thing Broken")
We've all been there. You find a bug on mobile, make a mental note, then later create an issue titled "Login redirect issue" with no other context. Future you hates past you.
Here's what I actually captured at 11:00 PM on a Thursday:
"Login page is doing this weird double-redirect thing. User clicks login, goes to auth provider, comes back, then redirects again to dashboard. Feels like we're handling the auth state twice. Need to check the useEffect in AuthProvider component."
The difference isn't time saved – it's context preserved.
Without voice: "Fix login redirect" (useless)
With voice: Full reproduction steps, hypothesis, and where to look
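That captured hypothesis points straight at a fix. As a minimal sketch (the names here are hypothetical – I don't have VoiceCommit's actual AuthProvider code), the "handling the auth state twice" symptom is often a callback handler that runs twice, e.g. an effect re-firing. A one-shot guard makes the second run a no-op:

```typescript
// Hypothetical sketch: guard an auth callback so a re-running effect
// can't trigger a second redirect. Not VoiceCommit's real code.
function makeAuthCallbackHandler() {
  let handled = false; // survives across repeated effect runs (e.g. a ref)
  return function handleAuthCallback(redirect: (path: string) => void): boolean {
    if (handled) return false; // second invocation: skip the duplicate redirect
    handled = true;
    redirect("/dashboard");
    return true;
  };
}
```

In a React component this flag would live in a `useRef` so the `useEffect` can run twice (as it does under Strict Mode) without double-redirecting.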
3. Feature Ideas With The "Why" Still Fresh
Here's my favorite example. One simple sentence while walking my dog:
"I have an idea for my blog. I want to add a feature where a progress bar shows across the top of a post's page and as you scroll, the progress bar advances."
Twenty seconds of speaking preserved:
- What I wanted (progress bar)
- Where it goes (top of page)
- How it behaves (advances with scroll)
- The context (blog posts specifically)
The magic: I spoke this naturally, including details I wouldn't have remembered to type later. The full context was preserved because I was describing what I was visualizing in that moment.
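And that one sentence is nearly the whole implementation. A minimal sketch of the core logic (function name and wiring are my own, not from the issue): compute how far through the scrollable page the reader is, then set that as the bar's width on each scroll event.

```typescript
// Hypothetical sketch: reading progress as a fraction from 0 to 1.
function readingProgress(scrollY: number, viewportHeight: number, pageHeight: number): number {
  const scrollable = pageHeight - viewportHeight; // total distance the user can scroll
  if (scrollable <= 0) return 1;                  // page shorter than viewport: already "done"
  return Math.min(1, Math.max(0, scrollY / scrollable));
}

// In the browser, wire it to a bar element fixed at the top of the post:
// window.addEventListener("scroll", () => {
//   bar.style.width = `${readingProgress(
//     window.scrollY, window.innerHeight, document.body.scrollHeight) * 100}%`;
// });
```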
4. Mobile UX Issues Captured Where They Happen
You're testing on your phone, you notice something broken, and... what? Switch to your laptop? Email yourself?
I caught this at 10:20 PM while browsing on my iPhone:
"The mobile navigation needs a hamburger menu that slides out from the left. Current tabs are too cramped on iPhone SE. Should use CSS transforms for smooth animation, maybe 300ms duration."
This isn't faster than typing on a laptop. But I wasn't at my laptop. This idea would have evaporated by morning. Instead, it became a proper GitHub issue with implementation details.
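Even the implementation hint made it into the issue. As a rough sketch of what "CSS transforms, maybe 300ms" translates to (the helper below is illustrative, not code from the issue), the menu slides by toggling a `translateX` transform with a 300ms transition:

```typescript
// Hypothetical sketch: slide-out menu styles using a CSS transform,
// per the captured note ("CSS transforms ... maybe 300ms duration").
function menuStyle(open: boolean): { transform: string; transition: string } {
  return {
    transform: open ? "translateX(0)" : "translateX(-100%)", // off-canvas to the left when closed
    transition: "transform 300ms ease-out",                  // GPU-friendly: animates transform, not layout
  };
}
```

Animating `transform` instead of `left`/`width` avoids layout recalculation, which is why the voice note's instinct ("use CSS transforms for smooth animation") is the right one.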
5. Meta Development: Building The Tool With The Tool
The ultimate dogfooding – I've created 12 VoiceCommit features using VoiceCommit itself:
- "Make VoiceCommit a PWA so I can add it to home screen"
- "Add pictures to voice commands and have AI analyze them"
- "Create a blog post about using VoiceCommit to build VoiceCommit"
Each of these was captured in the moment of inspiration, not reconstructed later from memory.
Let's Talk About "Time Saved"
I'm not saving time. I'm saving ideas.
Traditional time comparison:
- Writing a GitHub issue: 5 minutes
- Voice recording + AI processing: 1 minute
- "Time saved": 4 minutes ❌
The real comparison:
- Ideas that make it to GitHub without voice: 10%
- Ideas that make it to GitHub with voice: 90%
- Ideas saved: 80% ✅
The Data: 14 Days of Voice-First Development
From my actual VoiceCommit database:
- 20+ voice submissions → 20+ GitHub issues
- Capture locations: In bed, walking the dog, shopping, out to dinner (don't tell my wife)
- Capture times: 40% after 9 PM, 30% before 9 AM
- Average time from idea to issue: 47 seconds
The key insight: 0% of these would exist without voice input. Not because I'm lazy, but because ideas are ephemeral.
Getting Started With Voice Capture
- Accept the reality: You will forget that idea in 5 minutes
- Make it frictionless: Phone shortcut, PWA, whatever works
- Speak naturally: Include context, don't self-edit
- Trust the AI: It's better at formatting than your memory is at remembering
Try This For One Week
Don't think about time saved. Think about ideas saved. Every time you have a development thought away from your keyboard, capture it with voice.
After one week, count how many ideas made it to GitHub that normally would have vanished.
That's the real metric.
How many development ideas do you lose each week? Have you found other ways to capture fleeting thoughts? Share your experience in the comments!
P.S. Want to try VoiceCommit? Free tier includes 25 voice submissions/month: voicecommit.com