Hello! I built a cognitive assistant for developers that takes your task and optimizes your audiovisual environment for deep work using AI. Think of your laptop screen and speakers super subtly entraining your brainwaves for your task. I hope to learn how developers handle deep work vs. distractions and what they want their work to look like, and then build for them!
Hello, I am Dennis Turco, born on 08/04/2001 in Fidenza, a small town in the province of Parma, where I still live. I have a bachelor's degree in computer science from the University of Parma.
Sounds cool!
10yr senior dev → SAHM. ✨ meimakes.com. 🤖 Productivity hacks for parent devs. Also teaching my 2yo to build with AI. 👶🏻💻 He's shipped 6 games using NLP. He can't write his name yet. 🤷🏻♀️
Hi James! That does sound really interesting! Do you have a neuroscience interest as well? Deep work + AI sounds like a rich field!
Hey Raising Pixels (cool name)! Yes, I’m really into cognitive neuroscience. I actually studied economics in college, but I was fascinated by the idea of creating a non-competing source of value, kind of like applying microeconomic thinking to UX. I wanted to build something that added value without demanding attention or causing opportunity cost, something that could coexist with whatever you already wanted to do.
Originally, I had the idea of using a simple cyclical visual overlay to guide the breath (which is still a feature today). Then I learned that, according to the perceptual load theory of attention, subtle visual or auditory stimuli can actually assist attention, and that cracked the door open.
But the real moment came when I discovered audiovisual entrainment was exactly what I was looking for. It's been a wild ride since then!
Are you a big neuroscience person?
Really fascinating intersection you’re exploring! I think a lot about how neuroscience could help parent developers work more effectively with fragmented attention and sleep deprivation.
Your mention of perceptual load theory is spot-on for parent developers. When you’re operating on 4 hours of sleep with a toddler demanding attention, your cognitive resources are already maxed out. Understanding how to design workflows that reduce cognitive load becomes essential for getting anything done.
I’ve been experimenting with applying some neuroscience principles to productivity systems, like:
- Working memory limitations: When you’re tired, your working memory capacity drops significantly. This is why simple bash aliases and persistent tmux sessions work so well; they reduce the cognitive overhead of remembering commands and restoring context (see the sketch after this list).
- Attention restoration: Even micro-breaks (30 seconds of deep breathing between coding sessions) can help reset your attention networks when you’re context-switching between “parent mode” and “developer mode.”
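For instance, my whole "restore context" step is a single attach-or-create command. Here's the idea as a tiny Python wrapper (a sketch only: it assumes tmux is installed, and "dev" is just a placeholder session name):

```python
import subprocess

# Attach to the "dev" tmux session if it exists, or create it if it doesn't;
# tmux's -A flag does both, so one muscle-memory command restores full context.
# Shell-alias equivalent: alias dev='tmux new-session -A -s dev'
subprocess.run(["tmux", "new-session", "-A", "-s", "dev"])
```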
Your audiovisual entrainment work sounds really promising for this use case. Have you explored how it might help with rapid context switching? Parent developers often have to switch between vastly different cognitive modes within seconds.
I’d love to hear more about your research - especially anything that could help developers maintain focus and creativity despite constant interruptions!
You're absolutely right about working memory, attention restoration, and the cognitive tax of context switching. That’s actually the design space I’ve been exploring with Halotropic, my desktop audiovisual entrainment (AVE) tool.
One way AVE helps is by reducing the impact of interruptions themselves. In Perceptual Load Theory, attention is a limited resource that's always fully allocated—if you’re not attending to the task, you’re attending to something else (internal thoughts, ambient noise, distractions). So to stay focused, you actually want your task to present a high perceptual load—to dominate attention, leaving no room for irrelevant stimuli.
What Halotropic does is subtly raise the task-relevant perceptual load with a shimmering, low-opacity visual overlay across your entire desktop, flickering at a specific frequency band (e.g. alpha, beta, or gamma). It modulates the pixels you're already using, so unlike a flashing desk lamp, it's integrated into the work itself. Because the rhythm is constant and ambient, your brain quickly habituates (kind of like white noise), but your visual cortex still responds at a deeper level.
That’s where brainwave entrainment comes into play: if we oscillate the overlay at frequencies drawn from the scientific literature, the rhythmic input doesn’t just occupy attention; it entrains brain activity, helping you shift into a focus-ready state faster. After an interruption, instead of taking 20–30 minutes to re-enter flow, AVE can guide your brainwaves back within minutes.
From a product perspective, Halotropic AVE is like functional white noise—it smooths cognitive transitions while actually modulating the state you’re transitioning into.
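If you want to see the core mechanic in code, here's a toy sketch in Python/tkinter. To be clear, this is not Halotropic's actual renderer; the 10 Hz alpha target and the opacity numbers are purely illustrative:

```python
import math
import time
import tkinter as tk

FREQ_HZ = 10.0     # alpha-band flicker rate (illustrative)
BASE_ALPHA = 0.06  # keep the overlay barely visible
DEPTH = 0.04       # modulation depth around the base opacity

root = tk.Tk()
root.attributes("-fullscreen", True)   # cover the whole desktop
root.attributes("-topmost", True)      # sit above other windows
root.configure(bg="black")
root.bind("<Escape>", lambda e: root.destroy())  # escape hatch
start = time.perf_counter()

def tick():
    # Sine-modulate the window's opacity at FREQ_HZ, redrawing ~60x per second.
    t = time.perf_counter() - start
    alpha = BASE_ALPHA + DEPTH * math.sin(2 * math.pi * FREQ_HZ * t)
    root.attributes("-alpha", max(alpha, 0.0))
    root.after(16, tick)

tick()
root.mainloop()
```

A real overlay also has to be click-through so you can keep working underneath it, which takes platform-specific window flags I've left out of the sketch.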
Would love to swap notes if you’re exploring similar tools or ideas, especially for devs juggling work and parenting. You can also schedule a demo at halotropic.com.
P.S. I tried simulating a Raising Pixels prompt in my app:
Prompt:
"kid just went down for a nap. I’ve got like 25 min. need to get back to editing this bash script."
Response:
"I've got you covered! With your kid napping, let's maximize your focus using 16 Hz beta waves for sharp attention and 40 Hz gamma to speed up processing. The warm orange hue at 43° reduces eye strain and keeps your evening rhythm steady. Pink noise will gently mask distractions so you can power through your bash script edits. You're all set to make the most of this quiet time!"
A passionate Full-Stack Software Engineer 🚀 motivated by a desire to tackle real-world challenges through technology and craft meaningful user experiences.
Hey, that sounds really intriguing! I love the idea of using AI to optimize the audiovisual environment for deep work — especially if it can adapt subtly without being intrusive. I'd be curious to hear how you're approaching personalization across different kinds of developer tasks (e.g., debugging vs. writing docs vs. creative problem-solving).
Hey Tan! Right now, you can prompt the AI with as much detail as you like, anything from “debugging” to something simple like “relax.” It takes that input and applies audiovisual parameters in real-time, along with a brief explanation of why those settings were chosen.
I’ve focused mainly on training the AI with cognitive neuroscience research and theory, but I’ve also let it develop some intuitive mappings for different dev tasks, especially ones that don’t have clear coverage in the literature.
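To make that concrete, the output boils down to a task-to-parameters mapping. Here's a heavily simplified sketch (in the real app an AI model produces the parameters; the presets and numbers below are only examples):

```python
# Toy sketch: map a free-text task prompt to audiovisual parameters.
# The real app derives these from an AI model; these presets are examples only.
DEFAULT = {"band": "alpha", "freq_hz": 10.0, "noise": "pink"}

PRESETS = {
    "debug": {"band": "beta", "freq_hz": 16.0, "noise": "pink"},
    "write": {"band": "alpha", "freq_hz": 10.0, "noise": "brown"},
    "relax": {"band": "alpha", "freq_hz": 8.0, "noise": "white"},
}

def settings_for(prompt: str) -> dict:
    """Pick audiovisual parameters for a free-text task description."""
    text = prompt.lower()
    for keyword, params in PRESETS.items():
        if keyword in text:
            return params
    return DEFAULT  # gentle fallback when nothing matches

print(settings_for("kid napping, 25 min to debug this bash script"))
# -> {'band': 'beta', 'freq_hz': 16.0, 'noise': 'pink'}
```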
I’m definitely planning to post more about it soon on DEV; I’ll send it your way if there’s a way to tag or message (I’m still new here!).