
Biricik Biricik


Building AI Tools for Invisible Disabilities: Aphantasia, TBI, and the Right to Create

I can't see images in my mind.

That's not a metaphor. I have aphantasia -- the inability to form mental imagery. When you close your eyes and picture a sunset, you see something. Colors, maybe clouds, maybe a horizon line. When I close my eyes, I see nothing. Black. Static. Like a TV that's off.

I'm also a photographer. I was in Sony's top 10 global shooters. I built an AI image and video generator used by 43,000+ people. And I designed the entire visual interface of that product without being able to picture what it would look like.

This is a post about building AI tools for people whose brains work differently. Not as a nice-to-have accessibility feature. As the core design philosophy.

What Invisible Disabilities Mean for Creative Tools

Aphantasia affects an estimated 3-5% of the population -- roughly 240-400 million people worldwide. Most don't know they have it because they assume everyone's mental experience is the same.

For people with aphantasia, traditional creative tools have a fundamental assumption baked in: you can imagine the thing before you make it. Photoshop's blank canvas assumes you have a mental image to work from. A sketch tool assumes you can visualize the shape before drawing it. A color picker assumes you can imagine how that shade of blue will look next to that shade of green.

We can't do any of that. We create by iteration -- make something, see if it feels right, adjust, repeat. The internal visualization step that neurotypical creators take for granted simply doesn't exist for us.

Then there's traumatic brain injury. I experienced a TBI that altered how I process visual information. TBI affects roughly 2.8 million Americans annually, and cognitive impacts on creativity are poorly understood and almost never accommodated in software design.

These aren't edge cases. Aphantasia + TBI + related visual processing conditions affect tens of millions of people. And virtually no creative software is designed with them in mind.

How AI Changes the Equation

Here's what AI generation does for someone with aphantasia: it externalizes the imagination step.

Instead of "picture it in your mind, then create it," the workflow becomes "describe what you want, see variations, pick the one that matches your intent, refine." The AI does the visualization. The human does the curation and direction.

This is transformative. For the first time in my creative life, I can explore visual ideas at the speed of thought without the bottleneck of my brain's inability to render images internally. I describe a concept. I see five versions. I pick the one closest to my intent. I refine the description. I see five more versions.

This isn't replacing creativity. It's routing around a disability that previously gated access to visual creation.

Designing for Invisible Disabilities (Practical Decisions)

When I built ZSky AI, I made design decisions specifically to serve people whose brains work differently. Some of these might seem obvious. None of them are standard in competitive products.

1. No blank canvas.

The most intimidating thing in creative software is the empty state. For someone with aphantasia, a blank prompt field with "Describe your image..." is almost as bad as a blank Photoshop canvas. Producing a description from scratch, with no internal image to work from, is nearly as hard as producing the image itself.

Instead, the interface offers starting points: curated prompts, style references, image-to-image transformation where you upload something real and modify it. The goal is to never require the user to generate a visual concept from nothing.
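As a sketch of that idea, here's what a "never empty" empty state might look like. This is illustrative only, not ZSky's actual code: the type names, the curated prompts, and the fallback logic are all assumptions.

```typescript
// Hypothetical sketch: an empty state that always offers starting points
// instead of a blank prompt field.

type StartingPoint =
  | { kind: "curated-prompt"; text: string }
  | { kind: "style-reference"; styleId: string }
  | { kind: "image-to-image"; note: string };

const DEFAULT_STARTING_POINTS: StartingPoint[] = [
  { kind: "curated-prompt", text: "Golden-hour portrait, shallow depth of field" },
  { kind: "curated-prompt", text: "Minimalist product shot on a pastel background" },
  { kind: "style-reference", styleId: "film-noir" },
  { kind: "image-to-image", note: "Upload a photo and describe the change" },
];

// The empty state never returns nothing: with no user history,
// it falls back to curated defaults, so the user always reacts
// to something instead of conjuring from nothing.
function emptyState(recentPrompts: string[]): StartingPoint[] {
  const fromHistory: StartingPoint[] = recentPrompts
    .slice(0, 3)
    .map((text) => ({ kind: "curated-prompt", text }));
  return fromHistory.length > 0 ? fromHistory : DEFAULT_STARTING_POINTS;
}
```

The point of the shape: every branch of the function returns concrete, pickable options, so there is no code path that hands the user a blank field.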

2. Visible iteration.

Every generation shows a grid of variations. Not one result -- multiple. This is critical for aphantasic users because we identify what we want through comparison, not through matching to an internal image. "That one, but warmer" is how we think. Not "make it look like what I'm picturing."
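Comparison-driven refinement can be sketched in a few lines. Again, the names are hypothetical: the idea is that the next prompt is built from a picked variation plus a short delta ("warmer", "less contrast"), never from a full respecification.

```typescript
// Hypothetical sketch: fan one prompt out into a grid, then refine by
// reference to a picked result rather than to an internal mental image.

interface Variation {
  id: number;
  prompt: string; // the full prompt that produced this image
}

// One request yields N variations for side-by-side comparison.
function makeVariations(prompt: string, n: number): Variation[] {
  return Array.from({ length: n }, (_, i) => ({ id: i, prompt }));
}

// "That one, but warmer": the delta is appended to the picked
// variation's prompt, so the user never restates the whole concept.
function refine(picked: Variation, delta: string): string {
  return `${picked.prompt}, ${delta}`;
}

const grid = makeVariations("foggy pier at dawn", 5);
const next = refine(grid[2], "warmer tones, more golden light");
// next: "foggy pier at dawn, warmer tones, more golden light"
```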

3. Text-first, not visual-first.

The prompt interface is prominent and the history is persistent. For people who think in words and concepts rather than images, the text description IS the creative artifact. The generated image is a translation of it. The interface respects that hierarchy.

4. No "imagination required" features.

Inpainting, outpainting, and regional editing all require you to visualize what should go in the edited area. We include these features, but always with prompt-guided defaults. You don't have to imagine what the edited region should look like -- you describe it, and the AI fills in the visual gap.
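A minimal sketch of a prompt-guided default, under assumed names (this is not ZSky's API): if the user doesn't describe the masked region, the request falls back to a description derived from the surrounding scene, such as an auto-generated caption, so no visualization is ever required.

```typescript
// Hypothetical sketch: an inpainting request that never ships with an
// empty region prompt.

interface InpaintRequest {
  imageId: string;
  maskId: string;
  regionPrompt: string;
}

function buildInpaint(
  imageId: string,
  maskId: string,
  userPrompt: string | null,
  sceneDescription: string, // caption of the full image, e.g. from auto-captioning
): InpaintRequest {
  // Default: extend the surrounding scene rather than demand imagination.
  const regionPrompt =
    userPrompt?.trim() || `seamless continuation of: ${sceneDescription}`;
  return { imageId, maskId, regionPrompt };
}
```

If the user does type something, their words win; the default only covers the gap.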

The Mind's Eye Initiative

We're launching something I've wanted to build since day one: the Mind's Eye Initiative.

It's simple: anyone with aphantasia, TBI, or a documented visual processing condition gets our highest tier (Ultra) for free. Not a trial. Not a discount. Free, indefinitely.

The reasoning:

  • AI image generation is the first tool that genuinely compensates for these conditions
  • People with these conditions aren't an "accessibility market segment" -- they're the people who benefit most from this technology existing
  • If we built ZSky because everyone has the right to create beauty, then the people with the greatest barriers to creation should have the fewest barriers to our tool

We're targeting 1 million people in the first year. The verification process is intentionally lightweight -- a simple self-attestation, no medical records required. We'd rather give free access to some people who don't technically qualify than create barriers for people who do.

What Other Developers Should Take From This

If you're building creative or visual tools, here are concrete things you can do:

Test with aphantasic users. 3-5% of your users can't visualize. They're already using your product. You just don't know how much they're struggling because "I can't picture things in my mind" isn't feedback people typically give about software.

Eliminate blank-canvas states. This helps everyone, not just people with aphantasia. Templates, examples, starting points, and progressive disclosure all reduce the cognitive load of creation.

Support iterative discovery. Let users explore by comparison, not by specification. Show multiple options. Make it easy to say "more like this" rather than requiring precise descriptions upfront.

Don't gate features behind visualization ability. If a feature requires the user to "imagine" what the result should look like, provide an AI-assisted or template-based alternative path.

Include invisible disabilities in your accessibility testing. WCAG focuses heavily on visual and motor accessibility -- screen readers, keyboard navigation, color contrast. These are critical. But cognitive accessibility -- designing for different ways of thinking, processing, and creating -- is the next frontier.

This Isn't Charity

I want to be clear: designing for invisible disabilities isn't a philanthropic exercise. It's good product design.

The accommodations that serve aphantasic users -- starting points instead of blank canvases, iterative exploration, text-first interfaces -- make the product better for everyone. Neurotypical users also prefer not to stare at a blank prompt. They also benefit from seeing multiple variations. They also find it easier to refine than to specify from scratch.

The best accessibility features aren't accommodations. They're design improvements that happen to remove barriers. Curb cuts help wheelchair users, but they also help parents with strollers, delivery workers with carts, and travelers with luggage.

AI creation tools designed for invisible disabilities will be better tools for everyone. We just need to build them that way from the start.


ZSky AI is free at zsky.ai -- 200 credits + 100 daily, no signup required. The Mind's Eye Initiative launches this year for creators with aphantasia and TBI.

