Close your eyes. Picture an apple.
I cannot do that. I have aphantasia, which means the part of the brain most people use to generate mental images does not work for me. I am not using a metaphor when I say this. I cannot picture my mother's face. I cannot picture the street I grew up on. I know what an apple looks like, in the sense that I can describe it and recognize one in real life, but when I close my eyes there is nothing there. Just the words.
Here is the part that still surprises people when I explain it: I am a photographer. And I built ZSky AI, an AI image and video generation platform.
How does a person with no mind's eye design visual software? That is the question I get most often from other indie hackers, and I want to answer it in the format that forced me to figure out the answer: as code.
The moment I learned I had aphantasia
I was in my late twenties. A friend asked me to describe the view from a hotel we had stayed at a month before. I said something vague. He asked why I was not just looking at it. I said "looking at what?" He said "the memory."
I thought he was kidding. Then I understood he was not kidding. Then I understood that everyone I had ever spoken to about "memory" or "imagination" had meant something literally visual and I had been translating it into language the whole time without realizing it.
About 2-4% of humans have aphantasia. Most of us do not find out until adulthood because you have no way to know the thing in your head is different from the thing in everyone else's head. You just use the word "imagine" and assume it means the same thing to other people.
What this did to my creative work
For most of my life, the only time I could see a visual idea before it existed was when I was holding a camera. The camera was — and this is not a metaphor — my mind's eye. I framed a shot, pressed the shutter, and for the first time the picture in my head existed somewhere I could look at it.
That is why I became a photographer. Not because I was visually gifted. Because I was visually crippled and the camera was my only way to compensate.
When AI image generation arrived in 2022, I understood immediately that this was a second camera. Not a replacement for the first — a new medium that did the same fundamental job. It let someone who could not picture things build the thing out of words and then look at it. For people like me, that is not entertainment. That is the difference between having a visual idea and losing it.
Three years later I started building what became ZSky. The whole platform is a stubborn attempt to make that experience as frictionless as possible for the next person who walks in with the same problem.
How this shows up in product design
Here is the part I actually want to share — the specific design decisions that came out of building visual software when I cannot visualize.
1. Language is the interface
Most AI image platforms treat the prompt as a text field and move on. For me the prompt is not one input among many — it is the only input that matters, because language is the only way I can describe a visual idea at all.
That pushed me to build a prompt interface with an enormous amount of structural support. Our prompt input has:
- Inline style suggestions that appear as clickable chips beneath the text
- A 5-element formula helper (subject / composition / light / style / mood)
- A prompt enhancer that rewrites a rough idea into a structured brief
- A history that remembers your last 20 prompts as chips you can click to reuse
A typical AI platform gives you a single text box and tells you to pray. We give you scaffolding. Because I personally cannot "just imagine what I want" — I have to build the description piece by piece — the scaffolding is not a convenience feature. It is the core UX. I wrote up the full theory in the 5-element prompt formula every photographer knows.
In code terms, the shape of the prompt input looks roughly like this:
```jsx
<PromptInput>
  <PromptTextarea />
  <SuggestionChips categories={["style", "light", "mood"]} />
  <FormulaHelper elements={["subject", "composition", "light", "style", "mood"]} />
  <PromptHistory items={last20} />
  <Enhancer onEnhance={rewriteToStructuredBrief} />
</PromptInput>
```
Every one of those components exists because I needed it for myself first.
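The history behavior behind that `<PromptHistory>` chip row is simple enough to sketch. This is a minimal TypeScript illustration, not ZSky's actual code; the class name and the re-ordering rule are my assumptions, with only the cap of 20 taken from the description above.

```typescript
// Hypothetical sketch of the prompt-history chips: keep the last 20
// prompts, most recent first, with no duplicate entries.
class PromptHistoryBuffer {
  private prompts: string[] = [];
  constructor(private readonly cap = 20) {}

  add(prompt: string): void {
    // Re-adding an existing prompt moves it to the front instead of duplicating it.
    this.prompts = [prompt, ...this.prompts.filter((p) => p !== prompt)];
    if (this.prompts.length > this.cap) this.prompts.length = this.cap;
  }

  // The chips rendered under the input, newest first.
  chips(): readonly string[] {
    return this.prompts;
  }
}
```

The dedupe-to-front rule matters for how I work: I re-run the same rough prompt many times, and the chip I need is almost always the one I just used.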
2. The gallery is the sketchpad
Without a mind's eye, I cannot "sketch" an idea internally. I have to see it on a screen, then iterate. This makes the grid of recent generations the single most important surface in the product. It is not "a gallery feature" — it is how I think.
Concrete implications for the design:
- Persistent history. Every generation you make stays in your personal gallery. No expiry.
- One-click remix. Click any image in the gallery and the prompt that generated it lands in the input. This is not a convenience. This is me thinking.
- Variant generation. "Give me four more like this one, vary slightly." Because iteration is how I get anywhere near the idea I am chasing.
The grid is not secondary to the generator. The grid is the generator. The text input is just the search bar for a memory I cannot otherwise access.
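One-click remix and variant generation are mostly state plumbing. A hedged sketch, assuming each gallery item stores the prompt that produced it (the names and the seed-based variant scheme here are mine, not the real ZSky internals):

```typescript
// Hypothetical data model: every generated image keeps the prompt that made it.
interface GalleryItem {
  id: string;
  imageUrl: string;
  prompt: string;
}

// Remix: clicking an item returns its prompt verbatim, to be placed back
// into the input, so the next generation starts from a known-good description.
function remix(item: GalleryItem): string {
  return item.prompt;
}

// Variants: "give me four more like this one" becomes the same prompt with
// different random seeds, so the model explores around one description.
function variantRequests(item: GalleryItem, count = 4): { prompt: string; seed: number }[] {
  return Array.from({ length: count }, () => ({
    prompt: item.prompt,
    seed: Math.floor(Math.random() * 2 ** 31),
  }));
}
```

The point of keeping `remix` this trivial is the design decision, not the code: the prompt is never thrown away after generation, because for me it is the only retrievable form of the idea.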
3. Examples in every empty state
An empty text field is a crisis for someone with aphantasia. There is nothing in my head to fill it with. I need a jumping-off point.
So every empty state in ZSky has examples. Not one — a dozen. Different categories. Different moods. Different styles. You hit a blank page and you see twelve real prompts that produced real images, and you click one to start.
This is accessibility design, but it is also just good design for everyone. I have watched dozens of new users who do not have aphantasia click an example prompt on their first visit and start from there. They were never going to type from scratch either. The empty text field is a UX myth that benefits nobody.
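The empty-state rule can be expressed as a tiny sampler. A sketch under my own assumptions (the round-robin policy and the category names are illustrative, not a description of ZSky's implementation):

```typescript
interface ExamplePrompt {
  category: string; // e.g. "portrait", "landscape", "product" (illustrative)
  text: string;
}

// Fill an empty state with up to `count` examples, round-robin across
// categories so no single style dominates the blank page.
function examplesForEmptyState(pool: ExamplePrompt[], count = 12): ExamplePrompt[] {
  const byCategory = new Map<string, ExamplePrompt[]>();
  for (const p of pool) {
    const bucket = byCategory.get(p.category) ?? [];
    bucket.push(p);
    byCategory.set(p.category, bucket);
  }
  const buckets = [...byCategory.values()];
  if (buckets.length === 0) return [];

  const picked: ExamplePrompt[] = [];
  for (let i = 0; picked.length < count; i++) {
    const next = buckets[i % buckets.length].shift();
    if (next) picked.push(next);
    // Stop once every bucket has been drained.
    if (!next && buckets.every((b) => b.length === 0)) break;
  }
  return picked;
}
```

Cycling through categories is the part that earns its keep: a blank page showing twelve portrait prompts is barely better than a blank page, but twelve prompts across different moods gives almost anyone a starting point.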
4. The download button sits on the image itself
This sounds tiny but it taught me a huge lesson. Most platforms put the download button in a menu that you open from a gear icon that sits somewhere near the image. I could never find it, because I could not picture the interface I had just used five seconds ago. I would have to re-explore it every time.
The fix was to put the download button directly on the image, visible by default, not on hover. Everyone said this was visually cluttered. Our click-through on downloads tripled. Tripled. Because the button that is always there is the button you can use without building a mental model of the interface.
Accessibility insight: if someone with no short-term visual memory can use your interface without frustration, everyone can use it faster. This is the same principle that made sidewalk curb cuts benefit everyone, not just wheelchair users.
5. Keyboard shortcuts for everything, but labeled
Aphantasia and pattern recognition are different cognitive systems: I can learn a keyboard shortcut just fine; I simply cannot picture where I last saw it documented. So every shortcut in ZSky has an inline tooltip that appears next to the relevant button. You do not have to memorize anything. You do not have to open a help menu. The shortcut is labeled next to the action it performs.
Again — this is accessibility-first design that ends up being good for everyone.
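The rule can be encoded so it cannot be broken: a shortcut never exists without a visible label, because the tooltip text is derived from the registration itself. A sketch (the key bindings and names here are hypothetical, not ZSky's actual map):

```typescript
interface Shortcut {
  keys: string;   // what the user presses, e.g. "Ctrl+Enter" (illustrative binding)
  action: string; // the labeled action it triggers
}

// A registry where registering a shortcut IS registering its label:
// the tooltip string is derived, never written separately, so it cannot drift.
class ShortcutRegistry {
  private shortcuts = new Map<string, Shortcut>();

  register(action: string, keys: string): void {
    this.shortcuts.set(action, { keys, action });
  }

  // Inline tooltip text rendered next to the button, e.g. "Generate (Ctrl+Enter)".
  tooltip(action: string): string | undefined {
    const s = this.shortcuts.get(action);
    return s && `${s.action} (${s.keys})`;
  }
}
```

Deriving the label from the binding means documentation can never go stale, which is exactly the property someone with no visual memory of the help page needs.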
What the constraint taught me
The biggest lesson from building visual software with no mind's eye is that most UX is designed for people who can hold the interface in their head. They clicked the button yesterday, so they know where it is. They remember the layout, so they do not need labels. They can picture the completed form, so they navigate it faster.
I cannot do any of that. Which means I have to design for the first-time experience every time. Every session for me is a first session. Every button must be labeled, every empty state must have examples, every menu must be one click away, every destructive action must have a confirmation, every path must be walkable without memory.
The result is a product that is easier to use for everyone, because "first-time user" is the hardest case and solving it generalizes.
If you have aphantasia and you are building something visual
I want to say this directly because I wish someone had said it to me ten years ago: the constraint is a superpower. Every design decision you are forced to make for yourself is a decision that also helps someone who has never used your software before. You do not need a mind's eye to build visual software. You need to trust the process by which you compensate — the scaffolding, the examples, the labels, the structure — and then understand that the scaffolding is the product.
Photography was the first tool that gave me an externalized mind's eye. AI image generation is the second. I am trying to make sure it is also the most accessible, because the next person who walks in with my problem should not have to build their own platform to get to the picture they cannot see.
If you want to read more about the founder story behind ZSky, there is a longer piece at ZSky's about page. If you have aphantasia and you want to try the platform, we have a program called the One Million Minds Eye initiative that gives free lifetime access to anyone with aphantasia, TBI, or visual cortex damage. No medical documentation required. Honor system. Because that is what I would have wanted at nineteen.
Three things I would tell any indie hacker
- Your constraints are not liabilities. They are research. Whatever you are personally frustrated by when you use other people's software is the thing you are uniquely positioned to fix. Build for yourself first.
- Empty states are the product. Every tutorial, every example, every placeholder prompt — that is where you win or lose new users. Spend twice as much time on them as you think is reasonable.
- Accessibility is design, not a checklist. If you design for the cognitively hardest case first, everything else works out downstream. If you design for the median user first, you will bolt accessibility on as a feature at the end and resent it.
Now go make something. The apple is in your head or it is in your code. Either way, we need more of them.
Further reading
- How photography rebuilt my brain after a TBI
- I have aphantasia — here's how AI changed everything
- The One Million Minds Eye Initiative
I answer comments. If you are building something for a cognitive edge case — yours or someone else's — I would love to hear what you are working on.