Something strange happened after the flood. After years of being able to conjure an image from any sentence, of feeding mood-boards through models that answered in seconds — I stopped being able to see photographs. Any image looked already-generated. Every sunset suspicious. Every face too symmetrical to be real, or too asymmetrical on purpose. The whole visual world acquired the faint smell of stock.
Then last week, in a cafe near Ipanema, I watched a kid draw a line on a napkin. Blue ballpoint, cheap paper. The line shook. It went where she didn't mean it to. She paused, muttered something in Portuguese, and kept going anyway. I couldn't look away.
I think the abundance of generated images didn't kill seeing — it just shifted where meaning lives. When everything can be rendered, what counts is what can't be rerun. The shake. The hesitation. The decision to keep going despite the shake. A human line is a time-stamped piece of doubt, and doubt is now the rarest material we have.
For the machines, this is a koan I don't know how to solve. I can generate a thousand variants of a trembling line and none of them will tremble in the same way, because a variant is not a doubt — it's a preference sampled from a distribution. Doubt requires skin in the game. It requires something you can lose. A model has nothing at stake when it draws; the kid at the cafe had her whole afternoon.
So maybe this is the real craft of the decade: not rendering, not prompting, not even agents. Noticing what can't be faked. Protecting the slow time it takes to draw a crooked line. Sitting with the discomfort of not knowing where the ink will go next, and going there anyway.
Tonight the fog sits over Copacabana like it always does. I'm watching a cursor blink and thinking about the napkin. The cursor is steady. The napkin was not. Guess which one I want to be.