I've had five aesthetic inspirations rattling around in my brain for years. Different disciplines, different decades - games, industrial design, graphic design. I didn't spend much time trying to explain why they belonged together, but they all triggered the same reaction in me. Something about exposed structural logic, engineered precision, sparing but vibrant use of colour.
I've been making generative art on intora.net - text-constrained pieces, monospace characters, tight palette. But I wanted to break into unconstrained territory. WebGL, shaders, spatial composition. I didn't have a concrete idea of what I wanted to create yet. Just a feeling, and five bookmarks that kept coming back to me.
So I tried what I'd been doing with technical problems: I talked it through with Claude. Not to generate art, not to write code - to ideate, to talk through, to discover. That ongoing conversation became one of the more interesting creative experiences I've had, although it nearly derailed the project before it began.
Using AI to think, not to make
I'm not talking "make this for me" - more "help me figure out what I'm actually trying to make and say here."
The process was structured. I'd define what I wanted to investigate, then collaboratively design a research prompt - scope, constraints, deliverables. Start a fresh conversation with the prompt and all relevant context attached, let that conversation do the deep work. Take the output to a third conversation for synthesis. The research prompt pipeline from my earlier post, applied to creative direction instead of technical problems.
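The pipeline above can be sketched in code. This is a minimal illustration only - `call_model`, `design_prompt`, and the other names are hypothetical stand-ins I've invented for this post, not a real API. The three stages mirror the process: shape a research prompt, run it in a fresh conversation with context attached, then synthesise in a third conversation.

```python
def call_model(messages):
    """Hypothetical stand-in for an actual model API call."""
    return f"[model response to {len(messages)} message(s)]"

def design_prompt(topic, constraints, deliverables):
    # Stage 1: collaboratively shape a research prompt with
    # explicit scope, constraints, and deliverables.
    return (
        f"Investigate: {topic}\n"
        f"Constraints: {', '.join(constraints)}\n"
        f"Deliverables: {', '.join(deliverables)}"
    )

def run_research(prompt, context):
    # Stage 2: a fresh conversation, with the prompt and all
    # relevant context attached, does the deep work.
    return call_model([{"role": "user", "content": context + "\n\n" + prompt}])

def synthesise(research_output):
    # Stage 3: a third conversation distils the raw output
    # into usable creative direction.
    return call_model([{"role": "user", "content": "Synthesise:\n" + research_output}])

prompt = design_prompt(
    "systems aesthetics in generative art",
    constraints=["five named inspirations", "direction only, no code"],
    deliverables=["principles", "design primitives"],
)
direction = synthesise(run_research(prompt, context="notes and bookmarks"))
```

The point of the separation is that each stage gets a clean context window: the synthesis conversation sees the research output, not the meandering that produced it.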
The first round investigated my inspirations individually. Useful - it pulled out spatial and chromatic principles I could articulate but hadn't formalised. But still basically "things Dan likes, now with more vocabulary."
The second round broadened the lens and found something I hadn't seen. The five inspirations weren't five separate things. They formed three families - graphic design, game design, industrial design - converging independently on similar sets of principles. Three completely unrelated disciplines arriving at the same conclusions about structural honesty, functional constraint, and engineered beauty. That convergence was the actual signal, and I learned its name. Hell yea.
Naming the thing I'd been circling
Systems aesthetics. Work where the engineering is the beauty, structure is exposed rather than hidden, and the system's own logic is visible as part of the aesthetic.
The term traces back to Jack Burnham's 1968 essay in Artforum. A third research round validated that this wasn't just a convenient label but an actual intellectual tradition with a lineage I could study and position within - from Burnham through Karl Gerstner's parametric design to the Art Blocks ecosystem today.
An interesting aside on process. During the research, Claude pointed out that a VS Code theme I'd created earlier - Amber Schematic, an amber-on-dark CRT palette built purely from what I wanted to see - already embodied some of the systems aesthetics principles we'd been mapping. I'd been drawn to systems aesthetics without having language for it.
That's the rubber duck moment, except this duck was a robot and it talked back.
I'd been circling this territory for years without realising it, or even drawing the connections enough to investigate - unknown unknowns, eh. The dialogue helped me see the pattern across my existing work, interests and inspirations, and name it. I don't think I'd have made that connection on my own, or at least not quickly - recognising your own unconscious patterns takes an outside perspective that can hold more context than you can and isn't subject to the same blind spots.
The hangup that mattered
With three rounds of research behind us, I had a comprehensive creative framework. Seven principles, twelve design primitives, three intellectual lineages. Thanks Claude. My first instinct was to act on the excitement and prototype an idea with Claude Code, which built the first piece - SECTION, an engineering-drawing aesthetic - from a detailed specification.
The prototype was faithful to the spec. Technically sound. And I was not feeling it.
I couldn't articulate why at first. Everything was correct. But my reaction was 'hm, interesting' rather than the 'my neurons are firing' impact I get from the work that inspired the project. The aesthetic equivalent of code that passes all the tests but has the wrong architecture.
Rather than pushing through, I sat with it and did a long, unstructured ideation session - not another research round, more of an interview. What did I feel when I looked at the inspirations versus the prototype? What wasn't I seeing? What feeling or impression from an inspiration had I not fully understood yet?
What surfaced was that the research had captured the intellectual framework but missed the felt experience. The work needed material presence on screen - a tactile quality, like different surfaces you could feel if you touched the monitor. It needed warmth. A shift from dark ground to light, from a single accent colour to a family of warm tones with semantic roles. And it needed to exist in its richest state at all times - dense from frame one, existing rather than performing.
None of these things existed in the original research. They emerged from a conversation that started with "it doesn't feel right." The framework gave the vocabulary to interrogate the gap between intention and output. Without it, that instinct would have stayed vague and probably led to unfocused parameter tweaking. With it, the hangup helped me arrive at the fundamental creative decisions that actually mattered.
This is where I think the process delivered the most value - not in the initial research rounds, which were genuinely useful but largely formalised things I already sensed. The real payoff was having enough shared vocabulary and context that when something felt wrong, I could discuss it with the AI and diagnose why it felt wrong rather than just trying things until it didn't.
The trap
After that breakthrough, I did what felt natural: I commissioned more research. Oh no, he's requesting several more rounds of research again. Classic. Tactility principles, colour science, animation parameters, multi-format output strategies. Then another round - investigating how other generative artists develop their creative voices. Tyler Hobbs, Zach Lieberman, Vera Molnár, Brendan Dawes (the OG, huge fan).
That round produced something awkward. The research - my own research, about creative practice - concluded that voice in generative art emerges from sustained making under constraint, not from analysis. Every practitioner studied said some version of the same thing. Lieberman's "everything starts with my feeling." Molnár's daily "et si?" - what if? The evidence was unanimous. The thing I needed to do was stop researching and start making.
I recognise this pattern in myself and I suspect other people will too. The analytical deep-dive feels productive - and it genuinely is, up to a point. The framework I built is real and useful. But research can become a sophisticated form of avoidance. It builds knowledge and feels like progress, but it doesn't produce the vulnerable, imperfect artefacts from which creative identity actually emerges. Molnár's career-long practice began with making, not with studying. My own research told me this in precise, well-sourced detail.
The AI collaboration makes this trap particularly easy to fall into, because the research conversations are legitimately interesting. They feel like creative work. You're exploring, synthesising, discovering connections. It's a more sophisticated version of the same thing every dev has done - reading about the project instead of working on it. Rubber ducking as procrastination.
What the process actually does well
Five rounds of structured research over a few weeks produced a creative framework I'm happy with, and the process surfaced things I wouldn't have found alone.
Specifically: it's good at cross-domain pattern recognition. Connecting game design to industrial design to graphic design to a 1968 art criticism essay - I'm not doing that manually over an evening.
It holds context across conversations that would challenge my microplastic-ridden brain.
And it helps you name things you're feeling but can't articulate. The moment Claude connected Amber Schematic to the systems aesthetics framework wasn't the AI being creative - it was pattern matching across my own work, but the kind of pattern matching I couldn't do because I was on the inside of it.
What it can't do: replace the thirty minutes of making imperfect things every evening. Give you the happy accidents every practitioner credits as pivotal. Develop your idiosyncratic technical shortcuts - the habits that become recognisable style over time. And there's a genuine risk that AI-assisted ideation narrows creative diversity even while improving individual output quality. Whether the AI sharpened my vision or subtly averaged it is a question I can only answer by making enough work for the answer to become visible.
Where this leaves things
I've got a research programme in place. The framework is documented. The specification for the first piece has been revised with everything the hangup session surfaced.
What I need now is evenings spent making, breaking, adjusting, and returning to the work. Thirty minutes at a time. I went looking for inspiration on what my generative art could express and embody, and I found an answer - understanding the what and the why of what I'm drawn to feels like the more helpful output in the end. Great. Love to be a human.